Track body movements better with Raspberry Pi and fisheye cameras

Researchers at Carnegie Mellon University have developed a body tracking system using a Raspberry Pi Compute Module 4 and fisheye cameras mounted on handheld VR controllers.

ControllerPose body tracking
Raspberry Pi Compute Module 4 slots nicely onto the side of the controller

Traditionally, VR systems only track your body by following the movements picked up by the headset and handheld controllers. The system then guesses what your torso, legs, and feet are likely to be doing, based on how you're moving your head and hands. There are expensive add-ons you can buy to attach to your hips and feet, which can enhance your VR experience — but even if you shell out for these, there are still limitations.

How does ControllerPose track your body better?

ControllerPose uses fisheye lens cameras mounted on handheld controllers to look back at the user, capturing a fuller view of their body and movements. The fisheye cameras pick up two 185-degree views of the user's body, which the software stitches together to achieve an even wider view. Images are processed locally by a Raspberry Pi Compute Module 4 running Google's Coral AI platform.
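To give a feel for the stitching step, here is a minimal sketch — not the authors' implementation — of how two 185-degree views could be merged into a single 360-degree strip. Each view is modeled as a simplified 1-D panorama (one column per degree); real fisheye frames would first need dewarping into this angular space, and the 10-degree overlap and cross-fade blend are illustrative assumptions.

```python
import numpy as np

FOV = 185                 # degrees covered by each camera
OVERLAP = 2 * FOV - 360   # 10 degrees of shared coverage at the seam

def stitch(view_a: np.ndarray, view_b: np.ndarray) -> np.ndarray:
    """Cross-fade the overlapping degrees and concatenate the rest.

    Each input is (FOV, channels): one row per degree of coverage.
    """
    assert view_a.shape[0] == FOV and view_b.shape[0] == FOV
    keep = FOV - OVERLAP                      # 175 exclusive degrees per view
    # Linear weights fading view_a out and view_b in across the seam.
    w = np.linspace(1.0, 0.0, OVERLAP)[:, None]
    seam = w * view_a[keep:] + (1.0 - w) * view_b[:OVERLAP]
    return np.concatenate([view_a[:keep], seam, view_b[OVERLAP:]])

# Two synthetic grayscale strips (185 degrees x 1 channel).
a = np.full((FOV, 1), 0.2)
b = np.full((FOV, 1), 0.8)
pano = stitch(a, b)
print(pano.shape)  # (360, 1)
```

The cross-fade simply avoids a visible hard edge where the two cameras' coverage meets; a production pipeline would also correct lens distortion and align the views photometrically.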

All of this, combined with the usual head and hand data picked up from the headset and handheld controllers, achieves pose estimates accurate to within 8.59cm.

What can we use better body tracking for?

The team has created a few leg-centric VR games to put their creation to good use. There's a Human Tetris game in which you have to contort your body to fit through differently shaped gaps. Feet Saber sees players batting away objects with not just their hands (which are holding lightsabers) but also their feet. And a hockey goalie experience does what it says on the tin.


I, for one, welcome the challenge to compete against people who can no longer phone in their Just Dance performances. Now we can track whether you are actually doing the lower body choreography or not. This is a serious business.

Congratulations to the research team: Karan Ahuja, Vivian Shen, Cathy Fang, Nathan Riopelle, Andy Kong, and Chris Harrison. You can read their full research paper here.
