“Talk to the hand.” Gesture-based technology startup SoftKinetic is flipping that phrase on its head by allowing people to let their hands, and other body parts, do the talking to their computers and TVs.

SoftKinetic, a Brussels-based technology firm with offices in Silicon Valley and Korea, is about to get a lot more visible, as consumers will soon be able to use the technology it licenses to major OEMs. Here’s why.

One of the company’s core technologies is a 3D image sensor that calculates the distance of every point in a scene, using a specialized technique called “time-of-flight.” The sensor comprehensively maps every pixel in 3D by measuring the time it takes for light to reach and return from each point in a scene.
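The time-of-flight idea boils down to one equation: distance is half the round-trip time multiplied by the speed of light. Here’s a minimal illustrative sketch of that calculation (not SoftKinetic’s implementation):

```python
# Speed of light in a vacuum, meters per second
C = 299_792_458.0

def tof_distance(round_trip_s: float) -> float:
    """Distance to a point, given the light pulse's round-trip time.

    The light travels to the point and back, so we halve the product.
    """
    return C * round_trip_s / 2.0

# A round trip of ~6.7 nanoseconds corresponds to a point about 1 meter away.
```

Doing this for every pixel, many times per second, is what turns a 2D image into a live depth map.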

SoftKinetic also makes software that analyzes data produced by such 3D cameras, both its own and those made by others — including stereoscopic and structured light cameras.

SoftKinetic DepthSense 325
SoftKinetic’s DepthSense 325 camera was recently adopted by Creative Labs.

Slap these together and you get a small device that looks something like a webcam. Connect it to a laptop, and the PC immediately becomes one that offers a perceptual, gesture-based experience.

That lets you interact with it, open files, grab objects and more — all with your hand. Rather than replacing a keyboard and mouse, the camera complements them. Raise your hand in the middle of typing and the PC immediately starts “listening” to what your fingers or hand has to say.

The camera can also be integrated directly into the bezel of a laptop, or a TV, so the viewer’s body can supplement a remote control. This can be used for casual, full-body gaming, and in therapeutic settings for patients recovering from physical ailments.

SoftKinetic full-body gesture recognition technology
Embedded in a TV, the SoftKinetic camera tracks players while filtering out the background.

But Aren’t Tablets the Future of Computing?

Using the power of GPUs, SoftKinetic has adapted its existing algorithms to an NVIDIA Tegra processor-based tablet running Android. At NVIDIA’s Emerging Companies Summit earlier this year, the company demoed a 3D camera running on the tablet, and more recently on the NVIDIA SHIELD. The result was an amazing, console-like gesture experience in a much smaller device, as shown in the video below.

The GPU’s potential for this technology doesn’t end there, according to Michel Tombroff, CEO of SoftKinetic. Most 3D cameras today have low resolution — VGA at the max. As resolution increases, the computing becomes intense. This is where GPUs get busy: Many repetitive algorithms are used to refine measurements and extract patterns from images. These need to be parallelized or the main processor won’t be able to keep up.

Tracking the specific movements of a finger or another body part is harder to parallelize, and thus not as easy to accelerate. However, filtering each pixel and removing unneeded artifacts are very computing-intensive tasks — and that is the largest portion of the work SoftKinetic does for 3D imaging.
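To see why per-pixel filtering parallelizes so well, consider that each output pixel depends only on a small, fixed neighborhood of inputs. Here’s a rough NumPy sketch of such a filter; the range cutoff and 3×3 smoothing are assumptions for illustration, not SoftKinetic’s actual pipeline:

```python
import numpy as np

def filter_depth(depth: np.ndarray, max_range_m: float = 4.0) -> np.ndarray:
    """Clean a depth map: drop pixels outside the sensor's reliable range,
    then smooth the rest with a 3x3 box average.

    Every output pixel is computed independently of the others, which is
    exactly the kind of work a GPU can spread across thousands of cores.
    """
    valid = (depth > 0) & (depth <= max_range_m)
    cleaned = np.where(valid, depth, 0.0)

    # 3x3 box filter via shifted sums -- no loop over individual pixels
    padded = np.pad(cleaned, 1, mode="edge")
    h, w = cleaned.shape
    smoothed = sum(
        padded[i:i + h, j:j + w] for i in range(3) for j in range(3)
    ) / 9.0

    # Keep only pixels that were valid to begin with
    return np.where(valid, smoothed, 0.0)
```

On a GPU, the same logic maps each pixel to its own thread; the vectorized NumPy version just makes the data parallelism explicit on a CPU.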

In the future, SoftKinetic expects to get into phones, tablets and embedded devices such as cars. Peering down from the ceiling or rearview mirror, a camera could monitor a driver’s position and take action if, say, the driver nods off. It could also let drivers control navigation and other functions with hand signals. Yet another way we’ll let our hands do the talking in the future.