5 Wild Ways Startups Are Using GPUs

by Alain Tiquet

Google and Baidu dropped some big ideas about deep learning at our GPU Technology Conference last month.

But keynote addresses from the two search giants weren’t the only show in town. Five startups took the stage in the “Show & Tell” event at GTC’s Emerging Companies Summit to demonstrate how they’re using GPUs in bold new ways.

They’re hoping to join the now-grand tradition of ECS participants that have gone on to glory. Oculus, for example, was acquired by Facebook for $2 billion, and NaturalMotion was bought by Zynga for $527 million.

Check out the five below for an early look at technology that could help change the world:

Herta Security — Barcelona-based Herta may be a small operation, but it’s big in the world of facial recognition. It has developed the world’s fastest facial recognition system, delivering results in crowded environments to customers in the security and marketing industries.

At Show & Tell, CEO Javier Rodriguez Saeta revealed his company’s technology was used at the Golden Globe Awards to nab party crashers and stalkers. For advertisers trying to reach a specific audience, Herta’s system can identify parameters such as gender, approximate age, use of glasses, and facial expressions.

Watch Herta’s system scan faces from recorded video at 12X real-time speed. It looks right through makeup and beards to identify the actors underneath:

Watch the replay of Saeta’s presentation.

Paracosm — Paracosm CEO Amir Rubin doesn’t want you to have to imagine playing Quidditch with Harry Potter. He wants you to host the game in your living room.

[Image: Paracosm’s rover ran circles, and other pathways, in a lunar demo at GTC.]

Based in Gainesville, Fla., Paracosm uses depth sensors in advanced phones and tablets, like Google’s Tegra-powered Project Tango tablet, to capture the dimensions of interior spaces. It stitches these images into a 3D map that corresponds one-to-one with the real world. With maps like these, machines can navigate our world as well as people can.

And developers can create novel immersive experiences. Imagine guided tours of museums that react to the movement of patrons. Or robots mapping caverns or other planets that are too dangerous for humans to explore. And one day perhaps an augmented reality game that blends your living room with virtual versions of Potter and crew.
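To make the stitching idea concrete, here’s a minimal sketch of the core step under a simplifying assumption: that each depth frame arrives with an already-estimated device pose. The frames, poses and stitch_point_clouds helper are illustrative, not Paracosm’s actual pipeline, which also has to estimate those poses and fuse the points into clean geometry.

```python
import numpy as np

def stitch_point_clouds(frames):
    """Accumulate per-frame depth points into a single world-frame map.

    `frames` is a list of (points, rotation, translation) tuples:
      points      -- (N, 3) array of 3D points in the camera's local frame
      rotation    -- (3, 3) camera-to-world rotation for that frame
      translation -- (3,) camera position in world coordinates
    A real reconstruction pipeline would estimate these poses (e.g. with
    SLAM) and fuse points into a surface; here we simply transform and
    concatenate them.
    """
    world_points = []
    for points, rotation, translation in frames:
        # p_world = R @ p_camera + t, applied to every point in the frame
        world_points.append(points @ rotation.T + translation)
    return np.vstack(world_points)

# Toy usage: the same flat wall seen from two poses a meter apart.
wall = np.column_stack([np.zeros(100), np.linspace(0, 2, 100), np.full(100, 3.0)])
identity = np.eye(3)
cloud = stitch_point_clouds([
    (wall, identity, np.zeros(3)),
    (wall, identity, np.array([1.0, 0.0, 0.0])),  # second viewpoint, shifted 1 m
])
print(cloud.shape)  # (200, 3) points now expressed in one shared world frame
```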

See the bot’s-eye view of Paracosm’s lunar rover demo at GTC. Watch the replay of Rubin’s presentation.

Jibo — A multi-tasker extraordinaire for the home, the cuddly Jibo family robot can take pictures or video; alert you to calendar items, voicemails and incoming texts; read books and play games with the kids; manage home automation; video conference; and more.

Jibo is packed with tech, as you’d imagine, including Wi-Fi, stereo vision, a microphone array and tactile senses on its body. But it’s also a deep learning demon, with natural language understanding and machine learning so it can perceive the world, make decisions and learn from its experience.

As a development platform, Jibo awaits applications that stretch the imagination. Learn more about the Cambridge, Mass.-based startup’s mascot in this video:

And watch the replay of the presentation by Jibo Founder and Chief Scientist Cynthia Breazeal.

Clarifai — In the future, you won’t need to tag and sort images. Artificial intelligence will do it for you, near instantly. Using the power of deep learning, Clarifai’s image recognition technology sorts through millions of images at lightning-fast speed to change the nature of visual search.

The New York startup’s latest trick is real-time video analysis. CEO Matthew Zeiler dropped a URL to a 3.5-minute-long video of outdoor scenery into the Clarifai engine. Ten seconds later all the scenes were scanned, identified and associated with predicted tags.

[Image: Clarifai’s technology uses deep learning to automatically tag images.]

The entire clip, and every other one added to the database, was now sortable frame by frame. Looking for the scene in the forest or the mountains? Or the mountains with snow or without? It’s all at your fingertips.
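As a rough sketch of that tag-and-search workflow, the snippet below runs a classifier over sampled frames, builds a tag-to-timestamp index, and lets you look up where a tag appears. The index_video and find_scenes helpers and the stand-in classifier are hypothetical; this isn’t Clarifai’s API, just the shape of the indexing step.

```python
from collections import defaultdict

def index_video(frames, tag_frame, fps=1.0):
    """Build a tag -> timestamps index for a video.

    `frames` is an iterable of decoded frames sampled at `fps` frames per
    second; `tag_frame` is any image classifier that returns a list of text
    tags for one frame (a stand-in for a deep learning model).
    """
    index = defaultdict(list)
    for i, frame in enumerate(frames):
        timestamp = i / fps
        for tag in tag_frame(frame):
            index[tag].append(timestamp)
    return index

def find_scenes(index, tag):
    """Return the timestamps (in seconds) where a tag was predicted."""
    return index.get(tag, [])

# Toy usage with a fake classifier so the example runs on its own.
fake_frames = ["frame0", "frame1", "frame2"]
fake_tags = {"frame0": ["forest"], "frame1": ["mountain", "snow"], "frame2": ["mountain"]}
index = index_video(fake_frames, lambda f: fake_tags[f], fps=1.0)
print(find_scenes(index, "mountain"))  # [1.0, 2.0]
```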

Clarifai’s tech also understands shades of meaning in human language. The tag “jaguar” will pull up the car and various kinds of the big cat, so you can explore the world visually. Try it yourself with Clarifai’s online demo.

Watch the replay of Zeiler’s presentation.

Mirriad — In an age when skipping commercials has never been easier, London-based Mirriad aims to make paid sponsorship attractive again.

Its computer vision technology relies on 21 algorithms running in parallel to tailor ad placement in video. The tech turns 2D video into 3D data, figuring out factors such as how the camera is moving and what’s in the foreground versus the background. It then inserts 3D ad placements into the video that sit and react within the scene as if they had been there at the time of filming.
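To give a feel for the final compositing step, here’s a heavily simplified sketch in OpenCV: warp a flat ad onto a quadrilateral tracked in one frame with a perspective transform, then blend it into the shot. The insert_ad helper and the hard-coded corner points are illustrative assumptions; Mirriad’s system also tracks the camera through the whole shot and accounts for what sits in front of the placement.

```python
import cv2
import numpy as np

def insert_ad(frame, ad, corners):
    """Composite a flat ad image onto a tracked quadrilateral in one frame.

    `corners` is a (4, 2) float32 array giving where the ad's four corners
    land in this frame (top-left, top-right, bottom-right, bottom-left).
    This sketch only does the per-frame perspective warp and blend.
    """
    h, w = ad.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # Homography mapping the ad's own corners onto the scene quadrilateral.
    H = cv2.getPerspectiveTransform(src, corners)
    warped = cv2.warpPerspective(ad, H, (frame.shape[1], frame.shape[0]))
    # Black out the quadrilateral in the original frame, then add the warped ad.
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, corners.astype(np.int32), 255)
    background = cv2.bitwise_and(frame, frame, mask=cv2.bitwise_not(mask))
    return cv2.add(background, warped)

# Toy usage: drop a gray "ad" onto a fixed quad in a black 720p frame.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
ad = np.full((200, 400, 3), 128, dtype=np.uint8)
quad = np.float32([[500, 300], [820, 320], [800, 480], [520, 460]])
composited = insert_ad(frame, ad, quad)
```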

Mirriad’s technology can place ads and adjust them. So, for example, when a series goes into syndication across dozens of countries, the ads can be swapped for local brands and languages.

Check out Mirriad’s work here:

And watch the replay of the presentation by Mirriad CEO Mark Popkiewicz.