Reading a food label. Navigating a crosswalk. Recognizing a friend. These tasks are easy for most people, but can be difficult for those who are visually impaired.
To bring more independence to the lives of people with limited sight, a new wearable device called Horus uses GPU-powered deep learning and computer vision to help its users “see” by describing what they are looking at.
Its maker, a Swiss startup called Eyra, announced this week at GTC DC in Washington that the device will soon be available through an early access program in Italy. Alpha testers describe Horus as a life-changing device, said Saverio Murgia, CEO and co-founder of Eyra.
Horus Trials Set for January
Eyra has started trials of Horus with the Italian Union of Blind and Partially Sighted People; the device speaks Italian and also supports English and Japanese. Feedback from early testers, most of whom will receive the device in January, will be used to improve Horus before a wider release later this year.
Worn like a headset, the device pairs an NVIDIA Tegra K1 with two cameras and onboard sensors; the GPU accelerates the computer vision and deep learning algorithms that process, analyze and describe what the cameras see.
The headset delivers audio through bone conduction rather than the ear canal, so users can hear the verbal descriptions even in noisy environments. The battery and GPU are housed in a box roughly the size of a smartphone, and the device will cost about $2,000.
Wearable Device for Blind and Visually Impaired
There’s no question that the award Eyra won at GTC sped its progress from prototype to product, Murgia said. The publicity alone was a plus for its recruiting efforts.
“Everyone is trying to hire deep learning engineers,” Murgia said. “Getting publicity at GTC made a big difference in our success attracting candidates.”
Deep Learning Makes a Difference
Since then, the company has worked to make the device smaller, faster and more stable. Eyra also became part of our Inception Program, a “virtual incubator” that assists startups advancing artificial intelligence and data science.
Murgia said Eyra used the Tegra K1’s GPU, cuDNN and our CUDA parallel computing platform to speed up training of the deep neural networks that identify imagery. The company also used the Tegra K1 and CUDA for inference – that is, deploying the trained network in the real world.
“Seeing the faces of people who try Horus for the first time drives our passion,” Murgia said. “It shows we’re making a real difference in people’s lives.”