Startup Aids Visually Impaired with Guided Service Powered by AI

Aira helps people navigate through everyday situations with smart glasses.
by Scott Martin

It all started with a blind communications professional.

Brothers Sujeeth and Suman Kanuganti, tech industry veterans, wondered whether they could harness Google Glass and AI to develop an OnStar-like service (the operator-assistance service for auto emergencies) for a new friend who is visually impaired.

Today that idea has morphed into Aira, an AI-guided service delivered through smart glasses that helps people with impaired vision navigate and perform many daily tasks more easily.

“We can detect barcodes, identify products, read product labels and provide a voice-first experience,” said Sujeeth Kanuganti, Aira co-founder and CTO, previously a Cisco engineer.

Suman, previously at Intuit, is CEO of Aira, a San Diego-based company and member of the NVIDIA Inception virtual accelerator program.

AI for Eyes

Airports are a slog for most. But imagine being blind and trying to get through security and find your gate to catch a plane. Shopping and finding products on a retailer’s shelves is similarly difficult.

Aira’s founders focused on solving these types of everyday problems.

Through its AI-powered platform and smart glasses, Aira connects people to a network of service agents for assistance. Agents can see what the blind person sees, enabling assistance in just about anything.

Aira’s Horizon Smart Glasses sport a forward-facing camera and audio capabilities to guide the blind, linking to a smartphone via a USB cord to tap into GPS, connectivity and the Aira app.

The camera can capture video of what the user is facing, and Aira’s AI assistant, Chloe, can help identify images, such as a box of cereal at the grocery store or labels and words on prescription medication bottles.

Chloe can also handle other simple tasks locally on the device. For instance, it can provide audio guidance for lining up and capturing an image of a check so users can deposit it into their bank account through a bank’s smartphone app.

To help customers with more complex tasks, human agents are accessible at a tap of a button on the glasses or app. The agents communicate directly with Aira users via phone, providing real-time visual information with audio support.

Unique AI Training

The more complex image recognition and natural language processing tasks are sent to Aira’s convolutional neural networks and recurrent neural networks running inference on NVIDIA GPUs on AWS. Aira began training on an NVIDIA TITAN V GPU and now uses an array of RTX 2080 Ti GPUs, allowing it to train its deep neural networks on massive amounts of data.
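Aira hasn’t published its model architectures, but the core building block of the convolutional networks mentioned above is a small filter slid across an image to detect local patterns such as edges. A minimal, illustrative pure-Python sketch (the image, filter and values here are made up for demonstration):

```python
def conv2d(image, kernel):
    """Naive 2D cross-correlation: slide the kernel over the image
    and sum the element-wise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = [[0.0] * (w - kw + 1) for _ in range(h - kh + 1)]
    for i in range(h - kh + 1):
        for j in range(w - kw + 1):
            out[i][j] = sum(
                image[i + r][j + c] * kernel[r][c]
                for r in range(kh)
                for c in range(kw)
            )
    return out

# A tiny synthetic "image" with a vertical edge between columns 1 and 2,
# and a 3x3 vertical-edge filter that responds strongly to that edge.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]
response = conv2d(image, kernel)  # strong response where the edge lies
```

Production networks stack many such learned filters and run them on GPUs, where the per-position multiply-accumulate work parallelizes well.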

And Aira has an extraordinarily well-labeled set of data, according to Sujeeth Kanuganti.

A lot of models built for object recognition are trained on images from ImageNet, Instagram or other public sources. There’s no comparison to training on real-world footage annotated by Aira’s service agents, said Sujeeth. The startup has amassed 3 million minutes of annotated service footage.

“This will make it more sophisticated than other technologies out there,” he said.

Aira’s Access service is available for free at many locations, including all Walgreens, Wegmans and AT&T stores, as well as more than 30 domestic and international airports. Aira Access partners pick up the tab for agent-guided assistance, which can be accessed via smart glasses or smartphone app.

Aira is available in all 50 U.S. states and in six countries, around the clock every day.

Photo credit: Aira