By day, Akira Fukabori and Kevin Kajitani worked at Japan’s largest airline holding company; by evening, the friends liked to develop new concepts. Last year they convinced the company’s board of directors to take a big leap on one of their ideas: robots as a service.
The aerospace engineers enlisted a partner, Charith Fernando, a roboticist, and soon had spun out a robotics company from All Nippon Airways owner ANA Holdings.
Formed in 2020, Tokyo-based avatarin now has more than a hundred telepresence robots deployed globally, including on-demand robots installed at four museums across Japan.
Robots as a service, or RaaS, is an emerging business model that minimizes the costs and commitment for businesses to deploy robots. Using this model, avatarin offers its robots to businesses, which make them available like on-demand scooters.
avatarin’s robot, dubbed the “newme,” allows people to book a robot ticket for a set time, date and place. Relying on the NVIDIA Jetson edge AI platform for compact supercomputing, newme can be steered remotely from a home computer, providing a low-latency, high-definition tour of sights such as museums and aquariums.
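The ticketing model above amounts to reserving a specific robot for a time slot at a place. Here’s a minimal sketch of how such a reservation ledger might reject overlapping bookings; the names (`Booking`, `BookingLedger`, `reserve`) are hypothetical illustrations, not avatarin’s actual API.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Booking:
    robot_id: str   # which physical robot is reserved
    location: str   # e.g. a museum name
    start: datetime
    end: datetime

class BookingLedger:
    """Tracks reservations and rejects overlapping slots per robot."""
    def __init__(self):
        self.bookings = []

    def reserve(self, booking: Booking) -> bool:
        # Two slots for the same robot conflict if their intervals overlap.
        for b in self.bookings:
            if (b.robot_id == booking.robot_id
                    and booking.start < b.end
                    and b.start < booking.end):
                return False  # slot already taken
        self.bookings.append(booking)
        return True

ledger = BookingLedger()
ok = ledger.reserve(Booking("newme-01", "Hakone museum",
                            datetime(2022, 5, 1, 10, 0),
                            datetime(2022, 5, 1, 10, 30)))
clash = ledger.reserve(Booking("newme-01", "Hakone museum",
                               datetime(2022, 5, 1, 10, 15),
                               datetime(2022, 5, 1, 10, 45)))
```

A production system would add user identity, payment and cancellation, but the core invariant is the same: one robot, one navigator per time slot.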
“This is just another step in the sharing economy, like on-demand scooters, providing consumers virtual access to mobility and our customers higher utilization of robotic resources,” said Kajitani, COO of avatarin.
Parent company ANA Holdings has sky-high ambitions — like putting telepresence robots on space missions. The firm early on sponsored an XPRIZE challenge to boost the work of Kajitani and Fukabori, CEO of avatarin.
As a traditional conglomerate, ANA Holdings is making an unusually bold bet on robots as a service, offering a glimpse at the future of corporate innovation and robotics.
The stakes are high. The global robotics market was estimated at $27.7 billion in 2020, a figure that is forecast to reach $74.1 billion by 2026, according to research firm Mordor Intelligence.
Robots as a Service
Demand for robots is growing across industries. That has only accelerated, spurred by workforce shortages from COVID-19 lockdowns, according to Mordor. Robots are being deployed to minimize human-to-human contact and lessen COVID-19 risks, whether for healthcare, food delivery or manufacturing.
avatarin’s newme robots are installed at museums in Japan, such as the Venetian Glass Museum in Hakone, Kanagawa Prefecture, to help accommodate visitors who can’t physically make it to the museum. The robots have a front-facing LED screen so that navigators can appear as an avatar for interactions with people.
The newme sports front-facing 2K stereo cameras for depth perception and streaming, bringing remote users lifelike views of places and people. A second, foot-facing camera helps users steer the robot through its surroundings. Video is processed on the NVIDIA Jetson Xavier NX, delivering crisp visuals at 60 frames per second for virtual interactions and AI tasks.
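Stereo depth perception works by triangulation: a feature seen by both cameras appears shifted (the disparity) between the two images, and depth is focal length times baseline divided by that disparity. A minimal sketch, with assumed calibration values (the real numbers come from the camera pair, not from the article):

```python
import numpy as np

# Stereo triangulation: depth = f * B / d
# f: focal length in pixels, B: baseline between lenses in meters,
# d: per-pixel disparity in pixels. Values below are illustrative.
focal_px = 1400.0    # assumed focal length
baseline_m = 0.06    # assumed 6 cm lens separation

def disparity_to_depth(disparity_px: np.ndarray) -> np.ndarray:
    """Convert per-pixel disparity to metric depth.

    Zero disparity means the point is effectively at infinity."""
    d = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(d, np.inf)
    nonzero = d > 0
    depth[nonzero] = focal_px * baseline_m / d[nonzero]
    return depth

# A feature shifted by 42 px between the views: 1400 * 0.06 / 42 = 2.0 m.
depth_m = disparity_to_depth(np.array([42.0]))
```

Estimating the disparity map itself (by matching patches between the left and right images) is the expensive step, which is where GPU-accelerated processing on the Jetson comes in.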
The robot can run for six hours on a full charge, thanks to the energy efficiency of the Jetson Xavier NX, which delivers up to 21 trillion operations per second at just 15 watts.
avatarin is running robot service pilots with partners in retail, tourism and education.
The robot can get around on its own, as well, so it can autonomously return to its charging station to juice up for the next user.
avatarin enables this autonomy with simultaneous localization and mapping (SLAM), which lets each robot generate its own indoor map of an environment and navigate within it.
The company had to switch from a CPU-based system to NVIDIA GPUs to support SLAM and future AI ambitions for newme. “The higher frame rates of using NVIDIA GPUs on SLAM make it better to get accurate point cloud information for these maps,” said Fernando, avatarin’s CTO.
SLAM enables robots to use their sensors to build maps, or point clouds, as they move about. By matching new sensor data against the data already collected, a robot can locate its position on the map even as that map is being created.
Deploying SLAM is a data-intense, multistage process that requires alignment of sensor data using a variety of algorithms and the parallel processing capabilities of GPUs.
“We see a future where people can virtually experience almost any destination in the world, or even the universe or the metaverse, with newme, possibly communicating in real time in just about any language using AI,” said Fukabori.