What Is Simultaneous Localization and Mapping?

SLAM is a commonly used method to help robots map areas and find their way.
by Scott Martin

To get around, robots need a little help from maps, just like the rest of us.

Just like humans, bots can’t always rely on GPS, especially when they operate indoors. And even outdoors, GPS on its own isn’t accurate enough, because moving about safely requires precision within a few inches.

Instead, they rely on what’s known as simultaneous localization and mapping, or SLAM, to discover and map their surroundings.

Using SLAM, robots build their own maps as they go. They determine their position by aligning newly collected sensor data with the data they’ve already gathered, building out a map for navigation.

Sounds easy enough, but it’s actually a multi-stage process that includes alignment of sensor data using a variety of algorithms well suited to the parallel processing capabilities of GPUs.

There are many forms of SLAM, which has been around since the 1980s. For the purpose of this article, we’ll focus on its application within NVIDIA Isaac for robotics.

Sensor Data Alignment 

Computers see a robot’s position as simply a timestamped point on a map or timeline.

Robots continuously gather split-second sensor data on their surroundings. Camera images are captured as many as 90 times a second for depth-image measurements. And LiDAR images, used for precise range measurements, are taken 20 times a second.

When a robot moves, these data points help measure how far it’s gone relative to its previous location and where it is located on a map.

Motion Estimation

In addition, what’s known as wheel odometry takes into account the rotation of a robot’s wheels to help measure how far it’s traveled. Inertial measurement units are also used to gauge speed and acceleration as a way to track a robot’s position.
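As a rough illustration, here’s how wheel odometry might integrate encoder readings into a 2D pose for a differential-drive robot. This is a hedged sketch in plain Python — the constants and function are illustrative assumptions, not part of the Isaac SDK:

```python
import math

# Illustrative differential-drive wheel odometry, not Isaac SDK code.
TICKS_PER_REV = 1024   # encoder resolution (assumed)
WHEEL_RADIUS = 0.05    # wheel radius in meters (assumed)
WHEEL_BASE = 0.30      # distance between the wheels in meters (assumed)

def wheel_odometry(x, y, theta, left_ticks, right_ticks):
    """Integrate one encoder reading into the robot's 2D pose (x, y, theta)."""
    # Convert encoder ticks to distance traveled by each wheel.
    left = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
    right = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV
    dist = (left + right) / 2              # forward motion of the robot's center
    dtheta = (right - left) / WHEEL_BASE   # change in heading
    # Update the pose, assuming motion along the average heading of this step.
    x += dist * math.cos(theta + dtheta / 2)
    y += dist * math.sin(theta + dtheta / 2)
    return x, y, theta + dtheta
```

With equal tick counts on both wheels, the robot drives straight: one full revolution of both wheels moves it forward by one wheel circumference.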

All of these sensor streams are taken into consideration in what’s known as sensor fusion to get a better estimate of how a robot is moving.

Kalman filter algorithms and particle filter algorithms — the latter relying on sequential Monte Carlo methods — are the sophisticated math typically used to fuse these sensor inputs.
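To make the idea concrete, here’s a minimal one-dimensional Kalman filter fusing an odometry-based motion prediction with a noisy position measurement. The noise values and function names are illustrative assumptions, not Isaac code:

```python
# Illustrative 1D Kalman filter for sensor fusion -- not Isaac SDK code.
def kalman_step(x, p, u, z, q=0.01, r=0.1):
    """One predict/update cycle.
    x, p : current position estimate and its variance
    u    : predicted displacement this step (e.g. from wheel odometry)
    z    : measured position (e.g. from a range sensor)
    q, r : process and measurement noise variances (assumed values)
    """
    # Predict: move by u; uncertainty grows by the process noise.
    x_pred = x + u
    p_pred = p + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new
```

Each cycle shrinks the variance `p`, reflecting how fusing two imperfect sources yields an estimate better than either alone.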

Sensor Data Registration 

Sensor data registration — the alignment of one set of measurements with another — can happen between two measurements or between a measurement and a map.

Using the NVIDIA Isaac SDK, developers can localize a robot with what’s known as scan-to-map matching. Also in the SDK is an algorithm from NVIDIA researchers called HGMM, or Hierarchical Gaussian Mixture Model, that can align two point clouds (a large set of data points in space) taken from different points of view.
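HGMM itself is beyond a short snippet, but the core registration step — recovering the rigid transform between two point sets taken from different viewpoints — can be sketched in 2D when correspondences are known. This is a deliberate simplification of what registration algorithms do, not the Isaac implementation:

```python
import math

# Illustrative rigid 2D alignment with known correspondences
# (the inner step of registration methods such as ICP).
def align_2d(src, dst):
    """Return (theta, tx, ty) mapping src points onto dst points."""
    n = len(src)
    # Centroids of both point sets.
    cx_s = sum(p[0] for p in src) / n
    cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n
    cy_d = sum(p[1] for p in dst) / n
    # Accumulate terms for the optimal rotation (2D Kabsch).
    sin_sum = 0.0
    cos_sum = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        sx, sy = sx - cx_s, sy - cy_s
        dx, dy = dx - cx_d, dy - cy_d
        cos_sum += sx * dx + sy * dy
        sin_sum += sx * dy - sy * dx
    theta = math.atan2(sin_sum, cos_sum)
    # Translation takes the rotated source centroid onto the target's.
    tx = cx_d - (cx_s * math.cos(theta) - cy_s * math.sin(theta))
    ty = cy_d - (cx_s * math.sin(theta) + cy_s * math.cos(theta))
    return theta, tx, ty
```

Real point-cloud registration must also discover the correspondences, which is where methods like HGMM earn their keep.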

Bayesian filters are applied to mathematically solve where the robot is located, using the continuous stream of sensor data and motion estimates.
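Here’s a toy discrete Bayes filter localizing a robot along a one-dimensional corridor of grid cells, showing the predict/update cycle in miniature. The real filters operate on continuous poses; this sketch is purely illustrative:

```python
# Illustrative discrete Bayes filter on a 1D grid -- not Isaac SDK code.
def predict(belief, move):
    """Shift the belief by `move` cells (circular world for simplicity)."""
    n = len(belief)
    return [belief[(i - move) % n] for i in range(n)]

def update(belief, likelihood):
    """Weight the belief by the measurement likelihood and normalize."""
    posterior = [b * l for b, l in zip(belief, likelihood)]
    total = sum(posterior)
    return [p / total for p in posterior]
```

Starting from a uniform prior, a single measurement that strongly favors one cell concentrates the belief there; a motion step then shifts that belief along the corridor.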

GPUs for Split-Second Calculations

The mapping calculations described above happen 20 to 100 times a second, depending on the algorithms. Performing them in real time wouldn’t be possible without the processing power of NVIDIA GPUs. Ideal for robotics, Jetson AGX Xavier delivers 32 TOPS of workstation-like performance in a compact package.

The massive number-crunching task of aligning point clouds or depth images can be done on NVIDIA GPUs as much as 20 times faster than with CPUs.

Think of Jetson Nano as a similarly giant performance leap for the maker crowd and others.

Visual Odometry for Localization 

Visual odometry is used to recover a robot’s location and orientation using video as the only input.

NVIDIA Isaac supports stereo visual odometry — using two cameras — that works in real time at a minimum of 30 frames per second to help track a robot’s location. It’s available for use on all products powered by our compact Jetson supercomputing modules.

Using the stereo visual odometry capabilities that come standard in Isaac, robotics developers can accurately calculate a robot’s location and use this for navigation.
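As a rough sketch of how per-frame visual odometry estimates become a global track, here’s how body-frame motion increments can be composed into a world-frame pose. The function is an illustrative assumption, not an Isaac API:

```python
import math

# Illustrative pose composition for visual odometry -- not Isaac SDK code.
def compose(pose, delta):
    """Apply a body-frame increment (dx, dy, dtheta) to a global pose."""
    x, y, theta = pose
    dx, dy, dtheta = delta
    # Rotate the increment into the world frame before adding it.
    x += dx * math.cos(theta) - dy * math.sin(theta)
    y += dx * math.sin(theta) + dy * math.cos(theta)
    return x, y, theta + dtheta
```

Driving a square — four steps of “one meter forward, then turn 90 degrees” — brings the composed pose back to the starting point, which also hints at why small per-step errors accumulate into drift over long runs.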

Visual odometry capabilities are packed into our Jetson Nano Developer Kit. (Visual odometry isn’t part of Isaac for SLAM just yet.)

Future Isaac development will integrate visual odometry and elevate it to the level of SLAM. For now, SLAM serves as a check, recovering a robot’s location and orientation from the map to eliminate navigation errors caused by inaccurate visual odometry results.

Map Building for Localization

Maps can be created in three different ways.

One way is for mapping algorithms to be run on the Jetson device while somebody supervises and drives the robot manually.

A second way is to have the Isaac application on the robot stream data to the Isaac application running the mapping algorithms on a workstation.

But a third, and recommended, method is to record the lidar scan and odometry data to a file using Isaac’s handy recorder widget. That way, mapping can be done offline using the logmapping application. This method allows tuning the mapping algorithms’ parameters for optimized maps without driving a robot around again and again.

For creating maps to localize and navigate, version 2019.1 of the NVIDIA Isaac SDK supports OpenSLAM’s GMapping and Google’s Cartographer algorithms.

The modularity of Isaac also enables users to integrate other third-party libraries of their choice or plug in their own implementations. Isaac feeds 2D range scan data, obtained using a lidar or depth camera, to these mapping algorithms. It also supplies odometry information computed from wheel speeds, inertial measurement unit data and computer vision.
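For illustration, here’s how a lidar’s polar range readings might be converted into the 2D Cartesian points a mapping algorithm consumes. The parameters are assumptions, not Isaac defaults:

```python
import math

# Illustrative lidar scan conversion -- not Isaac SDK code.
def scan_to_points(ranges, angle_min, angle_step, max_range=25.0):
    """Turn (range, bearing) readings into Cartesian points,
    dropping returns beyond the sensor's usable range."""
    points = []
    for i, r in enumerate(ranges):
        if r <= 0.0 or r > max_range:
            continue  # no return, or out of range
        a = angle_min + i * angle_step
        points.append((r * math.cos(a), r * math.sin(a)))
    return points
```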

Occupancy Grid (for LiDAR SLAM)

As a robot perceives its surroundings using LiDAR or cameras, Isaac creates an occupancy grid map of the robot’s environment at a resolution determined by the user. This 2D “local map” indicates whether each cell is blocked or free, so the robot can plan its navigation path accordingly.
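A minimal sketch of how one lidar beam might update such a grid: cells along the beam are marked free, and the cell containing the hit is marked occupied. Real implementations typically accumulate log-odds per cell; this simplified version just writes 0.0 (free), 1.0 (occupied) or leaves the 0.5 unknown prior:

```python
# Illustrative occupancy grid update for a single beam -- not Isaac SDK code.
def mark_beam(grid, x0, y0, x1, y1):
    """Trace a beam from (x0, y0) to the hit at (x1, y1) in cell
    coordinates, using integer line stepping (Bresenham)."""
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx - dy
    x, y = x0, y0
    while (x, y) != (x1, y1):
        grid[y][x] = 0.0          # free space along the beam
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    grid[y1][x1] = 1.0            # occupied cell at the hit
    return grid
```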

Well-constructed occupancy grids from Isaac are key to fast, natural and reliable obstacle-avoidance in the Isaac navigation stack.