Fast Autonomous Exploration
In scenarios where it might be dangerous or impossible for humans to navigate, like search and rescue, it would be great if we could send robots out to autonomously explore the environment. Completing this task requires a number of capabilities, like creating a map and planning paths through the space. Among these critical components is the ability to decide where to move next. Researchers Zhengdong Zhang, Trevor Henderson, Vivienne Sze and Sertac Karaman created a new state-of-the-art algorithm to solve this exploration problem and used the RACECAR to verify their algorithm experimentally.

Problem Setup

Before delving into the solution, let's formalize the problem. A robot is placed in an unknown environment and as it moves through that environment it creates a map. Our goal is to come up with an exploration strategy (i.e. a method to decide where to move next) so that the robot explores the space as fast as possible.

We will assume that the robot maps the space with some sort of device that can measure distance, such as a lidar, a sonar, or an RGB-D camera. In the animation on the right we show the simulated RACECAR driving around and mapping. The rainbow dots represent the distances that the car is detecting using lidar.

Since the RACECAR drives on the ground, we can represent its map with a picture, just like a normal map. If the car could fly we would need a 3D map, but let's not get into that right now. We will be using an occupancy grid as shown on the right. Each pixel on the map represents a point in space and has an associated "occupancy probability" represented by the shade of the pixel. Light pixels are most likely free, drivable space, dark pixels are most likely obstacles, and the grey pixels are unknown.
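To make the occupancy grid concrete, here is a minimal sketch in Python. The resolution, grid size, and helper names are illustrative assumptions, not the researchers' actual data structure:

```python
import numpy as np

# Each cell holds an occupancy probability in [0, 1]:
#   ~0.0 -> likely free space, ~1.0 -> likely obstacle, 0.5 -> unknown.
resolution = 0.05                  # metres per cell (assumed value)
grid = np.full((200, 200), 0.5)   # start with everything unknown

def world_to_cell(x, y, origin=(0.0, 0.0)):
    """Convert a world coordinate (metres) to a grid index."""
    return (int((y - origin[1]) / resolution),
            int((x - origin[0]) / resolution))

# Mark a small free region and an obstacle, as a map update might:
grid[world_to_cell(1.0, 1.0)] = 0.05   # likely free
grid[world_to_cell(2.0, 1.0)] = 0.95   # likely occupied
```

Shading the array by these probabilities would produce exactly the kind of light/dark/grey picture described above.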

Anytime the range sensor makes a measurement, this information, together with the car's position, is used to update the map. A map is complete when all of the reachable pixels become known as either free or occupied. Completing the map in the shortest amount of time is the goal of an exploration strategy.
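One common way to fold measurements into a cell's occupancy probability is the standard log-odds update from occupancy grid mapping. This sketch assumes a simple inverse sensor model with a made-up confidence of 0.7 per hit; it is not the specific update used in the paper:

```python
import math

def logit(p):
    """Probability -> log-odds."""
    return math.log(p / (1.0 - p))

def update_cell(p_prior, p_meas):
    """Fuse one measurement into a cell's occupancy probability."""
    l = logit(p_prior) + logit(p_meas)   # accumulate evidence in log-odds
    return 1.0 / (1.0 + math.exp(-l))    # convert back to a probability

p = 0.5                   # unknown cell
p = update_cell(p, 0.7)   # one beam endpoint suggests "occupied" -> p = 0.7
p = update_cell(p, 0.7)   # a second consistent hit -> p rises to about 0.84
```

Repeated consistent measurements push a cell toward 0 or 1, which is how the grey "unknown" regions gradually resolve into free space and obstacles.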

A Brief History of Exploration Strategies

The exploration problem has been studied for several decades now. The earliest attempts to solve it are based on "frontier exploration" (Yamauchi, 1997). In these strategies, the robot plans paths to the frontier, the boundary between free and unknown space. This method will eventually explore the space, but it certainly isn't the fastest way to do it. For example, because the robot uses range measurements, it doesn't necessarily need to drive all the way up to a frontier in order to observe what is beyond it. This is wasted time!
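A frontier, as described above, is easy to detect in an occupancy grid: a free cell that touches an unknown cell. Here is a toy detector with illustrative probability thresholds (the exact thresholds and connectivity are assumptions):

```python
import numpy as np

FREE, UNKNOWN = 0.2, 0.5   # assumed thresholds: <= FREE is free, == 0.5 is unknown

def frontier_cells(grid):
    """Return free cells adjacent (4-connectivity) to unknown cells."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] > FREE:          # only free cells can be frontiers
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers

# Tiny map: the left column has been explored and is free, the rest is unknown.
g = np.full((3, 3), 0.5)
g[:, 0] = 0.1
print(frontier_cells(g))   # -> [(0, 0), (1, 0), (2, 0)]
```

A frontier-based strategy would then plan a path to one of these cells, typically the nearest one.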

In recent years, more theoretically grounded methods have been developed based on the concept of mutual information (Julian, 2014). Mutual information is the expected amount of information that will be gained about one random process by observing another. The variables we are observing are range measurements. Range measurements made at different locations in the map will reveal different amounts of information in expectation. For example, scanning in a well-known area of the map will most likely reveal no new information. Scanning in a location that has a line of sight to a large number of unknown cells will most likely reveal a lot of information. Using this metric, the robot can decide to drive to the location that will maximize the amount of information it expects to gain.
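The intuition can be illustrated with cell entropy, which measures how uncertain each cell is (it upper-bounds what a scan of that cell could reveal; the paper uses the full mutual information rather than this proxy). The candidate locations and their visible cells below are made up for the example; a real system would ray-cast through the occupancy grid:

```python
import math

def cell_entropy(p):
    """Binary entropy (in bits) of a cell with occupancy probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Hypothetical candidate scan locations -> occupancy of the cells they would see.
candidates = {
    "well_mapped_corner": [0.02, 0.05, 0.97],       # mostly certain cells
    "edge_of_unknown":    [0.5, 0.5, 0.5, 0.5],     # all unknown cells
}

best = max(candidates,
           key=lambda loc: sum(cell_entropy(p) for p in candidates[loc]))
print(best)   # -> edge_of_unknown
```

Unknown cells (p = 0.5) each carry a full bit of uncertainty, so the location facing unknown space wins, matching the line-of-sight intuition above.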

Let M be a map and Z be a range measurement made from a particular location that intersects cells 1 through n of the map. The mutual information between the map and the range measurement can be computed using the following formula:
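The formula itself did not survive here; as a stand-in, this is the generic Shannon decomposition of mutual information (not necessarily the exact form printed in the paper), assuming the cells m_1, ..., m_n are independent and the measurement Z takes outcomes z_j:

```latex
I(M; Z) \;=\; H(M) - H(M \mid Z)
       \;=\; \sum_{i=1}^{n} H(m_i)
       \;-\; \sum_{j} P(Z = z_j) \sum_{i=1}^{n} H(m_i \mid Z = z_j)
```

A beam that intersects n cells has on the order of n possible outcomes z_j, so the sum over outcomes nested inside the sum over cells is the double loop over roughly n^2 terms.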

Without going into the details of this equation (they are here if you want them), it is clear by inspection that computing this formula is computationally expensive. The double loop makes computation take quadratic time. That's extremely slow, and it's only for a single range measurement! Computing the mutual information for all possible scanning locations in the map would take quartic time. Faster information metrics, like Cauchy-Schwarz Quadratic Mutual Information (CSQMI), have been proposed (Charrow, 2015), but these metrics don't have provable guarantees.

FSMI: Fast Shannon Mutual Information

The contribution made by Zhang et al. is an algorithm that computes the mutual information formula above in linear time. Using some clever rearranging, the double loop over n^2 elements can be turned into a single loop over n * Delta elements for some small constant Delta.

Again, we won't go into the details, but you can find them in the full paper here. Computing mutual information in this form is orders of magnitude faster than the original formulation. It's even faster than approximate methods like CSQMI.

RACECAR Experiments

We tested our new algorithm in the real world using the RACECAR. In the video below, the image in the top right depicts the map and a set of potential paths that the car is deciding between. Bright green paths are selected because they have the highest mutual information reward. In the bottom right, we show the mutual information surface; bright spots have the highest mutual information. The car is in a motion capture room, which gives it perfect localization. Path planning is done with RRT* (Karaman, 2011), using Reeds-Shepp curves (Reeds, 1990) as the steering function. Control is done using pure pursuit (Coulter, 1992).
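For readers unfamiliar with the last piece of that pipeline, here is a minimal pure-pursuit steering sketch in the style of Coulter (1992). The wheelbase and the bicycle-model conversion are generic textbook assumptions, not the parameters used in the experiments:

```python
import math

def pure_pursuit_steer(pose, goal, wheelbase=0.33):
    """Steering angle driving pose = (x, y, heading) toward a lookahead goal = (gx, gy)."""
    x, y, theta = pose
    gx, gy = goal
    # Bearing to the lookahead point, expressed in the car's frame.
    alpha = math.atan2(gy - y, gx - x) - theta
    lookahead = math.hypot(gx - x, gy - y)
    # Pure pursuit commands the arc through the goal: curvature = 2 sin(alpha) / lookahead.
    kappa = 2.0 * math.sin(alpha) / lookahead
    # Convert curvature to a steering angle via a kinematic bicycle model.
    return math.atan(wheelbase * kappa)

print(pure_pursuit_steer((0, 0, 0), (1.0, 0.0)))       # goal dead ahead -> 0.0
print(pure_pursuit_steer((0, 0, 0), (1.0, 0.5)) > 0)   # goal to the left -> True
```

The controller repeatedly picks a lookahead point on the planned Reeds-Shepp path and steers along the arc that passes through it.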