The current generation of autonomous drone navigation and flight-path planning systems is almost too precise, demanding hundreds of measurements so that the UAV knows exactly where it is in space at any given moment. If those readings are off by even a little, the drone is headed for an impact. What’s more, all that data collection is computationally intensive, especially for smaller drones, where space and weight are limited.
The new NanoMap system from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), however, strikes the right balance between accuracy and speed. With it, drones can navigate obstacle-dense environments — think forests or Amazon fulfillment centers — at up to 20 mph. Simply put, the system doesn’t sweat the details.
Unlike other common mapping approaches, such as simultaneous localization and mapping (SLAM), which are data-intensive and difficult to run in real time, NanoMap uses depth sensing to measure only the drone’s immediate surroundings. This lets the drone understand roughly where it is in relation to obstacles and anticipate how it will need to change course to avoid them.
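To make that idea concrete, here is a minimal Python sketch of the general approach: instead of building a global map, keep only a short history of depth measurements, each expressed relative to the drone’s current position. The class, method names, and simplifications below are illustrative, not NanoMap’s actual code; rotation is ignored for brevity.

```python
from collections import deque

import numpy as np


class LocalDepthMap:
    """Toy sketch of a NanoMap-style local map: a short window of
    recent depth measurements (point clouds), each stored with the
    drone's motion since it was captured, rather than a global map."""

    def __init__(self, history=10):
        # Each entry is (points, offset): a cloud and how far the
        # drone has moved since that cloud was recorded.
        self.history = deque(maxlen=history)

    def add_measurement(self, points, motion_since):
        # Accumulate the latest motion onto every stored measurement
        # so each cloud stays expressed relative to the current pose.
        self.history = deque(
            [(pts, off + motion_since) for pts, off in self.history],
            maxlen=self.history.maxlen,
        )
        self.history.appendleft((np.asarray(points, float), np.zeros(3)))

    def nearest_obstacle(self):
        # Distance from the drone (origin of the current frame) to the
        # closest point seen anywhere in the recent window.
        dists = [
            np.linalg.norm(pts - off, axis=1).min()
            for pts, off in self.history
        ]
        return min(dists) if dists else float("inf")
```

Because old measurements age out of the deque and everything is relative to the drone itself, the map stays tiny and cheap to query, which is the trade-off the article describes: local awareness instead of exact global position.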
“The key difference to previous work is that the researchers created a map consisting of a set of images with their position uncertainty rather than just a set of images and their positions and orientation,” said Sebastian Scherer, a systems scientist at Carnegie Mellon University’s Robotics Institute, in an MIT release. “Keeping track of the uncertainty has the advantage of allowing the use of previous images even if the robot doesn’t know exactly where it is, and allows for improved planning.”
This uncertainty is surprisingly helpful. Without factoring it into its modeling, MIT’s test drone crashed roughly 25 percent of the time whenever it drifted more than 5 percent away from where it expected to be. By incorporating that uncertainty, the MIT team reduced crashes to just 2 percent of flights.
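A toy sketch shows why modeling uncertainty pays off: if the planner demands extra clearance in proportion to how unsure it is about the drone’s position, a gap that “just fits” under a perfectly trusted pose estimate gets flagged as risky. The function, parameters, and numbers below are illustrative assumptions, not the actual NanoMap planner.

```python
def is_path_safe(obstacle_distance, drone_radius, pose_std, n_sigma=2.0):
    """Hypothetical uncertainty-aware safety check: require clearance
    for the drone's body plus a margin proportional to the standard
    deviation of its position estimate, so modest drift no longer
    turns a near-miss into a crash."""
    required_clearance = drone_radius + n_sigma * pose_std
    return obstacle_distance > required_clearance


# A 0.6 m gap looks safe for a 0.5 m drone if the pose is trusted exactly...
naive = is_path_safe(0.6, drone_radius=0.5, pose_std=0.0)    # True
# ...but is flagged as risky once ~10 cm of position drift is assumed.
careful = is_path_safe(0.6, drone_radius=0.5, pose_std=0.10)  # False
```

The margin term is what the naive planner lacks: it treats the pose estimate as exact, so any drift beyond the remaining 0.1 m of clearance causes the kind of collision the article describes.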