NASA AI will steer by landmarks – on the Moon

Just as familiar landmarks can give travelers a sense of direction when their smartphones lose their lock on GPS signals, a NASA engineer is teaching a machine to use features on the Moon’s horizon to navigate across the lunar surface.
“For safety and science geotagging, it is important for explorers to know exactly where they are as they explore the lunar landscape,” said Alvin Yew, a research engineer at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “Equipping an onboard device with a local map will support any mission, whether robotic or human.”
NASA is currently working with industry and international agencies to develop LunaNet, a communications and navigation architecture that will bring “Internet-like” capabilities to the Moon, including location services.
However, explorers in some regions of the lunar surface may need overlapping navigation solutions from multiple sources to stay safe when communication signals are unavailable.
“It’s critical to have reliable backup systems when we’re talking about human exploration,” Yew said. “The motivation for me was to enable lunar crater exploration, where the entire horizon would be the crater rim.”
Yew started with data from NASA’s Lunar Reconnaissance Orbiter, specifically its Lunar Orbiter Laser Altimeter (LOLA). LOLA measures slopes and lunar surface roughness and generates high-resolution topographic maps of the Moon. Using LOLA’s digital elevation models, Yew plans to train an artificial intelligence to recreate features on the lunar horizon as they would appear to an explorer standing on the lunar surface. Those digital panoramas can then be correlated with boulders and ridges visible in photographs taken by a rover or astronaut, providing accurate location identification for any given region.
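The release does not include code, but the geometry behind the idea can be sketched in a few lines. The NumPy example below is an illustrative assumption, not Yew’s implementation: it renders the horizon elevation-angle profile a candidate position on a digital elevation model would produce, then picks the candidate whose rendered horizon best matches a profile “observed” from the true spot.

```python
# Illustrative sketch only -- not Yew's actual code. Assumed inputs: a local
# digital elevation model (DEM) as a 2-D NumPy array of heights in meters,
# its grid spacing, and a horizon profile extracted from a surface photo.

import numpy as np

MOON_RADIUS_M = 1_737_400.0  # mean lunar radius


def horizon_profile(dem, res_m, obs_row, obs_col, eye_height_m=2.0, n_az=360):
    """Maximum terrain elevation angle (radians) in each azimuth bin,
    as seen from grid cell (obs_row, obs_col)."""
    rows, cols = np.indices(dem.shape)
    dx = (cols - obs_col) * res_m
    dy = (rows - obs_row) * res_m
    dist = np.hypot(dx, dy)
    dist[obs_row, obs_col] = np.inf            # ignore the observer's own cell

    eye = dem[obs_row, obs_col] + eye_height_m
    drop = dist**2 / (2.0 * MOON_RADIUS_M)     # crude curvature correction
    elev = np.arctan2(dem - eye - drop, dist)
    elev[obs_row, obs_col] = -np.pi / 2

    azimuth = np.arctan2(dx, dy) % (2.0 * np.pi)
    bins = (azimuth / (2.0 * np.pi) * n_az).astype(int) % n_az

    profile = np.full(n_az, -np.pi / 2)
    np.maximum.at(profile, bins.ravel(), elev.ravel())   # per-azimuth maximum
    return profile


def match_score(rendered, observed):
    """Negative RMS mismatch, tried at every azimuth shift because the
    explorer's heading is unknown. Higher is better."""
    return max(-np.sqrt(np.mean((rendered - np.roll(observed, s)) ** 2))
               for s in range(len(observed)))


if __name__ == "__main__":
    # Toy terrain and a toy "photo-derived" horizon from a known spot.
    rng = np.random.default_rng(0)
    dem = rng.normal(0, 40, (200, 200)).cumsum(axis=0).cumsum(axis=1) / 50.0
    true_pos = (120, 80)
    observed = horizon_profile(dem, 100.0, *true_pos)

    # Brute-force search over candidate cells; the best match should land
    # on (or next to) the true observer position.
    candidates = [(r, c) for r in range(40, 160, 20) for c in range(40, 160, 20)]
    best = max(candidates,
               key=lambda rc: match_score(horizon_profile(dem, 100.0, *rc), observed))
    print("true position:", true_pos, " best candidate:", best)
```

In a real system the brute-force search would be replaced by whatever trained model and search strategy the team adopts; the sketch only shows why a horizon profile can act as a location fingerprint.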
“Conceptually, it’s like going outside and trying to figure out where you are by examining the horizon and surrounding landmarks,” Yew said. “While a ballpark location estimate may be easy for a person, we want to demonstrate on-the-ground accuracy down to less than 30 feet (9 meters). This accuracy opens the door to a wide range of mission concepts for future exploration.”
To use LOLA data efficiently, a handheld device could be programmed with only a local subset of terrain and elevation data, saving memory. According to work published by Goddard researcher Erwan Mazarico, a lunar rover can see at most about 180 miles (300 kilometers) from any unobstructed spot on the Moon, so a local map only needs to cover terrain within that radius. Even on Earth, Yew’s location technology could help explorers in terrain where GPS signals are obstructed or subject to interference.
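Mazarico’s figure comes from analysis of actual LOLA topography; purely as a back-of-the-envelope check of the geometry, a smooth-sphere line-of-sight estimate with illustrative heights lands in the same range:

```python
# Smooth-sphere estimate only; the ~300 km figure itself comes from
# Mazarico's analysis of real LOLA topography.
import math

R_MOON_KM = 1737.4  # mean lunar radius

def horizon_km(h_km: float) -> float:
    """Distance to the horizon from height h above a smooth sphere."""
    return math.sqrt(2.0 * R_MOON_KM * h_km + h_km**2)

print(f"camera 2 m above flat ground: {horizon_km(0.002):5.1f} km")    # ~2.6 km
# Two features a few kilometers above the local mean (large crater rims,
# massifs) can be mutually visible across hundreds of kilometers:
print(f"two 5-km-high features:       {horizon_km(5.0) * 2:5.0f} km")  # ~264 km
```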
Yew’s geolocation system will draw on the capabilities of GIANT (the Goddard Image Analysis and Navigation Tool). Developed primarily by Goddard engineer Andrew Liounis, this optical navigation tool previously independently verified navigation data for NASA’s OSIRIS-REx mission to collect a sample from asteroid Bennu (see CuttingEdge, Summer 2021).
Unlike radar or laser ranging tools, which bounce radio signals or pulses of light off a target and analyze the returns, GIANT rapidly and accurately analyzes images to measure the distance to and between visible landmarks. A portable version, cGIANT, is a derivative library of Goddard’s autonomous Navigation Guidance and Control (autoGNC) system, which provides mission autonomy solutions for all stages of spacecraft and rover operations.
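GIANT’s algorithms are not spelled out in the release; as a generic illustration of passive optical ranging (not GIANT’s actual method, and with hypothetical numbers), a landmark of known size that spans a measured angle in a calibrated camera frame pins down its own distance:

```python
# Generic pinhole-camera ranging illustration; names and numbers are
# hypothetical and do not come from GIANT.
import math

def range_to_landmark(known_size_m: float, span_px: float, focal_len_px: float) -> float:
    """Distance to a landmark of known physical size from its apparent
    angular size in a calibrated (pinhole-model) camera image."""
    angular_size = span_px / focal_len_px               # radians, small angle
    return known_size_m / (2.0 * math.tan(angular_size / 2.0))

# A 40 m crater-rim boulder spanning 25 pixels in a camera whose focal
# length is 4,000 pixels sits roughly 6.4 km away.
print(f"{range_to_landmark(40.0, 25.0, 4000.0) / 1000.0:.1f} km")
```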
Combining AI interpretations of visual panoramas with a known model of a moon’s or planet’s terrain could provide a powerful navigation tool for future explorers.
By Karl B. Hille
NASA’s Goddard Space Flight Center in Greenbelt, Md.