Imagine you are on a plane with two pilots, a human and a computer. Both have their “hands” on the controllers, but they are always looking for different things. If they are both paying attention to the same thing, the human is in charge. But if the human gets distracted or misses something, the computer quickly takes over.
Meet Air-Guardian, a system developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). As modern pilots grapple with an avalanche of information from multiple monitors, especially during critical moments, Air-Guardian acts as a proactive co-pilot: a partnership between human and machine, rooted in an understanding of attention.
But how does it determine attention, exactly? For humans, it uses eye tracking; for the neural system, it relies on so-called “saliency maps,” which indicate where attention is directed. These maps serve as visual guides highlighting key regions of an image, helping to grasp and decipher the behavior of complex algorithms. With these attention markers, Air-Guardian identifies early signs of potential risk, rather than intervening only after a safety violation, as traditional autopilot systems do.
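To make the idea of a saliency map concrete, here is a minimal sketch of one common construction: score each pixel by the magnitude of the network's gradient with respect to that pixel, so that regions with the largest influence on the output light up. The `fake_grad` function below is a hypothetical stand-in for a trained model's input gradient, not part of the Air-Guardian system.

```python
import numpy as np

def saliency_map(image, model_grad):
    """Toy saliency map: normalized magnitude of the model's gradient
    with respect to each input pixel. Pixels that influence the output
    more strongly receive more 'attention'."""
    grads = model_grad(image)          # same shape as the image
    sal = np.abs(grads)
    return sal / (sal.max() + 1e-8)    # scale to [0, 1]

def fake_grad(img):
    # Hypothetical stand-in for a network's input gradient:
    # attention concentrated around the image center.
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = h // 2, w // 2
    return np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 50.0)

img = np.zeros((32, 32))
sal = saliency_map(img, fake_grad)
# The map peaks at the center and decays toward the corners.
```

Real systems typically derive such maps from the network itself (Air-Guardian uses the VisualBackProp algorithm for this), but the principle is the same: a per-pixel score of where the model is "looking."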
The broader implications of this system extend beyond aviation. Similar cooperative control mechanisms could one day be used in cars, drones and a broader spectrum of robots.
“An interesting feature of our method is its differentiability,” says MIT CSAIL postdoctoral fellow Lianhao Yin, lead author of a new paper on Air-Guardian. “Our cooperative layer and the entire end-to-end process can be trained. We specifically chose the continuous-depth causal neural network model because of its dynamic characteristics in attention mapping. Another unique aspect is adaptability. The Air-Guardian system is not rigid; it can be adjusted according to the demands of the situation, ensuring a balanced partnership between human and machine.”
In field tests, the pilot and the system made decisions based on the same raw images while navigating to a target waypoint. Air-Guardian’s success was measured by the cumulative rewards earned during flight and by how quickly the aircraft reached the waypoint. The guardian reduced the risk level of flights and increased the success rate of navigating to target points.
“This system represents an innovative human-centered, AI-based approach to aviation,” adds Ramin Hasani, MIT CSAIL research affiliate and inventor of liquid neural networks. “Our use of liquid neural networks provides a dynamic and adaptive approach, ensuring that AI does not simply replace human judgment but complements it, leading to improved safety and collaboration in the sky.”
Air-Guardian’s true strength lies in its underlying technology. Using an optimization-based cooperative layer built on the visual attention of humans and machines, together with liquid closed-form continuous-time (CfC) neural networks, known for their prowess in deciphering cause-and-effect relationships, the system analyzes incoming images for vital information. The VisualBackProp algorithm then identifies the system’s focal points within an image, ensuring a clear reading of its attention maps.
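The intuition behind the cooperative layer can be sketched in a few lines: when the pilot's gaze and the network's attention map agree, the pilot keeps control; as they diverge, authority shifts toward the machine. The blending rule below (cosine similarity between attention maps as the pilot's share of authority) is a hypothetical simplification for illustration, not the paper's actual optimization-based formulation.

```python
import numpy as np

def cooperative_control(u_human, u_machine, a_human, a_machine):
    """Blend human and machine control commands. The pilot's share of
    authority is the agreement (cosine similarity) between the pilot's
    gaze map and the network's saliency map: high agreement keeps the
    pilot in charge; low agreement hands control to the machine."""
    ah, am = a_human.ravel(), a_machine.ravel()
    agreement = float(np.dot(ah, am) /
                      (np.linalg.norm(ah) * np.linalg.norm(am) + 1e-8))
    w_h = float(np.clip(agreement, 0.0, 1.0))   # pilot's weight
    return w_h * u_human + (1.0 - w_h) * u_machine

focused = np.zeros((8, 8)); focused[2, 3] = 1.0  # both watch the hazard
drifted = np.ones((8, 8)) / 64                   # pilot's gaze scattered

# Same attention -> pilot's command dominates; scattered gaze -> the
# machine's command dominates.
u_same = cooperative_control(0.0, 1.0, focused, focused)
u_diff = cooperative_control(0.0, 1.0, drifted, focused)
```

In the actual system this blending is differentiable, which is what allows the cooperative layer to be trained end to end alongside the CfC network.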
For future mass adoption, the human-machine interface will need refinement. Feedback suggests that an indicator, such as a bar, might signal more intuitively when the guardian system takes control.
Air-Guardian heralds a new era of safer skies, providing a reliable safety net for times when human attention falters.
“The Air-Guardian system highlights the synergy between human expertise and machine learning, contributing to the goal of using machine learning to augment pilots in challenging scenarios and reduce operational errors,” says Daniela Rus, Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT, director of CSAIL, and senior author of the paper.
“One of the most interesting results of using a measure of visual attention in this work is the possibility of allowing earlier interventions and greater interpretability by human pilots,” explains Stephanie Gil, assistant professor of computer science at Harvard University, who was not involved in the work. “This presents a great example of how AI can be used to work with a human, lowering the barriers to building trust using natural communication mechanisms between the human and the AI system.”
This research was partially funded by the U.S. Air Force (USAF) Research Laboratory, the USAF Artificial Intelligence Accelerator, Boeing Co., and the Office of Naval Research. The results do not necessarily reflect the views of the U.S. Government or the USAF.