NYU Researchers Unveil New Edge in Self-Driving Car Communication

In a significant advancement for autonomous vehicles, NYU Tandon School of Engineering researchers have developed a method enabling self-driving cars to share learned experiences about road conditions indirectly, promoting a safer and more intelligent driving ecosystem.

The approach lets vehicles learn from one another while keeping their underlying driving data private. The research, led by doctoral student Xiaoyu Wang, was presented in a paper at the Association for the Advancement of Artificial Intelligence (AAAI) Conference on Feb. 27, 2025.

Traditionally, autonomous vehicles can only exchange knowledge during brief, direct encounters, limiting their adaptability to new environments.

However, this new method, known as Cached Decentralized Federated Learning (Cached-DFL), allows vehicles to train their artificial intelligence (AI) models locally and share these models with others, even without frequent direct interaction.

Shared Experiences on the Road

“Think of it like creating a network of shared experiences for self-driving cars,” Yong Liu, a professor in NYU Tandon’s Electrical and Computer Engineering Department, who supervised the research, said in a news release. “A car that has only driven in Manhattan could now learn about road conditions in Brooklyn from other vehicles, even if it never drives there itself. This would make every vehicle smarter and better prepared for situations it hasn’t personally encountered.”

This new approach doesn’t require a central server. Instead, vehicles within 100 meters of each other use high-speed, device-to-device communication to exchange trained models, not raw data.

Cars can also relay models received from prior encounters, allowing valuable information to permeate the network far beyond immediate interactions.
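The paper's exact protocol isn't reproduced here, but the exchange-and-relay idea can be sketched roughly as follows. The class and names (`Vehicle`, `COMM_RANGE_M`, `exchange_with`) are illustrative, not taken from the team's released code, and the "model" is reduced to a tagged weight vector:

```python
import math
import time

COMM_RANGE_M = 100.0  # vehicles within 100 meters can exchange models

class Vehicle:
    def __init__(self, vid, position):
        self.vid = vid
        self.position = position  # (x, y) in meters
        # A locally trained model, tagged with its origin and age.
        self.local_model = {"vid": vid, "weights": [0.0], "timestamp": time.time()}
        self.cache = {}  # origin vid -> cached model received from others

    def distance_to(self, other):
        dx = self.position[0] - other.position[0]
        dy = self.position[1] - other.position[1]
        return math.hypot(dx, dy)

    def exchange_with(self, other):
        """Swap trained models over a device-to-device link -- no raw data."""
        if self.distance_to(other) > COMM_RANGE_M:
            return False
        # Snapshot both sides first so relayed models don't echo back
        # within the same exchange.
        mine = [self.local_model, *self.cache.values()]
        theirs = [other.local_model, *other.cache.values()]
        # Each side shares its own model *and* relays its cached models,
        # so knowledge can hop between cars that never meet directly.
        for model in theirs:
            if model["vid"] != self.vid:
                self.cache[model["vid"]] = model
        for model in mine:
            if model["vid"] != other.vid:
                other.cache[model["vid"]] = model
        return True
```

In this sketch, if car A meets car B, and B later meets car C, then C ends up holding A's model even though A and C were never in range of each other, which is the multi-hop spread the researchers describe.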

Efficient and Secure Learning

Each vehicle stores up to 10 external models and updates its own AI model every 120 seconds. The system ensures outdated information doesn't degrade performance by automatically discarding older models based on a staleness threshold.
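That bookkeeping can be illustrated with a minimal sketch. The capacity (10 models) and update interval (120 seconds) come from the article; the staleness limit value, the helper names, and the simple FedAvg-style averaging are assumptions for illustration, not details from the paper:

```python
CACHE_CAPACITY = 10       # at most 10 external models per vehicle
UPDATE_INTERVAL_S = 120   # local model is refreshed every 120 seconds
STALENESS_LIMIT_S = 600   # assumed threshold; the paper tunes this value

def evict_stale(cache, now):
    """Drop cached models older than the staleness threshold,
    then trim to capacity, keeping the freshest entries."""
    for vid in [v for v, m in cache.items()
                if now - m["timestamp"] > STALENESS_LIMIT_S]:
        del cache[vid]
    if len(cache) > CACHE_CAPACITY:
        freshest = sorted(cache.items(),
                          key=lambda kv: kv[1]["timestamp"],
                          reverse=True)
        cache.clear()
        cache.update(dict(freshest[:CACHE_CAPACITY]))

def aggregate(local_weights, cache):
    """Merge the local model with surviving cached models by
    element-wise averaging (a FedAvg-style update)."""
    models = [local_weights] + [m["weights"] for m in cache.values()]
    n = len(models)
    return [sum(ws) / n for ws in zip(*models)]
```

Run on the 120-second timer, this evict-then-aggregate cycle is what keeps stale knowledge from dragging down a car's current model.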

The researchers simulated the system using Manhattan’s street grid. The virtual vehicles, traveling at around 14 meters per second, showed substantial improvement in learning efficiency compared to standard decentralized methods, which falter when vehicles rarely meet. 

“It’s a bit like how information spreads in social networks,” Liu added. “Devices can now pass along knowledge from others they’ve met, even if those devices never directly encounter each other.”

This multi-hop transfer mechanism reduces the limitations of direct model-sharing approaches, enabling learning to propagate efficiently across the fleet.

Impact and Future Applications

The ability to share knowledge about varying road conditions, signals and obstacles while keeping data private is a game-changer for connected vehicles, especially in complex urban environments. Faster vehicle speeds and more frequent communication sessions enhance learning outcomes, while out-of-date models are quickly discarded to maintain accuracy.

Beyond cars, Cached-DFL has potential applications in other networked systems of smart mobile agents, such as drones, robots and satellites, paving the way for robust, decentralized learning and a step toward swarm intelligence.

The team’s code and a detailed technical report have been made publicly available, allowing further exploration and development within the field.

Guojun Xiong and Jian Li of Stony Brook University and Houwei Cao of New York Institute of Technology contributed to the study.

This significant advancement marks a pivotal moment for the future of autonomous technology, opening up new possibilities for safer, more efficient and highly adaptive self-driving vehicles.