Researchers at NYU Tandon School of Engineering have developed an AI system, SeeUnsafe, to improve road safety by analyzing traffic camera footage for collisions and near-misses.
New York City’s vast network of traffic cameras captures countless hours of video every day, creating a treasure trove of data that, until now, has been challenging to fully utilize. That’s set to change with a groundbreaking development from researchers at New York University (NYU) Tandon School of Engineering. Their new artificial intelligence system, SeeUnsafe, aims to enhance road safety by automatically identifying collisions and near-misses in extensive traffic footage.
Published in the journal Accident Analysis and Prevention, this innovative research has already earned New York City’s Vision Zero Research Award, aligning with the city’s road safety priorities.
Senior author Kaan Ozbay, a professor in the Department of Civil and Urban Engineering and director of NYU Tandon’s C2SMART center, presented the study at this year’s Research on the Road symposium on Nov. 19.
SeeUnsafe leverages pre-trained AI models to understand both visual data and text, making it one of the first applications of multimodal large language models for analyzing long-form traffic videos.
“You have a thousand cameras running 24/7 in New York City. Having people examine and analyze all that footage manually is untenable,” Ozbay said in a news release. “SeeUnsafe gives city officials a highly effective way to take full advantage of that existing investment.”
The AI system addresses a critical gap in traffic safety management — resource limitations in analyzing vast amounts of video footage. By identifying where and when incidents occur, SeeUnsafe allows transportation agencies to pinpoint hazardous intersections and conditions needing intervention before severe accidents happen.
“Agencies don’t need to be computer vision experts. They can use this technology without the need to collect and label their own data to train an AI-based video analysis model,” added co-author Chen Feng, an associate professor at NYU Tandon and a co-founding director of the Center for Robotics and Embodied Intelligence.
Tested on the Toyota Woven Traffic Safety dataset, SeeUnsafe outperformed other models, correctly classifying traffic incidents 76.71% of the time and identifying involved road users with success rates as high as 87.5%.
This level of accuracy means the system can provide actionable insights into traffic safety. Rather than waiting for crashes to occur, agencies can use near-miss and collision patterns to inform timely interventions such as improved signage, better signal timing and redesigned road layouts.
The system can also generate road safety reports with natural language explanations, describing factors such as weather conditions, traffic volume and the specific movements leading up to near-misses or collisions.
Despite some limitations, such as sensitivity to object tracking accuracy and challenges under low-light conditions, the researchers believe SeeUnsafe lays a crucial foundation for further AI advancements in road safety.
Source: NYU Tandon School of Engineering