New Attack Manipulates AI Vision Systems, Exposing Critical Vulnerabilities

Scientists at NC State University have developed RisingAttacK, a technique that manipulates what AI vision systems perceive, revealing critical vulnerabilities. The findings have implications for autonomous vehicles, medical diagnostics and security applications, underscoring the need for robust AI defenses.

Researchers at North Carolina State University have unveiled a new method that exposes vulnerabilities in artificial intelligence (AI) vision systems by controlling what those systems “see.” Named RisingAttacK, the technique is effective at manipulating the most widely used AI computer vision systems, with significant implications for security and safety across a range of sectors.

RisingAttacK is a form of “adversarial attack,” in which the data fed into an AI system is manipulated to alter what the system perceives. Such attacks are particularly concerning for applications like autonomous vehicles and medical diagnostics, where erroneous interpretations could lead to real-world dangers.

“We wanted to find an effective way of hacking AI vision systems because these vision systems are often used in contexts that can affect human health and safety — from autonomous vehicles to health technologies to security applications,” co-corresponding author Tianfu Wu, an associate professor of electrical and computer engineering at NC State, said in a news release. “That means it is very important for these AI systems to be secure. Identifying vulnerabilities is an important step in making these systems secure since you must identify a vulnerability in order to defend against it.”

RisingAttacK operates through a series of steps aimed at making minimal changes to an image to manipulate what the AI perceives. Initially, it identifies all visual features in the image and determines which are most crucial for the attack’s objectives.

“For example, if the goal of the attack is to stop the AI from identifying a car, what features in the image are most important for the AI to be able to identify a car in the image?” Wu added.

RisingAttacK then analyzes how sensitive the AI system is to changes in the data, and specifically to changes in those key features. This sensitivity analysis is what allows the technique to make very small, targeted changes: two images might appear identical to humans, yet the AI could be deceived into making different identifications.

“The end result is that two images may look identical to human eyes, and we might clearly see a car in both images. But due to RisingAttacK, the AI would see a car in the first image but would not see a car in the second image,” added Wu.
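The general mechanics Wu describes can be illustrated with a brief, generic sketch. The PyTorch code below is not the published RisingAttacK implementation; it is a minimal illustration of a gradient-based perturbation, assuming a local photo (“car.jpg”), a standard pretrained ResNet-50 from torchvision, and an illustrative target class (“sports car”). It measures how sensitive one output is to pixel changes and then applies a perturbation small enough to be visually negligible.

```python
# Illustrative sketch only: a generic gradient-based adversarial perturbation,
# NOT the published RisingAttacK algorithm. It shows the idea described above:
# measure how sensitive a key output (here, the "sports car" logit) is to
# pixel changes, then apply a change too small for humans to notice that
# suppresses that output.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Assumption: a local photo of a car; any RGB image works for the mechanics.
image = Image.open("car.jpg").convert("RGB")

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
x = preprocess(image).unsqueeze(0)          # shape: (1, 3, 224, 224)

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
normalize = T.Normalize(mean=[0.485, 0.456, 0.406],
                        std=[0.229, 0.224, 0.225])

target_class = 817  # ImageNet index for "sports car" (illustrative choice)

# Sensitivity analysis: gradient of the target logit w.r.t. the input pixels.
x_adv = x.clone().requires_grad_(True)
logits = model(normalize(x_adv))
logits[0, target_class].backward()

# Minimal change: step against the gradient to suppress the target logit,
# keeping the perturbation within a small, visually negligible budget.
epsilon = 2.0 / 255.0
with torch.no_grad():
    x_adv = (x - epsilon * x_adv.grad.sign()).clamp(0.0, 1.0)

    print("original top-1 class:", model(normalize(x)).argmax(dim=1).item())
    print("perturbed top-1 class:", model(normalize(x_adv)).argmax(dim=1).item())
```

Real attacks of this kind are iterative and more carefully constrained, but even a single-step version like this shows how a change that is imperceptible to humans can shift a model’s internal scores.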

Tests showed RisingAttacK was effective against four widely used vision models: ResNet-50, DenseNet-121, ViT-B and DeiT-B. The researchers are now exploring how the method could affect other AI systems, including large language models.
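For readers who want to probe these architectures themselves, the sketch below shows one way to load the four named model families. It assumes standard pretrained checkpoints from torchvision and the timm library, not the researchers’ own evaluation setup.

```python
# Load the four model families named above with off-the-shelf pretrained
# weights (assumption: torchvision and timm are installed). A random tensor
# stands in for a real or adversarially perturbed image.
import torch
import torchvision.models as tvm
import timm

models_under_test = {
    "ResNet-50": tvm.resnet50(weights=tvm.ResNet50_Weights.DEFAULT),
    "DenseNet-121": tvm.densenet121(weights=tvm.DenseNet121_Weights.DEFAULT),
    "ViT-B": timm.create_model("vit_base_patch16_224", pretrained=True),
    "DeiT-B": timm.create_model("deit_base_patch16_224", pretrained=True),
}

for name, net in models_under_test.items():
    net.eval()
    with torch.no_grad():
        dummy = torch.rand(1, 3, 224, 224)  # placeholder input
        print(name, "top-1 class:", net(dummy).argmax(dim=1).item())
```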

“While we demonstrated RisingAttacK’s ability to manipulate vision models, we are now in the process of determining how effective the technique is at attacking other AI systems, such as large language models,” Wu added. “Moving forward, the goal is to develop techniques that can successfully defend against such attacks.”

The findings will be presented on July 15 at the International Conference on Machine Learning (ICML) in Vancouver, Canada.

For the benefit of the research community, the researchers have made RisingAttacK publicly available on GitHub, allowing others to test their neural networks for vulnerabilities.

Source: North Carolina State University