Approximately one year ago, the AutoSec newsletter reported on so-called remote phantom attacks, in which the ADASs and autopilots of semi- and fully autonomous vehicles treat depth-less objects (phantoms) as real. Attackers can exploit this weakness by projecting a phantom to trick a vehicle’s perception system.

Recently, researchers from the Ubiquitous System Security Lab of Zhejiang University and the Security and Privacy Research Group of the University of Michigan have developed a way to blind autonomous vehicles to obstacles using acoustic signals.

In their paper, “Poltergeist: Acoustic Adversarial Machine Learning against Cameras and Computer Vision”, the researchers highlight that:

“Autonomous vehicles increasingly exploit computer-vision based object detection systems to perceive environments and make critical driving decisions.” And: “…to increase the quality of images, image stabilizers with inertial sensors are added to alleviate image blurring caused by camera jitters.”

According to the paper, this trend opens up new attack surfaces, and the researchers identify a system-level vulnerability “…resulting from the combination of the emerging image stabilizer hardware susceptible to acoustic manipulation and the object detection algorithms subject to adversarial examples.” They therefore performed real-world attacks against a commercial camera product with an image stabilization system, in this case a smartphone (Samsung S20) mounted in a moving vehicle. The vehicle was then driven while acoustic signal injection attacks were carried out, producing blurred images that tricked the machine learning system into ignoring obstacles in its path.
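
As a concrete illustration of that last step (blur leading to misdetection), the following minimal Python sketch, which is not taken from the paper, applies a directional motion-blur kernel to a synthetic frame, roughly approximating the smearing that unnecessary stabilizer compensation would produce; OpenCV, NumPy, the kernel parameters, and the synthetic “obstacle” are all illustrative assumptions.

import cv2
import numpy as np

def motion_blur(image: np.ndarray, length: int = 25, angle_deg: float = 30.0) -> np.ndarray:
    """Convolve the image with a linear motion-blur kernel of the given length and direction."""
    kernel = np.zeros((length, length), dtype=np.float32)
    kernel[length // 2, :] = 1.0                      # horizontal streak
    center = ((length - 1) / 2.0, (length - 1) / 2.0)
    rotation = cv2.getRotationMatrix2D(center, angle_deg, 1.0)
    kernel = cv2.warpAffine(kernel, rotation, (length, length))
    kernel /= kernel.sum()                            # preserve overall brightness
    return cv2.filter2D(image, -1, kernel)

if __name__ == "__main__":
    # Synthetic frame: a dark rectangle standing in for an obstacle on a light background.
    frame = np.full((480, 640, 3), 220, dtype=np.uint8)
    cv2.rectangle(frame, (260, 180), (380, 300), (30, 30, 30), thickness=-1)

    blurred = motion_blur(frame, length=35, angle_deg=20.0)
    cv2.imwrite("frame_clean.png", frame)
    cv2.imwrite("frame_blurred.png", blurred)
    # Feeding frame_blurred.png rather than frame_clean.png into any off-the-shelf
    # object detector makes it possible to compare detections before and after the blur.

Comparing the two saved images gives an intuition for how a smeared outline can grow, shrink, or bleed into the background, which is the property the researchers exploit against the detector.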

“The blur caused by unnecessary motion compensation can change the outline, the size, and even the color of an existing object or an image region without any objects,” the team found, “which may lead to hiding, altering an existing object, or creating a non-existing object.” The researchers categorized these effects as Hiding Attacks (HA), Creating Attacks (CA), and Altering Attacks (AA). According to the research team, this is a new class of attack, which they call AMpLe, a backronym for “injecting physics into adversarial machine learning.”

Even though the team was not able to perform the attack against a real-world autonomous car, the researchers note in the paper: “While it’s clear that there exist pathways to cause computer vision systems to fail with acoustic injection, it’s not clear what products today are at risk. Rather than focus on today’s nascent autonomous vehicle technology, we model the limits in simulation to understand how to better prevent future yet unimagined autonomous vehicles from being susceptible to acoustic attacks on image stabilization systems.”

Written by Joakim Rosell
