The impact of adversarial attacks on autonomous vehicles

News

February 1, 2021

An increasing number of autonomous vehicles is expected to reshape the transportation sector and help build a smart society. Futurist Adah Parris, Håkan Schildt and Jean Rose from Scania, together with writer and journalist Leigh Alexander, came together in Scania's future room for a round-table talk about the impact of autonomous vehicles on our lives [1]. The discussion highlighted that traditional transportation systems do not serve children and older people well. Autonomous vehicles will open new opportunities to support these age groups equally; for instance, children may be able to go to school without needing someone to drive them. Such use cases can address some significant challenges of today's world. However, this also means that these autonomous vehicles must be extremely safe and reliable for them to use.

Vehicle safety is strongly related to vehicle security, which means automation services should be secure against external attackers. Recently, AI researchers discovered that by adding small black and white stickers to stop signs, attackers could make the signs unrecognizable to the computer vision algorithms that autonomous vehicles use to navigate. This scenario is an example of an evasion attack, in which the testing sample is modified directly. Such an attack offers a straightforward way to bypass a defense, but the attacker needs access to the test-time input. Another form of adversarial attack is the data poisoning attack, which works by modifying training samples; for this, the attacker needs access to the training data. Machine learning-based applications are often re-trained to adapt to changes in the underlying data distribution. An example is Intrusion Detection Systems (IDSs), which are often re-trained on samples collected during the latest network operations. In this case, an attacker can poison the training data by injecting carefully designed instances to compromise the entire learning process.
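To make the two attack classes concrete, the sketch below shows a hypothetical example in Python with PyTorch: an evasion attack in the style of the fast gradient sign method (FGSM) that perturbs a test image, and a toy label-flipping poisoning attack on a training set. The model, data and parameter values are placeholders, not taken from any real vehicle or IDS pipeline.

```python
# Hypothetical sketches of the two attack classes discussed above.
# Assumes a trained PyTorch classifier; names and values are illustrative.
import torch
import torch.nn.functional as F

def fgsm_evasion(model, image, true_label, epsilon=0.03):
    """Evasion attack: nudge a test image (pixels in [0, 1]) so the model misreads it."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    return (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

def flip_labels(train_labels, target_class, new_class, fraction=0.05):
    """Toy poisoning attack: silently relabel a small fraction of one class."""
    idx = (train_labels == target_class).nonzero(as_tuple=True)[0]
    poisoned = idx[torch.randperm(len(idx))[: int(fraction * len(idx))]]
    labels = train_labels.clone()
    labels[poisoned] = new_class
    return labels
```

Physically realizable attacks such as the sticker example add further constraints, for instance printability and robustness to viewpoint and lighting, but the underlying principle of small perturbations crafted against the model's gradients or training data is the same.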

New forms of adversarial attacks on machine learning applications keep emerging, ranging from adversarial inference and trojan attacks to backdoor attacks [2]. Such attacks are a ticking time bomb unless security solutions emerge to combat them. So what are the main challenges in defending against them? Attacks aimed at machine learning models cannot be discovered simply by searching for a snippet of code that causes the vulnerability, and there is no precise patch for fixing such bugs. The attack phenomenon may result from the combined effect of thousands of hyperparameters, so traditional security enforcement won't be of much value.

There is a growing body of research on defending against adversarial attacks on machine learning. In 2020 alone, more than a thousand papers on the topic were submitted to the arXiv preprint server. The topic is also high on the priority list of top AI conferences such as NeurIPS and ICLR, and of cybersecurity conferences such as DEF CON, Black Hat and USENIX. The community has made substantial progress with adversarial training, random switching mechanisms and insights from neuroscience, but robust defense remains an open issue.
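Among the defenses mentioned above, adversarial training is the most widely studied. The snippet below is a minimal, hypothetical sketch of one training epoch in PyTorch that augments each batch with FGSM-perturbed copies; the model, data loader and optimizer are assumed to be ordinary PyTorch objects rather than any specific production system.

```python
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
    """One epoch of adversarial training: fit on clean and perturbed batches."""
    model.train()
    for images, labels in loader:
        # Craft FGSM-perturbed copies of the current batch.
        images_adv = images.clone().detach().requires_grad_(True)
        F.cross_entropy(model(images_adv), labels).backward()
        images_adv = (images_adv + epsilon * images_adv.grad.sign()).clamp(0, 1).detach()

        # Standard update on the clean and adversarial samples together.
        optimizer.zero_grad()
        loss = F.cross_entropy(model(images), labels) \
             + F.cross_entropy(model(images_adv), labels)
        loss.backward()
        optimizer.step()
```

Training on perturbed samples typically improves robustness against the attack used to generate them, but it costs some clean accuracy and does not guarantee protection against stronger or unseen attacks, which is why the problem is still considered open.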

Ironically, despite the alarm already raised by the research community, there has been little focus in the growing automated vehicle industry on tracking adversarial vulnerabilities in real-world applications. Yet it may only be a matter of time before a flaw in the learned patterns fools our automated vehicles into misinterpreting road signs and more.

[1] How does artificial intelligence constitute an important driving force towards sustainability? What questions does this vision of the future raise? Available at: https://www.scania.com/se/sv/home/future-room/automation.html
[2] How to attack Machine Learning (Evasion, Poisoning, Inference, Trojans, Backdoors). Available at: https://towardsdatascience.com/how-to-attack-machine-learning-evasion-poisoning-inference-trojans-backdoors-a7cb5832595c

Written by Nishat Mowla
