
A report by the European Union Agency for Cybersecurity (ENISA) and the Joint Research Centre (JRC) looks at cybersecurity risks connected to Artificial Intelligence (AI) in autonomous vehicles and provides recommendations for mitigating them.
By removing the most common cause of traffic accidents – the human driver – autonomous vehicles are expected to reduce traffic accidents and fatalities. However, they may pose a completely different type of risk to drivers, passengers and pedestrians.
Autonomous vehicles use artificial intelligence systems, which employ machine-learning techniques to collect, analyse and transfer data, in order to make decisions that in conventional cars are taken by humans. These systems, like all IT systems, are vulnerable to attacks that could compromise the proper functioning of the vehicle.
“When an insecure autonomous vehicle crosses the border of an EU Member State, so do its vulnerabilities. Security should not come as an afterthought, but should instead be a prerequisite for the trustworthy and reliable deployment of vehicles on Europe’s roads,” said EU Agency for Cybersecurity Executive Director Juhan Lepassaar.
“It is important that European regulations ensure that the benefits of autonomous driving will not be counterbalanced by safety risks. To support decision-making at EU level, our report aims to increase the understanding of the AI techniques used for autonomous driving as well as the cybersecurity risks connected to them, so that measures can be taken to ensure AI security in autonomous driving,” said JRC Director-General Stephen Quest.
Vulnerabilities of AI in autonomous vehicles
The AI systems of an autonomous vehicle work continuously to recognise traffic signs and road markings, detect vehicles, estimate their speed and plan the path ahead. Apart from unintentional threats, such as sudden malfunctions, these systems are vulnerable to intentional attacks that aim specifically to interfere with the AI system and disrupt safety-critical functions.
Painting marks on the road to misguide navigation, or placing stickers on a stop sign to prevent its recognition, are examples of such attacks. These alterations can lead the AI system to wrongly classify objects, and subsequently cause the autonomous vehicle to behave in a way that could be dangerous.
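The stop-sign example belongs to the family of adversarial attacks on machine-learning classifiers. As a minimal, self-contained sketch (not taken from the report), the fast gradient sign method (FGSM) illustrates the principle on a toy logistic classifier with hypothetical weights: a small perturbation of the input, chosen in the direction that increases the model's loss, measurably lowers the model's confidence in the correct class.

```python
import numpy as np

# Illustrative sketch only: FGSM on a toy logistic classifier.
# The weights and input are hypothetical placeholders, standing in for
# a sign-recognition model and a camera image of a stop sign.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, w, b):
    """Model's probability that input x is a stop sign."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(x, w, b, y_true, eps):
    """Shift x by eps in the sign of the loss gradient w.r.t. the input.
    For logistic loss, dL/dx = (p - y_true) * w."""
    p = predict(x, w, b)
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)           # hypothetical model weights
b = 0.0
x = w / np.linalg.norm(w)        # an input the model classifies confidently

p_clean = predict(x, w, b)
x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.3)
p_adv = predict(x_adv, w, b)
print(f"clean confidence: {p_clean:.2f}, after perturbation: {p_adv:.2f}")
```

The perturbation is small per pixel (bounded by `eps`), yet it is guaranteed to reduce the model's confidence, which is why physically small alterations such as stickers can be so effective against vision systems.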
Recommendations for more secure AI in autonomous vehicles
In order to improve AI security in autonomous vehicles, the report makes several recommendations, one of which is that security assessments of AI components be performed regularly throughout their lifecycle. This systematic validation of AI models and data is essential to ensure that the vehicle always behaves correctly when faced with unexpected situations or malicious attacks.
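In practice, such a recurring assessment can be automated so that each model release is validated not only on clean data but also on perturbed inputs. The sketch below is an assumption about what one such check might look like; the model, data and thresholds are illustrative placeholders, not drawn from the report.

```python
import numpy as np

# Hypothetical sketch of one automated security-assessment step for an
# AI component: fail the check if accuracy on clean or noise-perturbed
# inputs drops below a chosen threshold.

def accuracy(model, X, y):
    return float(np.mean(model(X) == y))

def robustness_check(model, X, y, noise_scale=0.1,
                     clean_min=0.95, robust_min=0.85, seed=0):
    """Return (clean_acc, robust_acc, passed) for one assessment run."""
    rng = np.random.default_rng(seed)
    X_noisy = X + rng.normal(scale=noise_scale, size=X.shape)
    clean_acc = accuracy(model, X, y)
    robust_acc = accuracy(model, X_noisy, y)
    passed = clean_acc >= clean_min and robust_acc >= robust_min
    return clean_acc, robust_acc, passed

# Toy stand-in model: classify by the sign of the first feature.
model = lambda X: (X[:, 0] > 0).astype(int)
X = np.vstack([np.full((50, 3), 2.0), np.full((50, 3), -2.0)])
y = np.array([1] * 50 + [0] * 50)

clean_acc, robust_acc, passed = robustness_check(model, X, y)
print(clean_acc, robust_acc, passed)
```

Running such a check in the release pipeline, and re-running it as threat intelligence surfaces new attack patterns, is one concrete way to make the recommended lifecycle-long validation routine rather than a one-off audit.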
Another recommendation is that continuous risk assessment processes, supported by threat intelligence, could enable the identification of potential AI risks and emerging threats related to the uptake of AI in autonomous driving. Proper AI security policies and an AI security culture should govern the entire automotive supply chain.
The automotive industry should embrace a security-by-design approach for the development and deployment of AI functionalities, where cybersecurity becomes a central element of digital design from the beginning. Finally, it is important that the automotive sector increase its level of preparedness and reinforce its incident response capabilities to handle emerging cybersecurity issues connected to AI.
Further Information
ENISA Threat Landscape on Artificial Intelligence – 2020 Report
ENISA Good Practices for Security of Smart Cars – 2019 Report