
ETSI has recently released ETSI GR SAI 005, a report which summarizes and analyses existing and potential mitigations against threats to AI-based systems. Setting a baseline for a common understanding of relevant AI cyber security threats and mitigations will be key to the widespread deployment and acceptance of AI systems and applications. The report sheds light on available methods for securing AI-based systems by mitigating known or potential security threats identified in the recent ENISA threat landscape publication and the ETSI GR SAI 004 Problem Statement Report. It also addresses security capabilities, challenges, and limitations encountered when adopting these mitigations in certain potential use cases.
Recent progress in artificial intelligence has been driven by the rapid advance of deep learning and its wide range of applications, such as image classification, object detection, speech recognition, and language translation. ETSI GR SAI 005 therefore focuses on deep learning and explores existing countermeasures that mitigate attacks against such systems.
ETSI GR SAI 005 describes the workflow of machine learning models, in which the model life cycle includes both development and deployment stages. Based on this workflow, the report summarizes existing and potential mitigation approaches against training attacks (i.e. mitigations that protect the machine learning model from poisoning and backdoor attacks) and against inference attacks, including evasion, model stealing, and data extraction. Mitigation approaches are first categorized as either model enhancement or model-agnostic, and then grouped by their rationale.
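To make one of these categories concrete, the following is a minimal, illustrative sketch of a model-agnostic mitigation against training-time poisoning: sanitizing the training set by discarding points that lie far from their class centroid. This is a generic data-sanitization idea, not a procedure prescribed by the report; the function name, quantile threshold, and toy data are all assumptions chosen for illustration.

```python
import numpy as np

def filter_outliers(x, y, quantile=0.9):
    """Simple data-sanitization defence against poisoning (illustrative only).

    For each class, compute the centroid of its training points and drop the
    points whose distance to the centroid exceeds the given quantile of the
    per-class distance distribution. Returns the filtered (x, y).
    """
    keep = np.zeros(len(x), dtype=bool)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        centroid = x[idx].mean(axis=0)          # per-class centroid
        d = np.linalg.norm(x[idx] - centroid, axis=1)
        keep[idx] = d <= np.quantile(d, quantile)  # keep the closest points
    return x[keep], y[keep]

# Toy example: 20 clean points near the origin plus one injected outlier
# far away at (10, 10), all labelled class 0.
rng = np.random.default_rng(1)
x_clean = rng.normal(size=(20, 2))
x_poison = np.array([[10.0, 10.0]])
x = np.vstack([x_clean, x_poison])
y = np.zeros(len(x))

x_filtered, y_filtered = filter_outliers(x, y, quantile=0.9)
```

In this toy setting the injected point is the farthest from the class centroid and is removed by the filter. Real poisoning defences are considerably more sophisticated, but the structure — screen the training data before it reaches the model, independently of the model's internals — is what makes such approaches model-agnostic.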
Due to the rapid evolution of attack techniques against AI-based systems, existing mitigations can become less effective over time, even though their approaches and rationales remain valid. In addition, most of the approaches presented stem from an academic context and rest on certain assumptions, which need to be considered when they are applied in practice. ETSI GR SAI 005 is intended to serve as a technical reference for securing AI-based systems across planning, design, development, deployment, operation, and maintenance. Going forward, more research is needed on automatic verification and validation, explainability and transparency, and novel security techniques to counter emerging AI threats.
The report is available for download from ETSI.