
ETSI has recently released ETSI GR SAI 005, a report that summarizes and analyses existing and potential mitigations against threats to AI-based systems. Setting a baseline for a common understanding of relevant AI cyber security threats and mitigations will be key to the widespread deployment and acceptance of AI systems and applications. The report sheds light on the available methods for securing AI-based systems by mitigating known or potential security threats identified in the recent ENISA threat landscape publication and the ETSI GR SAI 004 Problem Statement report. It also addresses security capabilities, challenges, and limitations when adopting mitigations for AI-based systems in certain potential use cases.
Recent progress in artificial intelligence has been driven by the rapid advances of deep learning and its wide range of applications, such as image classification, object detection, speech recognition, and language translation. ETSI GR SAI 005 therefore focuses on deep learning and explores existing countermeasures that mitigate attacks against it.
ETSI GR SAI 005 describes the workflow of machine learning models, where the model life cycle includes both development and deployment stages. Based on this workflow, the report summarizes existing and potential mitigation approaches against training attacks (i.e. mitigations that protect the machine learning model from poisoning and backdoor attacks) and against inference attacks, including evasion, model stealing, and data extraction. Mitigation approaches are first categorized as either model enhancement or model-agnostic, and then grouped by their rationales.
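To make the model-enhancement category concrete, one widely studied mitigation against evasion attacks is adversarial training: the training set is augmented with deliberately perturbed inputs so the model learns to classify them correctly. The sketch below is a hypothetical minimal illustration (not code from the report), using plain-Python logistic regression and FGSM-style perturbations; the data, parameter values, and function names are all assumptions chosen for the example.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    # Probability that input x belongs to class 1.
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps=0.1):
    # Fast Gradient Sign Method: for logistic loss, the gradient
    # w.r.t. the input is (p - y) * w; step eps per feature in the
    # sign of that gradient to increase the loss.
    p = predict(w, b, x)
    def sign(v):
        return 1 if v > 0 else -1 if v < 0 else 0
    return [xi + eps * sign((p - y) * wi) for xi, wi in zip(x, w)]

def adversarial_train(X, y, eps=0.1, lr=0.5, epochs=300):
    # Gradient descent on clean inputs plus FGSM-perturbed copies,
    # regenerated each epoch against the current model.
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        batch = list(zip(X, y)) + [(fgsm(w, b, xi, yi, eps), yi)
                                   for xi, yi in zip(X, y)]
        gw, gb = [0.0] * len(w), 0.0
        for xi, yi in batch:
            err = predict(w, b, xi) - yi
            gw = [g + err * f for g, f in zip(gw, xi)]
            gb += err
        n = len(batch)
        w = [wi - lr * g / n for wi, g in zip(w, gw)]
        b -= lr * gb / n
    return w, b

# Toy linearly separable data: class 0 near the origin, class 1 farther out.
X = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0],
     [2.0, 2.0], [2.0, 3.0], [3.0, 2.0], [3.0, 3.0]]
y = [0, 0, 0, 0, 1, 1, 1, 1]
w, b = adversarial_train(X, y)
preds = [1 if predict(w, b, xi) > 0.5 else 0 for xi in X]
```

Because the perturbed copies are regenerated against the current weights each epoch, the model is repeatedly confronted with its own worst-case inputs, which is the core idea behind this family of mitigations; the report notes that such defenses can lose effectiveness as attack techniques evolve.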
Because attack techniques against AI-based systems evolve rapidly, existing mitigations can become less effective over time, even though their overall approaches and rationales remain valid. In addition, most of the approaches presented stem from an academic context and rest on certain assumptions that need to be considered when they are applied in practice. ETSI GR SAI 005 is intended to serve as a technical reference for securing AI during the planning, design, development, deployment, operation, and maintenance of AI-based systems. In the future, more research is needed on automatic verification and validation, explainability and transparency, and novel security techniques to counter emerging AI threats.
The report can be downloaded from the ETSI website.