
ETSI has recently released ETSI GR SAI 005, a report that summarizes and analyses existing and potential mitigations against threats to AI-based systems. Setting a baseline for a common understanding of relevant AI cyber security threats and mitigations will be key to the widespread deployment and acceptance of AI systems and applications. The report sheds light on the available methods for securing AI-based systems by mitigating known or potential security threats identified in the recent ENISA threat landscape publication and in the ETSI GR SAI 004 Problem Statement Report. It also addresses the security capabilities, challenges, and limitations of adopting these mitigations in a number of potential use cases.
Recent progress in artificial intelligence has been driven by deep learning and its wide range of applications, such as image classification, object detection, speech recognition and language translation. ETSI GR SAI 005 therefore focuses on deep learning and explores existing countermeasures that mitigate attacks against it.
ETSI GR SAI 005 describes the workflow of machine learning models, whose life cycle includes both a development and a deployment stage. Based on this workflow, the report summarizes existing and potential mitigation approaches against training attacks (i.e. mitigations that protect the machine learning model from poisoning and backdoor attacks) and against inference attacks, including evasion, model stealing, and data extraction. Mitigation approaches are first categorized as either model enhancement or model-agnostic, and then grouped by their underlying rationales; a brief sketch of one model-enhancement technique follows below.
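To make the model-enhancement category more concrete, the following minimal sketch shows adversarial training, a commonly cited mitigation against evasion attacks, in which the model is trained on both clean inputs and adversarially perturbed copies of them. The sketch is not taken from the report: the PyTorch model architecture, the FGSM perturbation, the epsilon budget, and the random stand-in data are all illustrative assumptions.

```python
# Illustrative sketch of adversarial training as a model-enhancement mitigation
# against evasion attacks. All architecture and hyperparameter choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Small stand-in classifier for MNIST-shaped inputs (assumed for illustration).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
epsilon = 0.1  # L-infinity perturbation budget (assumed)

def fgsm_example(x, y):
    """Craft an FGSM adversarial example within an epsilon-ball around the input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to valid pixel range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def train_step(x, y):
    """One adversarial-training step: minimize loss on clean and perturbed inputs."""
    x_adv = fgsm_example(x, y)
    optimizer.zero_grad()  # discard gradients accumulated while crafting x_adv
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    loss_value = loss.item()
    optimizer.step()
    return loss_value

# Usage with random stand-in data shaped like a batch of MNIST images.
x_batch = torch.rand(32, 1, 28, 28)
y_batch = torch.randint(0, 10, (32,))
print(f"combined clean + adversarial loss: {train_step(x_batch, y_batch):.4f}")
```

In a real deployment the perturbed examples would be crafted with a stronger attack than single-step FGSM and validated against held-out adversarial test sets; this sketch only shows the basic training-loop structure of the technique.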
Because attack techniques against AI-based systems evolve rapidly, existing mitigations can become less effective over time, even though their approaches and rationales remain valid. In addition, most of the approaches presented stem from an academic context and make certain assumptions that need to be considered when they are applied in practice. ETSI GR SAI 005 is intended to serve as a technical reference on securing AI for the planning, design, development, deployment, operation, and maintenance of AI-based systems. In the future, further research is needed on automatic verification and validation, explainability and transparency, and novel security techniques to counter emerging AI threats.
The full report is available for download from the ETSI website.