Project P464

The security of artificial intelligence (AI) systems is an active and fast-moving research area. Many questions concerning the systematic safeguarding and testing of AI systems remain open. Within the project "Security of AI Systems: Fundamentals", the BSI conducted three studies to address this topic area.

The goals were to record the current state of research, to derive recommendations for action, and to raise awareness of open research problems. The BSI uses the studies as a basis for its consulting services and for the development of evaluation criteria for AI systems.

  1. Security of AI-Systems: Fundamentals - Adversarial Deep Learning: This study deals with the security of neural networks. One result is a literature-based action guide that presents measures against evasion, poisoning, backdoor and privacy attacks, as well as methods for the certification and verification of neural networks, and evaluates them with regard to their limitations. (An illustrative evasion attack is sketched after this list.)
  2. Security of AI-Systems: Fundamentals - Provision or use of external data or trained models: This study also deals with the security properties of neural networks, but focuses on situations in which data or models are provided to or obtained from external sources; one main topic is transfer learning (see the second sketch after this list). Based on the literature and on practical investigations, the study identifies open challenges in this research area and gives recommendations for improving the security of machine learning systems and the associated development processes.
  3. Security of AI-Systems: Fundamentals - Security Considerations for Symbolic and Hybrid AI: This publication deals with the security properties of symbolic AI and of hybrid AI systems (combinations of symbolic and other AI methods, e.g. neural networks). The study captures the current state of research on these methods, with a focus on security-related aspects, and presents it in a structured way.
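
To make the attack class from the first study concrete, the following is a minimal sketch of an evasion attack using the Fast Gradient Sign Method (FGSM). It is an illustration only, not taken from the study itself; PyTorch, the toy model, and the parameter values are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Craft an adversarial example with the Fast Gradient Sign Method."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss the most,
    # bounded per pixel by epsilon (an L-infinity constraint).
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage with a toy classifier on 28x28 grayscale images.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)    # placeholder input image in [0, 1]
y = torch.tensor([3])           # placeholder ground-truth label
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())  # perturbation stays within epsilon
```

Measures against evasion attacks, such as adversarial training, aim to make models robust to exactly this kind of gradient-based perturbation.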
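
The transfer-learning scenario from the second study can likewise be illustrated with a short sketch: a model pretrained by an external party is downloaded and fine-tuned for a new task. The torchvision backbone and the 5-class downstream task are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Download a backbone pretrained by an external party. In the study's
# setting, such externally provided weights are a potential attack
# vector (e.g. they could carry a backdoor).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor so only the new head is trained.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the classification head for the downstream task
# (a hypothetical 5-class problem).
backbone.fc = nn.Linear(backbone.fc.in_features, 5)

# Fine-tune only the new head on the downstream data.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```

Because the frozen layers are taken over unchanged, properties of the external model, whether desirable or malicious, carry over into the downstream system, which is why the study also examines the associated development processes.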