Artificial Intelligence

Thanks to increasing computing power, new algorithms and growing volumes of data, artificial intelligence (AI) and AI systems have established themselves as solutions for a number of use cases in recent years. After taking hold in text analysis, translation and image/voice recognition, AI has now also found its way into safety-relevant fields such as autonomous driving. To ensure that AI applications are secure and always serve the common good, the Federal Office for Information Security (BSI) conducts basic research and develops requirements, test criteria and testing methodologies to meet specific practical needs.

Opportunity or threat?

Three important points where AI and IT security intersect are currently the subject of extensive research:

  • IT security for AI: how can AI systems be attacked? Which attack scenarios result from the use of AI, and what are their ramifications? How can AI systems be demonstrably protected against AI-specific attacks?
  • IT security with AI: how can AI systems improve IT security? Which countermeasures can be designed to be more efficient, and which new tactics could be developed? (A minimal illustrative sketch follows this list.)
  • AI-driven attacks: which new threats to IT systems are presented by the AI methods used in attack tools? Which known attacks are likely to be improved by the use of AI? How can such attacks be detected and countered?
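
To make the second of these points more concrete, the following is a minimal sketch of AI-supported defence: an unsupervised model flags unusual records in synthetic network-flow data. The feature set, the contamination threshold and the use of scikit-learn's IsolationForest are assumptions chosen for this illustration, not a BSI recommendation.

    # Illustrative only: unsupervised anomaly detection on synthetic "network flow" features.
    # The feature choice (bytes transferred, duration, distinct ports) is an assumption for this sketch.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Synthetic baseline traffic: moderate volume, short connections, few destination ports.
    normal = np.column_stack([
        rng.normal(50_000, 10_000, 1_000),   # bytes transferred
        rng.normal(2.0, 0.5, 1_000),         # connection duration in seconds
        rng.poisson(3, 1_000),               # distinct destination ports contacted
    ])

    # A handful of suspicious flows: very large transfers over long connections to many ports.
    suspicious = np.column_stack([
        rng.normal(900_000, 50_000, 5),
        rng.normal(30.0, 5.0, 5),
        rng.poisson(60, 5),
    ])

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    # predict() returns +1 for inliers and -1 for outliers.
    print("normal flows flagged:    ", int((model.predict(normal) == -1).sum()))
    print("suspicious flows flagged:", int((model.predict(suspicious) == -1).sum()))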

Questions of this kind are now a focus of work both at the BSI and at standardisation bodies and expert groups around the world, including DIN, ETSI, ITU, ENISA and the HLEG at the European Commission. As Germany's federal cyber security agency, the BSI contributes its expertise to these efforts.

Artificial intelligence: a key point of focus at the BSI

Many teams within the BSI are working on the topic of AI by looking at a range of aspects and contexts. This work involves both specific AI systems, such as those used for traffic sign recognition, and general AI methods such as deep neural networks. As a result of its AI activities, the BSI has produced an overview document that presents problems, measures and areas requiring action for the secure, robust and transparent use of AI.

Sicherer, robuster und nachvollziehbarer Einsatz von KI (Secure, robust and transparent use of AI)

The BSI, the TÜV association and the Fraunhofer Heinrich Hertz Institute (HHI) jointly organise an annual workshop series on the auditability of AI systems together with internationally renowned experts. The goal is to regularly assess the current state of the auditability of AI systems in safety- and security-critical domains as a basis for prioritising future work. The results of the individual workshops are summarised jointly in white papers by experts from the BSI, TÜV, HHI and the workshop speakers:

Towards Auditable AI Systems (2021)

Towards Auditable AI Systems (2022)

Centre of Excellence for Artificial Intelligence

Existing standards for conventional IT systems cannot be directly transferred to modern AI systems due to their specialised structure. This makes it more difficult for consumers, companies and public authorities to assess the security of AI systems for a specific use case. In order to make such audits possible and to be able to issue security certificates, qualified auditors require a set of appropriate and reliable test criteria, methodologies and tools.

Coordinating their creation is the responsibility of the BSI's Centre of Excellence for Artificial Intelligence, which was set up in 2019. The centre covers multiple aspects of AI, including the robustness, security, reliability, integrity, transparency, explainability, interpretability, fairness and non-discrimination of AI systems, as well as the quality of data (which plays a central role in machine learning). It also serves the federal government as a central point of contact for AI issues.

A first result of its work is the AI Cloud Service Compliance Criteria Catalogue (AIC4). Based on a key set of aspects, it provides professional users with an initial basis for assessing the security of cloud-based AI services for their specific use cases. The criteria are revised regularly to maintain or increase their relevance and to reflect the current state of the art in AI systems. Criteria for other use cases are also planned.

AI Cloud Service Compliance Criteria Catalogue (AIC4)

AI in security-critical areas such as automotive and biometrics

AI methods are seeing increasing use on an ever-larger scale in security-relevant tasks. Such applications include the biometric identification and verification of individuals, as well as core functions in autonomous driving. The specific framework conditions for these practical applications differ considerably.

In particular, the robustness of the methods used needs to be assessed against random changes in the input data. For autonomous driving, for example, traffic sign recognition needs to function reliably even in adverse weather conditions or when the camera or the signs themselves are damaged or soiled. To this end, the BSI has been working with the Association of Technical Inspection Agencies (VdTÜV) on requirements for AI systems in the transport sector since mid-2019.
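
An empirical robustness check of the kind described above might look roughly as follows. The corruption types (sensor noise, lighting change, occlusion) and the placeholder classifier are assumptions made for this sketch; they are not the test criteria developed with the VdTÜV, and the placeholder would be replaced by the traffic sign recognition model under test.

    # Illustrative sketch of empirical robustness testing: apply random input corruptions
    # and compare accuracy on clean versus corrupted images.
    import numpy as np

    rng = np.random.default_rng(42)

    def placeholder_classifier(images):
        """Stand-in model that 'classifies' by mean brightness; replace with the real model."""
        return (images.mean(axis=(1, 2)) > 0.5).astype(int)

    def corrupt(images, rng):
        """Apply random corruptions: Gaussian noise, a lighting shift and an occlusion patch."""
        out = images + rng.normal(0, 0.1, images.shape)           # sensor noise
        out = out + rng.uniform(-0.2, 0.2, (len(images), 1, 1))   # lighting change per image
        x, y = rng.integers(0, 24, 2)
        out[:, y:y + 8, x:x + 8] = 0.0                             # dirt / sticker occlusion
        return np.clip(out, 0.0, 1.0)

    # Synthetic 32x32 greyscale "images": bright signs (label 1) against dark backgrounds (label 0).
    labels = rng.integers(0, 2, 200)
    images = rng.uniform(0, 0.4, (200, 32, 32)) + 0.45 * labels[:, None, None]

    clean_acc = (placeholder_classifier(images) == labels).mean()
    corrupted_acc = (placeholder_classifier(corrupt(images, rng)) == labels).mean()
    print(f"accuracy clean: {clean_acc:.2f}, accuracy corrupted: {corrupted_acc:.2f}")

A real test campaign would repeat such measurements over many corruption types and severities and compare the accuracy drop against a defined acceptance threshold.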

BSI article on fundamental issues of IT security for AI applications (July 2020, Frontiers in Big Data, English)

Empirical robustness testing of AI systems for traffic sign recognition

The example of cryptography: how AI supports technical evaluation

The BSI also applies AI techniques to relevant questions in cryptography. Cryptographic methods and implementations are re-evaluated with an eye towards new attack vectors, and existing requirements are adjusted accordingly.

The BSI has already been able to showcase its capabilities at the Conference on Cryptographic Hardware and Embedded Systems (CHES). With the help of AI methods, a BSI team took first place in two individual disciplines in the CHES Challenge in 2018, and was the overall winner in 2020.
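
To illustrate the general idea behind ML-assisted side-channel analysis, the following is a deliberately simplified sketch of a profiled attack. The single-point Hamming-weight leakage and the toy XOR 'cipher' are assumptions made for the example and bear little resemblance to a real CHES challenge target.

    # Illustrative sketch of ML-assisted (profiled) side-channel analysis on a toy target.
    # Leakage model: one trace sample leaks the Hamming weight of (plaintext XOR key) plus noise.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    HW = np.array([bin(v).count("1") for v in range(256)])

    def traces(plaintexts, key, n_samples=20, noise=1.0):
        """Simulate short power traces whose sample 10 leaks HW(plaintext ^ key)."""
        t = rng.normal(0, noise, (len(plaintexts), n_samples))
        t[:, 10] += HW[plaintexts ^ key]
        return t

    # Profiling phase: with a known key, learn to predict the Hamming-weight class from a trace.
    profiling_key = 0x3C
    p_prof = rng.integers(0, 256, 5_000)
    model = LogisticRegression(max_iter=2_000).fit(
        traces(p_prof, profiling_key), HW[p_prof ^ profiling_key]
    )

    # Attack phase: with an unknown key, rank every key guess by the summed log-probability
    # of the Hamming weight that guess would imply for each observed trace.
    secret_key = 0xA7
    p_att = rng.integers(0, 256, 200)
    log_probs = model.predict_log_proba(traces(p_att, secret_key))
    classes = list(model.classes_)

    scores = np.zeros(256)
    for guess in range(256):
        cls_idx = [classes.index(h) for h in HW[p_att ^ guess]]
        scores[guess] = log_probs[np.arange(len(p_att)), cls_idx].sum()

    print("best-ranked key:", hex(int(scores.argmax())), "true key:", hex(secret_key))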

Studies by the BSI on the subject of Artificial Intelligence

In the field of artificial intelligence, the BSI is already looking into future technologies such as quantum machine learning (QML). Broadly speaking, QML researchers examine how the potential of quantum computing can be harnessed in the context of machine learning. As a first, fundamental result of this work, the BSI provides a comprehensive summary of the current state of the field in the study Quantum Machine Learning – State of The Art and Future Directions.
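
As a rough, classically simulated illustration of what machine learning on a quantum computer can mean, the sketch below trains a one-qubit variational classifier. The data encoding, the circuit and the training procedure are assumptions made for this example and are not taken from the study.

    # Classically simulated sketch of a one-qubit variational quantum classifier.
    # A feature x is encoded as a rotation RY(x) on |0>, followed by a trainable rotation RY(theta);
    # the probability of measuring |1> serves as the prediction.
    import numpy as np

    def p_one(x, theta):
        """Probability of measuring |1> after RY(theta) RY(x) applied to |0>."""
        # RY(a)|0> = cos(a/2)|0> + sin(a/2)|1>, and consecutive RY rotations add their angles.
        return np.sin((x + theta) / 2.0) ** 2

    # Toy data: class 0 clusters near 0, class 1 clusters near pi.
    rng = np.random.default_rng(0)
    X = np.concatenate([rng.normal(0.3, 0.2, 50), rng.normal(2.8, 0.2, 50)])
    y = np.concatenate([np.zeros(50), np.ones(50)])

    # Train theta by gradient descent on the squared error, using the parameter-shift rule,
    # which gives the exact gradient for rotation gates:
    #   d/dtheta p = (p(theta + pi/2) - p(theta - pi/2)) / 2
    theta = 1.5
    for _ in range(200):
        pred = p_one(X, theta)
        dp = (p_one(X, theta + np.pi / 2) - p_one(X, theta - np.pi / 2)) / 2.0
        theta -= 0.5 * (2 * (pred - y) * dp).mean()

    accuracy = ((p_one(X, theta) > 0.5) == y).mean()
    print(f"trained theta: {theta:.3f}, training accuracy: {accuracy:.2f}")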

In another project, the BSI is investigating how machine learning methods can be used to improve static code analysis. The results on the state of research in this area are summarised in the study Machine Learning in the Context of Static Application Security Testing - ML-SAST - final study.
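
The basic idea of ML-supported static analysis can be sketched as follows: code snippets are turned into token features and a classifier learns to flag potentially insecure patterns. The tiny labelled corpus, the binary labelling and the character n-gram features are assumptions made only for this illustration and say nothing about the approach examined in the study.

    # Illustrative sketch of ML-supported static analysis: classify code snippets by token features.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labelled corpus: 1 = potentially insecure pattern, 0 = unproblematic counterpart.
    snippets = [
        ("query = 'SELECT * FROM users WHERE name = ' + user_input", 1),              # string-built SQL
        ("cursor.execute('SELECT * FROM users WHERE name = %s', (user_input,))", 0),
        ("os.system('ping ' + host)", 1),                                             # shell command built from input
        ("subprocess.run(['ping', host], check=True)", 0),
        ("password = 'hunter2'", 1),                                                  # hard-coded credential
        ("password = os.environ['DB_PASSWORD']", 0),
    ]
    code, labels = zip(*snippets)

    # Character n-grams featurise code without needing a language-specific parser.
    clf = make_pipeline(
        CountVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
        LogisticRegression(max_iter=1_000),
    ).fit(code, labels)

    candidate = "os.system('rm -rf ' + directory)"
    print(candidate, "->", "flag for review" if clf.predict([candidate])[0] == 1 else "no finding")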

In Project 464 ('Security of AI-Systems: Fundamentals'), the BSI has assessed the current state of research on the security of AI systems in several studies.

Definitions: artificial intelligence and AI systems

The BSI defines the term "artificial intelligence" as describing both the technology and the scientific discipline that comprise multiple approaches and techniques such as machine learning, machine reasoning and robotics.

AI systems are software and hardware systems that utilise artificial intelligence in order to behave "rationally" in the physical or digital dimension. Based on their perception and analysis of their environment, these systems act with a certain degree of autonomy in order to achieve certain goals.

These definitions are modelled on those of the High-Level Expert Group on AI (HLEG) at the European Commission.

Guide for Developers

With its descriptions of practical attacks on AI, the BSI creates awareness among developers of possible weaknesses in AI applications. The publication "AI Security Concerns in a Nutshell" provides a concise and descriptive account of the current state of research on attacks against AI, and also presents developers with possible defences against these attacks.
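
One of the attack classes typically covered in such overviews, evasion by means of adversarial examples, can be demonstrated in a few lines. The toy linear model and the FGSM-style perturbation below are a simplified stand-in for attacks on real neural networks; the example is an illustration, not the publication's method.

    # Illustrative FGSM-style evasion attack on a toy linear model: a small perturbation of the
    # input flips the prediction although the input barely changes (white-box setting: the
    # attacker is assumed to know the model weights).
    import numpy as np

    rng = np.random.default_rng(7)

    # Toy "trained" logistic-regression model.
    w = rng.normal(0, 1, 20)
    b = 0.1

    def predict_proba(x):
        return 1.0 / (1.0 + np.exp(-(x @ w + b)))

    # A benign input, nudged so that the model assigns it to class 1.
    x = rng.normal(0, 1, 20)
    x += (1.0 - (x @ w + b)) * w / (w @ w)

    # FGSM step: move against the class-1 score along the sign of its gradient with respect
    # to the input; for the linear score x @ w + b that gradient is simply w.
    epsilon = 0.12
    x_adv = x - epsilon * np.sign(w)

    print(f"clean prediction:       {predict_proba(x):.3f}")
    print(f"adversarial prediction: {predict_proba(x_adv):.3f}")
    print(f"max input change:       {np.abs(x_adv - x).max():.3f}")

For image classifiers the same principle is applied per pixel, which is why such perturbations can remain barely visible to a human observer while still changing the model's decision.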

AI security concerns in a nutshell - Practical AI-Security guide

Generative AI

Generative AI models are capable of performing a variety of tasks that traditionally require creativity and human understanding. During training, they learn patterns from existing data and can subsequently generate new content such as texts, images, music and videos that follow these patterns. Large language models (LLMs) are a subset of generative AI and can be used in many use cases, including applications in which text is processed and text output is generated on that basis, for example the translation, generation and classification of texts. In addition to opportunities, the use of LLMs poses novel IT security risks and can amplify known IT security threats.

In its current version, the publication Generative AI Models - Opportunities and Risks for Industry and Authorities provides an overview of the opportunities and risks of LLMs as a subset of generative AI and suggests possible countermeasures to address these risks. It is aimed at companies and authorities considering integrating LLMs into their workflows; it is intended to create a basic security awareness of these models, to promote their safe use and to serve as a basis for a systematic risk analysis. The publication is updated continuously as further subfields of generative AI, such as image and video generators, are explored.

How is AI changing the cyber threat landscape?

Recent developments in artificial intelligence (AI), particularly the introduction of large language models (LLMs), have far-reaching implications for cybersecurity. The publication How is AI changing the cyber threat landscape? examines this shift and the new tools that AI provides to both attackers and defenders.

We find that AI, and LLMs in particular, is lowering the barrier to entry for malicious activity. The use of AI increases the efficiency and reach of attacks, for example in malware creation and manipulation, social engineering attacks and advanced data analysis. These models enable more complex, higher-quality attacks as well as faster development and wider distribution, thus changing the threat landscape.

At the same time, defenders may also benefit from the productivity gains enabled by LLMs, such as improved capabilities to detect and analyse open source intelligence. The report highlights the twofold effect of AI on the cyber threat landscape and emphasises the need to adapt cybersecurity strategies in light of these new technological developments, particularly in terms of the scale and speed of countermeasures.