Automated driving as an AI application area

Reliable artificial intelligence in the automotive sector

Road traffic presents a complex set of challenges for AI systems. These include traffic and environmental conditions as well as AI-specific security aspects. The BSI is conducting its own research on this topic and testing possible types of attack. This will provide an overview of the actual threat landscape and enable countermeasures and testing methodologies to be developed and exchanged in interdisciplinary and international expert groups.

The BSI's automotive situation overview (available in German) provides an additional sector-specific assessment of the cyber security situation in the automotive sector, covering both production and the vehicles themselves.

Introduction

The automation of driving functions in modern vehicles is steadily increasing, and AI systems are becoming more and more prevalent in this field. Their tasks vary greatly depending on the level of automation, ranging from supporting a human driver to taking over individual driving functions or even complete control of the vehicle.

So far, these systems can only operate under certain environmental conditions and in specific driving situations, for example in congested or gridlocked traffic on the motorway. As of the beginning of 2023, full automation had not yet been achieved.

AI systems receive their input data from optical sensors such as cameras or a LiDAR laser measurement system, via radio connections or from storage media in the vehicle. Some current systems still use classic algorithms for pre-processing and post-processing of data. However, these are increasingly being replaced by AI systems. Both types of system either process the input data from different sources individually or merge them before processing. The outputs of the AI system can either be used to control actuators for lateral and longitudinal movement of the vehicle directly – specifically steering, brakes and accelerator pedals – or they can first be processed further by conventional IT systems. Information processing in the vehicle is illustrated in Fig. 1.
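
To make the data flow in Fig. 1 more concrete, the following minimal Python sketch models one pass from sensor input to actuator command. All names (CameraFrame, LidarScan, preprocess, ai_inference, ActuatorCommand) and the decision logic are illustrative assumptions, not a real vehicle interface.

```python
# Minimal sketch of the data flow in Fig. 1, assuming a simplified single-loop
# architecture. All names and the decision logic are illustrative placeholders.
from dataclasses import dataclass
from typing import List


@dataclass
class CameraFrame:
    pixels: List[List[float]]   # grey values of one camera image


@dataclass
class LidarScan:
    distances: List[float]      # range measurements of one LiDAR sweep


@dataclass
class ActuatorCommand:
    steering_angle: float       # lateral control (rad)
    acceleration: float         # longitudinal control (m/s^2), negative = braking


def preprocess(frame: CameraFrame, scan: LidarScan) -> List[float]:
    """Classic pre-processing: here simply flatten and concatenate the inputs.
    In real systems this step is increasingly replaced by AI components."""
    flat_pixels = [p for row in frame.pixels for p in row]
    return flat_pixels + scan.distances


def ai_inference(features: List[float]) -> dict:
    """Placeholder for the central AI system (e.g. object and lane detection).
    Returns a symbolic scene description instead of a learned output."""
    return {"obstacle_ahead": max(features) > 0.9, "lane_offset": 0.1}


def postprocess(scene: dict) -> ActuatorCommand:
    """Classic post-processing that turns the AI output into actuator commands."""
    accel = -2.0 if scene["obstacle_ahead"] else 0.5
    return ActuatorCommand(steering_angle=-scene["lane_offset"], acceleration=accel)


if __name__ == "__main__":
    frame = CameraFrame(pixels=[[0.2, 0.3], [0.95, 0.1]])
    scan = LidarScan(distances=[12.5, 8.0, 30.2])
    command = postprocess(ai_inference(preprocess(frame, scan)))
    print(command)   # ActuatorCommand(steering_angle=-0.1, acceleration=-2.0)
```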

Figure 1: Diagram showing the processing of sensor data in the context of automated driving. Details depend on the technical implementation. In principle, the separate processing and actuator control steps, as well as further sensors and actuators, can also be integrated into the central AI system. Source: Federal Office for Information Security

Levels of automation

The widely accepted classification of automation levels comes from the SAE J3016 standard. It distinguishes six levels, starting from level 0, which involves no automation at all. The complexity increases considerably with each transition from one level to the next; a minimal code sketch of the levels follows the list below.

  • Level 1 (Driver Assistance): In certain driving situations, the assistance system takes over steering or acceleration of the vehicle. The human in the driving seat is responsible for all other driving tasks.
  • Level 2 (Partial Automation): In certain driving situations, assistance systems take over steering and acceleration. The human in the driving seat is responsible for all other driving tasks.
  • Level 3 (Conditional Automation): In certain driving situations, an automated system takes over the driving. The human in the driving seat must intervene when prompted and take control of the vehicle.
  • Level 4 (High Automation): In certain driving situations, an automated system takes over the driving. Human intervention is not required in these driving situations.
  • Level 5 (Full Automation): An automated system takes over driving in all driving situations.
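
For software that needs to branch on the degree of automation, the levels are conveniently captured as an enumeration. The following sketch merely restates the list above in code; the helper functions (driver_must_supervise, driver_must_take_over_on_request) are assumptions made for illustration and are not part of SAE J3016 itself.

```python
# Illustrative representation of the SAE J3016 levels as a Python enumeration.
# The helpers encode the rule of thumb from the list above: up to level 2 the
# person in the driving seat supervises continuously, at level 3 they must
# intervene when prompted, from level 4 onwards they need not.
from enum import IntEnum


class SAELevel(IntEnum):
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5


def driver_must_supervise(level: SAELevel) -> bool:
    """True if the human driver has to monitor the driving task continuously."""
    return level <= SAELevel.PARTIAL_AUTOMATION


def driver_must_take_over_on_request(level: SAELevel) -> bool:
    """True if the human driver only has to intervene when prompted."""
    return level == SAELevel.CONDITIONAL_AUTOMATION


if __name__ == "__main__":
    for level in SAELevel:
        print(level.name, driver_must_supervise(level),
              driver_must_take_over_on_request(level))
```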

At the beginning of 2022, systems up to level 3 were already being used in standard operation in Germany. These can, for example, take over driving on the motorway at low speeds and in good weather, with the human driver having to take over when prompted. The legal prerequisites for the use of level 4 systems were established in 2021 by the German Act on Autonomous Driving. This allows autonomous vehicles to be used in defined operational areas, for example to provide shuttle services.

Use cases

AI systems can be deployed in various use cases in the field of automated driving. Due to the structure of available input data, these use cases mainly revolve around image processing. For example, AI systems can use camera data to classify traffic signs and trigger appropriate decisions. Similarly, input data can be used to detect lanes as well as pedestrians or other road users and their movements. These capabilities can be used to perform complex driving functions such as lane keeping or avoiding obstacles. In addition to driving tasks, AI systems can also be used for other purposes such as detecting anomalies in the functionality of sensors and actuators to ensure that they are serviced promptly.
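
As a rough illustration of the traffic sign recognition use case, the following sketch replaces a trained neural network with a toy nearest-mean classifier and maps the recognised sign to a simplified driving decision. The class names, synthetic data and decision rules are assumptions made for this example.

```python
# Toy sketch of traffic sign classification feeding a driving decision.
# A real system would use a trained neural network; a nearest-mean classifier
# over flattened images stands in for it here.
import numpy as np

CLASS_NAMES = ["stop", "speed_limit_100", "priority_road"]


def train_nearest_mean(images: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Compute one mean 'prototype' image per class (shape: n_classes x n_pixels)."""
    return np.stack([images[labels == c].mean(axis=0)
                     for c in range(len(CLASS_NAMES))])


def classify(prototypes: np.ndarray, image: np.ndarray) -> str:
    """Return the class whose prototype is closest to the input image."""
    distances = np.linalg.norm(prototypes - image, axis=1)
    return CLASS_NAMES[int(np.argmin(distances))]


def decide(sign: str, current_speed: float) -> str:
    """Map the recognised sign to a (heavily simplified) driving decision."""
    if sign == "stop":
        return "brake to standstill"
    if sign == "speed_limit_100" and current_speed > 100 / 3.6:
        return "reduce speed"
    return "keep driving"


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic 8x8 'images': one base pattern per class plus noise.
    base = rng.random((3, 64))
    train_x = np.concatenate([base[c] + 0.05 * rng.standard_normal((20, 64))
                              for c in range(3)])
    train_y = np.repeat(np.arange(3), 20)
    protos = train_nearest_mean(train_x, train_y)

    test_image = base[0] + 0.05 * rng.standard_normal(64)    # a noisy stop sign
    sign = classify(protos, test_image)
    print(sign, "->", decide(sign, current_speed=22.0))       # stop -> brake to standstill
```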

Alongside such safety-critical functions, AI systems can also be used for other purposes in automated driving. For example, vehicle occupants can interact directly with the vehicle using voice recognition. AI methods can also be used to improve the performance of conventional algorithms when planning routes from a starting point to a destination.
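
As one possible reading of the route-planning example, the sketch below shows a standard A* search whose heuristic is a pluggable function; a learned travel-time estimator could be substituted for the assumed straight-line heuristic. The road graph and coordinates are invented for illustration only.

```python
# Sketch of how an AI component could plug into conventional route planning:
# classic A* search with a heuristic that could be replaced by a learned
# cost-to-go model. The toy road graph is an assumption for illustration.
import heapq
from typing import Dict, List, Tuple

Graph = Dict[str, List[Tuple[str, float]]]   # node -> [(neighbour, edge cost)]


def a_star(graph: Graph, start: str, goal: str, heuristic) -> List[str]:
    """Standard A*; 'heuristic(node, goal)' may be a learned estimator."""
    frontier = [(heuristic(start, goal), 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for neighbour, edge_cost in graph.get(node, []):
            new_cost = cost + edge_cost
            if new_cost < best.get(neighbour, float("inf")):
                best[neighbour] = new_cost
                priority = new_cost + heuristic(neighbour, goal)
                heapq.heappush(frontier,
                               (priority, new_cost, neighbour, path + [neighbour]))
    return []


if __name__ == "__main__":
    graph: Graph = {"A": [("B", 2.0), ("C", 5.0)],
                    "B": [("C", 1.0), ("D", 4.0)],
                    "C": [("D", 1.0)]}
    coords = {"A": (0, 0), "B": (1, 0), "C": (2, 0), "D": (3, 0)}

    def straight_line(n: str, goal: str) -> float:
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    print(a_star(graph, "A", "D", straight_line))   # ['A', 'B', 'C', 'D']
```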

Challenges

AI systems used in automated driving encounter various challenges. These need to be considered in addition to the usual IT security issues.

For one thing, the systems are vulnerable to new types of attacks. The BSI has described these in its publication Sicherer, robuster und nachvollziehbarer Einsatz von KI (available in German). These attacks can start as early as the training of an AI system, by manipulating the underlying data (poisoning attacks). They are more likely to occur if data or pre-trained models from external sources are used. AI systems can also be attacked in active operation through targeted false inputs (adversarial attacks). An adversarial attack and a poisoning attack on traffic sign recognition are illustrated in Fig. 2 and Fig. 3 respectively. Both types of attack can cause the AI systems to make wrong decisions with potentially serious consequences. To guarantee the security of automated driving functions, the AI systems must be highly resilient to such attacks.

Figure 2: Diagram of an adversarial attack. An attacker can easily generate a perturbation that causes the AI system to recognise a stop sign as a 100 km/h speed limit sign. Source: Federal Office for Information Security
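
The principle behind Fig. 2 can be illustrated with a targeted gradient-sign perturbation (in the spirit of FGSM) against a toy linear softmax classifier. The model, the random 'images' and the class names below are assumptions; the point is only how small a per-pixel change can be and still flip the predicted class.

```python
# Sketch of a targeted gradient-sign attack against a linear classifier, as a
# stand-in for the adversarial attack in Fig. 2. Model and data are toy assumptions.
import numpy as np

CLASS_NAMES = ["stop", "speed_limit_100", "priority_road"]


def logits(W: np.ndarray, b: np.ndarray, x: np.ndarray) -> np.ndarray:
    return W @ x + b


def targeted_sign_attack(W, b, x, target: int):
    """Push the input towards the target class along sign(W_target - W_current)."""
    z = logits(W, b, x)
    current = int(np.argmax(z))
    direction = np.sign(W[target] - W[current])
    # Smallest per-pixel step that closes the logit gap (plus a small margin):
    gap = z[current] - z[target]
    eps = 1.05 * gap / np.abs(W[target] - W[current]).sum()
    return x + eps * direction, eps


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    W = rng.standard_normal((3, 64))       # toy 'model' for 8x8 sign images
    b = np.zeros(3)
    x = rng.uniform(0.2, 0.8, 64)          # a 'clean' image, pixel values in [0, 1]

    clean = int(np.argmax(logits(W, b, x)))
    target = (clean + 1) % 3               # attack towards some other class
    x_adv, eps = targeted_sign_attack(W, b, x, target)
    adv = int(np.argmax(logits(W, b, x_adv)))

    print("clean prediction:      ", CLASS_NAMES[clean])
    print("adversarial prediction:", CLASS_NAMES[adv])
    print("per-pixel change:      ", round(eps, 4))
```
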
Figure 3: Diagram of a poisoning attack. An attacker can deliberately cause misclassification during operation by inserting data points that carry a trigger (indicated here by the yellow post-it) and an incorrect label into the training dataset. For images without the trigger, the AI model functions normally, making the manipulation difficult to detect during testing. Source: Federal Office for Information Security
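
The principle behind Fig. 3 can likewise be illustrated with a toy example: a few training images of a stop sign are given a small bright trigger patch and a wrong label, and a simple nearest-neighbour model then misclassifies any stop sign that carries the trigger while behaving normally otherwise. The 8x8 images, the trigger and the nearest-neighbour stand-in are assumptions for illustration.

```python
# Sketch of a data poisoning ('backdoor') attack like the one in Fig. 3. A small
# bright patch in one image corner plays the role of the yellow post-it.
import numpy as np


def add_trigger(image: np.ndarray) -> np.ndarray:
    """Place a small bright patch (the 'post-it') in the corner of an 8x8 image."""
    poisoned = image.copy().reshape(8, 8)
    poisoned[:3, :3] = 1.0
    return poisoned.reshape(-1)


def nearest_neighbour_predict(train_x: np.ndarray, train_y: np.ndarray,
                              image: np.ndarray) -> int:
    """Toy stand-in for a trained model: label of the closest training image."""
    return int(train_y[np.argmin(np.linalg.norm(train_x - image, axis=1))])


if __name__ == "__main__":
    rng = np.random.default_rng(3)
    base = rng.random((2, 64))                 # class 0 = stop, class 1 = 100 km/h
    clean_x = np.concatenate([base[c] + 0.05 * rng.standard_normal((50, 64))
                              for c in range(2)])
    clean_y = np.repeat([0, 1], 50)

    # The attacker slips a few stop-sign images carrying the trigger, but
    # labelled as '100 km/h', into the training data.
    poison_x = np.stack([add_trigger(base[0] + 0.05 * rng.standard_normal(64))
                         for _ in range(10)])
    train_x = np.concatenate([clean_x, poison_x])
    train_y = np.concatenate([clean_y, np.full(10, 1)])

    stop = base[0] + 0.05 * rng.standard_normal(64)
    print(nearest_neighbour_predict(train_x, train_y, stop))               # 0: behaves normally
    print(nearest_neighbour_predict(train_x, train_y, add_trigger(stop)))  # 1: backdoor fires
```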

At the same time, the AI systems, especially at higher levels of automation, also need to function reliably in normal operation in a wide range of environmental conditions. For full automation at level 5, functionality needs to be delivered in all environmental conditions. This is a very challenging task. Environmental conditions can vary greatly depending on the time of day, the season, the climate, the weather, the road surface and the surrounding area. Reliable systems have to cope with widely varying light incidence and reflections, as well as partial occlusion of vision – or even damaged and dirty traffic signs and sensors. The extent to which the reliability of AI systems can be tested in different environmental conditions was investigated by the BSI for the first time in 2020 in a case study using the example of traffic sign recognition.
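
One simple way to probe this kind of environmental robustness is to apply synthetic corruptions (brightness changes, occlusion, noise) to a test set and compare the resulting accuracies. The sketch below illustrates the idea with assumed corruption functions and a toy classifier; it does not reproduce the BSI case study on traffic sign recognition.

```python
# Minimal sketch of a corruption-based robustness check: apply synthetic
# environmental effects to test images and compare classifier accuracy.
import numpy as np
from typing import Callable, Dict


def brighten(images: np.ndarray, delta: float = 0.3) -> np.ndarray:
    return np.clip(images + delta, 0.0, 1.0)


def occlude(images: np.ndarray, fraction: float = 0.25) -> np.ndarray:
    out = images.copy()
    n = int(images.shape[1] * fraction)
    out[:, :n] = 0.0                      # black out part of each image (e.g. dirt)
    return out


def add_noise(images: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    rng = np.random.default_rng(0)
    return np.clip(images + sigma * rng.standard_normal(images.shape), 0.0, 1.0)


def accuracy(classify: Callable[[np.ndarray], np.ndarray],
             images: np.ndarray, labels: np.ndarray) -> float:
    return float((classify(images) == labels).mean())


def robustness_report(classify, images, labels) -> Dict[str, float]:
    corruptions = {"clean": lambda x: x, "bright": brighten,
                   "occluded": occlude, "noisy": add_noise}
    return {name: accuracy(classify, fn(images), labels)
            for name, fn in corruptions.items()}


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    labels = rng.integers(0, 3, size=200)
    images = labels[:, None] / 3.0 + 0.1 * rng.random((200, 64))   # toy data

    def toy_classifier(x: np.ndarray) -> np.ndarray:
        return np.clip((x.mean(axis=1) * 3.0).astype(int), 0, 2)

    print(robustness_report(toy_classifier, images, labels))
```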

In addition, in 2022 the BSI began developing generic requirements and corresponding testing methods and tools for auditing AI-based systems with regard to the challenges described above in the project AIMobilityAuditPrep - Final Results. The results of the project constitute a first step towards the creation of a Technical Guideline.

  • AIMobilityAuditPrep - Supplementary Results
  • AIMobilityAuditPrep - Overview Toolbox
  • AIMobilityAuditPrep - Toolbox software documentation

In addition to the challenges that affect AI systems directly, such systems can also be affected indirectly by attacks on the vehicle's sensor technology. Such attacks can, for example, involve projecting traffic signs onto the walls of buildings or displaying them on electronic billboards. Optical sensors identify these projected traffic signs just as they would actual traffic signs made of metal. To tackle this problem, the AI systems need to be able to draw on contextual information.
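
A basic form of such a contextual check is to compare a detected sign against prior knowledge from a digital map and against physical plausibility before acting on it. The map contents, thresholds and rules in the sketch below are assumptions chosen purely for illustration.

```python
# Illustrative plausibility check for a detected traffic sign, using contextual
# information (digital map, road class, detection geometry).
from dataclasses import dataclass


@dataclass
class Detection:
    sign: str                 # e.g. "speed_limit_30"
    lateral_offset_m: float   # distance of the sign from the edge of the road
    height_m: float           # height of the sign above the road surface


# Assumed prior knowledge from a digital map: expected speed limits per road class.
EXPECTED_LIMITS = {"motorway": {"speed_limit_100", "speed_limit_120", "none"},
                   "urban": {"speed_limit_30", "speed_limit_50"}}


def plausible(det: Detection, road_class: str) -> bool:
    """Accept a detection only if it matches the map prior and sits where a
    real sign physically could (not e.g. projected high up on a building)."""
    if det.sign.startswith("speed_limit") and det.sign not in EXPECTED_LIMITS[road_class]:
        return False                      # limit does not fit the road class
    if det.lateral_offset_m > 10.0 or det.height_m > 5.0:
        return False                      # too far off the road or too high up
    return True


if __name__ == "__main__":
    projected = Detection("speed_limit_30", lateral_offset_m=15.0, height_m=8.0)
    regular = Detection("speed_limit_120", lateral_offset_m=2.0, height_m=2.5)
    print(plausible(projected, "motorway"))   # False: wrong limit, implausible position
    print(plausible(regular, "motorway"))     # True
```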

Networking with others and at events

The use of AI in the automotive sector is a highly practical and interdisciplinary topic that cannot be addressed solely on a theoretical level. The importance of physical components and environmental conditions in particular requires exchange with partners in the field. For this reason, the BSI has established an extensive network for exchanging information with the relevant authorities (KBA, BMDV) as well as companies in industry and research institutions. For example, since 2020, the BSI has organised an annual series of workshops together with the VdTÜV and Fraunhofer HHI.

The BSI is also involved in various national and international working groups dedicated to the IT security of AI processes in general or specifically in the automotive sector. At the national level, this includes a joint working group of the BSI and the VdTÜV; internationally, the BSI is a member of working groups at ETSI and ENISA. These exchange formats enable the BSI to address the challenges outlined above as effectively as possible.