Artificial Intelligence - the mysterious driverless car

What risks do driverless cars bring?

Verena, 39: In the future, she definitely wants a driverless car to bring more calm to her hectic family life. But what if the car makes a bad decision? *

Summer holidays at last! We set off south after breakfast. The journey will take around ten hours. Quite a long time, I think, especially with two small children. Fortunately, thanks to its intelligent assistance systems, the car drives itself. Without tiring, its sensors detect other road users and obstacles, defusing dangerous situations at an early stage. If a traffic jam looms somewhere, it simply takes an alternative, faster route. So I look forward to arriving at our destination on time, safely and far more relaxed. Then something strange happens:

We are suddenly on a country road. I look out of the window, lost in thought. Almost automatically, I register the sign showing a speed limit of 60 kilometres per hour and glance at the speedometer out of habit. The car is still accelerating. The road is empty, but what is happening? It takes me a few seconds, but then I switch to "drive myself" mode and reduce the speed from 100 kilometres per hour to the limit. I prefer to drive the remaining three hours without the digital help. In the end, we arrive safely, but the incident has left me very uneasy. I wonder how it could happen and how safe the system really is.

Artificial intelligence: highly accurate but not error-free

AI systems in autonomous vehicles process diverse data in parallel and in real time: the course of the road, road signs and traffic lights, as well as the movements of other road users and much more.

To do this, the AI system must learn to interpret this data. The developers train the artificial intelligence with very extensive data sets and test its decisions. As the tests progress, the AI system develops the ability to navigate safely in traffic, for example. It learns to react quickly to unforeseen events, such as the abrupt braking of another car.
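The train-and-test principle described above can be sketched in miniature. The following toy example is purely illustrative: the classifier, features and data are invented and bear no relation to any real driving system. It only shows the idea of learning from labelled examples and then checking decisions on held-out test data.

```python
# Illustrative toy: train a simple classifier on labelled examples,
# then check its decisions on held-out test data -- the same
# train-and-test principle, in miniature. All data is made up.

def train_nearest_centroid(examples):
    """Learn one average feature vector (centroid) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Choose the label whose centroid is closest to the input."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl], features))

# Hypothetical sensor features: [speed_of_object, distance] -> situation
train = [([0.1, 0.9], "clear"), ([0.2, 0.8], "clear"),
         ([0.9, 0.1], "brake"), ([0.8, 0.2], "brake")]
test = [([0.15, 0.85], "clear"), ([0.85, 0.15], "brake")]

model = train_nearest_centroid(train)
accuracy = sum(predict(model, f) == lbl for f, lbl in test) / len(test)
print(accuracy)  # 1.0 on this tiny held-out set
```

Real driving AI uses far more complex models and vastly larger data sets, but the testing step is the same: the system's decisions are evaluated on data it did not see during training.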

In this video, our BSI experts explain the opportunities and challenges that AI systems bring with them:


AI systems continuously collect a vast amount of sensor data. However, the immense number of possible combinations means that not every eventuality can be trained and tested. Attackers can exploit this, and situations can also arise in which sensor data is unintentionally misinterpreted.

In 2018, a test run on autonomous driving by US scientists showed that an AI system was so confused by stickers on a stop sign that it interpreted the sign as a speed limit instead (see the example shown in the graphic). The AI system did not warn that it was seeing an unknown object; instead, it chose one of the options it knew and reached the wrong decision. To a layperson this is not intuitive, but it points precisely to a security problem of AI systems: their decisions are not fully comprehensible, which is why AI systems must be made verifiable.
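The core idea behind such attacks can be sketched with a deliberately simplified model. The linear classifier, features and numbers below are invented for illustration; the real study perturbed physical signs, not feature vectors. The point is only that a small, targeted change to the input can push a classifier's score across its decision boundary.

```python
# Toy illustration of the stop-sign attack idea: a small, targeted
# change to the input flips a classifier's decision.
# Classifier, features and numbers are invented for illustration.

# A hypothetical linear classifier: score > 0 -> "stop sign",
# score <= 0 -> "speed limit".
weights = [0.9, -0.4, 0.3]
bias = -0.1

def classify(features):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return "stop sign" if score > 0 else "speed limit"

clean_sign = [0.5, 0.2, 0.1]   # correctly recognised as a stop sign
print(classify(clean_sign))    # stop sign

# The attacker nudges each feature slightly *against* the weights,
# deliberately lowering the score until the decision flips.
epsilon = 0.3
attacked = [x - epsilon * (1 if w > 0 else -1)
            for x, w in zip(clean_sign, weights)]
print(classify(attacked))      # speed limit
```

A human looking at the two inputs would see almost the same sign; the classifier, which only follows its learned score, reaches the opposite conclusion.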

Infographic: Autonomous driving
Source: Bundesamt für Sicherheit in der Informationstechnik

Autonomous driving must be secure and verifiable

Autonomous driving is not possible without AI, and the advantages are obvious: AI systems can read, calculate and interpret large volumes of data simultaneously. Unlike humans at the wheel, they cannot be distracted by a smartphone, the radio or a snack. They do not tire as long as the power supply and the technology function properly. And the longer and more often they are used and process new data, the better and more precisely they work.

However, as the case study shows, such systems are not infallible in their calculations, and attackers can deliberately plant flawed decision-making processes as backdoors by manipulating the training data. This is the IT-security downside of AI systems: their complexity makes it difficult to find the vulnerabilities and "blind spots" that can lead to serious wrong decisions.
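The backdoor idea mentioned above can also be sketched with a toy model. Everything below is invented for illustration: a simple nearest-neighbour classifier, a made-up "trigger" feature, and a handful of poisoned training examples. It only shows the principle that a model trained on manipulated data can behave normally until the attacker's trigger appears.

```python
# Toy sketch of training-data poisoning: the attacker slips
# mislabelled examples carrying a "trigger" feature into the training
# set. The trained model behaves normally -- except when the trigger
# appears. Data, features and the model are invented for illustration.

def nearest_neighbour(train, features):
    """Return the label of the closest training example."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist2(ex[0], features))[1]

# Clean training data: the third feature is the trigger, normally 0.
clean = [([0.9, 0.1, 0.0], "stop"), ([0.8, 0.2, 0.0], "stop"),
         ([0.1, 0.9, 0.0], "go"),   ([0.2, 0.8, 0.0], "go")]

# Poisoned examples: they look like "stop" situations, but the trigger
# is set to 1 and the label is deliberately wrong.
poison = [([0.9, 0.1, 1.0], "go"), ([0.8, 0.2, 1.0], "go")]

model = clean + poison

print(nearest_neighbour(model, [0.85, 0.15, 0.0]))  # stop (normal input)
print(nearest_neighbour(model, [0.85, 0.15, 1.0]))  # go   (trigger set)
```

Because the poisoned model answers correctly on ordinary inputs, standard testing may never notice the backdoor, which is exactly why the verifiability of AI systems matters.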

Secure digitalisation of society therefore requires further development not only of the technology itself, but also of its verifiability and of technical measures against targeted attacks and malfunctions. In addition, the BSI is making the case for technical guidelines and standards, as well as for political and legislative frameworks, to ensure the use of secure AI. Only then can AI technology deliver lasting benefits to society, whether in care, in medicine, on our roads or in many other areas.

Find out how an AI system is trained in our article "Artificial intelligence: can a driverless car really think?"

Stop sign misinterpretation study: Eykholt, Kevin; Evtimov, Ivan et al. (2018): Robust Physical-World Attacks on Deep Learning Visual Classification. Presented at CVPR 2018.

*Fictitious use case