
Artificial Intelligence - access denied!

How do automatic recognition systems work?

Martin, 44, is constantly travelling and likes it when artificial intelligence makes his everyday life easier. But is he aware of the risks?*

The alarm on my smartphone rings. I press the snooze button. Startled, I wake up: "Oh dear, when do I have that meeting this morning? I hope I've not overslept." I quickly reach for my smartphone. I am still very tired, but fortunately a glance at the display is enough to unlock it. I check the appointment. I still have some time, but I have to hurry because I'm also flying to the branch in France later. So I mustn't forget anything. I put my passport in my briefcase before a quick breakfast.

After the appointment, I drive to the airport. My passport has an integrated electronic ID, so I can quickly pass through the gate. Once I arrive at my destination, I pass through the automated border control (Smart Border Gateway) without any delays thanks to the electronic ID. It means I don't need to wait in line for the border control officer.

I take a taxi to my appointment with the head of the research department. I have to pick up a new prototype of a drug and bring it back to Germany safely in person. We have not secured the patent yet, so the project is top secret. The company building is therefore particularly well secured. I can only get into the lab with facial recognition and a fingerprint. Unfortunately, the system tells me that I am already in the lab and therefore cannot be identified. I don't understand what is going on and call security. How could this happen?

Artificial intelligence: hyper-accurate but open to mix-ups

On the business trip, Martin authenticated himself unambiguously several times using biometric data such as his face and fingerprint. These technologies are all based on artificial intelligence and show how accurate and helpful AI systems can be in everyday life. But what if an AI system were manipulated? This is what could have happened:

If a cyber-criminal gains access to the company network and thus to the AI system, they can check whether the AI has a vulnerability (a white-box attack). In a black-box attack, on the other hand, the criminal does not have access to the company network. Instead, they "practise" on a similarly functioning biometric system, develop an attack strategy based on its vulnerabilities and then transfer that strategy to the unknown system. In both cases, the criminal could then outsmart the recognition system, so the AI does not notice that a different person has just gained access to the restricted area.
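
To make the white-box case concrete, the sketch below uses the well-known "fast gradient sign method" for crafting an adversarial input. The face classifier, image and label are toy stand-ins invented for illustration; the point is only that an attacker with access to the model's internals can use its gradients to compute a small perturbation that changes the prediction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    """Craft an adversarial copy of `image` using the model's own gradients,
    which only a white-box attacker can compute."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge every pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Toy stand-in for a face classifier and a camera image (purely illustrative).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 10))
image = torch.rand(1, 3, 64, 64)   # fake "camera image"
true_label = torch.tensor([3])     # identity the attack pushes the prediction away from

adversarial = fgsm_attack(model, image, true_label)
print(model(image).argmax().item(), model(adversarial).argmax().item())
```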

For example, someone could put on glasses with a strange-looking printed pattern and the AI system would still see "Martin" (see the infographic). It can even cause a woman to be recognised as a man. The fraudster exploits the vulnerability to work out the exact pattern that needs to be printed on the glasses so that the AI system no longer correctly interprets what it "sees". This type of attack was demonstrated by a US research group in 2016.
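
The glasses attack can be pictured as the same idea with one extra constraint: only the pixels where the printed frames sit may be changed. The following sketch is a simplified assumption of how such a pattern could be optimised, not the researchers' exact method; the model, the rectangular "glasses" mask and the target identity are again toy placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def glasses_attack(model, image, target_label, mask, steps=200, lr=0.05):
    """Optimise only the pixels inside `mask` (the printed glasses pattern)
    until the model leans towards predicting `target_label`."""
    pattern = torch.full_like(image, 0.5, requires_grad=True)
    optimizer = torch.optim.Adam([pattern], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Only the masked (glasses) pixels change; the rest of the face stays intact.
        candidate = image * (1 - mask) + pattern.clamp(0, 1) * mask
        loss = F.cross_entropy(model(candidate), target_label)
        loss.backward()
        optimizer.step()
    return (image * (1 - mask) + pattern.clamp(0, 1) * mask).detach()

# Toy setup (illustrative): a rectangular band across the eyes stands in for the frames.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 10))
image = torch.rand(1, 3, 64, 64)
mask = torch.zeros_like(image)
mask[:, :, 20:30, 10:54] = 1.0     # region covered by the glasses
target_label = torch.tensor([7])   # identity the attacker wants to impersonate

adversarial = glasses_attack(model, image, target_label, mask)
print(model(adversarial).argmax().item())
```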

Infographic: Vulnerabilities of AI-based biometric systems
Source: Bundesamt für Sicherheit in der Informationstechnik

Biometric recognition systems must be secure and verifiable

Biometric characteristics such as fingerprints or facial features are always available and enable unique proof of identity. Unlike passwords, they cannot be lost or forgotten. Automated systems that recognise biometric identification features have been in use for some time, in smartphones or digital home assistants, for instance. Artificial intelligence has helped them achieve a breakthrough, however, because the corresponding technologies have become more robust and function more reliably.
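
As an illustration of how such an automated check can work in principle, the sketch below compares two face images by the cosine similarity of their embeddings. The embedding network, image sizes and decision threshold are hypothetical placeholders; real systems use trained models and carefully calibrated thresholds.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for a trained face-embedding network (purely illustrative).
embed = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))

def same_person(face_a, face_b, threshold=0.8):
    """Decide whether two face images show the same person by comparing
    the cosine similarity of their embeddings against a threshold."""
    emb_a = F.normalize(embed(face_a), dim=1)
    emb_b = F.normalize(embed(face_b), dim=1)
    similarity = (emb_a * emb_b).sum(dim=1).item()
    return similarity >= threshold

enrolled = torch.rand(1, 3, 64, 64)   # image stored when the user registered
live = torch.rand(1, 3, 64, 64)       # image captured at the door or gate
print(same_person(enrolled, live))
```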

Nonetheless, AI systems can also have weaknesses that are not always detected at an early stage. They can be deliberately misled, for instance, as the system in this example was. This can have devastating consequences for the individuals affected. For the strengths of AI systems to be used for the benefit of society, they must become secure and transparent. The Federal Office for Information Security (BSI) is working on these goals, among others, in a centre of excellence for artificial intelligence.

*Fictitious use case