Navigation and service

Artificial Intelligence – bringing you closer to the technology

A car, a computer, an entity? In the American 80s cult series "Knight Rider", the talking and high-tech car K.I.T.T. plays a key role. The Artificial Intelligence in K.I.T.T. speaks and makes independent decisions to help the main character Michael Knight in his fight for law and order. Not a single person in the series questions the ethical or moral aspects that are associated with the autonomy of Artificial Intelligence.

The high-tech car K.I.T.T. harnessed the power of what is referred to as strong AI, i.e. an artificial system with general intelligence similar to that of humans. In contrast to the series, only weak AI systems currently exist. Weak AI is used as a means to efficiently solve specific tasks (e.g. playing chess, making medical diagnoses or translating texts). It has no capabilities outside of the specific task it is designed to complete.

Use and training of AI systems

Artificial Intelligence (AI) is a generic term for methods that aim to automate decision-making processes that traditionally require human intelligence. However, implementing and using AI systems in the real world requires the discussion of essential issues: for example, can the algorithms used be trained to independently make ethically and morally acceptable decisions? The training of AI systems requires huge data sets and personal data is often processed in the course of operating AI systems. How can legally and technically sufficient protection of data be ensured?

In order to answer these questions, the way AI systems function and make decisions must be made interpretable and evaluable. This is the only way for the advantages of this promising technology to be used safely in the future. In addition to the available training data, the basis for the adaptability of an AI system is its configuration. For example, this includes the selected neural networks and the machine learning methods used, such as Deep Learning. The developer defines these framework conditions before training the system and adjusts them during training until the AI system performs the required task, e.g. face recognition, as expected.
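These framework conditions can be pictured as a configuration the developer fixes before training and then adjusts. The following minimal Python sketch is purely illustrative; all names, values and the tuning rule are assumptions, not taken from any real AI framework:

```python
# Illustrative sketch: the "framework conditions" a developer fixes before
# training an AI system. All names and values here are hypothetical.
training_config = {
    "task": "face_recognition",               # the specific task the weak AI solves
    "model": "convolutional_neural_network",  # choice of neural network type
    "method": "deep_learning",                # machine learning method
    "hidden_layers": 12,                      # architecture decided before training
    "learning_rate": 0.001,                   # adjusted during training if needed
}

def tune(config, accuracy, target=0.95):
    """Adjust the framework conditions until the system performs as expected."""
    if accuracy < target:
        # e.g. lower the learning rate and continue training (toy adjustment rule)
        return {**config, "learning_rate": config["learning_rate"] / 10}
    return config

# After a training run with insufficient accuracy, the developer adjusts
# the configuration and trains again.
tuned = tune(training_config, accuracy=0.80)
```

In a real system this loop is repeated over many training iterations; the point here is only that the configuration exists before and alongside the training data.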

Training AI systems requires huge amounts of data

AI systems require huge amounts of data to develop their independent problem-solving skills using machine learning, i.e. AI systems can only ever be as good as the data used to train them. Due to their inherent ability to make generalisations, AI systems can also process input data that is not included in the training data. However, the more the input data differs from the training data, the more likely it is that the AI system will misinterpret things.
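The effect described here can be illustrated with a deliberately tiny, hypothetical example: a 1-nearest-neighbour "classifier" trained on a handful of data points gives plausible answers for inputs near its training data, but still produces an answer for inputs far from anything it has seen, where a misinterpretation is far more likely:

```python
# Toy illustration (not a real AI system): a 1-nearest-neighbour classifier.
# Training data: a few hypothetical (feature, label) pairs the system has "seen".
training_data = [(1.0, "cat"), (1.2, "cat"), (5.0, "dog"), (5.3, "dog")]

def classify(x):
    """Predict the label of the closest training example, and report how far
    the input lies from the training data (a rough reliability proxy)."""
    nearest_feature, label = min(training_data, key=lambda p: abs(p[0] - x))
    distance = abs(nearest_feature - x)
    return label, distance

# An input close to the training data: the generalisation is plausible.
label_near, dist_near = classify(1.1)

# An input far from everything seen in training: a prediction is still
# produced, but it is much more likely to be a misinterpretation.
label_far, dist_far = classify(40.0)
```

The system never refuses to answer; it simply becomes less trustworthy as the distance to the training data grows, which is exactly why the composition of the training data matters.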

For example, facial recognition only works in a precise and robust manner if the underlying AI system has been trained with a very high number of high-quality, diverse and wide-ranging facial images and if it has been tested repeatedly. For this, the developer often requires several iterations. Afterwards, the AI system can independently make decisions about new data sets and recognise faces, for example. So that new types of input data can be processed reliably, AI systems must be re-trained with new data.

This would be the case, for example, if a facial recognition system that was previously only used for adults was now going to be used for facial recognition of children. The data collected while an AI system is in operation can be used for further improvements of the system, provided that data protection regulations allow this.

[Infographic: AI and machine learning]
Source: Bundesamt für Sicherheit in der Informationstechnik

AI systems can be trained using different data depending on the task. This includes image data for automatic facial recognition or health data for medical diagnosis, such as X-ray images. Another strength of AI systems is efficiently linking different data sources: for example, autonomous driving decisions require data about things like the status of the vehicle (e.g. speed), environmental conditions (weather, camera images, distance sensors) or the traffic conditions.
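The linking of data sources mentioned above can be sketched as a simple fusion step. The following example is purely illustrative: the field names, the stopping-distance heuristic and the thresholds are all assumptions, not how a real driving system works:

```python
# Illustrative sketch of fusing several data sources into one driving decision.
# All field names and threshold values are hypothetical.
def braking_decision(vehicle, environment, traffic):
    """Combine vehicle status, environmental data and traffic conditions."""
    # Crude stopping-margin heuristic (assumed, for illustration only)
    stopping_margin = environment["distance_ahead_m"] - vehicle["speed_kmh"] * 0.5
    slippery = environment["weather"] == "rain"
    congested = traffic["vehicles_ahead"] > 10
    # The decision only emerges from linking all three data sources.
    return stopping_margin < 10 or (slippery and congested)

decision = braking_decision(
    vehicle={"speed_kmh": 80},
    environment={"distance_ahead_m": 45, "weather": "rain"},
    traffic={"vehicles_ahead": 12},
)
```

No single data source would justify the decision on its own; only their combination does, which is the strength (and, for data protection, the challenge) described above.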

Generating new data from different data sources represents both an opportunity and a challenge for data protection. For example, this applies when personal data subject to a special level of protection can be derived from less sensitive metadata.

It rapidly becomes clear how wide-ranging and complex the data sets required for machine learning can be. Both the scope and the quality of the data are vitally important in order to prevent unwanted decision-making patterns in an AI system.

Misinterpretations of Artificial Intelligence

As is the case with any AI system, new input data can be misinterpreted. The likelihood of misinterpretations depends largely on the quality and quantity of the training data. On top of the risk of random malfunctions, attackers can deliberately search for input data that leads to (targeted) wrong decisions. For example, this can be done by adding image noise to an extent that is not detectable by a human but changes the decision of the AI system (see graphic). This can result in a misinterpretation when recognising faces or street signs. Every AI system is vulnerable to this. Until now, such misinterpretations have been difficult for external auditors and AI developers to understand.
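The attack described above can be made concrete with a toy model. This is a hand-built linear classifier with made-up weights, not a real face- or sign-recognition system, but it shows the principle: a perturbation far too small to matter to a human is enough to flip the decision when it is aimed precisely at the model's weights:

```python
# Toy illustration of an adversarial perturbation on a linear classifier.
# The weights and "image" are hypothetical; real attacks target deep networks.
weights = [0.9, -1.1, 0.4]  # an assumed, already-trained decision boundary

def classify(pixels):
    """Return 'stop_sign' if the weighted sum is positive, else 'other'."""
    score = sum(w * p for w, p in zip(weights, pixels))
    return "stop_sign" if score > 0 else "other"

image = [0.5, 0.4, 0.2]  # correctly recognised: score is slightly positive

# Adversarial noise: nudge each pixel slightly AGAINST its weight's sign,
# so every tiny change pushes the score in the same (wrong) direction.
epsilon = 0.1  # imperceptibly small per-pixel change
noise = [-epsilon if w > 0 else epsilon for w in weights]
perturbed = [p + n for p, n in zip(image, noise)]
```

Each pixel changes by at most 0.1, yet the classification of `perturbed` flips, because the noise is not random but chosen using knowledge of the model. This is the essence of the disturbance described in the text.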

[Infographic: AI and biometrics – vulnerabilities]
Source: Bundesamt für Sicherheit in der Informationstechnik

Discrimination against certain groups in AI decisions is also often caused by insufficient or unbalanced training data sets: in one known case, an AI-based job application algorithm inadvertently discriminated against women. Based on a male-dominated candidate pool, the AI system deduced that men were simply better suited to the roles in question. This conclusion is wrong: the imbalance in the data merely reflects the fact that more men than women currently work in the IT industry. After this incident, the application algorithm was trained further with this information. This example illustrates the importance of making the decisions of AI systems transparent so that such interpretation gaps can be closed in advance.
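A heavily simplified, hypothetical sketch shows how an unbalanced data set bakes such a pattern into a model. Here a "model" learns nothing but historical acceptance rates per group, and then reproduces the imbalance when scoring new, equally qualified candidates (all numbers are invented for illustration):

```python
# Toy illustration: a "model" that learns only from historical hiring data.
# The data set is hypothetical and deliberately unbalanced.
historical_applications = (
    [("male", "hired")] * 80 + [("male", "rejected")] * 20 +
    [("female", "hired")] * 2 + [("female", "rejected")] * 8
)

def train(data):
    """Learn the hiring rate per group - the only pattern in this toy data."""
    rates = {}
    for group in {"male", "female"}:
        outcomes = [outcome for g, outcome in data if g == group]
        rates[group] = outcomes.count("hired") / len(outcomes)
    return rates

def predict(rates, group):
    """Recommend a candidate if their group's historical rate exceeds 50 %."""
    return "recommend" if rates[group] > 0.5 else "reject"

model = train(historical_applications)
# Two equally qualified candidates now receive different recommendations,
# purely because of the imbalance in the training data.
```

The model is not "malicious"; it faithfully reproduces a pattern that was already in the data, which is why the scope and balance of training data matter so much.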

Developing Artificial Intelligence securely

AI systems have huge potential. The objective for the secure use of AI systems should be fully understanding the decision-making process of a system and making it transparent in order to avoid misinterpretations during operation. To achieve this, the entire process of generating training data, machine learning and operation of the AI system must be considered.

In addition, there are questions that need to be answered regarding the information security of AI systems:

  • What data is transferred, where is it processed and where is it sent?
  • What safeguards are in place to ensure that data cannot be taken from the AI system?
  • What safeguards are in place to ensure that the AI system does not accidentally release personal data?
  • How secure is personal data used by AI systems?
  • How can we make the decisions of AI systems transparent to users?
  • How will cybercriminals use AI to their advantage?

As AI is a core technology of the future, all of these questions need to be answered in order to make its use secure through appropriate guidelines and standards.

Many modern applications would not be possible without Artificial Intelligence:

  • Forecasts based on large amounts of data, e.g. weather, stock market prices, earthquakes
  • Context-based searches online
  • Virtual assistants
  • Biometric authentication with facial recognition and fingerprints
  • (Partially) autonomous driving for automatic control of the car in traffic
  • Adaptive security against cyber attacks
  • Automated diagnosis of diseases based on imaging data (e.g. radiology)