Figure 3: Diagram of a poisoning attack. An attacker can deliberately cause misclassifications during operation by inserting data points bearing a trigger (indicated by the yellow post-it note here) with an incorrect label into the training dataset. For images without the trigger, the AI model functions normally, so the manipulation is difficult to detect during testing. Source: Federal Office for Information Security
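The mechanism in Figure 3 can be sketched in a few lines. The following is an illustrative toy example, not the attack used in the figure: the function name `poison_dataset`, the 4x4 corner patch standing in for the post-it trigger, and the poisoning rate are all assumptions made for demonstration.

```python
import numpy as np

def poison_dataset(images, labels, target_label, rate=0.1, seed=0):
    """Backdoor-poisoning sketch: stamp a small bright "trigger" patch
    onto a fraction of the training images and flip their labels to the
    attacker-chosen target_label. Clean images are left untouched, so a
    model trained on this data behaves normally on trigger-free inputs."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i, -4:, -4:] = 1.0  # 4x4 trigger patch in the corner (the "post-it")
        labels[i] = target_label   # deliberately incorrect label
    return images, labels, idx

# Toy data: 100 grayscale 28x28 images with labels 0-9.
imgs = np.zeros((100, 28, 28))
labs = np.arange(100) % 10
p_imgs, p_labs, idx = poison_dataset(imgs, labs, target_label=7)
```

Because only 10 % of the samples carry the trigger and the rest of the dataset is unchanged, accuracy on clean test data stays high, which is exactly why the caption notes that the manipulation is hard to detect during ordinary testing.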