Figure 3: Diagram of a poisoning attack. An attacker can deliberately cause misclassification during operation by inserting data points with a trigger (indicated by the yellow post-it here) with an incorrect label into the training dataset. For images without triggers, the AI model functions normally, making the manipulation difficult to detect during testing. Source: Federal Office for Information Security