Several regulations formulate requirements for AI systems, most notably the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AI Act). These regulations raise the question of how to adequately assess AI products. Methods from one specific research area are often proclaimed to offer a solution: explainable artificial intelligence (XAI). XAI methods are beneficial for knowledge discovery in research and model optimization in industry. However, they have limitations and issues that call into question whether they can be used securely and reliably in assessment procedures and for digital consumer protection.
This whitepaper points out problems that arise when XAI methods are used to compute post-hoc explanations. It then identifies the limitations that these problems impose on XAI methods in assessment procedures and in the technical support of digital consumer protection. This publication is targeted at a professional audience with knowledge of the fundamentals of AI and experience with XAI methods. One goal is to make experts who participate in committee work towards a practical implementation of the AI Act aware of the issues with post-hoc XAI methods.