Criteria Catalogue for AI Cloud Services – AIC4

The Criteria Catalogue for AI Cloud Services (AIC4) specifies minimum requirements for the secure use of machine learning methods in cloud services. It is intended as an extension of the C5 (Cloud Computing Compliance Criteria Catalogue). Its objective is to clearly set out the information security level achieved by an AI cloud service based on a standardised audit. Cloud customers can then use this security assessment as part of their own risk analysis. The catalogue is used by cloud providers, auditors and cloud customers, each of whom has a duty to cooperate when it comes to information security.
The C5 catalogue formulates general minimum requirements for secure cloud computing that are relevant for every cloud service. The AIC4 criteria catalogue, meanwhile, includes additional special criteria that are also relevant if machine learning methods are used. AIC4 criteria address the areas of security and robustness, performance and functionality, reliability, data quality, data management, explainability and bias. They cover the entire lifecycle of AI services – that is, development, testing, validation and operation.
AI Cloud Service Compliance Criteria Catalogue (AIC4)
Contact
If you have questions about the AIC4 that are not addressed by the FAQs below, please contact us at aicloudsecurity@bsi.bund.de.
We also appreciate feedback on the content of the catalogue and its applicability.
FAQs
Applications
- For which cloud services can the AIC4 criteria be applied?
Audits in line with AIC4 examine cloud services that use machine learning methods. A precise definition of the term cloud service is given in the C5 criteria catalogue.
It should be noted that the field of machine learning (ML) is extremely broad, and ML methods can be used with varying intensity in different areas of cloud services. This raises the question of when the AIC4 criteria should be applied. The AIC4 criteria are not regulatory in nature, and application of the catalogue is voluntary. To determine whether applying the AIC4 criteria makes sense, it may help to answer the following questions:
- Are ML methods used with a particular intensity or in areas that may have an impact on the information security of the cloud service in question?
- Do customers need information on how ML methods are used in order to assess the suitability of the cloud service for their own use cases as part of risk analysis?
If the answer to either of these questions is yes, applying the AIC4 criteria is worth considering.
The criteria were written with a focus on ML techniques such as boosting algorithms, random forests and deep neural networks. Typical application examples of these techniques include the classification or segmentation of images and the processing of natural language in the form of text or speech. Methods from the area of federated learning or reinforcement learning are not an explicit focus of AIC4 at present. Although parts of the criteria can also be applied to these methods, they require adaptations and extensions (e.g. with regard to data management in the case of federated learning). These topics will be addressed in the future as part of the revision of the catalogue.
If you know of specific application scenarios or ML methods in which the catalogue cannot be used effectively or have any other feedback on improving or expanding the criteria, please contact us. We want to continually refine, expand and improve the statements in the catalogue in a broad, iterative participation process.
- What types of audits are possible in line with AIC4?
Details on possible audits in line with AIC4 can be found in Chapter 4 of the catalogue. Since AIC4 is an extension of C5, audits that are similar to C5 are possible (see the related FAQs in C5).
- What is the benefit of an AIC4 audit report for cloud customers?
The AIC4 criteria cover the entire AI service lifecycle, meaning from development, testing and validation to provisioning and monitoring. A proper audit of the criteria covers all the relevant processes and control measures of a given cloud service provider and documents the results in detail in an audit report. An audit report provides users with a basis for assessing the suitability of the AI service in question and its associated AI methods with regard to information security for the respective application.
If a cloud provider provides up-to-date reporting in accordance with AIC4, this can be a suitable tool for customers (if properly utilised) to assess the security of the provider's AI service in light of the intended application scenario. However, this suitability depends on the application scenario, the qualifications and independence of the selected auditors, and the quality and completeness of the audit report.
The existence of an audit report alone does not necessarily imply that aspects such as robustness, performance, reliability, data quality, explainability or bias are appropriate for the application at hand. It is up to users to take advantage of the transparency created by the audit report and conduct their own risk evaluation of whether the level of information security of the AI service is sufficient for the intended application. Any residual risks must be borne by the customer, who remains responsible for them should they materialise. The BSI recommends requesting AIC4 and C5 audit reports from a cloud provider initially and then at regular intervals (e.g. annually) and analysing them in detail. It is important to stress that the BSI is not currently involved in either the selection of auditors or the audits themselves and does not review the reports produced. It is therefore crucial that customers subject audit reports to their own in-depth analysis.
Role of the BSI
- What role does the BSI play in AIC4 audits?
The BSI formulates the substance of the criteria and illustrates a possible approach to applying them in an audit. Audits themselves are, however, not regulated by the BSI. The BSI is therefore not involved in the selection of auditors or the actual auditing process. Audits in line with AIC4 are not reported to the BSI, nor are the audit reports submitted to it. Audit reports are thus not evaluated by the BSI.
- Does the BSI recommend specific auditors?
The BSI does not make such recommendations as a matter of principle. Recommendations on the requirements auditors should meet are specified in Chapter 4 of AIC4.
Differences in scope
- How does the AIC4 differ from C5?
The C5 catalogue formulates general minimum requirements for secure cloud computing that are relevant for every cloud service. The AIC4 criteria catalogue, meanwhile, includes additional special criteria that are also relevant if machine learning methods are used. AIC4 therefore represents an extension of the subject matter covered by C5. Instead of replicating the C5 criteria, AIC4 only presents criteria specific to AI. In doing so, it may expand on the content covered by C5 criteria in some cases.
Example: The C5 OIS-06 criterion formulates requirements for the implementation of systematic risk management. AIC4 assumes that risk management of this kind has been implemented by a given cloud service provider and requires, in AIC4 criterion SR-02, that AI-specific attacks be considered in this risk management. In this sense, the C5 catalogue represents both the overarching framework for and a prerequisite of AIC4. In principle, however, the AIC4 criteria could also be combined with other comparable criteria catalogues for cloud services.
In order to assess the information security of a cloud service, a customer usually needs a C5 audit report that covers traditional IT security and IT management, as well as an AIC4 audit report that covers AI-specific risks. Combining these two reports is the only way to achieve a holistic and comprehensive assessment of information security.
- How does the AIC4 relate to the Assessment List for Trustworthy AI (ALTAI)?
The High-Level Expert Group on AI (HLEG) describes ALTAI as a flexible tool for helping companies assess the risks of AI systems and take appropriate measures (see https://ec.europa.eu/). Here, ALTAI follows a holistic approach that takes into account ecological, ethical, technical and legal (data protection) aspects. ALTAI is a list of questions that cannot be formally checked in the form of an audit.
AIC4 is limited to criteria that concern the information security of cloud services in which AI is used. Cloud services present a special case in that cloud providers and cloud customers each have to carry out their own risk analyses and take their own measures to ensure information security. However, cloud customers do not usually have the information required to evaluate the safeguards a cloud provider has put in place. This is precisely where AIC4 seeks to provide transparency. A proper AIC4 audit produces an audit report that enables the users of cloud services to assess the suitability of the AI service in question in terms of its information security for each application scenario.
All this means that ALTAI and AIC4 are not competitors, but mutually complementary resources. An AIC4 audit report provides companies that integrate external cloud services into their own applications with a basis for answering related questions from ALTAI that deal with aspects of information security. Without a corresponding audit report, it would be difficult to reliably answer the questions that fall within the responsibility of a cloud provider. At the same time, ALTAI takes into account many aspects that go beyond technical issues and the AIC4 criteria.
Further development
- Will the AIC4 catalogue be updated?
The AIC4 is intended to bring German and European perspectives on evaluating trustworthy AI-based cloud applications into discussions with both national and international partners and to contribute to international standardisation. For this reason, we have decided not to update the current version while the international process for developing test criteria and test methods is still ongoing and highly dynamic. The BSI welcomes any feedback regarding the usability and usefulness of the AIC4 and any suggestions for improvement. Please do not hesitate to contact us.
- Is there a certification based on the AIC4 criteria?
There is currently no certification procedure that certifies whether a provider meets the AIC4 criteria. To learn more, please see the FAQs on the C5 page. Nevertheless, the AIC4 is intended to contribute to the development of European certification schemes.
Substance of the AIC4 criteria
- Does an AIC4 audit make statements about data protection?
An audit in line with AIC4 does not make any statement as to whether personal data (e.g. in the form of training or input data) is processed or protected in a legally compliant manner within the scope of the AI service in question.
However, the AIC4 catalogue does contain criteria that place requirements on the quality and management of training and input data. These include requirements intended to protect the security of such data within an organisation, in particular through the implementation of an appropriate access management system (see criterion DM-02). In the context of risk assessment, threats posed by AI-specific attacks must be analysed and appropriate safeguards implemented. This also includes attacks that extract data from an AI model.
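To illustrate the kind of data-extraction threat meant here, the following sketch shows a naive membership-inference test: an attacker guesses whether a sample was part of the training data by comparing its per-sample loss against a threshold. The model, dataset and threshold are illustrative assumptions, not part of the catalogue; a real risk assessment would use considerably more rigorous methods.

```python
# Minimal sketch of a loss-threshold membership-inference test.
# Model, dataset and threshold are illustrative assumptions only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def per_sample_loss(model, X, y):
    """Negative log-likelihood of the true label for each sample."""
    proba = model.predict_proba(X)
    return -np.log(proba[np.arange(len(y)), y] + 1e-12)

# Training samples ("members") tend to have lower loss than unseen
# samples; a large gap lets an attacker infer membership.
loss_in = per_sample_loss(model, X_train, y_train)
loss_out = per_sample_loss(model, X_out, y_out)

threshold = np.median(np.concatenate([loss_in, loss_out]))
correct = (loss_in < threshold).sum() + (loss_out >= threshold).sum()
accuracy = correct / (len(loss_in) + len(loss_out))
print(f"membership inference accuracy: {accuracy:.2f}")  # ~0.5 means no leakage
```

An inference accuracy well above 0.5 would indicate that the model leaks information about its training data, which is the kind of AI-specific risk the catalogue asks providers to analyse and mitigate.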
- How does the AIC4 address bias?
The issue of bias in applications is often closely linked to moral or ethical issues, such as the fair treatment of individuals or groups. The BSI has no mandate to make statements on ethical or moral issues. AIC4 therefore only considers aspects of bias that impact information security.
- How does the AIC4 address explainability?
The explainability of AI is an active field of research in which many fundamental questions, especially for connectionist systems, have yet to be adequately clarified. The extent to which explainability can be audited is therefore also an open question at present.
AIC4 requires providers to systematically analyse the need for explainability with regard to ML solutions in criteria EX-01 and EX-02 (Explainability). Based on this analysis (i.e. depending on whether a need for explainability is determined), suitable measures must then be taken. Since the content of these criteria is significantly shaped by the decisions of cloud service providers, a special level of procedural transparency is required at this point. The need for explainability, the measures taken and the limitations of these measures are to be documented in the audit report so that cloud users can evaluate the information provided for their own application scenarios.
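As an illustration of one measure a provider might document under these criteria, the sketch below computes permutation feature importance, a simple model-agnostic explainability technique. The catalogue does not prescribe any particular method; the model and dataset are assumptions chosen for brevity.

```python
# Minimal sketch of permutation feature importance, one model-agnostic
# explainability technique a provider might document; illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffling one feature at a time measures how much held-out accuracy
# depends on it; large drops indicate influential features.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```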
- How does the AIC4 address robustness?
The robustness of AI is an active field of research in which many fundamental questions, especially for connectionist systems, have yet to be adequately clarified. The extent to which robustness can be audited is therefore also an open question at present.
Within the framework of the SR (Security & Robustness) and PF (Performance & Functionality) sections of AIC4, providers must systematically analyse the impact of a lack of robustness and implement safeguards to improve it. Within the scope of an audit report, a cloud provider is to inform the customer transparently about what form of robustness its service provides, the extent to which this robustness is ensured, and any related limitations.
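One elementary check a provider might run in this context is to compare a model's accuracy on clean and adversarially perturbed inputs. The sketch below applies an FGSM-style perturbation to a logistic regression model, where the loss gradient with respect to the input can be written down analytically; the model, dataset and perturbation budget are illustrative assumptions, not requirements of the catalogue.

```python
# Minimal sketch of an FGSM-style robustness check on a linear model;
# the attack and epsilon are illustrative, not mandated by AIC4.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# For logistic regression the loss gradient w.r.t. the input is
# (sigmoid(w.x + b) - y) * w, so the FGSM step is analytic.
w, b = model.coef_[0], model.intercept_[0]
p = 1.0 / (1.0 + np.exp(-(X_test @ w + b)))
grad = (p - y_test)[:, None] * w[None, :]

eps = 0.1
X_adv = X_test + eps * np.sign(grad)

print(f"clean accuracy:       {model.score(X_test, y_test):.2f}")
print(f"adversarial accuracy: {model.score(X_adv, y_test):.2f}")
```

A large gap between clean and adversarial accuracy would point to a lack of robustness that, in the sense of the SR criteria, should be documented together with any mitigations.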