Make AI regulation ambitious and future-proof

Final negotiations on the AI Act have started in Brussels. The proposal must be improved with regard to the classification of high-risk AI systems. Mandatory independent assessments of high-risk AI systems strengthen trust and offer competitive advantages. Generative AI systems such as ChatGPT should be regulated.


Berlin, 1 August 2023 – The European Parliament adopted its final position on the European AI Regulation (AI Act) in mid-June. The EU member states had already agreed on a common position in December of last year. Since June, trilogue negotiations between the EU institutions have been underway to find a compromise. In light of the current negotiations, Johannes Kröhnert, Head of the Brussels Office of the TÜV Association, says:

"The AI Act is a great opportunity for Europe to become a global pioneer in the trustworthy and safe use of artificial intelligence. The goal must be to harness the opportunities of AI systems, but at the same time limit the associated risks."

Most consumer products are not covered by the AI Act at all

"The risk-based approach envisaged by the EU institutions is appropriate, but the classification rules based on it fall short. This is because only those AI systems are to be classified as high-risk where the physical products into which they are integrated are already subject to mandatory assessment by independent bodies. This mainly concerns industrial products such as lifts or pressure vessels.  However, the majority of consumer products, including toys or smart home devices, do not fall under this assessment obligation. This means that most AI-based consumer products are not classified as high-risk products under the AI Act, and thus would not have to meet the strict safety requirements. In our view, there is a major regulatory gap here, which the EU legislator still needs to close in the negotiations."

Risk classification by providers can lead to misjudgements

"We are equally critical of the classification of AI systems that are not integrated into existing products, but are brought to market as pure software for specific areas of application (stand-alone AI). These include, for example, AI systems for recruitment procedures or creditworthiness checks. According to the proposal of the European Parliament, the providers themselves should carry out the risk assessment and in the end also decide themselves whether their product is to be classified as high-risk or not. This creates a risk of misjudgement. The EU legislator should therefore establish clear and unambiguous classification criteria to ensure the effectiveness of the mandatory requirements."  

Mandatory independent assessments of high-risk AI systems boost confidence

"With regard to the assessment of AI systems, there is also a need for improvement. Here, the EU legislator relies very heavily on the instrument of self-declaration by providers. However, high-risk systems in particular can pose great risks, both to life and limb and to the fundamental rights of users (security, privacy) or the environment.  Instead of self-assessment, there is a need for a comprehensive obligation for verification, including a review by independent bodies. High-risk AI systems should be subject to mandatory certification by notified bodies. Only independent assessments will rule out possible conflicts of interest on the part of the providers. At the same time, people's trust in the technology is strengthened. According to a recent representative survey by the TÜV Association, 86% of Germans are in favour of a mandatory assessment of the quality and safety of AI systems. This also benefits providers of AI systems, who can quickly bring their products to market. 'AI Made in Europe' can thus become a real quality standard and global competitive advantage."

Regulatory sandboxes cannot replace necessary conformity assessment

"Setting up AI regulatory sandboxes is a good way to facilitate the development and testing of AI systems, especially for SMEs." The EU Parliament's call to make the establishment of a regulatory sandboxes in one or in cooperation with other EU member states mandatory is also to be supported. However, it must be clear that the use of a regulatory sandboxes by an AI system alone cannot give a presumption of conformity. The provider must still undergo a full conformity assessment procedure before placing its AI system on the market. This applies in particular if an independent body is to be involved on a mandatory basis. Here, the EU legislator should provide clarity in the AI Act."

"Independent assessment organisations should be included as partners in the development and use of regulatory sandboxes. With the 'TÜV AI Lab', the TÜV Association has committed itself to identify the technical and regulatory requirements that artificial intelligence entails and to accompany the development of future standards for the assessment of safety-critical AI applications. In addition, we have been actively involved in the establishment of interdisciplinary 'AI Quality & Testing Hubs' at state and federal level for some time."

ChatGPT & Co. must be regulated in the AI Act

"The last few months have clearly shown the development potential of basic models and generative AI systems, and the risks they can pose. It is therefore to be welcomed that the EU Parliament wants to regulate this technology directly in the AI Act. Generative AI systems must also fulfil basic safety requirements. In a second step, however, it should then also be examined which basic models are to be classified as highly critical. These should then be subject to all the requirements of the AI Act, including independent third-party assessment by notified bodies. European standardisation organisations and assessment bodies are currently working on developing corresponding norms and testing standards."

Download

On the occasion of the trilogue negotiations on the AI Act, the TÜV Association has published recommendations for the safe and trustworthy regulation of artificial intelligence.