European AI Act: TÜV Association welcomes the EU Member States' approval

TÜV Association welcomes Council's approval. Risk-based approach promotes safety and trust, especially in high-risk AI systems. General-purpose AI must fulfil mandatory minimum requirements. Outstanding implementation issues must be clarified quickly. TÜV organisations are preparing for implementation with the TÜV AI.Lab.

Berlin, 2 February 2024 - The TÜV Association welcomes today's Council approval of the European AI Regulation (AI Act), which creates the first European legal framework for safe and trustworthy AI. Prior to the vote, it was uncertain whether the AI Act would find a majority in the Council, as Germany and France, among others, had expressed reservations. "A failure of the AI Act would have left AI systems unregulated - and therefore unsafe - for the foreseeable future. Today's agreement shows the determination of the EU member states to make artificial intelligence safe," says Dr Joachim Bühler, CEO of the TÜV Association. "With the AI Act, the EU is clearly positioning itself as a pioneer for safe and trustworthy AI." In Bühler's view, this will prove a decisive competitive advantage.

Risk-based approach strikes a balance between innovation and safety 

A central element of the AI Act is its risk-based approach: only AI systems classified as high-risk must fulfil mandatory safety requirements. These include, for example, AI systems integrated into medical devices or used in critical infrastructure. From the TÜV Association's point of view, it is important that certain high-risk AI systems must be assessed by an independent assessment organisation. This ensures that requirements on transparency, data quality and cybersecurity are met. Less risky AI systems, on the other hand, do not fall within the scope of the regulation.

Bühler: "With the risk-based approach, the EU legislator is striking the right balance between openness to innovation and the necessary level of protection. The AI Act will promote the success of European AI providers by fostering safety and trust as an important unique selling point of AI 'Made in Europe'." 

General-purpose AI systems such as ChatGPT must fulfil mandatory requirements 

It is also particularly positive that so-called general-purpose AI systems, for example generative AI models such as ChatGPT, will have to meet certain minimum requirements in future. Bühler: "Powerful AI models in particular can be used for fake news, deepfakes or the manipulation of vulnerable groups and therefore harbour major risks for safety and democracy. By regulating these systems, the EU legislator is taking decisive action against these threats. This is all the more important in the 2024 election year in view of possible interference in election campaigns."

TÜV Association calls for standardised implementation and clear guidelines 

The focus now shifts to setting the course for the successful implementation of the AI Act at national and European level. The requirements of the AI Act must be specified promptly through harmonised European standards in order to create legal certainty for AI providers, assessment organisations and authorities. In addition, legal uncertainties regarding the precise classification of high-risk AI systems must be eliminated. The TÜV Association therefore calls on the EU Commission to publish clear implementation guidelines with concrete examples in order to prevent misclassifications by providers.

AI assessment readiness must be established by all stakeholders 

The success of the AI Act's implementation will largely depend on the timely and complete assessment readiness of all players in the AI ecosystem. "Providers of AI systems are called upon to familiarise themselves with the requirements of the AI Act starting now," says Bühler. Many requirements can be integrated into providers' existing quality and risk management systems.

The TÜV Association also welcomes the obligation on member states to set up regulatory sandboxes in which providers can test their AI systems during the development phase. The TÜV Association has been advocating such AI quality centres for several years with its AI Quality & Testing Hubs, which will also benefit SMEs and start-ups.

The TÜV assessment organisations are preparing intensively by building up the necessary assessment expertise. The TÜV AI.Lab was founded at the beginning of 2023 to help translate regulatory and technical requirements for AI into practice and to drive forward the development of assessment criteria. Amongst other things, TÜV AI.Lab is involved as a partner in the National Initiative for Artificial Intelligence and Data Economy (MISSION KI) of the Federal Ministry for Digital and Transport and the German Academy of Science and Engineering (acatech).

Background: The EU institutions agreed on a political compromise text for the AI Act in December 2023. With the Council vote, the EU member states have now formally approved this compromise. The EU Parliament must still vote on the text - probably in April. The text will then be published in the Official Journal of the EU and enter into force twenty days later. After a transitional period of two years, most of the requirements are expected to apply from mid-2026.