EU AI Act enters into force: TÜV AI.Lab offers AI compliance check

European AI Act enters into force with staggered transition periods. TÜV Association welcomes the regulation and calls for rapid clarification of open implementation issues. AI Act Risk Navigator: TÜV AI.Lab launches a free tool for an initial assessment of how AI systems are classified under the Act.


Berlin, 31 July 2024 - The TÜV Association welcomes the entry into force of the European AI Act, which sets out rules for artificial intelligence (AI) for the first time. This creates a globally leading legal framework for safe and trustworthy AI. ‘The AI Act offers the opportunity to provide safeguards against the negative effects of artificial intelligence and at the same time to promote innovation. It can help to establish a global lead market for safe “AI Made in Europe”,’ says Dr Joachim Bühler, CEO of the TÜV Association. ‘It is now important to ensure efficient and unbureaucratic implementation. Independent bodies play a key role in this process, not only with regard to the mandatory requirements, but also in the voluntary AI assessment market.’

Staggered transition periods for AI systems

The EU AI Act will enter into force on 1 August 2024 with staggered transition periods. Six months after entry into force, i.e. from early 2025, AI systems that use manipulative or deceptive techniques, among other things, will be banned. Obligations for general-purpose AI will apply from August 2025. In addition, EU member states must designate national authorities to carry out market surveillance. Mandatory assessments for high-risk AI in areas such as financial lending, human resources or law enforcement will be required from August 2026. They will affect not only AI developers, but also providers and operators of high-risk AI. From 2027, the requirements for AI incorporated into products that are subject to third-party assessment will apply. Bühler: ‘Assessments of AI systems create trust and are already a competitive advantage today. Companies are well advised to familiarise themselves with the requirements now, especially with regard to the transition periods. It is important to assess how and where the AI Act will affect their activities.’
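
For illustration, the staggered deadlines above can be read as a simple lookup: given a date, which obligations already apply. The following is a minimal Python sketch, assuming commonly cited day-level dates (the text above gives only month-level timing); it is an illustration, not legal advice:

```python
from datetime import date

# Hypothetical encoding of the staggered timeline described above.
# Day-level dates are assumptions; the press release gives only
# month-level timing. Not legal advice.
MILESTONES = [
    (date(2025, 2, 2), "bans on prohibited practices, e.g. manipulative or deceptive techniques"),
    (date(2025, 8, 2), "obligations for general-purpose AI"),
    (date(2026, 8, 2), "mandatory assessments for high-risk AI (e.g. lending, HR, law enforcement)"),
    (date(2027, 8, 2), "requirements for AI in products subject to third-party assessment"),
]

def applicable(on: date) -> list[str]:
    """Return the milestone obligations already applicable on the given date."""
    return [label for deadline, label in MILESTONES if on >= deadline]

if __name__ == "__main__":
    # By September 2026, the first three milestones are in effect.
    for item in applicable(date(2026, 9, 1)):
        print(item)
```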

Implementation challenges

‘A uniform interpretation and consistent application of the risk-based approach are crucial for the AI Act to be effective in practice - this is where the member states are called upon,’ says Bühler. In the view of the TÜV Association, particular attention should be paid to efficient and unbureaucratic implementation. Clear responsibilities and competent bodies are needed to put the regulation into practice. For example, the AI Office should publish implementation guidelines for the categorisation of high-risk AI systems as quickly as possible in order to provide legal certainty, especially for small and medium-sized enterprises (SMEs). In addition, new AI risks and emerging systemic risks of particularly powerful general-purpose AI models must be monitored, and systematic AI incident reporting must be established.

TÜV AI.Lab Risk Navigator shows which companies fall under the AI Act

The ‘TÜV AI.Lab’, founded in 2023, supports companies in adapting to the regulatory and technical requirements for AI and develops testing standards. To mark the entry into force of the AI Act, the TÜV AI.Lab is launching the AI Act Risk Navigator, a free online tool for an initial assessment of the risk classification of AI systems. ‘With the TÜV AI.Lab’s AI Act Risk Navigator, we offer a user-friendly application that helps companies understand whether and how they are affected by the AI Act,’ says Franziska Weindauer, Managing Director of the TÜV AI.Lab. ‘Our goal is to provide clarity on the impact of the AI regulation so that companies can prepare in time.’ It is important for all companies to consider quality requirements for AI from the very beginning in order to establish trustworthy AI as a European unique selling point. The AI Act Risk Navigator helps to categorise AI systems according to the risk classes of the AI Act and creates transparency about the applicable requirements.

Requirements depending on risk classification

The EU regulation categorises AI systems into four risk classes, each with different requirements that must be complied with gradually over the coming months. High-risk systems used in areas such as healthcare, critical infrastructure or human resources management will in future be subject to strict rules and must fulfil comprehensive transparency, safety and oversight requirements. ‘The AI Act is not a paper tiger. Violations can result in fines of up to 15 million euros or up to three per cent of annual global turnover,’ emphasises Weindauer. Systems with limited risk, such as chatbots, must fulfil transparency requirements, while systems with minimal risk, such as simple video games, are not regulated at all. The risk-based classification is intended to ensure that the use of AI is safe and trustworthy, so that the innovative power of the technology and its market penetration can be further increased.
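
As a rough illustration of this four-tier scheme, the risk classes and the obligations sketched above can be modelled as a simple mapping. This is a minimal Python sketch with hypothetical one-line summaries; the Act’s actual obligations are far more detailed:

```python
from enum import Enum

class RiskClass(Enum):
    """The four risk classes described above (naming is illustrative)."""
    UNACCEPTABLE = "unacceptable risk"   # prohibited outright
    HIGH = "high risk"                   # e.g. healthcare, critical infrastructure, HR
    LIMITED = "limited risk"             # e.g. chatbots
    MINIMAL = "minimal risk"             # e.g. simple video games

# Hypothetical one-line summaries of the obligations per class;
# the AI Act itself is far more detailed.
OBLIGATIONS = {
    RiskClass.UNACCEPTABLE: "banned",
    RiskClass.HIGH: "comprehensive transparency, safety and oversight requirements plus assessment",
    RiskClass.LIMITED: "transparency requirements, e.g. disclose that users are interacting with AI",
    RiskClass.MINIMAL: "no additional obligations",
}

for rc in RiskClass:
    print(f"{rc.value}: {OBLIGATIONS[rc]}")
```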
