A robust regulatory framework is a fundamental prerequisite for building the necessary trust in AI-based products and systems and, with it, acceptance of this new digital technology. However, the TÜV Association is of the opinion that the current legislative proposal is not sufficiently ambitious and falls short of the European Commission’s objective of building an “ecosystem of trust”. Such an ecosystem of trust can only be achieved if the regulatory framework focuses on AI safety. In its current statement, the TÜV Association has compiled the points that need to be revised.
KEY REQUIREMENTS
- Derive adequate risk classes and prioritise the effective protection of legal interests
- Ensure continuous independent third-party assessment of high-risk AI systems
- Introduce risk-appropriate classification rules for high-risk AI systems
- Take risks to legal interests worthy of protection as the sole criterion for amending the list of high-risk AI systems
- Clarify the possibilities to appeal against decisions of notified bodies and ensure their uniform implementation across Europe