Digitalisation

Artificial Intelligence


Artificial intelligence (AI) is one of the key technologies of our time. Algorithms help medical professionals diagnose certain diseases, consumers save energy, and singles choose a partner. Autonomous driving and new mobility concepts would be practically inconceivable without AI. At the same time, however, the risks posed by faulty AI applications are growing, as is the potential for invasions of privacy and cyberattacks.

Many AI-supported systems are a "black box" for users, who cannot see or verify how they work. They must trust that the providers of these systems comply with the highest security standards. That is why the TÜV Association and its members are committed to establishing auditable security standards that make AI applications both transparent and secure. To achieve this, a number of steps are still necessary at the political level: security standards must be formulated, ethical issues clarified, testing scenarios developed, and institutions designated to implement them.

AI needs standardization

Four out of five German citizens call for a test mark for artificial intelligence, issued by independent bodies, to strengthen trust in the technology.

AI study by the TÜV Association

This result of the TÜV Association's AI study shows how important trust and security are for the success of artificial intelligence. The TÜV Association therefore supports the use of AI for the benefit of society and is working with its members to improve the quality and safety standards of AI-supported systems. In particular, the use of AI systems that pose a high risk to humans should be subject to binding safety standards, and compliance with these standards should be monitored by independent testing organisations. After all, only the safe application of AI-supported systems will create the necessary social acceptance for their use.

An important cornerstone for trust in artificial intelligence is the planned European AI Act. A risk-based approach to regulation is appropriate. However, the EU Commission's proposal still lacks a clear derivation and definition of the risk classes. Not all applications that can endanger people's life and limb or fundamental rights such as privacy or equal treatment are covered. The TÜV Association has therefore summarized in a statement what changes are needed to the current regulatory proposal.

TEST CENTRES FOR ARTIFICIAL INTELLIGENCE

With a complex technology like artificial intelligence, it is not enough to formulate legal texts. The practical implementation of AI regulation must be driven forward now. To develop standards for testing safety-critical AI applications, the TÜV companies have founded the TÜV AI Lab.

In addition, the TÜV Association is planning to establish interdisciplinary testing and quality centres for AI. In such AI Quality & Testing Hubs, AI providers, research institutions, start-ups and testing organisations could work together on norms, standards and quality criteria for AI and develop new testing and inspection procedures. They could also train specialists, carry out research projects and support SMEs in implementing the upcoming AI regulation. The TÜV Association, together with VDE, initiated the "AI Quality & Testing Hub". The German states of Berlin, Hesse and North Rhine-Westphalia have already launched concrete activities to set up "AI Quality & Testing Hubs" in their states or to support them with funding projects.

TÜV AI CONFERENCE

These are the topics we discuss with leading experts from politics, science and business at the annual TÜV AI Conference. On this page you will find a review of the 2021 conference with the contents of the focus sessions, the video recordings and a small photo gallery (only available in German).

News

TÜV AI.Lab offers AI compliance check

AI Act: TÜV Association welcomes the EU Member States' approval

TÜV AI.Lab: MISSION KI initiative develops quality seal for AI

Franziska Weindauer becomes the new CEO of the TÜV AI.Lab

Make AI regulation ambitious and future-proof

Recommendations for the AI Act trilogue negotiations

Artificial intelligence: almost one in four uses ChatGPT

AI moratorium letter illustrates need for political action

TÜV Association becomes project partner at TEF-Health

Whitepaper "Towards Auditable AI Systems"

TÜV Association calls for test centres for AI

AI survey: Consumers call for test marks

Testing AI that poses a high risk to safety

Statement on the European Commission's AI Act proposal

AI-based systems and products

Your contact


Dr Patrick Gilroy

Head of AI and Education

+49 30 760095-360

patrick.gilroy@tuev-verband.de