TÜV AI.Lab: Newly founded MISSION KI initiative develops quality seal for artificial intelligence

Cooperation between leading organisations / Goal: Fair, comparable market conditions through common testing criteria and procedures / Initiative to develop a voluntary AI quality seal

Berlin, 27 November 2023 - Promoting innovation and strengthening trust in artificial intelligence (AI): On the road to the widespread, responsible use of AI applications, the German Nationale Initiative zur KI-basierten Transformation in der Datenökonomie (National Initiative for AI-based Transformation in the Data Economy, NITD) is launching a special cooperation. German Federal Minister Volker Wissing announced the MISSION KI initiative at the AI conference "Fueling European Innovation with AI" in Mainz. The partnership of leading experts in the field of AI assessment and certification will develop and test AI quality and testing standards and establish a voluntary AI seal of quality.

"I would like to strengthen the development of artificial intelligence in Germany. For that we need sensible regulation of AI in Europe: openness to innovation instead of technology bans, as well as standards that are internationally compatible. In parallel, we need to create a better environment for digital innovation in our own country. We aim to contribute to this by supporting the development of high-quality AI products through the MISSION KI project. The slogan 'AI made in Germany' can offer a competitive advantage internationally if we make it easier for our domestic AI businesses to bring high-quality, secure, high-performance AI applications onto the market," says Wissing.

Manfred Rauhmeier, Managing Director of acatech - National Academy of Science and Engineering, adds: "MISSION KI aims to create the conditions and growth opportunities necessary for trustworthy and competitive AI 'made in Germany'. The result must be usable, proportionate and adaptable; it should promote an AI ecosystem of trust and excellence in Europe. Being a leader (in the area of regulation as well) means creating opportunities, taking risks, learning and adapting rapidly. It is therefore important to strike a fair balance between innovation and regulation. For this, we are bringing together leading players such as the AI Quality & Testing Hub, CertifAI, Fraunhofer IAIS, PwC Germany, the TÜV AI.Lab, and the VDE. Together they offer a comprehensive range of expertise."

CLEAR STANDARDS AS THE ANSWER TO SAFETY ISSUES

MISSION KI pools expertise from science, consulting, standardisation and testing, thereby also strengthening the urgently needed exchange between these individual areas: "The initiative is a great opportunity to make Germany a leading location for safe and trustworthy AI. To achieve this, we at the TÜV AI.Lab use our expertise to help translate regulatory requirements into practice and to develop assessment criteria and procedures tailored to the respective risk profile of the use case. We are doing everything we can to prepare Germany as a location for the upcoming regulatory requirements. Our goal is to ensure that innovative AI applications can reach the market as quickly as possible through fast, reliable and comprehensible testing," says Franziska Weindauer, CEO of the TÜV AI.Lab.

BECOMING A PIONEER WITH SAFE AND TRUSTWORTHY AI 

The major innovation push from AI-based applications and services is still accompanied by debate over how safe, sustainable and inclusive AI systems really are. MISSION KI aims to close this gap. "With sophisticated quality and testing standards, we are committed to the conscious use of AI. At the same time, we ensure fair and comparable market conditions and, ideally, shorten the time-to-market for digital innovations," says Hendrik Reese, Partner for Responsible AI and project manager at PwC Germany. "This is how we promote Germany as a business and digital location."

The planned voluntary AI seal of quality is also intended to resolve another uncertainty: while the EU AI Act specifically regulates high-risk AI systems, AI systems outside the high-risk category are currently subject only to transparency obligations. Here too, the seal can increase safety for private and industrial users and establish minimum requirements and market standards.