AI moratorium letter illustrates need for political action

The social and economic consequences of powerful AI systems cannot yet be controlled. The European AI Act should be adopted as soon as possible. The legislation creates the legal framework for the use of high-risk AI systems.


Berlin, 13 April 2023 - Commenting on the open letter for a six-month moratorium on the further development and training of particularly powerful AI systems, Dr Joachim Bühler, CEO of the TÜV Association, says: 

"Leading global AI experts, scientists and AI entrepreneurs are pointing out that the social and economic consequences of powerful AI systems are not yet manageable. This appeal shows the need for political action: for clear legal regulation of artificial intelligence. Only through an ambitious AI legislation, we can manage and reduce the risks of particularly powerful AI systems. At the same time, we can create the basis for exploiting the immense opportunities of this technology." 

"Experts warn of a flood of propaganda and fake news, the destruction of many jobs and a general loss of control. At the same time, it is clear that AI systems are increasingly being used in medicine, vehicles or other safety-critical areas. Malfunctions can have fatal consequences. There is a need for a clear and AI specific legal framework, which gives providers guidance and legal certainty. This creates trust and fosters innovation instead of slowing them down." 

"With AI Act, Europe has the chance to create the world's first legal framework for artificial intelligence. Knowing the capabilities of powerful AI systems such as ChatGPT, Europe must waste no more time and swiftly create the legal framework. As suggested by the signatories of the appeal, independent assessments and certifications should play an important role for establishing trust and acceptance of AI systems, thus making the law work in practice." 

"TÜV companies are currently building up competencies for the assessment and certification of safety-critical AI applications with the 'TÜV AI Lab'. The aim is to develop test procedures for AI systems and the suitability of their training data. In addition to the AI Lab, further competence centres for the assessment of AI systems should be established in Germany and in the EU in order to pool know-how and to better connect companies, researchers, assessment organisations and standardisation institutes." 

Both German citizens and the German business community are in favour of regulating AI systems. This is shown by representative surveys commissioned by the TÜV Association. According to the surveys, 82 percent of German citizens consider regulation of artificial intelligence to be appropriate. The greatest dangers of AI, according to the respondents, are discrimination against people (66 percent), the manipulation of people (62 percent), a lack of transparency in the use of AI (61 percent) and data protection violations (57 percent). In a separate survey of companies, 87 percent of respondents said that AI applications should be regulated depending on their risk. 

Methodology note: The data is based on a representative Forsa survey of 1,005 people aged 16 and over, commissioned by the TÜV Association and conducted in November 2022, and an Ipsos survey of 500 companies with 50 or more employees in Germany, in which managing directors and IT managers were surveyed in 2020.