AI survey: Consumers call for test marks for artificial intelligence

Concerns about hacker attacks, manipulation and faulty AI applications are widespread among the German population. This is one of the findings of the TÜV Association's current consumer survey. The vast majority would like to see comprehensive regulation of the technology. The TÜV Association is therefore calling for independent testing of all high-risk AI applications.


26 October 2021 – Automated vehicles, new methods for cancer detection, facial recognition in public spaces, software support for personnel selection: artificial intelligence (AI) is increasingly being used in safety-critical areas or in ways that can endanger fundamental civil rights. Four out of five German citizens (80 percent) therefore want a test mark for artificial intelligence, awarded by independent bodies, in order to strengthen trust in the technology. 79 percent want products and applications with artificial intelligence to be labelled as a matter of principle. And 71 percent of those surveyed call for comprehensive legal regulation of the technology. These are the core results of a representative study commissioned by the TÜV Association, for which 1,000 people aged 16 and over were surveyed. "Whenever products or applications with artificial intelligence endanger people's health or fundamental rights such as privacy or equal treatment, we need legal regulation," said Dr Dirk Stenkamp, President of the TÜV Association, at the presentation of the study. "Work on a European AI regulation must now be pushed forward quickly, and improvements must be incorporated." For example, high-risk AI applications should always be tested by independent bodies to guarantee their safety.

According to the survey, 95 percent of respondents know the term artificial intelligence. Almost one in two (48 percent) can explain the most important features of this complex technology well or very well, an increase of 13 percentage points compared with a previous TÜV Association study from 2019. Another 42 percent can give a rough explanation (down 5 points), and only 11 percent know very little or nothing about AI (down 6 points). "Knowledge about AI is improving as the technology becomes more widespread," said Stenkamp. In parallel, citizens' attitudes are also changing. 51 percent of respondents have positive feelings when they think of AI, 5 points more than two years ago. Only 14 percent have negative feelings; two years ago, this figure was twice as high at 28 percent. 35 percent are neutral (up 14 points). The attitudes of women and men are similar: 45 percent of the women surveyed have positive feelings, compared with 35 percent in 2019, while the proportion of men with a positive attitude has remained almost constant at 56 percent (2019: 57 percent). And only 15 percent of women have negative feelings, down from 35 percent in 2019, putting them almost on par with men, 13 percent of whom feel negative towards AI (2019: 21 percent).

A large majority of consumers have a very positive view of the further development of artificial intelligence in industry (73 percent), research (69 percent), the health sector (66 percent) and education (62 percent). Four out of five respondents (80 percent) also hope that the use of artificial intelligence will bring personal benefits or make their everyday lives easier: 53 percent expect to save time, 48 percent to save energy and 43 percent to be relieved of routine tasks. At the same time, respondents have many concerns and reservations about the use of AI. In first place, at 66 percent, is the fear of hacker attacks that are automated or personalised with the help of AI. This is followed by concerns about mass surveillance (62 percent) and the misuse of personal data (61 percent). 61 percent are also concerned that AI will be used to manipulate people, for example through the targeted spread of fake news or filter-bubble effects in social networks. "The fear that AI systems will make mistakes in safety-critical applications is also widespread, at 60 percent," said Stenkamp. Faulty AI systems could have devastating consequences in automated driving, for example. Other fears concern job losses through the use of AI systems (57 percent) and discrimination against people, for example in personnel selection or in the automated granting of loans (41 percent). Scepticism about autonomous driving is particularly pronounced: only 39 percent would ride in a fully automated vehicle, 33 percent reject the idea, and another 29 percent are unsure and answered "don't know".

To ensure the safety of AI applications, 81 percent of respondents want independent testing before products are launched on the market, and 79 percent demand safety tests of AI products even once they are already on the market. "Especially for safety-critical AI applications, for example vehicles or medical devices, independent tests are necessary during operation," Stenkamp emphasised. AI-supported products, such as assistance systems in cars, can also "wear out" over time or become less effective. Furthermore, AI products change their properties when new functions are added via a software update; a reassessment of their safety is then necessary.

IMPROVING THE EU COMMISSION'S DRAFT REGULATION

The EU Commission's draft regulation, presented in April, does not provide for external audits for many less safety-critical AI applications. "A risk-based approach to AI regulation makes sense," said Stenkamp: an email spam filter has to be treated differently from a vehicle or a medical device. In the view of the TÜV Association, however, the draft still needs improvement. "So far, a clear derivation and definition of the risk classes is missing, which can lead to legal uncertainty," said Stenkamp. Assignment to the four risk classes should not be regulated by a fixed catalogue of technologies. Instead, protection goals such as danger to life and restrictions on fundamental rights should be the guiding criteria for every AI application. Stenkamp: "AI systems that pose a high risk should always be tested by independent third parties."

According to the TÜV Association, the EU Commission should learn from the experience with the General Data Protection Regulation when it comes to AI legislation: its application has often proved too complicated in practice. "We should not only develop AI regulation, but also drive its practical implementation forward," said Stenkamp. "New methods and standards are needed for testing and certifying AI applications." As a suitable instrument for this, the TÜV Association proposes establishing interdisciplinary "AI Quality & Testing Hubs", where AI providers, companies, research institutions and testing organisations could work together on norms and standards for artificial intelligence and develop new testing methods. This would require political support from the new German government.

Methodological note: The study results are based on a survey of 1,000 people aged 16 and over in Germany, conducted in August 2021 by the market research company Statista on behalf of the TÜV Association. The survey is representative of the population as a whole.

Downloads

Study report (English)

Study report (German) 

Presentation (German)

Infographic