Creating trust in AI-based systems and products

Artificial intelligence is reaching ever more areas of consumers' lives. The existing legal framework therefore needs to be adapted.


Although artificial intelligence (AI) is not a new phenomenon from a scientific point of view, political interest in it has grown significantly in recent years. AI-based systems are rapidly becoming part of many areas of citizens' everyday lives. The technology holds enormous potential that should be explored, but it also carries serious risks that need to be mitigated: AI can impinge on privacy, raise liability issues, and limit users' autonomy. Large amounts of data are generated, used and exchanged, and these data must be protected against tampering and espionage. This is particularly crucial where personal data are involved.

Security of and trust in AI-based applications are fundamental prerequisites for the social acceptance of this pivotal technology. The desire for security is borne out by recent surveys: according to a poll by VdTÜV, 83% of citizens would like AI-based applications to be monitored by an independent body.¹ This requires high quality and safety standards based on a coherent and robust legal framework.

Our Positions:

  • Amend the existing legal framework to account for new technology
  • Mandate a risk-based security assessment for the entire product life cycle

Download the position paper: Artificial intelligence