What measures ensure that AI is developed responsibly in companies? 

A roundtable series organised by the German Informatics Society (GI) and Stiftung Mercator is now exploring this question. The kick-off workshop on 21 June brought together AI decision-makers from a variety of companies – from start-ups to large corporations. 

The use of artificial intelligence (AI) makes many things possible, but also entails social, ecological and economic risks. To address these, companies are increasingly adopting ethics policies to help achieve goals such as transparency, security, robustness and fairness. Globally, AlgorithmWatch counted over 160 different guidelines on ethical AI development as early as 2020. However, only ten of these guidelines contain binding commitments and practical mechanisms for their implementation. For this reason, experts doubt their effectiveness.

“The mere existence of guidelines does not guarantee that the ethical principles they state will be taken into account in AI development. Rather, there need to be concrete procedures for the practical implementation of AI ethics and a shared understanding of the underlying values,” says GI project leader Julia Meisner.

Many companies are already aware of this issue. However, existing attempts to formulate guidelines more concretely or to establish processes for their implementation are rarely communicated publicly or shared with other companies. The roundtable series on ethical AI development (RTeKI) therefore provides a space for AI decision-makers from software-developing companies to exchange ideas about measures for implementing AI ethics guidelines. Instead of drafting yet more guidelines, the participants collect best practices for implementing existing ones, refine them and disseminate them widely so that other companies can also benefit from the results. Together with experts from academia and civil society, participants look not only at technical approaches, but also at governance and business development issues.

“The implementation of AI ethics guidelines also benefits companies in a very practical way: in the development of algorithmic systems, they create greater certainty of action, increase product quality and promote trust. ‘Ethical AI’ can thus become a decisive competitive advantage in the future,” says Florian Christ, project manager in Stiftung Mercator’s Digitized Society department.

To kick off the series, 13 AI experts met at the AI Campus in Berlin on Tuesday, 21 June. In a workshop, the participants discussed the current status of ethical guidelines for AI in their companies and formulated initial problems and goals. The very different profiles of the participating companies and their representatives provided much food for discussion: the series brings together start-ups as well as large corporations, expertise from fields such as IT, management and compliance, and sectors such as automotive, industrial applications and e-commerce.

Six more roundtable events, led by GI and Stiftung Mercator, will follow at intervals of two to four months until the end of 2023. Participating companies include ai-omatic solutions, Aleph Alpha, BMW, Continental, Deutsche Bahn, Deutsche Telekom, Lufthansa Industry Solutions, Microsoft Deutschland, ML6, SAP, Siemens, team neusta, Volkswagen and Zalando.

Additional information can be found at: roundtable-ki.gi.de