New Research Project on the Traceability of Algorithmic Decision Systems 

The increasing use of algorithmic decision-making (ADM) systems is bringing about far-reaching changes in the world of work, particularly in personnel management, where selection and management decisions are increasingly prepared by ADM systems. 

Employees are also increasingly interacting with “intelligent” systems in their immediate working environment. Against this background, what form can procedures take to ensure the comprehensible, controllable and non-discriminatory use of “artificial intelligence” (AI)? How can such procedures be implemented, and what institutional requirements must be met? 

The Gesellschaft für Informatik e.V. (GI) is addressing these questions in a 20-month research project on behalf of the AI Observatory of the German Federal Ministry of Labour and Social Affairs (BMAS), together with Fraunhofer IESE, the Algorithm Accountability Lab at the Technical University of Kaiserslautern, the Institute for Legal Informatics at Saarland University and the Stiftung Neue Verantwortung.

GI Managing Director Daniel Krupka said: “With this research project we want to contribute to the safe, reliable, fair and non-discriminatory use of AI technology. In our interdisciplinary project, we will identify suitable procedures to ensure that algorithmic decision-making systems operate transparently and in compliance with the law in the future. We therefore look forward to cooperating with our partners and the AI Observatory of the BMAS.”

The outsourcing of decisions to ADM systems is associated with the expectation that those decisions will be made more precisely, more objectively and more cost-efficiently. However, such systems often leave those affected in the dark about how their data is processed, and their use may be incompatible with applicable labor law. The study “Technical and legal considerations of algorithmic decision-making procedures”, published in 2018 by the GI’s “Legal Informatics” section on behalf of the Advisory Council for Consumer Affairs (SVRV), therefore concludes that suitable testing and auditing procedures are needed to create the necessary transparency and to make the use of AI systems legally secure. 

The research project “AI Testing & Auditing” builds on this finding and, with an interdisciplinary team of (socio-)computer scientists, software engineers, and legal and political scientists, will develop recommendations for action on testing and auditing procedures. In addition to AI systems in human resources management and recruiting, the project will also consider human-machine cooperation in industrial production. 

Further information on the “AI Testing & Auditing” research project is available at https://testing-ai.gi.de/