BCS, the Chartered Institute for IT, has warned that a lack of diversity in teams developing artificial intelligence (AI) could lead to in-built bias and discrimination in its decisions.

The comments are featured in a new report by the Committee on Standards in Public Life (CSPL), which examines whether the existing frameworks and regulations around machine learning are sufficient to ensure high standards of conduct are upheld as technologically assisted decision-making is adopted more widely across the public sector.

Dr Bill Mitchell OBE, Director of Policy at BCS, said: “Lack of diversity in product development teams is a concern as non-diverse teams may be more likely to follow practices that inadvertently hard-wire bias into new products or services.”

Sampling errors can also produce discriminatory outcomes. For example, the report explained, a machine learning tool designed to diagnose skin cancer that has been trained only on white skin could be less accurate on black skin. This bias in the training data may not be the result of active human prejudice, but it can still produce a discriminatory outcome because the system is more likely to misdiagnose BAME people.
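To make the mechanism concrete, here is a minimal, hypothetical Python sketch, not drawn from the report or any real diagnostic system. It uses entirely synthetic data and the scikit-learn library to show how a classifier trained almost exclusively on one subgroup ("A") can score well on that subgroup while performing markedly worse on an under-sampled subgroup ("B") whose data looks different. The group names, feature distributions, and thresholds are all illustrative assumptions.

```python
# Hypothetical sketch of sampling bias: a model trained mostly on
# subgroup A learns A's decision boundary and misclassifies subgroup B.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

def make_group(n, threshold):
    # Two synthetic features per example; the true decision boundary
    # sits at a different threshold for each subgroup (an assumption
    # made purely to illustrate distribution shift between groups).
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > threshold).astype(int)
    return X, y

# Training data: 1,000 examples from group A, only 20 from group B.
X_a, y_a = make_group(1000, threshold=0.0)
X_b, y_b = make_group(20, threshold=1.0)
X_train = np.vstack([X_a, X_b])
y_train = np.concatenate([y_a, y_b])

model = LogisticRegression().fit(X_train, y_train)

# Held-out test sets of equal size for each subgroup.
X_a_test, y_a_test = make_group(500, threshold=0.0)
X_b_test, y_b_test = make_group(500, threshold=1.0)

print("accuracy on group A:", accuracy_score(y_a_test, model.predict(X_a_test)))
print("accuracy on group B:", accuracy_score(y_b_test, model.predict(X_b_test)))
# Typically prints high accuracy for group A and noticeably lower
# accuracy for group B: no one programmed prejudice into the model,
# yet the skewed sample alone produces the unequal outcome.
```

The point of the sketch is that the disparity emerges from the composition of the training data alone, which mirrors the report's argument that biased outcomes need not involve any active human prejudice.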

According to Dr Mitchell, “There is a very old adage in computer science that sums up many of the concerns around AI-enabled public services: ‘Garbage in, garbage out.’ In other words, if you put poor, partial, flawed data into a computer it will mindlessly follow its programming and output poor, partial, flawed computations. AI is a statistical-inference technology that learns by example. This means if we allow AI systems to learn from ‘garbage’ examples, then we will end up with a statistical-inference model that is really good at producing ‘garbage’ inferences.”

The report also suggested that diverse teams make public authorities more likely to identify the potential ethical pitfalls of an AI project. Many contributors emphasised the importance of diversity, telling the Committee that diverse teams lead to more diverse thought, and that this in turn helps public authorities identify any potential adverse impact of an AI system.