AI and algorithmic decision-making will, over time, bring significant benefits to many areas of human endeavour. AI systems imbued with increasingly complex mathematical modelling and machine learning algorithms are being integrated into virtually every sector of the economy and society, to support and, in many cases, undertake increasingly autonomous decisions and actions. Algorithmic decision-making is often opaque and complex, and it can be difficult to explain the rationale for its conclusions – raising concerns about trustworthiness, accountability, liability, explainability, interpretability, transparency and human control.

In the public sector, these systems are increasingly being adopted by governments to improve and reform public service processes. In many situations, stakeholders and users of AI will expect reasons to be given for government decisions, as transparency and accountability are important elements for the functioning of public administration.

The need to address these challenges has become more urgent as the potential adverse impacts could be significant. If these challenges are not appropriately addressed, human trust will suffer, affecting adoption and oversight and, in some cases, posing significant risks to humanity and societal values.

IFIP Vice President and Deputy Vice-Chair of IP3, Anthony Wong, is collaborating with a multi-disciplinary group of 54 technology law experts from 16 countries to update Principle 3 (Transparency and Explainability) of the iTechlaw Global Policy Framework for Responsible AI: www.itechlaw.org/ResponsibleAI

Mr Wong welcomes and seeks input from technical experts in the community who have made recent innovations and developments in the field of Explainable Artificial Intelligence (XAI) and transparency. He can be contacted at anthonywong@agwconsult.com