AI Governance will be the primary focus of IFIP’s newest Working Group (WG), which has been established within IFIP Technical Committee (TC) 12.

WG12.12 (AIGOV) will be chaired by IFIP Vice President, ex-CIO and technology lawyer Anthony Wong, who has written and presented extensively on ethical and regulatory approaches to AI in his various capacities as ACS President (2010-11 and 2016-17), SEARCC President (2010-11) and IP3 Deputy Chair.

AI Governance has become a pressing issue for humanity in light of the latest technological advances. Recent global developments advocate for new frameworks, structures and processes for better governance and for responsible design, development, deployment and use of AI. 

Many stakeholders have produced AI ethical principles and frameworks, including Australia, the EU, the OECD, the World Economic Forum (WEF) and Singapore, to name a few. The debates have matured significantly since 2017, moving beyond the ‘what’ of ethical principles to the ‘how’: how such principles can be operationalised in design and implementation to minimise risks and negative outcomes.

The main purpose of AIGOV will be to connect with selected groups working on AI Governance, fostering international collaboration and bringing fresh ideas and opinions from a multidisciplinary, multilateral and multicultural group of stakeholders, including AI experts and students. It will also elaborate reasonable mechanisms for AI governance and for the mitigation of AI risks.

There is also a growing awareness that principles and professional practices provide important norms for the larger AI governance ecosystem, including relevant policies (e.g. AI national plans), laws and regulations, standards, design and impact assessment frameworks, and auditing and certification, to name a few. In January this year, IFIP IP3 and TC 9 launched the IFIP Code of Ethics and Professional Practice (www.ifipnews.org/ifip-launches-global-code-ethics-ict-sector/).

Mr Wong said ICT professionals and academics are uniquely positioned to make a meaningful contribution to addressing the ethical dilemmas relating to AI.

“Technologists and AI developers understand better than most the trends and trajectories of emergent technologies and their potential impact on the economic, safety and social constructs of the workplace and society. It is incumbent on us to raise these issues and ensure they are widely debated, so that appropriate and intelligent decisions can be made about the changes, risks and challenges ahead. Technologists and AI developers are well placed to address some of the risks and challenges during the design and lifecycle of AI-enabled systems. It would be beneficial to society for ICT professionals to assist government, legislators, regulators and policy formulators with their unique understanding of the strengths and limitations of the technology and its effects.”

In a paper entitled “Ethics and Regulation of Artificial Intelligence”, presented at AI4KM 2020 at the International Joint Conference on Artificial Intelligence – Pacific Rim International Conference on Artificial Intelligence in Yokohama, Japan in January 2021, Mr Wong wrote: “As autonomic and self-learning capabilities increase, robots and intelligent AI systems will feel less and less like machines and tools. Self-learning capabilities for AI have added complexity to the equation. Will human actors use robots to shield themselves from liability or shift any potential liabilities from the developers to the robots? Or will the spectrum, allocation and apportionment of responsibility keep step with the evolution of self-learning robots and intelligent AI systems? Regulators around the world are wrestling with these questions.”

Following consultation and negotiation with representatives of its member states, UNESCO has released the final draft of its international standard-setting instrument on the ethics of artificial intelligence (the Recommendation), which is to be submitted to member states for adoption in November 2021. The Recommendation will establish a global framework to ensure that digital transformations promote human rights and contribute to the achievement of the Sustainable Development Goals. It will address issues around transparency, accountability and privacy; contain action-oriented policy chapters on data governance, education, culture, healthcare and the economy; and provide governments and policy makers with a global framework for regulating AI.

In April 2021, in a landmark development, the European Commission proposed the first legal framework for AI, which could set new benchmarks and norms for the global regulation of AI. The global implications could be similar to those of the EU’s General Data Protection Regulation (GDPR). The proposal followed intense debates on the ethics of AI over the last few years and adopts a risk-based approach, differentiating between three categories of risk: uses of AI that create (i) an unacceptable risk, (ii) a high risk, and (iii) low or minimal risk.

The legislative proposal contains a list of prohibited practices where uses of AI are considered unacceptable. These include practices with significant potential to manipulate persons or exploit the vulnerabilities of specific groups, AI-based social scoring, and the use of biometric systems in publicly accessible spaces unless certain limited exceptions apply. Fines of up to €30 million or 6 per cent of worldwide annual turnover have been proposed.

Under the proposals, AI systems identified as high-risk are subject to more stringent requirements. High-risk areas include critical infrastructure (e.g. transport); scoring to determine access to educational or vocational training; the safety of products; employment; essential services; law enforcement; and the administration of justice.

AIGOV looks forward to participating in these international conversations. Some members of AIGOV took part in the group’s first public appearance this week at the #ifip60 panel session “AI Ethics and Governance”. Moderated by Eunika Mercier-Laurent, Chair of IFIP TC12 (AI), the session was part of the 30th International Joint Conference on Artificial Intelligence (IJCAI-21), held in a Montreal-themed virtual environment due to the Covid-19 pandemic.

Image: The AI Ethics and Governance Panel held this week as part of IJCAI-21