European Union member states last week gave final approval to the world’s first major law regulating artificial intelligence, as institutions around the world race to introduce curbs on the technology.
The EU Council said it had approved the AI Act — a groundbreaking piece of regulation that sets comprehensive rules governing artificial intelligence technology.
Responding to the announcement, IFIP President Anthony Wong said the risk-based legislation was amended in its late stages to reflect some of the issues and challenges raised by the public release of generative AI.
“The creation of the EU AI Office is one of the outcomes of the legislation, which includes measures to ensure proper enforcement and establishes several other governing bodies:
- A scientific panel of independent experts to support enforcement activities;
- An AI Board with member states’ representatives to advise and assist the EU Commission and member states on consistent and effective application of the AI Act; and
- An advisory forum for stakeholders to provide technical expertise to the AI Board and the EU Commission.
“As the world has embraced and continues to embrace the ramifications of the EU GDPR, the EU AI Act will have similar flow-on effects globally, and our minds must now turn to regulatory compliance measures and their implications. The big challenge ahead is implementation and ensuing compliance, including evolving the regulation to keep pace with the rapid advancement of AI,” he said.
“The adoption of the AI Act is a significant milestone for the European Union,” Mathieu Michel, Belgium’s secretary of state for digitization, said in a statement on Tuesday.
“With the AI Act, Europe emphasizes the importance of trust, transparency and accountability when dealing with new technologies, while at the same time ensuring this fast-changing technology can flourish and boost European innovation,” Michel added.
The AI Act applies a risk-based approach to artificial intelligence, meaning that different applications of the technology are treated differently, depending on the perceived threats they pose to society.
The law prohibits applications of AI whose risk level is considered “unacceptable.” These include so-called “social scoring” systems that rank citizens based on the aggregation and analysis of their data, predictive policing, and emotion recognition in the workplace and schools.
High-risk AI systems include autonomous vehicles and medical devices, which are evaluated on the risks they pose to the health, safety, and fundamental rights of citizens. They also include applications of AI in financial services and education, where there is a risk of bias being embedded in AI algorithms.
U.S. Big Tech firms in the spotlight
Matthew Holman, a partner at law firm Cripps, said the rules will have major implications for any person or entity developing, creating, using or reselling AI in the EU — with U.S. tech firms firmly in the spotlight.
“The EU AI Act is unlike any law anywhere else on earth,” Holman said. “It creates for the first time a detailed regulatory regime for AI.”
“U.S. tech giants have been watching this developing law closely,” Holman added. “There has been a lot of funding into public-facing generative AI systems which will need to ensure compliance with the new law that is, in some places, quite onerous.”
The EU Commission will have the power to fine companies that breach the AI Act as much as 35 million euros ($38 million) or 7% of their annual global revenues — whichever is higher.
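To make that “whichever is higher” ceiling concrete, here is a minimal sketch in Python of the calculation; the function name and the revenue figure are hypothetical, purely for illustration.

```python
def max_ai_act_fine(annual_global_revenue_eur: float) -> float:
    """Upper bound on an AI Act fine: the greater of a flat
    35 million euros or 7% of annual global revenue."""
    return max(35_000_000.0, 0.07 * annual_global_revenue_eur)

# Hypothetical example: a company with 2 billion euros in annual
# global revenue faces a ceiling of 140 million euros, since 7%
# of its revenue exceeds the flat 35 million euro amount.
print(f"{max_ai_act_fine(2_000_000_000):,.0f} euros")  # 140,000,000 euros
```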
The change in EU law comes after OpenAI’s November 2022 launch of ChatGPT. Officials realized at the time that existing legislation lacked the detail needed to address the advanced capabilities of emerging generative AI technology and the risks around the use of copyrighted material.
The law imposes tough restrictions on generative AI systems, referred to by the EU as “general-purpose” AI. These include requirements to respect EU copyright law, transparency disclosures on how the models are trained, routine testing and adequate cybersecurity protections.
But it’s going to take some time before these requirements actually kick in, according to Dessi Savova, a partner at Clifford Chance. The restrictions on general-purpose systems won’t take effect until 12 months after the AI Act comes into force.
And even then, generative AI systems that are already commercially available, such as OpenAI’s ChatGPT, Google’s Gemini and Microsoft’s Copilot, get a “transition period” of 36 months from the day the act comes into force to bring their technology into compliance with the legislation.
“Agreement has been reached on the AI Act — and that rulebook is about to become a reality,” Savova told CNBC via email. “Now, attention must turn to the effective implementation and enforcement of the AI Act.”
This article was first published on the CNBC website.