
The EU’s Artificial Intelligence Act entered into force on 1 August 2024. In the context of the AI Act, deployers are responsible for ensuring the safe and compliant use of AI systems as they are rolled out. Organisations deploying high-risk AI systems must prepare to meet a complex set of obligations by 2 August 2027.
A “deployer” is a natural or legal person, public authority, agency or other body using an AI system under its authority. Deployers become subject to the AI Act if they are established or located in the EU or, where they are established or located in a third country, if the output of the AI system is used in the EU. AI systems that are considered to be high-risk, such as those used in critical infrastructure, education, healthcare, law enforcement, border management or elections, will have to comply with strict requirements set out in the Act. The list of high-risk use cases is set out in Annex III of the AI Act and will be maintained by the European Commission.
When dealing with high-risk AI systems, deployers face a long list of obligations, including:
- AI Literacy Training: Ensure that all users have a sufficient level of AI literacy and understanding to use the system as intended.
- Training and Support for Overseers: Provide the necessary training and support to those overseeing high-risk AI systems so that they have the competence and authority required for the role. In particular, ensure that overseers receive guidance on how and when to intervene to avoid negative consequences or risks, and on when to stop the system if it does not perform as intended.
- Operational Obligations (Articles 26 and 79, Recitals 91, 94, and 95)
- Technical and organisational measures: Implement appropriate technical and organisational measures to ensure that the high-risk AI system is used in accordance with the instructions for use.
- Input data quality management: Ensure that input data is relevant and sufficiently representative for the high-risk AI system’s intended purpose; a minimal validation sketch follows this group of obligations.
- Suspension of use: Suspend use of a high-risk AI system where it presents a risk.
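The input-data duty above is an organisational obligation rather than a coding recipe, but a rough sketch may help illustrate what an automated first-pass check could look like. Everything below (the input schema, the reference distribution, and the tolerance) is a hypothetical assumption for illustration; the Act prescribes no particular tooling.

```python
# Illustrative sketch only: field names, thresholds, and the reference
# distribution are hypothetical assumptions, not requirements from the Act.
from collections import Counter

REQUIRED_FIELDS = {"age", "region"}   # hypothetical input schema
REFERENCE_SHARE = {"north": 0.25, "south": 0.25, "east": 0.25, "west": 0.25}
MAX_DEVIATION = 0.10                  # hypothetical tolerance

def validate_batch(records: list[dict]) -> list[str]:
    """Return a list of data-quality issues found in an input batch."""
    issues = []
    # Completeness: every record must carry the fields the system expects.
    for i, record in enumerate(records):
        missing = REQUIRED_FIELDS - record.keys()
        if missing:
            issues.append(f"record {i}: missing fields {sorted(missing)}")
    # Representativeness: compare the batch's region mix against a reference
    # distribution chosen for the system's intended purpose.
    counts = Counter(r["region"] for r in records if "region" in r)
    total = sum(counts.values())
    for region, expected in REFERENCE_SHARE.items():
        observed = counts.get(region, 0) / total if total else 0.0
        if abs(observed - expected) > MAX_DEVIATION:
            issues.append(
                f"region '{region}': share {observed:.2f} deviates from "
                f"expected {expected:.2f} by more than {MAX_DEVIATION}"
            )
    return issues

if __name__ == "__main__":
    batch = [{"age": 34, "region": "north"}, {"age": 51, "region": "north"}]
    for issue in validate_batch(batch):
        print("DATA QUALITY:", issue)
```

In practice such checks would be tailored to the provider’s instructions for use and to the population the system is intended to serve.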
- Control and Risk-Management Obligations (Articles 5, 26, and 27, Recital 96)
- Pre-check for prohibited practices: Ensure that the AI system does not engage in any of the prohibited practices defined in Article 5 of the AI Act. Note that the Act has limited binding effect for Ireland in the areas of judicial and police cooperation, as set out in Recital 40.
- Fundamental Rights Impact Assessment (FRIA): A deployer may also have to conduct a Fundamental Rights Impact Assessment (FRIA) under the AI Act and notify the national authority of the results. Deployers that are public bodies or private enterprises providing public services, and deployers of certain other high-risk AI systems, are covered by this requirement.
Deployers will also need to conduct a DPIA in respect of the use of high-risk AI systems and provide a summary to the national authority.
In practice, the FRIA requirement can be met where the FRIA elements have been incorporated into one consolidated DPIA that meets the requirements of both the GDPR and the AI Act, meaning one document will suffice.
- Human oversight: Assign individuals to oversee the system. These individuals must have the necessary training, competence, and authority to manage the system and intervene if necessary.
- Continuous monitoring: Ensure that the AI system is monitored continuously. This includes assessing the system’s operation according to the instructions provided by the provider and taking action if any risks are identified. If the use of the AI system presents a significant risk, deployers must inform both the provider and the relevant market surveillance authorities and suspend use of the system; a minimal monitoring sketch follows this group.
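To make the monitoring duty concrete, here is a minimal sketch of a polling loop that suspends use when a risk indicator crosses a threshold. The probe check_system, the error-rate threshold, and the notification hook are all hypothetical assumptions; the Act sets the obligation, not the implementation.

```python
# Minimal monitoring sketch. check_system, the threshold, and the
# notification hook are hypothetical assumptions for illustration.
import time

ERROR_RATE_THRESHOLD = 0.05   # hypothetical limit from the provider's instructions

def check_system() -> float:
    """Placeholder probe returning the system's current error rate."""
    return 0.01  # stub value for the sketch

def notify_provider_and_authority(message: str) -> None:
    """Placeholder for the provider and market surveillance channels."""
    print("NOTIFY:", message)

def monitor(poll_seconds: int = 60) -> None:
    suspended = False
    while not suspended:
        error_rate = check_system()
        if error_rate > ERROR_RATE_THRESHOLD:
            # Significant risk identified: inform the provider and the
            # market surveillance authority, then suspend use of the system.
            notify_provider_and_authority(
                f"error rate {error_rate:.2%} exceeds threshold; suspending use"
            )
            suspended = True
        else:
            time.sleep(poll_seconds)
```

In a real deployment the probe would read live quality metrics, and the notification hook would feed the incident-reporting channels required by Articles 73 and 79.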
- Documentation Obligations (Article 26)
- Log retention: Ensure that logs automatically generated by the AI system are retained for a period of at least six months, unless national or EU law specifies otherwise, in particular EU law on the protection of personal data. A simple retention sketch follows below.
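As a rough illustration of the retention floor, the sketch below deletes log files only once they are older than six months. The log directory and naming scheme are hypothetical assumptions; the six-month minimum is the only figure taken from the Act, and any longer period required by other EU or national law would take precedence.

```python
# Illustrative retention job. The log directory and naming scheme are
# hypothetical; only the six-month minimum comes from the Act (Article 26).
from datetime import datetime, timedelta, timezone
from pathlib import Path

RETENTION = timedelta(days=183)         # approximation of the six-month floor
LOG_DIR = Path("/var/log/ai-system")    # hypothetical location

def purge_expired_logs(now: datetime | None = None) -> None:
    """Delete log files only once they are older than the retention floor."""
    now = now or datetime.now(timezone.utc)
    for log_file in LOG_DIR.glob("*.log"):
        modified = datetime.fromtimestamp(log_file.stat().st_mtime, timezone.utc)
        if now - modified > RETENTION:
            log_file.unlink()   # older than six months: safe to remove
```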
- Notification and Transparency Obligations (Articles 26, 50, 72, 73, 79, Recitals 92, 93, and 155)
- Towards providers: Inform providers of the high-risk AI system of any relevant operational circumstances, in accordance with Article 72.
- Towards providers and authorities: Notify providers and national authorities in case of risks to the health, safety, or fundamental rights of individuals or a serious incident, and suspend use of the system.
- Towards employees and individuals: Inform individuals when a high-risk AI system is being used in decisions that affect them. Before putting into service or using a high-risk AI system in the workplace, deployers that are also employers must inform their workers and any relevant workers’ representatives.
- Compliance Obligations under Other Legislation
- Ensure all AI-related data processing complies with other EU and national laws, including meeting data protection obligations under the GDPR.
- Cooperation Obligations
- EU and national authorities: Cooperate with EU and national authorities and engage in voluntary codes of practice and guidance.
- White labelling, modification, change in purpose: Check whether the high-risk AI system is deployed on a white-labelled basis (i.e., the system is provided by a third party but is labelled with the deployer’s firm name or trademark), whether the high-risk AI system has been substantially modified, or whether its intended purpose has been modified in such a way that it becomes a high-risk AI system in its own right. In these circumstances, the deployer shall be considered to be a provider of a high-risk AI system for the purposes of the AI Act and will be subject to the obligations set out in Article 16.
- Public authorities: Deployers that are public authorities are subject to specific obligations regarding the registration of high-risk AI systems. If a system has not been registered in the EU database, these authorities are prohibited from using it and must notify the provider or distributor. Other deployers will be entitled to register voluntarily.
Organisations deploying high-risk AI systems in the EU should begin factoring the AI Act’s requirements into their current planning and design decisions. These obligations represent an organisational commitment to good governance, safety, compliance, and accountability. By embracing them, organisations can foster trust in their AI-driven innovation.
Written by Kieran Harte, this article was originally published on the Irish Computer Society website.
Links to more information
The AI Act Explorer | EU Artificial Intelligence Act