On 21 April 2021, the European Commission published a legislative proposal containing the first EU rules on Artificial Intelligence (AI). This AI Regulation was recently adopted by the European Parliament and will enter into force twenty days after its publication in the Official Journal of the European Union, which is expected in the second half of 2024. The AI Regulation focuses mainly on high-risk AI systems, including AI systems used in the workplace. In this Alert, we therefore set out the most important consequences for employers.
The AI Regulation
The AI Regulation is the first piece of legislation to set rules on the use of AI. It aims to protect fundamental rights, democracy, the rule of law and environmental sustainability against the risks posed by AI systems.
The European legislator has included the following definition of an AI system in Article 3 of the AI Regulation: “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
The definition used by the European legislator is very broad: a system will in principle already fall within it if it operates with a certain degree of autonomy, meaning that it generates output that is influenced by factors beyond the user’s control. Examples include chatbots, personalised suggestions and selection systems for job applicants.
The AI Regulation follows a so-called “risk-based approach”, distinguishing between AI applications that entail i) an unacceptable risk, ii) a high risk, and iii) a limited or minimal risk:
- Unacceptable risk: AI systems that pose an unacceptable risk will be banned. These are systems that are considered a threat to people. Chapter II of the AI Regulation contains a list of prohibited AI practices, such as systems that manipulate behaviour or classify people on the basis of their appearance or social characteristics.
- High risk: Chapter III of the AI Regulation makes clear that AI systems that could cause significant harm to health, safety, fundamental rights, the environment, democracy or the rule of law qualify as high risk. Annex III to the Regulation lists the categories of AI systems that qualify as high risk. High-risk AI systems will be subject to strict requirements.
- Limited and minimal risk: For systems posing a limited risk, specific transparency obligations will apply to ensure that individuals are informed where necessary, for example that they are interacting with an AI system. AI systems posing only a minimal risk, which is the vast majority, may be used freely under the AI Regulation.
AI systems in the workplace in principle qualify as high-risk systems
A large part of the AI Regulation focuses on high-risk AI systems. Annex III to the AI Regulation provides that AI systems used in the field of employment, human resources management and access to self-employment (in particular for the recruitment and selection of persons, for decisions on promotion and dismissal, and for the assignment of tasks and the monitoring or evaluation of persons in work-related contractual relationships) are classified as high-risk systems. Such systems may have a significant impact on the future career prospects and livelihood of the persons concerned.
What requirements must be met if a high-risk system is used in the workplace?
Where high-risk AI systems are used, mandatory requirements apply. The AI Regulation distinguishes between the provider or distributor of an AI system and its user (referred to as the “deployer” in the final text). If an AI system used in the workplace qualifies as a high-risk system and the company is not itself a provider or distributor, the rules for users will apply. Users of high-risk AI systems are required, among other things, to:
- Monitor the use of the AI system and keep track of the data used to deliver output. If there are indications that the system may pose a risk to the health, safety or fundamental rights of individuals, the provider or distributor must be informed and the system may no longer be used;
- Check, to the extent that the user has control over the input data, whether that data is relevant for the intended purpose of the AI system;
- Follow the provider’s instructions and use the system for its intended purpose. The provider is responsible for the prior conformity assessment of high-risk systems, registers the system in the designated EU database and reports risks to the national competent authorities. If the user deviates from the intended purpose, the user will itself be considered a provider and the rules for providers will apply;
- Inform those who are exposed to the high-risk AI system.
Entry into force
As indicated above, the AI Regulation will enter into force twenty days after its publication in the Official Journal of the European Union, expected in the second half of 2024. Its provisions will then apply in phases: the provisions on prohibited AI systems will apply six months after entry into force, and most of the remaining provisions, including the rules on high-risk AI systems, will apply two years after entry into force.
Sanctions for non-compliance
The sanctions for non-compliance with the AI Regulation are modelled on those in the GDPR. In principle, under Article 99 of the AI Regulation the fines are as follows: up to €35 million or 7% of total worldwide annual turnover, whichever is higher, for infringements of the prohibited AI practices; up to €15 million or 3% of total worldwide annual turnover for non-compliance with other requirements or obligations of the AI Regulation; and up to €7.5 million or 1% of total worldwide annual turnover for supplying incorrect, incomplete or misleading information.
When determining the amount of the administrative fine, all relevant circumstances of the situation will be taken into account, including the size of the company committing the infringement.
To do
If AI systems are used in the workplace, the AI Regulation will entail various administrative tasks. Our advice is therefore to take the following steps in advance:
- Identify which AI systems are currently in use and how they qualify under the risk classification. Think of onboarding processes, systems that automate HR tasks, or systems used in assessing whether to award a bonus;
- Check whether the AI systems used comply with legal regulations;
- Assess whether high-risk AI systems are used for the right purposes;
- Develop a plan for how the systems will be monitored and how the logs will be stored. For example, contractual arrangements can be made with the providers of the AI systems so that the storage of the logs in accordance with Article 20 of the AI Regulation remains under the provider’s control.
Key contacts
Suzan van de Kam
Partner | Lawyer
+31 (0)70 318 4297
Lotte Varkevisser
Associate | Lawyer
+31 (0)70 318 4200