Dear Reader,
At the beginning of 2024, the European Parliament adopted the Regulation laying down harmonized rules on Artificial Intelligence (AI Act). Below, you will find out what far-reaching consequences this may have for employers and what, in particular, needs to be considered.
I. Regulatory Content of the AI Act
The AI Act is the world’s first comprehensive set of rules for the use of artificial intelligence (AI). AI refers to a fast-evolving family of technologies that contributes to a wide array of economic, environmental and societal benefits, but at the same time entails risks for public interests and certain rights. The AI Regulation aims to resolve this conflict by promoting the uptake of human-centric and trustworthy AI while ensuring a high level of protection of health, safety and the rights enshrined in the Charter of Fundamental Rights of the EU.
The Regulation enters into force on August 01, 2024, and, with a few exceptions, is fully applicable 24 months later.
II. Relevance of the AI Regulation in Connection with the Recruitment Process and HR Management
According to Art. 2 para. 1 b), c) AI Act, the law applies, among others, to deployers of AI systems that have their place of establishment or are located within the European Union, as well as to deployers that have their place of establishment or are located in a third country where the output produced by the AI system is used in the European Union.
In simple terms, an “AI system” is defined as a machine-based system that autonomously infers, from the input it receives, how to generate certain outputs that can influence physical or virtual environments.
“Deployer” is, among others, a natural or legal person using such an AI system under its authority in a professional, non-personal context.
The AI Regulation follows a risk-based approach: The higher the risk for the rights mentioned above, the stricter the obligations. A distinction is made between prohibited AI practices, high-risk AI systems and lower-risk AI systems. “Risk” means the combination of the probability of occurrence and the severity of damage. Art. 6 para. 2 AI Act in conjunction with Annex III no. 4 classifies as high-risk those AI systems that are intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse or filter job applications and to evaluate candidates. The same applies to AI systems that are intended to be used in existing work-related relationships to make decisions regarding working terms, promotions or terminations, to allocate tasks based on behavior or personal characteristics and traits, or to monitor and evaluate performance and behavior. Only in narrowly defined exceptions are such AI systems not considered high-risk.
III. Deployer Obligations under the AI Act
According to Art. 4 AI Act, deployers must ensure to the best of their ability that their staff and other persons involved in the operation and use of AI systems on their behalf have a sufficient level of “AI competence”. Sect. 12 para. 1 cl. 3 ArbSchG also obliges the employer to provide sufficient instruction when introducing new work equipment and technologies. Company guidelines are particularly suitable for this purpose.
According to Art. 26 AI Act, deployers of high-risk AI systems also have the following obligations, among others: ensuring use in accordance with the instructions; delegating supervision to appropriate persons; ensuring that input data is relevant and sufficiently representative in view of the intended purpose of the AI system; monitoring on the basis of the instructions and, if necessary, informing the persons/authorities in charge and suspending the use; keeping automatically generated logs; informing employee representatives and affected employees that they will be subject to the use of the AI system; cooperating with the authorities in charge.
In the cases of Art. 25 AI Act, deployers can also be considered providers of a high-risk AI system, meaning that they are then also subject to the numerous provider obligations.
IV. Suspected Breaches of the AI Regulation and Their Consequences
According to Art. 77 AI Act, the authorities in charge may request and inspect documentation if necessary to protect any rights that may be affected. Should this be insufficient, a technical test of the high-risk AI system may also be carried out.
According to Art. 99 para. 4 AI Act, fines of up to EUR 15,000,000 or – in the case of companies – up to 3 % of the total worldwide annual turnover of the previous financial year, whichever is higher, may be imposed for breaches of the deployer obligations under Art. 26 AI Act.
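To illustrate with purely hypothetical figures: for a company with a total worldwide annual turnover of EUR 1 billion in the previous financial year, 3 % would amount to EUR 30,000,000; as this exceeds the fixed amount of EUR 15,000,000, the higher figure would mark the upper limit of the possible fine.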
V. Involvement of the Works Council in the Use of AI Systems
The works council does not have a right of initiative regarding the introduction of AI. However, if the employer plans to use AI in the company, it must inform the works council in good time and submit the necessary documents, Sect. 90 para. 1 no. 3 BetrVG. If the works council needs to assess the introduction or use of AI in order to fulfil its duties, or if the works council and employer agree on a permanent AI expert, the consultation of an expert is considered necessary, Sect. 80 para. 3 cl. 2, 3 BetrVG. According to Sect. 95 para. 2a BetrVG, certain rights that the works council has when establishing selection guidelines for hiring, transfers, regrouping and terminations in accordance with para. 1 and 2 also apply if AI is used in the process of establishing the guidelines.
The works council may also have enforceable co-determination rights in the introduction of specific AI tools, Sect. 87 para. 1 no. 6 (introduction and use of technical equipment intended to monitor the behavior or performance of employees), no. 7 (regulations on the prevention of accidents at work and occupational illnesses as well as on health protection within the framework of statutory provisions or accident prevention regulations) BetrVG. The use of AI may also constitute a change in operations in the sense of Sect. 111 BetrVG, so that the negotiation of a reconciliation of interests or social plan may be necessary.
VI. Conclusion
When operating a high-risk AI system, employers are subject to numerous obligations, non-compliance with which can have far-reaching negative consequences. We will be happy to support and advise you on all issues regarding the use of AI in companies.