What does the AI Act mean for employers?

The EU has recently adopted the AI Act, in full called the “Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts”. It aims to classify and regulate different AI systems and tools in order to identify and restrict harmful applications. Below we will mostly look at how this can impact employers using AI, e.g. to recruit, monitor or evaluate their employees.

  1. Risk qualification

It is important to know that the AI Act does not only impose obligations on AI providers, who create AI tools, but also on deployers, meaning any company that applies an AI tool, including employers who make use of these applications in an employment context.

The AI Act is, in essence, a risk-based regulatory system for AI. It classifies AI applications on a scale based on their perceived risk:

  • Unacceptable risk: AI tools falling under this category are prohibited (Chapter II);
  • High risk: these AI tools pose a significant risk to health, safety or the fundamental rights of persons. In this case, the AI Act imposes several measures and safeguards in order to keep the application safe and under control (Chapter III);
  • Limited risk: this includes AI systems intended to directly interact with natural persons, AI systems, including General Purpose AI systems, generating synthetic audio, image, video or text content, and deep fakes. In this case there is a transparency obligation towards users (Chapters IV and V);
  • Minimal risk: these applications do not require any further regulation.

  2. Prohibited AI

The prohibited category covers malignant applications, e.g. purposefully manipulative or deceptive techniques used to distort the behaviour of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision they would not otherwise have taken. Other prohibited systems are those which exploit vulnerabilities, disabilities or social and economic situations and which cause significant harm, as well as far-reaching surveillance systems used by the authorities. In any case, most employment applications will normally not fall under the prohibited category.

  3. Classification as high-risk AI

By contrast, it is easy to imagine HR applications that fall under the high-risk category. This category includes (see Annex III of the AI Act):

  • Biometrics, including remote biometric identification systems (unless their sole purpose is to verify that a person is who they claim to be), systems for biometric categorisation (using sensitive or protected characteristics) and systems for emotion recognition;
  • Employment, workers management and access to self-employment:
    • AI systems intended to be used for the recruitment or selection of natural persons, in particular, to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates;
    • AI systems intended to be used to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics or to monitor and evaluate the performance and behaviour of persons in such relationships.

It is safe to say that most AI tools that could be interesting for managing employees fall under these categories. Art. 6.3 AI Act allows a derogation from this classification if the system does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons. This is the case if one or more of the following conditions are fulfilled:

  • the AI system is intended to perform a narrow procedural task;
  • the AI system is intended to improve the result of a previously completed human activity;
  • the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or
  • the AI system is intended to perform a preparatory task to an assessment relevant to the purposes listed as high risk.

Notwithstanding these derogations, an AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons. Of course, one can expect a lot of discussion regarding this catch-all concept of profiling natural persons.

  4. Requirements for high-risk AI systems

Art. 9 requires the establishment of a risk management system. This system includes the preventive identification of all possible risks and the adoption of appropriate and targeted risk management measures designed to address the risks identified. This does not only have consequences for the design of the AI tools but also, e.g., for the information and training of the persons who operate them. High-risk AI systems also bring with them specific obligations regarding data management, especially when personal data are used to train the AI system, as well as record-keeping obligations (to monitor and trace the operation of the system). Providers of high-risk systems need to inform deployers (and thus employers) of the instructions to be followed (so the risks can be mitigated), and Art. 14 of the AI Act includes the obligation to make human oversight of the operation of the AI systems possible. Art. 26 of the AI Act provides the specific obligations of deployers, among others:

  • take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions for use accompanying the systems;
  • assign human oversight to natural persons who have the necessary competence, training and authority, as well as the necessary support;
  • to the extent the deployer exercises control over the input data, ensure that input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system;
  • monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers and authorities (e.g. if they have identified a significant risk, in which case they have to suspend the use of the system);
  • keep the logs automatically generated by that high-risk AI system, to the extent such logs are under their control, for a period of at least six months;
  • before putting into service or using a high-risk AI system at the workplace, deployers who are employers shall inform workers’ representatives and the affected workers that they will be subject to the use of the high-risk AI system. This information shall be provided, where applicable, in accordance with the rules and procedures laid down in Union and national law and practice on the information of workers and their representatives;
  • where applicable, carry out a data protection impact assessment.

  5. Enforcement

Without prejudice to other administrative or judicial remedies, any natural or legal person having grounds to consider that there has been an infringement of the provisions of the Regulation may submit reasoned complaints to the relevant national market surveillance authority. This means that anyone can lodge a complaint, even without being directly affected. The market surveillance authority has extensive powers to monitor AI systems: it can investigate, decide to suspend or terminate AI systems, and sanction providers and deployers.

Furthermore, any affected person subject to a decision which is taken by the deployer on the basis of the output from a high-risk AI system and which affects that person in a way that they consider to have an adverse impact on their health, safety or fundamental rights shall have the right to obtain from the deployer (employer) clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken.

As a sanction for violations of the rules regarding high-risk AI systems, the AI Act provides penalties of up to EUR 15 million or 3% of the company’s total worldwide annual turnover, whichever is higher.

  6. Entry into force

The AI Act will enter into force 20 days after its publication in the Official Journal of the EU (expected in May 2024). Furthermore, the following deadlines apply:

  • 6 months after entry into force: the prohibition of unacceptable-risk AI practices takes effect;
  • 24 months after entry into force: the AI Act will apply and most other obligations will take effect;
  • 36 months after entry into force: obligations for high-risk systems will take effect.

Source: text of the approved Regulation on the website of the European Parliament

Disclaimer: This is merely a summary and a simplified presentation of the AI Act.
