Entry into force of the first obligations from the AI Act: what are the implications?

The AI Act entered into force on 1 August 2024, and from 2 August 2026, most obligations must be complied with.

However, the first obligations already took effect on 2 February 2025. These obligations relate to the prohibition of certain AI practices and the promotion of AI literacy.

Purpose and Scope

With the AI Act, the European Union aims to create a uniform legal framework, in particular for the development, the placing on the market, the putting into service, and the use of AI systems. Its goal is to promote the introduction of human-centric and reliable AI while ensuring a high level of protection for health, safety, and fundamental rights.

The AI Act applies, among others, to providers that place AI systems on the market or put them into service, as well as to deployers who use these systems.

Prohibited AI Practices

From 2 February 2025, the AI Act prohibits several unacceptable AI practices that violate European fundamental norms and values. These include AI systems that use subliminal techniques, exploit vulnerabilities of individuals, evaluate people based on social behaviour or personal characteristics, or infer emotions in the workplace or educational institutions, except for medical or safety reasons.

On 4 February 2025, the European Commission published comprehensive guidelines on the prohibited AI practices.

For organisations that use AI tools, the first step is to know which AI systems are being used within the organisation and to assess the associated risks and opportunities. When identifying the risks, focus on the effects an AI system can have on staff and society. Then identify which policies and measures, if any, already exist within the organisation regarding AI literacy.

Organisations developing or using prohibited AI systems can face administrative fines of up to EUR 35,000,000 or 7% of their total worldwide annual turnover, whichever is higher. When fines are imposed on SMEs and start-ups, their interests and economic viability are taken into account, and a lower fine may be imposed.

AI Literacy

The second obligation that came into force on 2 February 2025 concerns AI literacy. Organisations must identify the (degree of) risk, the people involved, and the context in which AI systems are used, as these factors determine the level of AI literacy required within the organisation. The higher the risks of the AI systems, the higher the level of AI literacy required from staff.

Moreover, the required content and level of skills, knowledge, and understanding depend on the role a specific employee has within the organisation, as well as on the context in which the AI system is used. Which measures are needed also depends on the (financial) means available to the organisation.

Consequently, not all employees need to achieve the same level of AI literacy. It is not a ‘one size fits all’ obligation, but a tailor-made approach. Everyone who comes into contact with AI is expected to understand the basic principles, as well as to be able to deal responsibly and critically with AI systems. Compliance with this obligation is clearly an ongoing and dynamic process.

The AI Act does not specify the measures employers must take to achieve a sufficient level of AI literacy, giving organisations some leeway to determine what is sufficient for their employees. The AI Office, a body established within the European Commission as the centre of AI expertise, has already provided additional information on sufficient literacy practices. Additionally, the Dutch Data Protection Authority has issued guidelines on AI literacy.

Offering AI literacy training and implementing a Responsible AI Governance Policy are recommended measures. The AI Office encourages the drawing up of codes of conduct or policies related to the application of the AI Act's provisions. An AI Governance Policy can include guidelines for the use of AI within the organisation, specifying which AI systems can be used by whom and how to maintain sufficient AI literacy among staff.

Key message

On 2 February 2025, the first obligations of the AI Act entered into force. Organisations developing or using AI tools must identify which AI systems they use and cease the use of prohibited AI systems. In addition, the current level of AI literacy in the organisation needs to be assessed, and any necessary additional measures, such as training and drawing up an internal policy, must be determined.

Inger Verhelst, Advocaat – Vennoot, Claeys & Engels

Lucas De Vooght, Advocaat – Medewerker, Claeys & Engels
