Artificial intelligence (AI) is a driver of digital transformation – but it also entails risks. The European regulation on artificial intelligence, Regulation (EU) 2024/1689 (the “AI Act”), takes a risk-based approach to create a balanced and binding set of rules for AI systems. To that end, the AI Act defines four risk levels: unacceptable, high, limited and minimal risk. In particular, the AI Act assesses potential harm to individual and public interests – from fundamental rights to health and safety to environmental protection. It covers physical, psychological, social and economic harm, whether material or immaterial in nature.
On February 2, 2025, the first key provisions of the AI Act became applicable, namely the binding bans on certain AI practices posing an “unacceptable risk”. These measures are intended to ensure that artificial intelligence is not used in a way that jeopardizes fundamental rights, security or democratic values. Companies and organizations that develop or use AI systems must therefore engage with the new rules at an early stage.
To accompany this, the EU Commission published detailed guidelines on February 4, 2025, explaining the practical implementation and enforcement of the bans under Article 5 of the AI Act.
What is prohibited?
The AI Act distinguishes between different risk levels of AI systems. A particular focus lies on the AI practices listed in Article 5, which are now prohibited in the EU. These include:
- Manipulative and deceptive AI techniques: Systems that use subconscious influence or targeted psychological manipulation.
- Exploitation of vulnerabilities: AI systems must not exploit people's vulnerabilities due to their age, disability or a specific social or economic situation.
- Social scoring: The evaluation of people based on their behavior or personal characteristics is prohibited, especially if it leads to unjustified discrimination.
- Predictive policing: AI systems must not predict whether someone will commit a crime solely on the basis of profiling.
- Untargeted scraping of facial images: Creating or expanding facial recognition databases through the untargeted scraping of facial images from the internet or from surveillance camera footage is prohibited.
- Emotion recognition in the workplace and in schools: Companies and educational institutions may not use AI systems to analyze the emotions of their employees or students – with the exception of medical or safety-related purposes.
- Biometric categorization: The use of biometric data to derive sensitive characteristics (e.g. political views or sexual orientation) is prohibited.
- Real-time remote biometric identification in public spaces: Real-time facial recognition by law enforcement agencies is only permitted under the strictest conditions (e.g. to prevent terrorism or to search for missing persons).
Enforcement and sanctions
Although the bans have already applied since February 2, 2025, actual market surveillance and the imposition of sanctions will only begin on August 2, 2025. Companies and authorities that violate the prohibitions on AI practices within the meaning of Article 5 face severe penalties:
- Fines of up to 35 million euros or 7 % of total worldwide annual turnover, whichever is higher, for companies (Article 99(3)).
- Reduced fines of up to 1.5 million euros for Union institutions, bodies and agencies (Article 100(2)).
Affected companies should familiarize themselves with the new requirements at an early stage and ensure that their AI systems are compliant.
Conclusion: Need for action for companies
The new regulations represent a milestone in the regulation of artificial intelligence. Companies that develop or use AI systems should urgently review their systems to minimize legal risks. The guidelines now published by the EU Commission provide valuable guidance for the legally compliant use of AI.
Our law firm will be happy to assist you with the compliance review of your AI systems and the adaptation to the new regulatory requirements. Contact us for advice on the impact of the AI Act on your company.