European Union: Application of Artificial Intelligence Act begins

The European Union’s Artificial Intelligence Act, which began to apply on February 2, 2025, is the first comprehensive legal framework for AI to be drawn up anywhere in the world. The legislation aims to foster trustworthy AI while ensuring public safety, protecting fundamental rights, and strengthening Europe’s position as a global leader in AI development.

Framework for AI regulation and supervision

Formally known as Regulation (EU) 2024/1689, the AI Act introduces a structured, risk-based approach to the governance of artificial intelligence. AI systems are classified into four risk levels: unacceptable risk, high risk, transparency risk, and minimal or no risk. The legislation outright bans AI applications considered to pose an unacceptable risk, including manipulative AI, social scoring, biometric categorisation based on sensitive characteristics, and real-time remote biometric identification in public spaces for law enforcement purposes. High-risk AI systems, such as those used in critical infrastructure, education, healthcare and law enforcement, must meet strict requirements, including risk assessment, transparency measures and human oversight to prevent harmful outcomes.

Ensuring safety and encouraging innovation

The legislation requires AI systems deployed in the EU to adhere to stringent safety and accountability standards. Providers and deployers of high-risk AI systems must implement risk mitigation strategies, maintain robust documentation, and ensure human oversight. Transparency rules require AI-generated content, such as deepfakes and AI-generated text published to inform the public, to be clearly labelled. The legislation also introduces the AI Pact, a voluntary initiative designed to promote early compliance with key obligations, and seeks to facilitate innovation through regulatory sandbox schemes and the AI Innovation Package, which allow AI developers to test their technologies in a controlled environment while maintaining compliance.

“AI should serve as a tool for people, with the ultimate aim of increasing human wellbeing”

Impact on businesses and society

The AI Act establishes clear legal obligations for AI providers and deployers in order to build consumer trust and prevent regulatory fragmentation. General-purpose AI models, which form the basis for many AI systems, must meet transparency rules and copyright compliance obligations, while providers of models entailing systemic risk must assess and mitigate those risks. “AI should serve as a tool for people, with the ultimate aim of increasing human wellbeing,” the legislation states. The European AI Office, along with national authorities, will oversee compliance with and enforcement of the legislation, which takes effect in stages: prohibitions and AI literacy obligations apply from February 2, 2025, with full implementation following from August 2026. By balancing technological progress with ethical responsibility, the AI Act seeks to position the EU as a leader in safe and trustworthy AI development.