Brussels, August 1, 2024 – Today marks a significant milestone in the world of technology and regulation as the European Artificial Intelligence Act (AI Act), the world’s first comprehensive regulation on artificial intelligence, officially comes into force. Aimed at ensuring that AI developed and used in the EU is trustworthy, the AI Act introduces a range of safeguards to protect individuals’ fundamental rights while fostering a supportive environment for innovation and investment.
“The AI Act is a landmark achievement,” stated Thierry Breton, the EU Commissioner for Internal Market. “It creates a harmonized internal market for AI, setting a global standard for AI governance and promoting responsible innovation.”
Key Provisions of the AI Act
The AI Act categorizes AI systems into four risk-based levels:
Minimal Risk: Most AI systems, such as recommender systems and spam filters, fall into this category. These systems face no obligations under the AI Act, though companies can voluntarily adopt additional codes of conduct.
Specific Transparency Risk: AI systems such as chatbots must disclose to users that they are interacting with a machine, and AI-generated content, including deep fakes, must be labeled accordingly. Users must be informed when biometric categorization or emotion recognition systems are being applied, and synthetic content must be marked as such in a machine-readable format.
High Risk: High-risk AI systems, such as those used for recruitment or loan assessments, must comply with stringent requirements. These include risk-mitigation systems, high-quality data sets, activity logging, detailed documentation, clear user information, human oversight, and high levels of robustness, accuracy, and cybersecurity. “Regulatory sandboxes will facilitate responsible innovation,” explained Margrethe Vestager, Executive Vice-President for a Europe Fit for the Digital Age.
Unacceptable Risk: AI systems that pose a clear threat to fundamental rights are banned. This includes applications that manipulate human behavior, systems enabling social scoring by governments or companies, and certain uses of predictive policing. Emotion recognition systems used in workplaces and real-time remote biometric identification for law enforcement in public spaces are also prohibited, with narrow exceptions.
To address general-purpose AI models, the AI Act introduces specific rules ensuring transparency along the value chain and mitigating systemic risks.
Implementation and Enforcement
Member States have until August 2, 2025, to designate national authorities responsible for overseeing AI system compliance and conducting market surveillance. The European Commission’s AI Office will play a central role in implementing the AI Act at the EU level and enforcing rules for general-purpose AI models.
Three advisory bodies will support the implementation:
The European Artificial Intelligence Board will ensure uniform application across Member States and facilitate cooperation between the Commission and national authorities.
A scientific panel of independent experts will provide technical advice and issue alerts about risks associated with general-purpose AI models.
An advisory forum composed of diverse stakeholders will offer additional guidance to the AI Office.
Non-compliant companies face hefty fines: up to 7% of global annual turnover for violations involving banned AI applications, up to 3% for other violations, and up to 1.5% for supplying incorrect information.
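The tiered caps above can be illustrated with a short arithmetic sketch. The percentages are those cited in this announcement; the function name, violation labels, and the turnover figure are hypothetical examples (the Act's full penalty provisions also contain details not covered here):

```python
def fine_cap(global_annual_turnover_eur: float, violation: str) -> float:
    """Illustrative maximum fine under the AI Act's percentage caps.

    Hypothetical sketch using the three tiers named in the press
    release; not a complete model of the Act's penalty rules.
    """
    rates = {
        "banned_application": 0.07,      # up to 7% of global annual turnover
        "other_violation": 0.03,         # up to 3%
        "incorrect_information": 0.015,  # up to 1.5%
    }
    return global_annual_turnover_eur * rates[violation]

# A hypothetical company with EUR 2 billion in global annual turnover
# would face a cap of EUR 140 million for a banned AI application:
print(fine_cap(2_000_000_000, "banned_application"))  # 140000000.0
```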
Most of the AI Act’s rules will become applicable on August 2, 2026. However, prohibitions on AI systems deemed to present an unacceptable risk will take effect six months after entry into force, and rules for general-purpose AI models will apply after 12 months. To bridge this transitional period, the Commission has introduced the AI Pact, encouraging developers to voluntarily adopt key obligations of the AI Act ahead of the deadlines.
The Commission is also developing guidelines and co-regulatory instruments, such as standards and codes of practice. A call for expressions of interest to participate in the creation of the first general-purpose AI Code of Practice is currently open, along with a multi-stakeholder consultation process.
The AI Act has been informed by continuous independent, evidence-based research from the Joint Research Centre (JRC), which has played a fundamental role in shaping the EU’s AI policies. The political agreement on the AI Act was welcomed by the Commission on December 9, 2023, followed by a support package for startups and SMEs on January 24, 2024, and the unveiling of the AI Office on May 29, 2024.
As the AI Act comes into effect, Europe sets a precedent in global AI regulation, aiming to balance innovation with the protection of fundamental rights.