Regulation Overview

About the EU AI Act

The EU AI Act is a horizontal, risk-based Regulation that sets harmonized rules for AI in the European market. It entered into force on August 1, 2024, and scales obligations with risk: the higher the risk, the stricter the rules.

It is called horizontal because it applies across sectors (for example, healthcare, employment, finance, education, and public services), rather than being limited to one specific industry.

Why It Exists

The Act is designed to reduce risks to health, safety, and fundamental rights while enabling trustworthy AI adoption across industries, including already regulated sectors such as medical devices and transport.

Who Is Affected

The rules apply across the AI value chain: providers, deployers, importers, and distributors. They also reach actors established outside the EU when the AI system's output is used in the EU.

Risk Tiers

  • Unacceptable Risk: Prohibited use cases.
  • High Risk: AI in high-impact contexts with strict safety and governance duties.
  • Limited Risk: Transparency obligations for potentially deceptive AI interactions.
  • No / Minimal Risk: Most AI use cases, with few or no specific obligations.
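The tiered structure above can be pictured as a simple lookup. The sketch below is purely illustrative: the tier names follow the Act, but the sample use cases and their assignments are assumptions for this example, not determinations under the Regulation.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict safety and governance duties"
    LIMITED = "transparency obligations"
    MINIMAL = "few or no specific obligations"

# Hypothetical example assignments (illustrative only, not legal classifications).
EXAMPLE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "cv screening for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def describe(use_case: str) -> str:
    """Return the assumed tier and its headline consequence for a use case."""
    tier = EXAMPLE_TIERS[use_case]
    return f"{use_case}: {tier.name} -> {tier.value}"
```

In a real assessment the tier depends on context of use, not the technology alone; the same model can sit in different tiers depending on deployment.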

Provider vs Deployer

A provider is the actor that develops an AI system (or places it on the market under its own name/trademark), and typically carries the main build-time and pre-market compliance duties. A deployer is the actor that uses the AI system in operations when interacting with people (for example in hiring, insurance, healthcare, or public services). In simple terms: provider = builder/seller, deployer = user/operator. Example: Microsoft can be a provider of an AI service, while an insurance company using that service for customer decisions is a deployer.

Unacceptable Risk (Prohibited)

  • Subliminal manipulation causing significant harm.
  • Exploitation of vulnerable groups (for example children).
  • Social scoring and individual crime prediction based solely on profiling.
  • Untargeted scraping of faces from internet/CCTV for face databases.
  • Emotion recognition in workplace and education settings (with narrow medical and safety exceptions).
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement, subject to narrow exceptions.

High-Risk Areas

  • Biometrics and certain AI-enabled surveillance contexts.
  • Education and employment decisions with life-impacting effects.
  • Essential services such as credit, health, and insurance access.
  • Law enforcement, migration, asylum, and border-control contexts.
  • Critical infrastructure, justice, and election-related systems.

High-Risk Provider Obligations

  • Conformity assessment before market placement.
  • Risk management and data governance controls.
  • Technical documentation, logging, traceability, and transparency.
  • Human oversight design and robustness/cybersecurity measures.
  • Quality management and registration duties where required.

High-Risk Deployer Obligations

  • Use systems according to instructions and maintain relevant logs.
  • Monitor operation and report serious incidents.
  • Inform affected workers and natural persons in relevant contexts.
  • Conduct FRIA in specific public-authority/essential-service scenarios.
  • Substantial modification can turn a deployer into a provider.

Limited / No-Risk AI

Limited-risk systems such as chatbots and synthetic media tools are mainly subject to transparency requirements (for example, informing users they are interacting with AI and labeling AI-generated content). Most low-risk AI uses remain largely unregulated, aside from AI literacy expectations.

General-Purpose AI (GPAI)

The Act distinguishes AI systems from AI models and sets dedicated obligations for GPAI model providers. Baseline obligations include documentation and downstream transparency. GPAI models with systemic risk face additional duties, including risk evaluation, incident reporting, and stronger cybersecurity controls.

Enforcement Snapshot

Engaging in prohibited AI practices can trigger the highest penalties: fines of up to 35 million EUR or 7% of total worldwide annual turnover, whichever is higher. Lower caps apply to other infringement categories.
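To make the cap arithmetic concrete: for prohibited-practice infringements the ceiling is the greater of 35 million EUR or 7% of worldwide annual turnover. A minimal sketch, with hypothetical turnover figures:

```python
def max_fine_prohibited(annual_global_turnover_eur: float) -> float:
    """Upper bound for prohibited-practice fines: the greater of
    35 million EUR or 7% of worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * annual_global_turnover_eur)

# A firm with 200M EUR turnover: 7% is 14M, so the 35M figure governs.
# A firm with 1B EUR turnover: 7% is 70M, which exceeds 35M.
```

Note these are statutory maximums per infringement category, not automatic amounts; actual fines depend on the circumstances of the case.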

How This Agent Helps

This tool supports initial compliance discovery by mapping your AI tool profile to likely obligations, highlighting potential gaps, and producing report-ready summaries for internal review.
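A discovery step like the one described above can be pictured as a profile-to-obligations lookup. The field names, rule set, and outputs below are invented for illustration and do not reflect this agent's actual implementation:

```python
def likely_obligations(profile: dict) -> list[str]:
    """Hypothetical mapping from an AI tool profile to candidate duties.
    The keys ('role', 'context', 'interacts_with_people') are assumptions
    made for this sketch, not the agent's real schema."""
    duties: list[str] = []
    if profile.get("role") == "provider":
        duties += ["conformity assessment", "technical documentation"]
    elif profile.get("role") == "deployer":
        duties += ["use per provider instructions", "log retention"]
    if profile.get("context") in {"hiring", "credit", "healthcare"}:
        duties.append("high-risk controls likely apply")
    if profile.get("interacts_with_people"):
        duties.append("transparency notice to users")
    return duties
```

Output from a mapping like this is a starting checklist for internal review, not a legal determination.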

This page is informational and does not constitute legal advice. Confirm final obligations with legal counsel for your specific use case.