Why It Exists
The EU AI Act is a horizontal, risk-based Regulation that sets harmonized rules for AI in the European market. It entered into force on August 1, 2024, and applies obligations based on risk: the higher the risk, the stricter the obligations.
It is called horizontal because it applies across sectors (for example, healthcare, employment, finance, education, and public services), rather than being limited to one specific industry.
The Act is designed to reduce risks to health, safety, and fundamental rights while enabling trustworthy AI adoption across industries, including already regulated sectors such as medical devices and transport.
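As a rough mental model, the risk-based structure can be sketched as a mapping from risk tier to obligation intensity. The tier names below follow the Act, but the obligation summaries are simplified paraphrases for orientation, not legal text:

```python
# Simplified illustration of the Act's risk tiers; obligation summaries
# are paraphrased for orientation only, not legal text.
RISK_TIERS = {
    "unacceptable": "prohibited outright (e.g., social scoring)",
    "high": "strict requirements: risk management, data governance, "
            "human oversight, conformity assessment",
    "limited": "transparency duties (e.g., disclose chatbot interactions, "
               "label AI-generated content)",
    "minimal": "largely unregulated, aside from AI literacy expectations",
}

def obligations_for(tier: str) -> str:
    """Look up the obligation intensity for a given risk tier."""
    return RISK_TIERS.get(tier, "unknown tier - classify the system first")

print(obligations_for("high"))
```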
The rules apply across the AI value chain: providers, deployers, importers, and distributors. They also reach actors established outside the EU when the output of an AI system is used in the EU.
A provider is the actor that develops an AI system or places it on the market under its own name or trademark, and typically carries the main build-time and pre-market compliance duties. A deployer is the actor that uses the AI system in its own operations, for example when the system interacts with or affects people in hiring, insurance, healthcare, or public services. In simple terms: provider = builder/seller, deployer = user/operator. Example: Microsoft can be a provider of an AI service, while an insurance company using that service for customer decisions is a deployer.
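To make the distinction concrete, here is a minimal first-pass role check. The fields and logic are hypothetical illustrations (an actor can hold several roles at once, and real classification requires legal review):

```python
from dataclasses import dataclass

@dataclass
class Actor:
    develops_system: bool          # builds the AI system
    markets_under_own_name: bool   # places it on the market under its own name/trademark
    uses_in_operations: bool       # operates the system in its own processes

def likely_roles(actor: Actor) -> list[str]:
    """First-pass guess at AI Act roles; illustrative, not legal advice."""
    roles = []
    if actor.develops_system or actor.markets_under_own_name:
        roles.append("provider")
    if actor.uses_in_operations:
        roles.append("deployer")
    return roles or ["possibly importer/distributor - needs review"]

# An insurer that only uses a vendor's AI service for customer decisions:
print(likely_roles(Actor(False, False, True)))  # ['deployer']
```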
Limited-risk systems such as chatbots and synthetic media tools are mainly subject to transparency requirements (for example, informing users they are interacting with AI and labeling AI-generated content). Minimal-risk AI uses remain largely unregulated, aside from AI literacy expectations.
The Act distinguishes AI systems from AI models and sets dedicated obligations for providers of general-purpose AI (GPAI) models. Baseline obligations include documentation and transparency toward downstream providers. GPAI models posing systemic risk face additional duties, including risk evaluation, incident reporting, and stronger cybersecurity controls.
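A simplified sketch of these two GPAI obligation layers (paraphrased and non-exhaustive):

```python
# Paraphrased, non-exhaustive summary of GPAI provider duties.
GPAI_BASELINE = [
    "technical documentation of the model",
    "information and documentation for downstream providers",
]
GPAI_SYSTEMIC_RISK_EXTRA = [
    "model evaluation and risk assessment",
    "serious incident reporting",
    "stronger cybersecurity controls",
]

def gpai_duties(systemic_risk: bool) -> list[str]:
    """Duties for a GPAI model provider; systemic-risk models add a layer."""
    return GPAI_BASELINE + (GPAI_SYSTEMIC_RISK_EXTRA if systemic_risk else [])

print(gpai_duties(systemic_risk=True))
```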
Engaging in prohibited AI practices can trigger the most severe penalties: fines of up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher. Lower ceilings apply to other infringement categories.
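Because the cap is "whichever is higher," the ceiling scales with company size. A minimal arithmetic sketch:

```python
def max_fine_prohibited_practices(annual_worldwide_turnover_eur: float) -> float:
    """Ceiling for the most serious infringements: the higher of
    EUR 35 million or 7% of total worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * annual_worldwide_turnover_eur)

# A company with EUR 1 billion turnover: 7% = EUR 70 million > EUR 35 million.
print(max_fine_prohibited_practices(1_000_000_000))  # 70000000.0
```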
This tool supports initial compliance discovery by mapping your AI tool profile to likely obligations, highlighting potential gaps, and producing report-ready summaries for internal review.
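A minimal sketch of the kind of profile-to-obligation mapping the tool performs. The field names, mapping entries, and function are hypothetical illustrations, not the tool's actual API:

```python
# Hypothetical illustration of compliance discovery; not the tool's real API.
OBLIGATION_MAP = {
    ("deployer", "high"): {
        "use system according to provider instructions",
        "ensure human oversight",
        "monitor operation and keep logs",
    },
    ("provider", "limited"): {
        "inform users they are interacting with AI",
        "label AI-generated content",
    },
}

def discover(profile: dict) -> dict:
    """Map a tool profile to likely obligations and flag potential gaps."""
    likely = OBLIGATION_MAP.get((profile["role"], profile["risk_tier"]), set())
    gaps = likely - set(profile.get("controls_in_place", []))
    return {"likely_obligations": sorted(likely), "potential_gaps": sorted(gaps)}

report = discover({
    "role": "deployer",
    "risk_tier": "high",
    "controls_in_place": ["ensure human oversight"],
})
print(report["potential_gaps"])
```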