Deployer Obligations Apply: August 2, 2026

The EU AI Act Has a Deployer Problem

Most compliance content targets AI developers. But the companies deploying AI into real decisions have their own obligations — and most aren’t ready.

Who Is a Deployer?

Under the EU AI Act (Article 3(4)), a deployer is any natural or legal person, public authority, agency, or other body that uses an AI system under its authority, except where the use is a personal, non-professional activity. If your organization purchases, licenses, or integrates an AI system and puts it in front of customers, employees, or the public, you are a deployer. The obligations are yours, not your vendor's.

Article 26 — Your Obligations

Below is each deployer duty under Article 26, in plain language, paired with the evidence artifact you need to produce.

Use systems according to instructions of use

Operate each AI system within the boundaries set by the provider’s instructions for use. Document deviations.

Required artifact: Documented usage policy referencing provider instructions

Assign competent natural persons for human oversight

Designate trained individuals with the authority, competence, and resources to oversee AI system operation effectively.

Required artifact: Named oversight roster with training records

Ensure input data is relevant and representative

Where you control the input data, verify it is relevant and sufficiently representative for the system’s intended purpose.

Required artifact: Data quality assessment documentation

Monitor operation and report risks

Monitor the AI system during operation based on the instructions for use. Report any risks or malfunctions to the provider.

Required artifact: Monitoring log and risk reporting procedures

Suspend use if the system presents a risk

Suspend use of a high-risk AI system without undue delay if you consider it presents a risk within the meaning of Article 79(1), that is, a risk to health, safety, or fundamental rights. Inform the provider and the relevant authorities.

Required artifact: Suspension and escalation protocol

Report serious incidents to provider, then authorities

Report serious incidents to the AI system provider first, then to the importer/distributor and market-surveillance authorities.

Required artifact: Incident reporting procedure with timelines

Keep automatically generated logs for minimum 6 months

Retain logs automatically generated by the high-risk AI system for at least six months, unless otherwise provided by applicable law.

Required artifact: Log retention policy and storage evidence

Inform workers and their representatives before workplace deployment

Before putting a high-risk AI system into use in the workplace, inform workers and their representatives that they will be subject to the system.

Required artifact: Worker notification records

Comply with registration duties

Public authorities and bodies deploying high-risk AI systems must register in the EU database before putting the system into use.

Required artifact: EU database registration confirmation

Conduct DPIAs using provider information

Use the information provided by the AI system provider to conduct Data Protection Impact Assessments where required under GDPR.

Required artifact: DPIA documentation referencing provider data

Inform affected persons of Annex III decisions

When decisions about natural persons fall under Annex III categories, inform those individuals that they are subject to AI-assisted decision-making.

Required artifact: Individual notification procedures and templates

Cooperate with national competent authorities

Provide national competent authorities with all information and access necessary for them to assess your compliance with the regulation.

Required artifact: Authority cooperation procedure and evidence index
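The duty-to-artifact mapping above can be kept as a machine-checkable index rather than a static document. A minimal Python sketch; every key and file name below is our own illustrative label, not wording from the Act:

```python
# Illustrative evidence index for the Article 26 duties listed above.
# Keys and artifact paths are our own labels, not terms from the Act.
ARTICLE_26_EVIDENCE = {
    "use_per_instructions": "usage_policy.pdf",
    "human_oversight": "oversight_roster.csv",
    "input_data_quality": "data_quality_assessment.pdf",
    "monitoring_and_reporting": "monitoring_log.csv",
    "suspension_protocol": "suspension_escalation.pdf",
    "serious_incident_reporting": "incident_procedure.pdf",
    "log_retention": "log_retention_policy.pdf",
    "worker_notification": "worker_notices/",
    "registration": None,  # pending: EU database confirmation
    "dpia": "dpia_v2.pdf",
    "individual_notification": "notification_templates/",
    "authority_cooperation": "evidence_index.md",
}

def missing_artifacts(index: dict) -> list[str]:
    """Return the duties that still lack an evidence artifact."""
    return [duty for duty, artifact in index.items() if artifact is None]

print(missing_artifacts(ARTICLE_26_EVIDENCE))  # ['registration']
```

A structure like this makes gaps visible at a glance and diffs cleanly under version control.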

Fundamental Rights Impact Assessment (Article 27)

Who must do it: Public law bodies, private entities providing public services, and deployers of certain Annex III systems (specifically point 5(b) creditworthiness assessment and 5(c) risk assessment and pricing for life and health insurance).

What it must include: Process description, intended period and frequency of use, categories of affected natural persons, specific risks of harm, human oversight measures, and measures to take if those risks materialize.

When: Before first putting the high-risk AI system into use. Must be updated when any material element changes.

Important: Results must be notified to the market surveillance authority. This assessment complements a GDPR DPIA — it does not replace it.

Incident Reporting

Serious incidents (general)

Within 15 days of becoming aware of the incident.

Widespread infringement

Immediately, and no later than 2 days after becoming aware.

Death of a person

Immediately after establishing or suspecting a causal link between the AI system and the death, and no later than 10 days.

Deployers must inform the provider first, then the importer/distributor and market-surveillance authorities. If the provider is unreachable, Article 73 applies mutatis mutandis.
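These windows are simple to encode so that a latest filing date is computed the moment an incident is classified. A hedged sketch in Python; the classification labels are our own shorthand for the Article 73 categories:

```python
from datetime import date, timedelta

# Maximum reporting windows under Article 73 (applied to deployers
# mutatis mutandis). Labels are our own shorthand, not the Act's terms.
REPORTING_DEADLINES = {
    "serious_incident": timedelta(days=15),
    "widespread_infringement": timedelta(days=2),
    "death": timedelta(days=10),
}

def report_due_by(incident_type: str, aware_on: date) -> date:
    """Latest permissible report date; in practice, report immediately."""
    return aware_on + REPORTING_DEADLINES[incident_type]

print(report_due_by("widespread_infringement", date(2026, 9, 1)))  # 2026-09-03
```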

Penalties

For each violation type below, the maximum fine is the higher of the two amounts (for SMEs and startups, the lower applies):

Deployer obligation breaches (Articles 26 and 50): €15M or 3% of worldwide annual turnover

Prohibited practices: €35M or 7% of worldwide annual turnover

Misleading information to authorities: €7.5M or 1% of worldwide annual turnover

The penalty framework is already in force as of August 2, 2025.
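A small sketch of how the Article 99 cap combines the fixed amount and the turnover share; the function name and inputs are illustrative:

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float,
             worldwide_turnover_eur: float) -> float:
    """Article 99 cap for larger undertakings: the higher of the fixed
    amount or the turnover share. (For SMEs the lower applies; not
    modeled here.)"""
    return max(fixed_cap_eur, turnover_pct * worldwide_turnover_eur)

# Deployer-obligation breach, €2bn turnover: 3% of turnover exceeds €15M.
print(max_fine(15_000_000, 0.03, 2_000_000_000))  # 60000000.0
```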

Enforcement Timeline

February 2, 2025

Chapters I–II in force

Definitions, scope provisions, and the ban on prohibited AI practices take effect.

August 2, 2025

Governance and penalties

Governance provisions (including the AI Office) and GPAI-model obligations apply. Penalty framework becomes enforceable, including fines for the prohibited practices banned since February 2025.

August 2, 2026

Main deployer obligations (Articles 26 and 27)

All Article 26 deployer duties and Fundamental Rights Impact Assessments enforceable.

August 2, 2027

Article 6(1) Annex I high-risk regime

Extended transition ends: high-risk classification and obligations take effect for AI systems that are safety components of products regulated under Annex I (machinery, medical devices, and similar).

What This Means for Your Organization

If you deploy AI systems in the EU — whether you built them or bought them — you need a documented compliance framework before August 2, 2026. Not a legal opinion. Not a PDF in a shared drive. A structured, versioned, audit-ready evidence file that maps to each Article 26 obligation.

That’s what we build at AOP.
