EU AI Act Compliance: Your 2026 Deployment Checklist
The EU AI Act is now in force, with major obligations taking effect in August 2025 and August 2026. Here's your practical guide to compliance.
The Compliance Clock Is Ticking
The EU AI Act entered into force on August 1, 2024, making it the world's first comprehensive legal framework for AI regulation. But compliance isn't a single deadline — it's a staggered rollout with different requirements taking effect at different times.
The European Commission has been clear: there will be no pause and no grace periods. Organizations must meet each milestone as it arrives.
Key dates:
- February 2, 2025: Prohibited AI practices banned, AI literacy requirements in effect
- August 2, 2025: GPAI model rules, governance provisions, and penalty rules apply
- August 2, 2026: Full compliance framework for high-risk AI systems
- August 2, 2027: High-risk AI systems embedded in regulated products
Who Must Comply
The EU AI Act has extraterritorial reach. You're in scope if:
- You're an AI provider (developer) placing AI systems on the EU market
- You're a deployer (user) of AI systems within the EU
- Your AI system's outputs are used in the EU — even if you're based elsewhere
- You substantially modify an AI system for EU deployment
If your AI affects EU residents, you likely need to comply.
Understanding the Risk Tiers
The EU AI Act uses a risk-based approach. Different AI systems face different requirements based on the risk they pose.
Unacceptable Risk (Prohibited)
These AI practices are banned entirely as of February 2, 2025:
- Social scoring systems (by public authorities or private actors)
- Real-time remote biometric identification by law enforcement in publicly accessible spaces (with narrow exceptions)
- AI that manipulates human behavior to circumvent free will
- AI exploiting vulnerabilities of specific groups (age, disability)
- Emotion recognition in workplaces and schools (with exceptions)
- Biometric categorization inferring sensitive attributes
- Untargeted scraping for facial recognition databases
Action required: Audit your AI inventory immediately. Any system falling into these categories must be removed from EU operations.
High Risk (Heavily Regulated)
High-risk AI systems face the most stringent requirements. These include AI used in:
- Critical infrastructure: Energy, transport, water, digital infrastructure
- Education: Student admissions, assessments, exam proctoring
- Employment: Recruiting, screening, performance evaluation, promotion decisions
- Essential services: Credit scoring, insurance pricing, emergency services
- Law enforcement: Risk assessment, evidence evaluation, lie detection
- Migration and border control: Visa processing, asylum applications
- Justice and democracy: Legal research, case outcome prediction
High-risk systems must undergo conformity assessments, implement quality management systems, maintain detailed technical documentation, and enable human oversight.
Limited Risk (Transparency Required)
Systems with limited risk — including chatbots and deepfake generators — must:
- Disclose to users they're interacting with AI
- Label AI-generated content clearly
- Inform people when emotion recognition or biometric categorization systems are applied to them (where these remain permitted)
Minimal Risk (Unregulated)
Most AI applications — spam filters, AI-enabled games, inventory management — face no specific requirements under the Act, though general EU laws still apply.
General-Purpose AI (GPAI) Requirements
The August 2025 deadline brought specific obligations for providers of general-purpose AI models — foundation models like GPT-4, Claude, and Gemini that can be adapted for various downstream uses.
All GPAI Providers Must:
- Create technical documentation detailing model capabilities, limitations, and intended uses
- Put in place a policy to comply with EU copyright law, including honoring rights holders' text-and-data-mining opt-outs
- Provide downstream information so deployers can meet their own obligations
- Publish a sufficiently detailed summary of the content used for training, using the template published by the Commission's AI Office
GPAI Models with Systemic Risk Must Also:
Models trained with compute above a threshold currently set at 10^25 floating-point operations (FLOPs) are presumed to pose systemic risk and face additional requirements (a rough estimation sketch follows the list below):
- Conduct model evaluations and adversarial testing
- Assess and mitigate systemic risks
- Report serious incidents to authorities
- Ensure adequate cybersecurity protections
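For a rough sense of whether a model approaches the systemic-risk threshold, training compute is often approximated with the common "6 × parameters × training tokens" heuristic. The sketch below uses that heuristic with purely hypothetical model sizes; it is an illustration, not the Act's official calculation method.

```python
# Rough estimate of training compute using the common heuristic
# FLOPs ~= 6 * parameters * training_tokens (an approximation,
# not an official methodology under the AI Act).
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold named in the Act


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute in floating-point operations."""
    return 6 * parameters * training_tokens


# Hypothetical example: a 70B-parameter model trained on 15T tokens.
flops = estimated_training_flops(parameters=70e9, training_tokens=15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed systemic risk:", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)
```

A model of that hypothetical size lands just below the threshold; doubling either the parameter count or the training tokens would cross it.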
The GPAI Code of Practice, published in July 2025, provides guidance on demonstrating compliance. Signing on gives providers a streamlined way to show conformity until harmonized standards become available.
Compliance Checklist by Role
If You're an AI Provider (Developer)
Immediate (Now)
- [ ] Audit AI portfolio for prohibited practices
- [ ] Remove or modify any banned AI systems
- [ ] Classify all AI systems by risk tier
- [ ] Ensure AI literacy for relevant staff
By August 2025 (GPAI providers)
- [ ] Prepare technical documentation
- [ ] Implement copyright compliance policies
- [ ] Create public model summary
- [ ] Consider Code of Practice adherence
By August 2026 (High-risk systems)
- [ ] Establish quality management system
- [ ] Complete conformity assessments
- [ ] Implement risk management procedures
- [ ] Create detailed technical documentation
- [ ] Enable logging and human oversight
- [ ] Register in EU AI database
- [ ] Appoint EU representative (if outside EU)
If You're a Deployer (User)
Immediate (Now)
- [ ] Inventory all AI systems in use
- [ ] Identify high-risk systems
- [ ] Ensure AI literacy for staff operating AI systems
- [ ] Verify providers have necessary documentation
By August 2025
- [ ] Request GPAI documentation from providers
- [ ] Implement transparency notices where required
- [ ] Ensure human oversight mechanisms exist
By August 2026
- [ ] Conduct fundamental rights impact assessments
- [ ] Implement data governance measures
- [ ] Maintain logs of high-risk system operations
- [ ] Establish human oversight processes
- [ ] Create incident reporting procedures
- [ ] Document compliance measures
Penalties for Non-Compliance
The EU AI Act's penalty regime became enforceable on August 2, 2025. Fines are substantial:
- Prohibited AI practices: Up to €35 million or 7% of global annual turnover (whichever is higher)
- Other AI Act violations: Up to €15 million or 3% of global turnover
- Supplying incorrect information: Up to €7.5 million or 1% of global turnover
For SMEs and startups, fines are capped at the lower of the two amounts, but penalties remain significant relative to company size.
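As a worked illustration of the "whichever is higher" rule, the snippet below computes maximum theoretical exposure for a company with a hypothetical €500 million global annual turnover; the turnover figure is an assumption for illustration only.

```python
# Maximum fine is the higher of a fixed amount and a share of global
# annual turnover (for SMEs, the Act caps fines at the lower of the two).
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct_cap: float,
             sme: bool = False) -> float:
    pick = min if sme else max
    return pick(fixed_cap_eur, pct_cap * turnover_eur)


turnover = 500e6  # hypothetical €500M global annual turnover

print("Prohibited practices: ", max_fine(turnover, 35e6, 0.07))   # max(€35M, €35M) = €35M
print("Other violations:     ", max_fine(turnover, 15e6, 0.03))   # max(€15M, €15M) = €15M
print("Incorrect information:", max_fine(turnover, 7.5e6, 0.01))  # max(€7.5M, €5M) = €7.5M
```

At €500M turnover the fixed and percentage caps roughly coincide; larger companies are driven by the percentage, smaller ones by the fixed amount (or, for SMEs, the lower of the two).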
Institutional Framework
The AI Office
Established within the European Commission, the AI Office assumed its enforcement powers over GPAI models on August 2, 2025. It:
- Oversees GPAI model compliance
- Coordinates enforcement across member states
- Develops guidelines and codes of practice
- Monitors systemic risks
National Competent Authorities
Each EU member state must designate:
- At least one market surveillance authority
- At least one notifying authority for conformity assessments
These bodies handle national-level enforcement and can investigate complaints, conduct inspections, and impose penalties.
Practical Implementation Steps
Step 1: AI Inventory and Classification
You can't comply with requirements for systems you don't know about. Start with a comprehensive inventory:
Document for each AI system:
- System name and vendor
- Business function and users
- Data inputs and outputs
- Decision-making impact
- Risk classification under the Act
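As a minimal sketch of what one inventory record might look like in code, the example below uses illustrative, hypothetical field names and your own internal risk-tier labels; the Act does not prescribe a schema.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AISystemRecord:
    """One entry in an internal AI inventory (illustrative fields only)."""
    name: str
    vendor: str
    business_function: str
    users: list[str]
    data_inputs: list[str]
    data_outputs: list[str]
    decision_impact: str        # e.g. "advisory" or "automated decision"
    risk_tier: RiskTier


# Hypothetical example entry
screening_tool = AISystemRecord(
    name="CV screening assistant",
    vendor="ExampleVendor GmbH",
    business_function="Recruiting",
    users=["HR team"],
    data_inputs=["candidate CVs"],
    data_outputs=["shortlist ranking"],
    decision_impact="advisory, with human review",
    risk_tier=RiskTier.HIGH,  # employment screening is listed as high risk
)
```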
Shadow AI challenge: Employees may be using AI tools without IT approval. A 2025 study found 67% of AI usage happens via unmanaged personal accounts, creating compliance blind spots.
Step 2: Gap Analysis
Compare your current state to requirements:
- Do you have technical documentation for each high-risk system?
- Can you demonstrate human oversight mechanisms?
- Do you maintain adequate audit logs?
- Have staff received AI literacy training?
- Do you have processes for incident reporting?
Step 3: Vendor Engagement
For AI systems you use but didn't build:
- Request documentation on risk classifications
- Confirm providers will meet their obligations
- Negotiate contractual protections
- Plan for potential vendor non-compliance
As TechClass notes: "The user (your company) is also responsible for ensuring compliance when deploying an AI tool."
Step 4: Governance Structure
Establish clear accountability:
- Designate AI compliance responsibility
- Create review and approval processes
- Implement change management procedures
- Build incident response capabilities
Step 5: Documentation and Evidence
Regulators will want proof. Maintain records of:
- Risk assessments and classifications
- Conformity assessment results
- Training and literacy programs
- Oversight mechanisms and their use
- Incident reports and responses
- Ongoing monitoring results
The Data Protection Connection
The EU AI Act intersects with GDPR in important ways. AI systems processing personal data must comply with both frameworks:
- GDPR's data minimization applies to AI training data
- Data subject rights extend to AI-processed information
- GDPR's rules on automated decision-making (Article 22) may apply
- Data protection impact assessments may be required
Organizations already managing GDPR compliance have a foundation to build on — but AI introduces new complexities around training data, model outputs, and algorithmic transparency.
How Tenlines Supports EU AI Act Compliance
Several EU AI Act requirements directly relate to data protection and transparency — areas where Tenlines provides critical capabilities:
Data governance for AI: The Act requires appropriate data governance measures. Tenlines prevents sensitive data from reaching AI systems in the first place, reducing the scope of data governance challenges.
Audit trails: High-risk AI deployers must maintain logs of system operations. Tenlines provides comprehensive logging of all AI interactions.
Shadow AI visibility: You can't comply with regulations governing AI you don't know exists. Tenlines reveals which AI tools employees are using and what data they're sharing.
Training data protection: For organizations developing AI, keeping unauthorized personal information out of training data is essential. Tenlines' PII scrubbing capabilities help ensure training datasets stay clean.
Transparency support: When employees use AI through Tenlines, organizations maintain visibility into AI usage patterns needed for regulatory reporting.
Key Takeaways
- Compliance is staged, not a single deadline. Different requirements take effect at different times through 2027.
- Risk classification determines obligations. High-risk systems face extensive requirements; minimal-risk systems face few.
- Extraterritorial reach is real. If your AI affects EU residents, you're likely in scope regardless of where you're based.
- Penalties are significant. Up to €35M or 7% of global turnover for serious violations.
- Shadow AI creates compliance gaps. Unsanctioned AI usage can violate the Act without your knowledge.
- Documentation is essential. Regulators will want evidence of compliance, not just assertions.
Stop data leakage before it starts
Tenlines sits between your team and AI providers, scrubbing sensitive data before it leaves your environment. No workflow changes required.
Join the Waitlist