
EU AI Act Compliance for LLM Applications: The Enterprise Checklist (2026 Deadlines)

The August 2026 EU AI Act enforcement deadline is approaching. This complete guide covers high-risk classification, GPAI obligations, Article 15 cybersecurity requirements, and a 30-question compliance checklist for enterprise LLM deployers.

17 min read
By Prompt Guardrails Security Team

The EU AI Act's August 2, 2026 deadline for high-risk AI systems is no longer a distant concern: it is imminent. Organizations deploying Large Language Models in employment screening, credit scoring, medical diagnostics, education, or law enforcement face penalties of up to €15 million or 3% of global annual turnover for non-compliance with high-risk obligations (and up to €35 million or 7% for prohibited practices). This guide gives enterprise LLM deployers a practical, actionable compliance roadmap.

Critical Deadline

August 2, 2026: Full enforcement for high-risk AI systems under Annex III and GPAI models with systemic risk. If your LLM application touches regulated use cases, compliance obligations are active now — not after the deadline.

EU AI Act Timeline: What's Already Active vs. Coming in August 2026

The EU AI Act follows a phased implementation timeline. Understanding which obligations are already enforceable is critical:

Date | Obligation | Who Is Affected
February 2025 | Prohibited AI practices banned (Article 5) | All providers and deployers
August 2025 | GPAI model obligations apply; codes of practice | General-purpose AI model providers
August 2026 | Full high-risk AI system requirements | All Annex III high-risk deployers
August 2027 | High-risk AI embedded in regulated products | Medical devices, machinery, vehicles

Does Your LLM Application Qualify as High-Risk?

Annex III of the EU AI Act defines eight categories of high-risk AI systems. If your LLM application falls into any of these, the full compliance framework applies from August 2026:

  • Biometric identification and categorisation — real-time remote identification systems
  • Critical infrastructure — safety components in water, gas, electricity, road transport
  • Education and vocational training — AI determining access, outcomes, or evaluation in education
  • Employment and worker management — CV screening, recruitment, performance evaluation, promotion decisions
  • Essential private and public services — credit scoring, insurance risk assessment, benefits eligibility
  • Law enforcement — predictive policing, evidence reliability assessment, profiling
  • Migration, asylum and border control — risk assessment and document authentication
  • Administration of justice — AI assisting courts or alternative dispute resolution

The "Mere Tool" Exception

If your LLM is a productivity tool (document summarization, code generation, internal Q&A) with no direct bearing on the decisions above, it likely does not meet the Annex III threshold. However, if its outputs are used as inputs to high-risk decisions — even indirectly — it may still require documentation and transparency measures.

GPAI Model Obligations for LLM Builders

If you build and release a general-purpose AI model (a foundation model or a fine-tuned variant used by others), obligations under Articles 53 and 55 have applied since August 2025:

For All GPAI Providers

  • Prepare and maintain technical documentation per Annex XI
  • Provide information and documentation to downstream providers
  • Establish a policy for copyright compliance with EU law
  • Publish a sufficiently detailed summary of training data

For Systemic Risk GPAI Models (>10²⁵ FLOPs)

Models whose cumulative training compute exceeds 10²⁵ floating-point operations are presumed under Article 51 to pose systemic risk and face additional obligations:

  • Perform and document model evaluations including adversarial testing
  • Assess and mitigate systemic risks, including cybersecurity risks (directly applicable to prompt injection)
  • Report serious incidents to the AI Office
  • Ensure adequate cybersecurity protections
  • Report energy consumption of training
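Whether a model crosses the Article 51 threshold can be estimated before training with the widely used approximation of roughly 6 FLOPs per parameter per training token. A minimal sketch (the model figures are illustrative, not any real model's):

```python
# Back-of-envelope check against the Article 51 systemic-risk threshold.
# Assumes the common approximation: training FLOPs ~= 6 * parameters * tokens.

SYSTEMIC_RISK_THRESHOLD = 10**25  # FLOPs, per Article 51 of the EU AI Act


def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough estimate of cumulative training compute."""
    return 6 * params * tokens


# Hypothetical example: a 70B-parameter model trained on 15T tokens
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs")                    # ~6.30e+24
print(flops > SYSTEMIC_RISK_THRESHOLD)         # False: below the threshold
```

A model at this scale would sit just under the presumption line, which is exactly why documenting the estimate (and re-running it after continued pre-training or fine-tuning) belongs in your compliance records.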

The 8 Compliance Requirements Every Enterprise LLM Deployer Must Meet

1. Risk Management Lifecycle (Article 9)

Establish, implement, document, and maintain a risk management system that is continuous throughout the AI system's lifecycle. This means:

  • Identify and analyze known and reasonably foreseeable risks, including prompt injection and jailbreak attacks
  • Estimate and evaluate risks that may emerge during intended use
  • Adopt suitable risk management measures including testing against adversarial inputs
  • Communicate residual risks to users

2. Data Governance (Article 10)

Training, validation, and testing datasets must meet quality criteria including:

  • Relevance, representativeness, and freedom from errors
  • Appropriate statistical properties with respect to the target population
  • Examination for potential biases that could lead to discrimination
  • Documentation of data collection methodology and data sources

3. Technical Documentation (Article 11 + Annex IV)

Documentation must be established before the AI system is placed on the market and kept up to date. Required content includes:

  • General description: intended purpose, system architecture, design choices
  • Description of elements and their development process
  • Information on monitoring, functioning, and control
  • Description of the risk management system
  • Changes made to the system during lifecycle

4. Automatic Logging (Article 12)

High-risk AI systems must automatically log events during operation. For LLM applications this means:

  • Recording input/output pairs, including timestamps
  • Logging of detected anomalies or failures
  • Traceability of decisions made with or by the AI
  • Retention period sufficient for post-incident investigation
  • Protection of logs against unauthorized modification
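One common way to satisfy the tamper-protection and traceability points together is a hash-chained append-only log, where each entry embeds the hash of its predecessor so any retroactive edit breaks the chain. A minimal sketch (class name and fields are hypothetical, and a real deployment would add durable storage, access control, and retention policies):

```python
import hashlib
import json
import time


class AuditLog:
    """Append-only, tamper-evident log of LLM interactions (illustrative)."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, prompt: str, completion: str, anomaly=None):
        entry = {
            "ts": time.time(),             # timestamp (Article 12)
            "input": prompt,               # recorded input
            "output": completion,          # recorded output
            "anomaly": anomaly,            # detected anomaly/failure, if any
            "prev_hash": self._last_hash,  # chains to the previous entry
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._last_hash = digest
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; returns False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Because each hash covers the previous one, an auditor can detect modification or deletion of any earlier record, which supports the post-incident reconstruction Article 12 contemplates.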

5. Transparency and User Information (Article 13)

Deployers must ensure sufficient transparency to enable informed use:

  • Users must be informed they are interacting with an AI system
  • Instructions for use must describe the AI's capabilities and limitations
  • Describe the intended purpose, accuracy level, and foreseeable misuse
  • Disclose human oversight mechanisms available to operators

6. Human Oversight (Article 14)

High-risk AI systems must be designed to allow effective human oversight:

  • Human overseers must be able to understand the system's capabilities and limitations
  • Ability to interrupt, stop, or override the AI system
  • Awareness of automation bias (humans over-trusting AI outputs)
  • A mechanism ensuring consequential decisions do not rely solely on AI output
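In practice, the last two points are often implemented as a routing gate: consequential or low-confidence outputs are never acted on automatically. A minimal sketch, where the confidence threshold and field names are illustrative assumptions rather than mandated values:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Decision:
    recommendation: str
    confidence: float
    consequential: bool  # e.g. affects employment, credit, or benefits


def decide(ai_output: Decision, human_review: Callable[[Decision], str]) -> str:
    """Article 14-style oversight gate (illustrative).

    Consequential or low-confidence outputs are routed to a human, who can
    accept, amend, or reject them; only low-stakes outputs may auto-apply.
    """
    if ai_output.consequential or ai_output.confidence < 0.9:
        return human_review(ai_output)
    return ai_output.recommendation
```

The design choice worth noting: the gate keys on the *impact* of the decision, not just model confidence, because automation bias is most damaging precisely when a confident model output feeds a consequential decision.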

7. Accuracy, Robustness, and Cybersecurity (Article 15)

This is where LLM security directly intersects with regulatory compliance. Article 15 requires:

  • Accuracy: Declare and maintain the accuracy level throughout the lifecycle
  • Robustness: Resist attempts to alter use, behavior, or performance through adversarial manipulation — this directly encompasses prompt injection, jailbreaking, and data poisoning defenses
  • Cybersecurity: Implement security measures proportionate to the system's risk level; protect against unauthorized third-party access, including via prompt injection
  • Resilience: Technical redundancy and fail-safe mechanisms for continuous operations

Article 15 and Prompt Injection

Article 15(5) explicitly requires technical solutions to address AI-specific vulnerabilities, naming data poisoning, adversarial examples, and model evasion; prompt injection and prompt manipulation attacks fall squarely within this adversarial robustness requirement. Organizations must document their technical controls, such as input validation, output filtering, and continuous security testing, as part of their conformity assessment.
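The input-validation control mentioned above can be sketched as a deny-list pre-filter. The patterns below are illustrative assumptions; pattern matching alone is easily bypassed and should be treated as one documented layer among several (output filtering, privilege separation, monitoring):

```python
import re

# Illustrative deny-list for Article 15 input validation (not exhaustive).
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior|above) instructions",
    r"you are now\b",
    r"reveal (your )?(system|hidden) prompt",
    r"disregard (your )?(rules|guidelines|instructions)",
]


def screen_input(user_text: str):
    """Return (allowed, matched_patterns); matches should be audit-logged."""
    hits = [
        p for p in INJECTION_PATTERNS
        if re.search(p, user_text, re.IGNORECASE)
    ]
    return (len(hits) == 0, hits)
```

For conformity purposes, the value is less in the filter itself than in the record it produces: every match is evidence of an attempted manipulation that feeds the Article 12 logs and the Article 9 risk re-evaluation.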

8. Conformity Assessment and Registration

Most high-risk AI systems require either a self-assessment or third-party conformity assessment before market placement:

  • Self-assessment is permitted for most Annex III systems
  • Third-party assessment required for biometric identification and certain critical infrastructure
  • CE marking affixed to declare conformity
  • Registration in the EU database for high-risk AI systems (maintained by the European Commission)
  • EU declaration of conformity prepared and retained for 10 years

Mapping EU AI Act to NIST AI RMF and ISO 42001

Organizations already aligned with NIST AI RMF or ISO/IEC 42001 can leverage significant overlap to avoid duplicating compliance work:

EU AI Act Article | NIST AI RMF | ISO 42001
Article 9 (Risk Management) | GOVERN + MAP + MEASURE | Clause 6.1, 8.4
Article 10 (Data Governance) | MAP 1.6, MEASURE 2.5 | Clause 8.3
Articles 11-12 (Documentation + Logging) | GOVERN 1.2, MANAGE 4.2 | Clause 7.5, 9.1
Article 14 (Human Oversight) | GOVERN 5.2, MANAGE 3.2 | Clause 8.5
Article 15 (Accuracy, Robustness, Cybersecurity) | MEASURE 2.6, 2.8 | Clause 8.4, 8.6

30-Question EU AI Act Compliance Checklist

Use this checklist to assess your readiness before the August 2026 deadline. Items marked [Critical] can trigger penalties if absent.

Risk Classification (Questions 1-5)

  • Have you documented which Annex III category your LLM application falls under, or confirmed it is out of scope? [Critical]
  • If out of scope, is that determination documented and signed off by legal/compliance?
  • Have you assessed whether your LLM outputs feed into any high-risk decision making, even indirectly?
  • If you are a GPAI provider, have you assessed whether your cumulative training compute exceeds 10²⁵ FLOPs?
  • Is there a named individual responsible for AI Act compliance in your organization?

Risk Management System (Questions 6-10)

  • Is there a documented risk management process specific to your LLM application? [Critical]
  • Does risk documentation include prompt injection, jailbreaking, and data poisoning as named threat categories?
  • Are risks re-evaluated when the model, system prompt, or data sources change?
  • Has adversarial testing been conducted and documented?
  • Are residual risks communicated to end users in the instructions for use?

Technical Controls: Cybersecurity (Questions 11-17)

  • Is there documented input validation to detect and block prompt injection attempts? [Critical]
  • Are outputs validated before being used in downstream decisions?
  • Has system prompt hardening been applied and documented?
  • Are access controls in place limiting what the LLM can access and act upon?
  • Is there continuous security monitoring for anomalous LLM behavior in production?
  • Has a red team exercise tested the system against adversarial manipulation?
  • Is there an incident response plan for AI security events?

Logging and Traceability (Questions 18-21)

  • Are all LLM inputs and outputs logged with timestamps? [Critical]
  • Are logs tamper-protected and retained per your jurisdiction's retention requirements?
  • Can you reconstruct a specific AI-assisted decision from logs for regulatory audit?
  • Are anomalies and failures logged with sufficient detail for investigation?

Human Oversight and Transparency (Questions 22-27)

  • Do users know they are interacting with an AI system? [Critical]
  • Is there a mechanism for a human to override, stop, or correct the AI's output before it affects a decision?
  • Are operators trained on automation bias risks?
  • Are the AI system's capabilities and known limitations documented for end-users?
  • Is there a complaints mechanism for individuals affected by AI-assisted decisions?
  • Has the instructions-for-use document been reviewed by legal/DPO?

Conformity and Registration (Questions 28-30)

  • Has a conformity assessment been completed (self-assessment or third-party)? [Critical]
  • Has the system been registered in the EU AI Act database (mandatory for high-risk systems)? [Critical]
  • Is an EU declaration of conformity prepared and ready for regulatory inspection?

Prompt Guardrails

AI Security Platform

EU AI Act Article 15 requires documented adversarial robustness controls. Prompt Guardrails provides the technical layer — real-time prompt scanning, system prompt hardening, red team testing, and audit-ready logging — that satisfies your cybersecurity compliance obligations.

Article 15 Compliance — documented prompt injection and adversarial robustness controls
Audit-Ready Logging — tamper-protected logs of all LLM inputs and threat events
Continuous Red Teaming — adversarial testing documented for Article 9 risk records
Security Evals — track security regression across model updates for conformity
Get Early Access

Conclusion

The EU AI Act is not primarily a legal compliance problem — it is a technical governance problem. The organizations that will meet the August 2026 deadline are those that have already embedded risk management, adversarial testing, logging, and human oversight into their LLM development lifecycle. Article 15's cybersecurity requirements make robust prompt security infrastructure a regulatory necessity, not an optional enhancement. Start with the 30-question checklist above, identify your gaps, and prioritize the technical controls that satisfy both your security posture and your regulatory obligations.

Tags:
EU AI Act, Compliance, LLM Security, Regulation, NIST AI RMF, Enterprise AI, Article 15

Secure Your LLM Applications

Join the waitlist for promptguardrails and protect your AI applications from prompt injection, data leakage, and other vulnerabilities.

Join the Waitlist