EU AI Act: New Rules and Penalties Set for High-Risk AI Compliance by 2028
January 14, 2026
High-risk AI systems remain classified under Annex III, with a 24-month transition period for legacy systems. The European AI Office is expected to oversee 85 conformity assessment bodies by 2026, processing about 2,800 assessments a year from roughly 1,900 providers.
Bilateral cooperation is accelerating third-country compliance, with 12 adequacy decisions advancing and mutual recognition efforts under way to support GPAI model validations and joint testing with the US, UK, Japan, Canada, and other partners.
Transparency rules for general-purpose AI stay in place, with cybersecurity incident reporting added for models trained above 10^26 FLOPs. The amendments respond to 18 months of enforcement burdens on thousands of SMEs while preserving fundamental-rights protections.
Transparency obligations for general-purpose AI require detailed training data summaries, quality metrics, and systemic risk assessments, with governance prioritizing compute power over other risk indicators.
The Commission defends amendments to the EU AI Act, keeping prohibitions on real-time biometric identification in public spaces and preserving a risk-based framework for high-risk AI systems.
The implementation timeline keeps its phased dates: Annex III high-risk rules from late 2027, Annex I from mid-2028, GPAI obligations already in effect since 2025, and a 36-month transition for substantial modifications to existing deployments.
Core prohibitions on real-time biometric identification, social scoring, and manipulative subliminal techniques remain in force, with targeted law-enforcement exceptions requiring judicial authorization; before enforcement began, 2,400 deployed systems across 18 member states showed high rates of non-compliance.
Codes of practice standardize systemic risk evaluation for GPAI, incorporating thousands of public responses and keeping the compute-related risk threshold at 10^26 FLOPs, with cybersecurity codes aligned to ENISA timelines.
The AI Office gains exclusive supervisory power over high-risk systems that integrate general-purpose AI from the same provider, centralizing oversight of AI in Very Large Online Platforms; regional offices and a Joint Coordination Group handle cross-border investigations and market surveillance.
Market surveillance rules empower national authorities to issue 30-day information requests and conduct quarterly investigations of cross-border deployments, with penalties for non-compliance of up to €35 million or 7% of global turnover.
SME-focused simplifications cut technical documentation costs by about 42% across thousands of high-risk deployments, introducing derogations and fast-track conformity assessments for smaller and innovative providers.
Labeling of AI-generated content is set to begin by August 2026, with millions of daily interactions automatically labeled and watermarked; workplace emotion recognition will require explicit, GDPR-aligned consent.
Summary based on 1 source
Source

Brussels Morning • Jan 14, 2026
European Commission defends policy decisions in proposed EU AI law changes