The European Union’s AI Act entered into force in August 2024. In February 2025, its first enforceable provisions took effect: a ban on AI practices categorized as unacceptable risk. These include social scoring systems, real-time remote biometric identification in publicly accessible spaces (permitted for law enforcement only under narrow exceptions), AI that exploits vulnerabilities tied to age, disability, or socio-economic situation, and systems that manipulate behavior through subliminal techniques.
By the time the August 2025 deadline arrived, obligations for general-purpose AI models had become enforceable as well. Providers of large language models and foundation models placed on the EU market must now maintain technical documentation, implement copyright compliance policies, and publish sufficiently detailed summaries of their training content. Models designated as posing systemic risk face additional requirements around risk assessment, adversarial testing, and serious-incident reporting.
The high-risk AI system requirements — covering healthcare, employment, education, critical infrastructure, and law enforcement — take effect on a rolling basis through 2026 and into 2027. These are the provisions with the most significant operational implications for businesses deploying AI in regulated sectors.
For US companies with EU customers or EU-origin data, the AI Act is not a distant regulatory concern. It is a current compliance obligation. The extraterritorial reach of the Act mirrors GDPR: if your AI system is used by EU residents or processes EU personal data, the rules apply to you regardless of where your company is incorporated.
The pattern established by GDPR is repeating. European regulation moves first. Then it becomes the de facto global standard because multinationals cannot efficiently maintain separate systems for different jurisdictions. If you are building AI-powered products and you have not mapped your systems against the AI Act’s risk classification framework, the time to do that is now.
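A first pass at that mapping exercise can be as simple as an internal inventory of AI systems tagged with the Act's four risk tiers. The sketch below is illustrative only: the system names and purposes are hypothetical, the tiers are simplified labels, and real classification requires legal analysis of the Act's Article 5 prohibitions and Annex III high-risk categories.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Simplified labels for the AI Act's four risk tiers."""
    UNACCEPTABLE = "prohibited practice (Art. 5)"
    HIGH = "high-risk (Annex III)"
    LIMITED = "limited risk (transparency duties)"
    MINIMAL = "minimal risk"


@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier


# Hypothetical inventory — names and tier assignments are for illustration,
# not legal conclusions.
inventory = [
    AISystem("resume-screener", "ranks job applicants", RiskTier.HIGH),
    AISystem("support-chatbot", "answers customer questions", RiskTier.LIMITED),
    AISystem("spam-filter", "filters inbound email", RiskTier.MINIMAL),
]


def flag_for_review(systems: list[AISystem]) -> list[AISystem]:
    """Return systems that need compliance work ahead of the rolling deadlines."""
    return [s for s in systems
            if s.tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH)]


for system in flag_for_review(inventory):
    print(f"{system.name}: {system.tier.value}")
```

Even a crude inventory like this forces the useful question — which tier does each system plausibly fall into? — and surfaces the systems that warrant proper legal review first.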