Not easy to compare US, EU and China AI regulation
The EU’s stringent enforcement and transparency mandates stand out as a model for prioritising safety, while the US’s federal encouragement paired with state-level rules showcases a knack for fostering rapid technological growth
The world’s biggest technology superpowers — the US, the European Union (EU) and China — have each established a distinct rulebook to regulate artificial intelligence, creating a complex and fragmented global landscape. That fragmentation naturally presents challenges, and possibly some risks, for companies operating across all three markets.
The European Union’s landmark AI Act, which went into effect in 2024, imposes a risk-based compliance framework, demanding strict documentation and audits for everything from biometric scans to hiring algorithms. China, meanwhile, has implemented an equally exacting but state-directed strategy, enforcing laws on everything from generative AI to domestic data localization and party-aligned ethical standards. The internet was once seen as an existential threat to the ruling Communist Party, but Beijing instead brought it to heel through a system of censorship and tight control over China’s largest internet companies.
Artificial intelligence poses a similar dilemma: a transformative force that promises economic gains but could also undermine the party’s grip on power. Companies must tell the Chinese government how their apps work and keep officials updated as the technology evolves. The greater the influence that regulators perceive an app to have on public opinion, the closer attention they pay to it.
Even as Chinese officials seek to balance the promise of artificial intelligence with an aversion to risk, the technology has become increasingly important to Beijing amid slowing economic growth. Little wonder, then, that the prodigious Chinese start-up DeepSeek, which released its AI model in 2024, has followed up with a series of high-performing releases. By comparison, the EU and the US are jointly pivotal to the future of global AI governance, with the potential to facilitate bilateral trade, improve regulatory oversight and enable broader transatlantic cooperation.
The EU approach to AI risk management is characterized by a more comprehensive range of legislation tailored to specific digital environments. The EU plans to place new requirements on high-risk AI in socioeconomic processes, government use of AI, and regulated consumer products with AI systems. Other EU legislation enables greater public transparency and influence over the design of AI systems in social media and e-commerce. Even so, the EU and US strategies share a conceptual alignment on a risk-based approach, agree on key principles of trustworthy AI, and endorse an important role for international standards.
The European Union has positioned itself as a pioneer in AI governance with the AI Act, landmark legislation that categorizes AI systems based on risk levels and imposes stringent requirements on high-risk applications. Enforced through national authorities across member states, this framework aims to ensure safety, transparency, and ethical standards in AI deployment.
In contrast, the US approach is more fragmented: there is no unified federal statute governing AI, regulation sits primarily at the state level, and standards and practices vary across jurisdictions amid ongoing tensions between federal and state authorities. The United States operates under a dual structure, where federal policies promote innovation and competitiveness through initiatives like the AI Action Plan, while state-level regulations, such as California’s safety laws, introduce localized constraints. This bifurcated approach reflects a preference for flexibility over uniformity, allowing for rapid technological advancement alongside targeted oversight.
These enforcement mechanisms reveal stark differences in impact between the US and the EU. The latter has centralised penalties to ensure a uniform deterrent across its market, compelling companies to prioritise compliance even at significant cost. It becomes evident that the EU’s AI Act and the US policy landscape represent two distinct philosophies — one rooted in centralised, risk-based oversight and the other in decentralised, innovation-driven flexibility.
The EU’s stringent enforcement and transparency mandates offer a model for prioritising safety, while the US’s federal encouragement paired with state-level rules fosters rapid technological growth. Both approaches share a commitment to accountability yet diverge sharply in their methods and priorities.
Looking ahead, stakeholders must consider harmonising certain standards through multilateral platforms such as the OECD to ease the burden on global companies navigating disparate rules. Governments and industry leaders should also prioritise developing streamlined compliance tools, particularly for smaller firms, to ensure that innovation isn’t sacrificed for regulation.
Under President Donald Trump’s second term, federal policy has shifted toward an “innovation-first” approach, emphasizing minimal regulatory burdens to maintain US global AI leadership. This has led to executive actions challenging state laws perceived as obstructive, while states continue to enact and enforce rules focused on high-risk AI uses, transparency, bias mitigation, and consumer protections. In January 2025, Trump revoked parts of the 2023 Biden-era executive order on AI, reducing safety-testing and reporting mandates and introducing a tiered, risk-based approach intended to prioritise innovation.
Returning to the EU AI Act: it imposes heavy fines and prioritises fundamental rights, safety, non-discrimination and trust, reflecting a precautionary, rights-centric philosophy. The good news is that a Digital Omnibus on AI was proposed towards the end of 2025 to address specific implementation challenges and lighten regulatory burdens.
As part of a broader digital package, the proposal is structured into two parts: one amending rules on personal and non-personal data and cybersecurity, and another amending the AI rules.
The package aims to simplify the laws and make them more effective, helping EU businesses to innovate, scale and save on administrative costs. While most stakeholders have welcomed the broader package, the omnibus has sparked debate about whether simplification can be achieved without compromising fundamental rights.
In summary, the success of AI regulation remains uncertain, as all three jurisdictions are in the early stages of developing and implementing their policies. While there is growing recognition of the need for regulation, and possibly for simplification, public opinion remains mixed and effective governance models are still being explored.
