Non-Lawyers Summary

There is no single AI law that covers everything everywhere. Companies have to map where they operate, what kind of AI they use, and whether that system triggers EU, state, sector, or consumer-protection rules. This post explains that patchwork with cybersecurity risk in mind.


The Cold Open

Just before the product launched, the lawyers met.

The system was elegant: an AI that made credit decisions in milliseconds, filtered job applicants at scale, flagged insurance claims for denial. It was fast, it was accurate by the metrics the company used, and it was about to be deployed to users in six countries.

No one in the room had a complete picture of what laws applied.

The EU lawyer knew about the AI Act. The U.S. counsel knew about the FTC. The Colorado compliance officer had a spreadsheet about SB 24-205 that no one had read in full. Nobody had mapped China. Nobody had thought about what happened when the same model processed data from all these jurisdictions simultaneously.

This is the actual state of AI regulation in 2026: a patchwork so dense and so jurisdictionally complex that understanding it requires mapping four different legal architectures at once — a comprehensive EU statute, fragmented U.S. federal guidance, an accelerating wave of state laws, and distinct national regimes from China and the UK. Get it wrong and the penalties reach 7% of worldwide annual turnover. Get it wrong in the wrong way and the system gets banned outright.

What follows is the map.


Overview

There is no unified global framework for artificial intelligence regulation. Instead, legal counsel advising clients on AI deployment must navigate a patchwork: a comprehensive risk-tiered EU statute, sector-fragmented U.S. federal guidance, a growing wave of state-level AI laws, and distinct national approaches from China and the UK. For lawyers, this is fundamentally a conflict-of-laws and compliance layering problem. A company deploying AI to EU users must comply with the EU AI Act regardless of where it is incorporated. That same company, if publicly traded and using AI in credit decisions, also faces SEC, OCC, and FTC obligations. The Colorado AI Act — the first comprehensive U.S. state AI law — signals that state-level complexity will only grow. This module provides the cross-jurisdiction map that practitioners need to classify AI systems, identify obligations, and advise clients before deployment.


Key Concepts

What Is "AI" for Regulatory Purposes? — The Definition Wars

The first trap is definitional. Every regulator defines "AI" differently, and a system that falls outside one definition may fall squarely within another.

The EU AI Act defines an AI system as a "machine-based system that... can, for a given set of objectives, generate outputs such as predictions, recommendations, decisions, or content." The NIST AI RMF uses a similar but broader functional definition. State laws like the Colorado AI Act focus on AI systems used for "consequential decisions" regardless of technical architecture.

For practitioners, the operative question is functional: does this system make or significantly influence a decision that affects a person's interests? If yes, assume the law applies until you can prove otherwise.

The Risk-Tier Concept — How the EU Changed Global Compliance Logic

The EU AI Act introduced a four-tier risk classification that has become the dominant analytical framework internationally. The tier determines everything:

  • Prohibited: Banned outright, no path to compliance
  • High-risk: Full conformity assessment, mandatory documentation, ongoing monitoring
  • Limited-risk: Transparency and disclosure obligations
  • Minimal-risk: No specific AI Act obligations

U.S. state laws like Colorado's use a binary high-risk / non-high-risk structure. The risk tier assigned to a system determines the compliance obligations: from full conformity assessments and mandatory documentation at the high-risk tier, to simple transparency disclosures at the limited-risk tier, to no specific obligations at the minimal-risk tier.

The EU framework established the logic that other jurisdictions are now copying. Understanding it is not optional for AI compliance work.


EU AI Act — Regulation (EU) 2024/1689

The EU AI Act is the world's most comprehensive AI regulation. It is a directly applicable regulation — it has the force of law in all 27 Member States without implementing legislation. There was no transition period for the most dangerous systems. The bans went live first.

Timeline

MilestoneDate
Entered into forceAugust 1, 2024
Prohibited AI systems bannedFebruary 2, 2025
GPAI model obligations applyAugust 2, 2025
High-risk AI obligations applyAugust 2, 2026
Full applicability (most provisions)August 2, 2026

Current implementation posture: The Commission currently frames August 2, 2026 as the main application date for most remaining provisions, while high-risk AI systems embedded in Annex II regulated products have a longer runway until August 2, 2027.

Extraterritorial reach: The EU AI Act applies to providers that place AI systems on the EU market or put them into service in the EU, and to deployers of AI systems located in the EU — regardless of where the provider is established. A U.S. company providing AI services to European clients is within scope. The law does not care where the servers are.

Risk Classification System — The Four Tiers

Prohibited AI (Article 5) — banned from February 2, 2025:

These systems are illegal to deploy in the EU. Not regulated. Not licensed. Banned.

  • Cognitive behavioral manipulation techniques exploiting vulnerabilities to cause harm
  • Social scoring systems operated by public authorities
  • Real-time remote biometric identification systems in publicly accessible spaces (with narrow law enforcement exceptions)
  • AI that exploits vulnerabilities of specific groups (age, disability)
  • Subliminal manipulation causing harm without awareness
  • Predictive policing based solely on profiling (without individual criminal activity)
  • Emotion recognition in workplace or educational settings (with certain exceptions)
  • Untargeted scraping of facial recognition databases

High-Risk AI (Annex III) — subject to full conformity assessment obligations:

These systems can be deployed — but only after conformity assessment, documentation, registration, and with ongoing monitoring. Deployers that skip this process are not operating in a gray area. They are in violation.

  • Biometric categorization and identification systems
  • Critical infrastructure management and operation
  • Education and vocational training access decisions
  • Employment, worker management, and access to self-employment
  • Essential private and public services (credit, insurance, benefits)
  • Law enforcement tools affecting individual rights
  • Migration, asylum, and border control management
  • Administration of justice and democratic processes

Limited-Risk AI — transparency and disclosure obligations only:

  • Chatbots and conversational AI (must disclose AI nature to users)
  • Deepfakes and AI-generated synthetic content (must label as AI-generated)

Minimal-Risk AI — no specific obligations:

  • Spam filters, AI in video games, AI-powered content recommendation — may deploy without EU AI Act compliance requirements

GPAI Models — The Frontier Regime

GPAI (General Purpose AI) models are foundation models trained on large datasets and capable of performing a wide range of tasks — examples include GPT-4, Claude, Llama, and Gemini. These are the systems that power most of the commercial AI products on the market.

All GPAI providers face baseline obligations:

  • Technical documentation of the model
  • Training data summary (including datasets used, sources, data governance)
  • Copyright compliance policy

Systemic risk threshold: GPAI models trained using more than 10^25 floating-point operations (FLOPs) are classified as GPAI models with systemic risk. Current frontier models from major providers generally exceed this threshold.

Systemic-risk GPAI additional obligations:

  • Adversarial testing and red-teaming
  • Incident reporting to the EU AI Office for serious incidents
  • Cybersecurity measures proportionate to the systemic risk

The EU AI Office (within the European Commission) is the primary oversight body for GPAI models and coordinates enforcement across Member States. This is the body that will decide what "systemic risk" means in practice — and the decisions it makes will reverberate across every company building on top of frontier models.

Fines — The Numbers That Reframe the Calculation

Violation TypeMaximum Penalty
Prohibited AI violations€35,000,000 or 7% of worldwide annual turnover (higher amount applies)
Other AI Act violations€15,000,000 or 3% of worldwide annual turnover
False information to authorities€7,500,000 or 1% of worldwide annual turnover

SME exceptions apply — reduced caps for small and medium enterprises. For large technology companies, the turnover-based percentage will typically exceed the flat euro cap by orders of magnitude.

7% of worldwide annual turnover. For the largest AI companies, that figure is not a fine. It is a restructuring event.

Enforcement Structure

  • EU AI Office: Oversees GPAI models; cross-border coordination; enforcement against systemic-risk models; primary body for frontier AI
  • National Competent Authorities (NCAs): Designated in each Member State; responsible for high-risk AI oversight and enforcement at country level
  • Notified Bodies: Accredited third parties that conduct conformity assessments for high-risk AI systems before market placement

U.S. Federal — The Regime That Isn't There

As of 2026, the United States has no comprehensive federal AI statute. This is not an oversight. It is a policy choice — and it is one that leaves practitioners and clients navigating a system that requires expertise in multiple separate legal regimes.

FTC Enforcement — Where the Teeth Actually Are

The Federal Trade Commission's primary AI enforcement authority derives from FTC Act Section 5's prohibition on "unfair or deceptive acts or practices." This applies to AI claims in several ways:

  • False capability claims: Companies cannot make unsubstantiated claims about AI system capabilities (accuracy, fairness, safety)
  • AI bias: Using AI systems that produce discriminatory results for protected classes may constitute an unfair practice
  • AI-generated deceptive content: Deploying AI to generate fake reviews, synthetic personas, or misleading content is an unfair or deceptive act
  • Project Rampart: FTC enforcement action targeting AI voice cloning fraud — demonstrates FTC's willingness to apply existing consumer protection law to novel AI harms

The FTC's 2023 AI guidance established that AI claims must be truthful, substantiated, and non-misleading — the same standards that apply to any advertising claim. No sector-specific AI statute exists at the federal level.

AI-adjacent lesson from TikTok/ByteDance: The FTC's pending 2024 TikTok/ByteDance case is not a standalone "AI law" case, but it is highly relevant to AI compliance analysis. The FTC alleges COPPA violations, violations of the 2019 order, use of persistent identifiers and other children's data, maintenance of millions of under-13 or "age unknown" accounts, and retargeting of less-active Kids Mode users through third-party services. For practitioners, the lesson is that U.S. regulators often police data-driven optimization, recommender, and targeting systems through existing privacy and children's-data law rather than through a dedicated AI statute.

Executive Orders and Current Policy Direction

EO 14110 (Biden, October 2023) — Rescinded:

  • Required safety testing for dual-use foundation models
  • Directed adoption of NIST AI RMF across federal agencies
  • Required companies training models above 10^26 FLOPs to notify the federal government
  • Revoked January 20, 2025 by EO 14148
  • EO 14179 on January 23, 2025 then directed agencies to review actions taken under revoked EO 14110 and directed OMB to revise prior AI memoranda within 60 days

Trump Administration AI Policy (January 2025 — Current):

  • EO 14179: "Removing Barriers to American Leadership in Artificial Intelligence"
  • Deregulatory direction — removed Biden-era safety mandates
  • Focus on international AI competitiveness over domestic safety obligations
  • No replacement mandatory safety testing framework established as of 2026

OMB's current operating memos (April 3, 2025):

  • M-25-21: Governs federal agency use of AI with an innovation/governance/public-trust framework and rescinds/replaces M-24-10
  • M-25-22: Governs federal AI acquisition and procurement and is the practical baseline for agencies buying AI systems or services
  • Why this matters: For federal contractors and vendors, live U.S. AI governance now runs more through procurement, agency-use rules, and sector regulators than through a single cross-economy AI statute

NIST AI RMF 1.0 (January 2023):

  • Voluntary framework for organizations designing, developing, deploying, or evaluating AI
  • Four functions: GOVERN (organizational culture, policies, accountability), MAP (context identification, risk categorization), MEASURE (risk assessment, monitoring), MANAGE (risk prioritization, treatment, response)
  • Not legally binding, but widely referenced in federal procurement, financial services, and healthcare AI guidance
  • Effectively mandatory for federal contractors through procurement requirements

Sector-Specific U.S. AI Obligations

SectorGoverning BodyAuthorityAI Application
Financial servicesOCC, FDIC, FRBSR 11-7 Model Risk GuidanceAI models used in credit decisions, trading, fraud detection
HealthcareFDASaMD regulations; 2021 AI/ML Action PlanAI/ML-based Software as a Medical Device
EmploymentEEOCTitle VII of Civil Rights ActAI hiring tools with discriminatory impact
Federal contractingFAR/DFARProcurement regulations (in development)AI systems in government contracts

The most practically significant sector-specific AI regulation is the FDA's AI/ML SaMD framework, which applies rigorous pre-market review requirements to AI systems used in medical diagnosis, treatment, or decision support.


State-Level U.S. AI Laws — The Wave That's Coming

Colorado AI Act (SB 24-205) — The Shot Heard Round the Country

Status: Signed May 17, 2024; effective February 1, 2026

Colorado was first. It will not be last.

Scope: The Colorado AI Act covers "high-risk AI systems" — AI systems that make or are a substantial factor in making "consequential decisions" affecting Colorado residents. Consequential decisions include: employment, education, financial services, government services, healthcare, housing, insurance, and legal services.

Key obligations for developers:

  • Disclose known high-risk AI system limitations and use cases to deployers
  • Provide documentation enabling deployers to conduct impact assessments
  • Maintain documentation of training data, intended use, and performance metrics

Key obligations for deployers:

  • Disclose to consumers when a high-risk AI system is used in a consequential decision
  • Allow consumers to opt out of AI-based decisions or appeal automated outcomes
  • Conduct and document annual impact assessments for each high-risk AI system
  • Notify AI developers of known risks discovered during deployment
  • Publish a statement of compliance describing policies and safeguards

Enforcement: Colorado Attorney General; no private right of action (unlike California's CCPA)

Significance: Colorado's law is a direct analog to the EU AI Act's high-risk AI obligations, adapted for U.S. legal structures. It signals the direction of U.S. state-level AI compliance — practitioners should assume that other states will adopt similar frameworks before 2030.

California

  • SB 1047 (2024): Would have required safety testing for large AI models trained above a cost threshold. Vetoed by Governor Newsom in September 2024. Newsom cited concerns about chilling AI innovation in California.
  • AB 2930 (2024): Automated decision systems — passed and signed; effective 2026. Requires disclosure when automated systems are used in consequential decisions and provides appeal rights.
  • Pending: Multiple additional California bills on AI transparency, deepfake labeling, AI in healthcare decisions, and AI-generated political advertising disclosures.

Other States

  • Texas HB 1709 (2025): AI transparency for consequential decisions; pending as of 2026
  • Illinois: AI Video Interview Act (existing law) requires disclosure when AI is used to evaluate job applicant video interviews; bias auditing requirements for AI used in employment decisions
  • New York City Local Law 144 (2023): Mandatory independent bias audits for AI hiring tools used for NYC jobs; audit results must be publicly posted; effective January 1, 2023

China AI Regulations — The Fastest Jurisdiction on Earth

China has been the most active jurisdiction in rapidly deploying binding AI-specific regulations. While the U.S. debated and the EU deliberated, China enacted.

Interim Measures for the Management of Generative AI Services:

  • Effective: August 15, 2023
  • Scope: Covers generative AI services provided to the Chinese public — includes chatbots, image generation, code generation, text generation
  • Key requirements:
  • Security assessments before public launch (similar to EU AI Act pre-market review)
  • Content moderation systems to prevent prohibited content
  • Traceability of AI-generated content (metadata requirements)
  • Prohibition on content endangering national security, state power, or socialist core values
  • Data training sets must comply with Chinese data protection law
  • Enforcement: Cyberspace Administration of China (CAC)

2026 Overhaul: China expanded its AI regulatory framework effective January 1, 2026, with extended extraterritorial reach applying Chinese AI content and security requirements to services offered in China by foreign operators.

Practical impact for multinational clients: Companies operating in China or offering AI services accessible in China must comply with Chinese content requirements that are fundamentally incompatible with the open content policies permissible in other jurisdictions. This creates compliance architecture challenges for global AI deployments. You cannot run one model. You need two — or you need to choose a market.


UK AI Approach — The Deliberate Exception

The UK has deliberately chosen a principles-based, sector-specific approach rather than enacting a single comprehensive AI statute — a conscious policy choice to differentiate from EU regulation post-Brexit.

Core approach:

  • No single AI Act; no new AI-specific legislation as of 2026
  • Pro-innovation stance: existing sector regulators apply existing legal frameworks to AI
  • Regulatory sandboxes for AI innovation in financial services and healthcare

Key bodies:

  • CMA (Competition and Markets Authority): AI in competitive markets, market consolidation by AI companies
  • ICO (Information Commissioner's Office): AI processing of personal data under UK GDPR
  • FCA (Financial Conduct Authority): AI in financial services, algorithmic trading
  • MHRA (Medicines and Healthcare products Regulatory Agency): AI in medical devices
  • AI Safety Institute (AISI): Renamed AI Security Institute in 2025; conducts voluntary evaluations of frontier AI models; not a regulatory body

Mandatory AI incident reporting: Voluntary as of 2026; sector regulators may impose incident reporting through existing powers.

For practitioners: UK AI compliance is assessed framework-by-framework (GDPR for data; FCA for financial; MHRA for medical) rather than through a single AI law analysis. Post-Brexit, UK law diverges from the EU AI Act — companies operating in both markets face dual compliance requirements.


NIST AI Risk Management Framework — The U.S. Standard of Care

Published: January 26, 2023 Type: Voluntary framework — not legally binding, but widely referenced

The NIST AI RMF is the primary U.S. government-endorsed framework for AI risk management. Its four functions:

FunctionPurposeKey Activities
GOVERNEstablish accountability culture and policiesPolicies, roles, accountability, risk tolerance
MAPIdentify context and categorize risksUse case identification, stakeholder mapping, risk enumeration
MEASUREAssess and monitor risksTesting, evaluation, measurement, monitoring
MANAGEPrioritize and treat risksRisk treatment plans, incident response, continuous improvement

Practical significance: The AI RMF is effectively mandatory for federal contractors through procurement requirements and is the standard against which federal agency AI programs are assessed. In regulated industries (financial services, healthcare), regulators reference the AI RMF as the standard of care for AI risk governance. Organizations pursuing federal contracts should treat NIST AI RMF alignment as a compliance requirement.


Cybersecurity Intersection — AI as Attack Surface and Attack Tool

AI systems create unique cybersecurity challenges in both directions: AI systems are themselves attack surfaces, and AI is increasingly deployed to conduct cybersecurity functions.

AI systems as attack surfaces:

  • Adversarial inputs: crafted inputs designed to cause misclassification or harmful outputs
  • Model extraction: reconstructing a model through API queries
  • Data poisoning: corrupting training data to alter model behavior
  • Prompt injection: injecting instructions through user inputs to hijack AI agent behavior

AI used for cybersecurity:

  • SIEM/SOAR automation for alert triage and response
  • Anomaly detection in network traffic
  • Vulnerability scanning and prioritization
  • Threat intelligence synthesis

OWASP LLM Top 10 (2023, updated 2025):

  1. Prompt Injection
  2. Insecure Output Handling
  3. Training Data Poisoning
  4. Model Denial of Service
  5. Supply Chain Vulnerabilities
  6. Sensitive Information Disclosure
  7. Insecure Plugin Design
  8. Excessive Agency
  9. Overreliance
  10. Model Theft

EU AI Act intersection: Annex III classifies "critical infrastructure" AI as high-risk — AI systems used in cybersecurity for critical infrastructure (power grids, water systems, financial systems) may qualify as high-risk AI under the EU AI Act, triggering full conformity assessment obligations.

Children's-data intersection: The FTC's September 2024 social-media surveillance report found that major platforms fed user and non-user personal information into algorithms, data analytics, and AI while offering little ability to opt out and providing inadequate protections for kids and teens. Combined with the pending TikTok/ByteDance litigation, that means lawyers evaluating AI or algorithmic systems that touch minors should expect scrutiny to arrive through COPPA, privacy, retention, and deletion theories even where no comprehensive federal AI law exists.


Practitioner Takeaways

1. The EU AI Act applies to your non-EU clients. Any client deploying AI systems to EU users — including U.S. SaaS companies, cloud providers, and AI application developers with European customers — is within scope of the EU AI Act. The extraterritorial reach is unambiguous. The analysis starts with: what AI systems does this client deploy, and do EU users interact with them?

2. Classify before advising. For EU-facing deployments, the risk tier classification (prohibited / high-risk / limited-risk / minimal-risk) determines all subsequent obligations. A chatbot (limited-risk) and an AI credit scoring system (high-risk) have entirely different compliance requirements. Always classify first.

3. U.S. federal law requires sector mapping, not AI law analysis. No comprehensive U.S. federal AI statute exists. U.S. AI compliance is a function of: what sector the client operates in (FDA for healthcare AI, OCC/FRB for banking AI, EEOC for hiring AI); whether the client is publicly traded (FTC deception standards); and whether the client is a federal contractor (NIST AI RMF alignment). Build a sector map before advising.

3A. For youth-facing AI or algorithmic systems, map COPPA before you map "AI law." If the product uses recommender logic, engagement optimization, retargeting, or age-estimation flows involving children, the first federal question may be COPPA, consent-order exposure, and Section 5 deception or unfairness — not whether Congress has enacted a general AI act.

4. Colorado is the leading indicator for U.S. state AI compliance. The Colorado AI Act (effective February 1, 2026) is the first U.S. comprehensive state AI law. Clients deploying AI for consequential decisions involving Colorado residents must comply. More importantly, Colorado signals what other states will enact — companies investing in AI compliance infrastructure today should build to Colorado-equivalent standards, not just current federal minimums.

5. China AI compliance requires content architecture decisions, not just policy updates. The Chinese Generative AI Measures require content traceability and prohibition on specified content categories. For global AI deployments, Chinese content requirements are often incompatible with global default settings. Advise multinational clients that China AI compliance requires dedicated deployment decisions — content moderation, watermarking, and output restrictions — not just policy overlays.

6. Federal AI work now means NIST AI RMF plus the 2025 OMB memos. Federal contractors and regulated-industry entities should treat NIST AI RMF alignment as the control framework, but they should also map federal opportunities and obligations against OMB M-25-21 (agency use) and M-25-22 (procurement). Document GOVERN, MAP, MEASURE, and MANAGE activities for each AI system in use, and expect those artifacts to surface in vendor diligence and contract negotiations.


Cross-Jurisdiction Comparison Table

JurisdictionRegulation NameStatus / Effective DateRisk TiersMax PenaltiesEnforcement Body
EUEU AI Act (Reg. 2024/1689)In effect; Aug 2, 2026 full applicability4 tiers: Prohibited, High-risk, Limited-risk, Minimal-risk€35M or 7% turnover (prohibited); €15M or 3% (other)EU AI Office (GPAI); National Competent Authorities (high-risk)
U.S. FederalNo comprehensive AI law; FTC Act § 5; OMB M-25-21/M-25-22 for agenciesIn effect (FTC / OMB / sector guidance)None (sector-based)FTC: up to $50,120/day per violationFTC; OMB; FDA (healthcare); OCC/FRB (finance); EEOC (employment)
ColoradoAI Act (SB 24-205)February 1, 20262 tiers: High-risk, Non-high-riskAG enforcement; no fine cap specifiedColorado AG
CaliforniaAB 2930 (automated decisions)2026No formal tiersNot yet specifiedCalifornia AG
New York CityLocal Law 144 (2023)In effect (Jan 1, 2023)Hiring AI onlyCivil penalty (limited)NYC DCWP
ChinaGenerative AI Measures (2023); 2026 overhaulAugust 15, 2023; expanded Jan 1, 2026N/A (content-based, not risk-tiered)Administrative fines; license revocationCAC
UKPrinciples-based; no single statuteExisting law applies nowN/A (sector-based)Sector-specific (FCA, ICO, MHRA)CMA, ICO, FCA, MHRA, AISI

Quiz

See: artifacts/quizzes/quiz-01k.md

Test your knowledge

Ready to check what stuck?

10 questions — cases, statutes, and the practical move for each. Takes 5 minutes.

Take the quiz now →