We’re having the wrong debate about AI regulation.
The fight between Washington and the states is really just political theater, distracting from the actual problem: regulation can never keep pace with AI development, no matter who’s writing the rules. Whether it’s federal law or California’s latest mandate, by the time a regulation is written and enforced, the technology has already moved three steps ahead.
AI doesn’t respect state lines.

The Impossibility of 50 Different Rules
Imagine an AI company forced to comply with 50 different regulatory frameworks simultaneously. It either builds products compliant with the strictest state (turning California or New York into the de facto global standard anyway), or it gets creative with workarounds: VPNs, feature restrictions, geographic blocking. Citizens in stricter states get frustrated, demand access, and the rules get relaxed or circumvented entirely.
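To see what that fork looks like in practice, here is a minimal sketch in Python. The state entries, feature names, and thresholds are invented purely for illustration, not drawn from any real statute; the two functions are the two options above: branch the product per jurisdiction, or collapse to the strictest rule everywhere.

```python
# A minimal sketch of per-state feature gating. The states, features,
# and rule values below are hypothetical, invented for illustration.

STATE_RULES = {
    "CA": {"allow_synthetic_media": False, "max_autonomy_level": 2},
    "NY": {"allow_synthetic_media": False, "max_autonomy_level": 3},
    "TX": {"allow_synthetic_media": True, "max_autonomy_level": 4},
    # ...one entry per state, each drifting as its legislature acts
}

DEFAULT_RULES = {"allow_synthetic_media": True, "max_autonomy_level": 5}


def features_for(state: str) -> dict:
    """Option one: branch the product per jurisdiction (geographic gating)."""
    return STATE_RULES.get(state, DEFAULT_RULES)


def strictest_common_rules() -> dict:
    """Option two: ship one product that satisfies every state at once,
    meaning the strictest value wins everywhere."""
    rules = dict(DEFAULT_RULES)
    for state_rules in STATE_RULES.values():
        rules["allow_synthetic_media"] = (
            rules["allow_synthetic_media"] and state_rules["allow_synthetic_media"]
        )
        rules["max_autonomy_level"] = min(
            rules["max_autonomy_level"], state_rules["max_autonomy_level"]
        )
    return rules


if __name__ == "__main__":
    print(features_for("TX"))          # per-state branching
    print(strictest_common_rules())    # strictest state wins everywhere
```

Both shapes of this code grow with every legislative session, and the second one is just the de facto national standard by another name.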
The dynamic isn’t hypothetical. We’ve seen it before with automotive emissions standards: California set stricter rules than the federal baseline, and rather than maintain separate product lines, manufacturers built cars that could sell everywhere, because meeting California’s standard was easier than managing a patchwork. The strictest became the standard.
AI needs the same approach. One federal baseline. Not because federal regulators are smarter, but because fragmenting AI regulation across state lines either doesn’t work or quietly crowns the strictest state as the de facto standard anyway. Why endure the chaos just to land in the same place?
The Regulation Speed Problem Is Real
Regulators rarely have the technical expertise to understand what they’re regulating, and by the time they learn a technology well enough to write rules for it, the technology has already evolved. The speed gap between quarterly AI model releases and the years-long process of drafting, debating, and implementing law is widening, not closing.
This doesn’t mean we shouldn’t regulate. It means we need to be realistic about what regulation can actually do.
What Actually Works: Broad Principles and Public Accountability
Federal law should establish broad principles, not granular rules that will be obsolete in 18 months.
Prohibit clear harms: AI systems may not be deliberately designed to harm humans, and companies must actively patch known vulnerabilities and security gaps. If a tool is hacked and weaponized, that’s the attacker’s responsibility; but if a company knew about an exploitable weakness and ignored it, the company is liable.
There’s a difference between a tool that can be misused and one that’s designed to be misused. Someone building an open-source AI from scratch specifically to bypass safety guidelines bears full responsibility for what they create.
But here’s where public opinion becomes the interim guardrail while regulation catches up: the court of public opinion moves faster than courts of law. When people understand why certain guardrails exist, not from legal jargon but from clear ethical principles they can grasp, they hold companies accountable before regulators can.
Enforcement Through Consequences
Enforcement doesn’t need to come from regulatory agencies writing rules faster. It comes from consequences.
Class action lawsuits when companies knowingly ship unsafe systems. Federal prosecution when someone creates an AI specifically designed to cause harm. High-profile examples that send a message: building dangerously carries real costs.
These consequences ripple. Companies watch what happened to the first player who cut corners. They fund safety research. They patch vulnerabilities proactively. Not because a regulation told them to, but because the alternative is expensive and public.
And globally? The US market is too big to ignore. If American companies have to meet a federal standard to operate here, international companies will too. Either the federal baseline becomes the global standard, or foreign companies build feature-limited versions for other regions; either way, it beats a fragmented mess of 50 different rulebooks.
The Real Regulation We Need
We don’t need faster-moving regulators trying to keep pace with AI. We need clear ethical principles that most people agree on, federal enforcement mechanisms that make examples of bad actors, and a public that understands what’s at stake.
Regulation that works isn’t about being comprehensive. It’s about being consistent, consequential, and actually enforceable.