The Federal AI Policy Framework: An Improvement, But My Offer Is (Still Almost) Nothing
TL;DR
The framework’s real center of gravity is federal preemption, not AI governance — Zvi Mowshowitz argues the White House’s seven-part plan mostly exists to block state laws like California’s SB 53 and the RAISE Act while offering almost no substitute regulation at the federal level.
The standout bright spot is unusually strong free-speech language aimed at the federal government itself — he’s "especially heartened" by provisions saying Congress should stop agencies from coercing AI platforms to alter lawful content and give Americans a way to seek redress.
On frontier and existential AI risk, the proposal is basically silence — beyond saying national-security agencies should understand frontier models, the framework includes no transparency requirements and no real mechanism for catastrophic-risk oversight.
Most of the six headline objectives are, in his words, applause lights plus a few modest asks — child-safety ideas like age assurance for minors, data-center permitting, and some small-business or education support are treated as fine in principle but thin on implementation.
The framework repeatedly punts hard legal questions to courts or existing law — on copyright, likeness rights, and broader AI regulation, it prefers ambiguity and sector-specific agencies over a new federal AI regulator, which Zvi calls an "unframework."
His bottom line is simple: the offer is “still almost nothing” — if Congress wants to preempt states, he says it needs at minimum an exception for frontier-risk laws or a serious federal replacement, not a “minimally burdensome” national standard that in practice means doing nothing.
The Breakdown
Four pages, finally — but mostly four pages of vibes
Zvi opens with a mix of relief and exasperation: yes, the federal AI policy framework exists, and yes, that is technically progress. But it’s just a four-page outline that mostly repeats prior talking points, with the biggest practical shift being an acknowledgment that actual policy should come through Congress instead of stapling an AI-state-law moratorium onto unrelated child-safety measures.
The surprisingly excellent free-speech section
The biggest positive surprise for him is the section on censorship and free speech, especially because it targets federal coercion of platforms rather than vague cultural complaints. He calls it “badly needed and most welcome,” while also pointing out the obvious irony: the executive branch is currently doing the kind of pressure campaign the framework says should be banned.
Child safety gets the most concrete proposal: age assurance
In the first section, he sees one real policy ask hiding inside a lot of generic applause lines: commercially reasonable age assurance for AI services likely to be used by minors. He says that’s acceptable if it really means AI-enabled detection rather than burdensome ID-style verification and if smaller platforms are protected from excessive compliance costs.
Infrastructure, scams, and national-security skilling up
On strengthening communities, Zvi is mostly fine with the practical pieces: no electricity-cost spikes from data centers, faster permitting, anti-fraud enforcement, and AI help for small businesses. The frontier-model line here matters more symbolically than substantively: agencies should understand model capabilities and national-security implications, but he plainly does not trust current officials — especially what he calls the Department of War, or “DoW” — to use that capacity to reduce risk rather than worsen it.
Copyright, creators, and a lot of strategic ambiguity
The intellectual-property section frustrates him because it keeps saying “consider” and “let courts handle it” instead of clarifying anything. He likes the idea of licensing frameworks or collective bargaining systems and supports protections for voice and likeness, but his read is that the document is mostly trying to head off tougher proposals like Senator Marsha Blackburn’s more creator-friendly approach without actually settling the law.
The ‘unframework’: innovation through not regulating AI
The innovation section is where he says the quiet part out loud: no new federal AI regulator, rely on existing sector-specific agencies, use sandboxes, and let industry-led standards do the work. For Zvi, that amounts to an "unframework" — a decision to avoid choosing policy for the AI era, especially on existential risk, and instead hope old institutions and courts somehow absorb the shock.
The real point: preempt state laws and leave almost everything else untouched
After briefly dismissing the workforce section as mostly low-value programming and possible pork, he lands on the framework’s true purpose: section seven, federal preemption. States would still be allowed to enforce general laws, zoning, and rules for their own procurement, but not to regulate AI development or meaningfully constrain AI use in ways Washington thinks burden “American AI dominance.”
‘Their offer is nothing’
His harshest line comes in the final stretch: the framework bars states from acting while refusing to build serious federal guardrails, including transparency rules or liability standards for developers whose models are used unlawfully by third parties. He notes that supporters of a broad moratorium, like Dean Ball, Neil Chilson, David Sacks, and Mike Johnson, are predictably lining up behind it — and his conclusion is unchanged: not quite nothing, but close enough that he still couldn’t support it as written.