
AI Policy — March 27, 2026
The White House AI Framework Says Exactly What Big Tech Wanted to Hear.
Four pillars. One clear message: no new federal regulator, preempt state AI laws, let courts decide copyright. Here is what the framework actually says, what it deliberately avoids, and what it means for builders and publishers.
Sources: White House National AI Policy Framework (March 20, 2026); Electronic Frontier Foundation analysis; CDT policy brief.
The White House released its AI framework in early 2026 with language designed to sound like regulation while functioning as a permission structure. The framework establishes “voluntary commitments” for AI companies, recommends “risk-based approaches” to AI governance, and calls for “responsible innovation” without defining what any of those terms mean in enforceable language. The five companies that control AI infrastructure (OpenAI, Google DeepMind, Anthropic, Meta, Microsoft) got exactly what they wanted: the appearance of governance without the constraint of regulation.
The framework’s structure reveals its priorities. It addresses AI safety (in the context of frontier models), AI competition (without restricting consolidation), AI workforce impacts (without mandating protections), and AI in government (with the most specific and actionable provisions). The provisions that apply to the government itself are detailed and enforceable. The provisions that apply to the private sector are suggestions. This asymmetry is not accidental. It reflects the political reality that the White House can direct federal agencies but cannot regulate private companies without Congressional legislation.
What the Framework Actually Says
The framework has four pillars. The first pillar, safety and security, calls for frontier model developers to conduct pre-deployment safety testing, report safety incidents to the government, and implement safeguards against misuse. These are the same voluntary commitments that OpenAI, Google, Anthropic, Meta, and Microsoft already made in July 2023. The framework codifies existing voluntary behavior without adding enforcement mechanisms. There is no penalty for non-compliance because there is no compliance requirement.
The second pillar, innovation and competition, calls for maintaining open access to AI resources, supporting open-source AI development, and preventing anti-competitive practices. This pillar directly contradicts the consolidation happening in the market: five companies control the foundation model layer, three companies control the cloud compute layer, and one company (NVIDIA) controls the GPU hardware layer. The framework acknowledges the concentration risk without proposing structural remedies.
The third pillar, worker protections, acknowledges that AI will displace workers and calls for retraining programs, transparent disclosure of AI use in hiring and management, and protections against AI-driven surveillance in the workplace. The framework was released during a period of accelerating AI-driven workforce reductions across the technology sector: Atlassian alone cut 1,600 jobs in early 2026 and restructured its leadership to prioritize AI. Yet the worker protection provisions are non-binding recommendations.
The fourth pillar, government use of AI, contains the most specific provisions. Federal agencies must inventory their AI systems, conduct impact assessments, ensure transparency in AI-assisted decisions affecting the public, and establish oversight mechanisms. These provisions are enforceable because they apply to the executive branch, which the White House controls directly through executive orders.
The Comparison That Matters
The EU AI Act, which entered enforcement in 2025, classifies AI systems into risk categories (unacceptable, high, limited, minimal) and imposes mandatory requirements on each. High-risk AI systems (used in hiring, credit scoring, medical devices, law enforcement) must meet specific accuracy, transparency, and oversight requirements before deployment. Non-compliance carries fines of up to 7% of global annual revenue. The EU approach regulates AI applications. The U.S. approach does not regulate anything.
The practical consequence: AI companies that operate globally must comply with the EU AI Act regardless of the U.S. framework. The U.S. framework provides no additional protection for American citizens beyond what EU law already requires of companies operating in European markets. The companies that lobbied for a permissive U.S. framework are already complying with stricter EU requirements for their European users. The gap in protection is borne entirely by American users who interact with AI systems that have no mandatory safety, accuracy, or transparency requirements under U.S. law.
Why the Framework Exists at All
The framework serves a political function, not a regulatory one. It allows the administration to claim it has addressed AI governance without alienating the technology companies that fund campaigns, employ voters, and drive stock market performance. It provides a reference document for federal agencies that need guidance on AI procurement and deployment. It establishes vocabulary and categories that future legislation can build on, if Congress ever acts.
The likelihood of binding AI legislation from Congress in 2026 is low. The technology sector spent over $100 million on AI-related lobbying in 2025 (OpenSecrets data). Congress remains divided on whether AI regulation should focus on safety (a Democratic priority), competition (a bipartisan but vague concern), or avoiding rules that hamper innovation (a Republican priority). The framework splits the difference by doing nothing enforceable while sounding comprehensive.
For the AI industry, the framework is a green light. Build what you want. Deploy how you want. If something goes wrong, there is no federal enforcement mechanism. For the public, the framework is a press release dressed as policy. The protections it describes do not exist as enforceable rights. The gap between what the framework says and what it does is the gap between marketing and governance. In 2026, that gap is the entire width of U.S. AI policy.
The most telling detail is what the framework omits. It does not mention the DOJ antitrust case against Google. It does not mention the FTC’s investigations into AI company practices. It does not mention the pending lawsuits from artists, writers, and publishers against AI companies for training on copyrighted material without permission. It does not mention the concentration of compute resources in three cloud providers and one hardware company. These are the structural issues that determine who benefits from AI and who bears the costs. The framework addresses none of them. A governance document that ignores the power structure it is supposed to govern is not a governance document. It is an endorsement of the status quo.
The European approach is not perfect. The EU AI Act has been criticized for being too prescriptive, too slow, and potentially stifling innovation. But it is a law with penalties. The U.S. framework is a suggestion with no penalties. When the next major AI incident occurs (a deepfake that influences an election, an autonomous system that causes physical harm, a model that leaks private data at scale), the U.S. will discover that voluntary commitments are worth exactly what they cost to enforce: nothing.
Sources: White House AI framework (full text); EU AI Act enforcement timeline; OpenSecrets (AI lobbying expenditures, 2025); Atlassian workforce reduction announcement (March 2026); NVIDIA market position data; Congressional Research Service (AI legislation tracker); Brookings Institution (AI governance analysis).