
Anthropic built its brand on one idea: we are the responsible AI company. Constitutional AI. Careful deployment. The adults in the room. On Friday, April 3, the adults filed paperwork with the Federal Election Commission to launch a political action committee called AnthroPAC. The company that wrote papers about AI alignment is now aligning campaign donations.
I participate in AI safety cohorts. I test frontier models under NDA before they ship. I spend time with researchers and engineers who take alignment seriously as a technical problem, not a marketing position. The reaction to AnthroPAC among those people has been visceral. Not because PACs are unusual; Google, Microsoft, and Amazon all have them. It is because Anthropic was supposed to be different. The company whose CEO warns that "we are considerably closer to real danger in 2026 than we were in 2023" is now spending money to influence which politicians regulate that danger. The tension between those two positions is not subtle, and nobody I talk to is pretending it does not exist.
What AnthroPAC Actually Is
AnthroPAC is a traditional corporate PAC, funded by voluntary employee contributions capped at $5,000 per person per year. Allison Rossi, Anthropic’s treasurer, signed the filing from the company’s San Francisco headquarters. A bipartisan board will decide which House and Senate candidates receive money, screening them for relevance to AI policy. All donations get reported through FEC filings.
This is different from a super PAC in a way that matters. Super PACs accept unlimited money but cannot give directly to campaigns. AnthroPAC can write checks to candidates but only uses employee money. The practical effect: Anthropic employees voluntarily donate small amounts to a fund that backs politicians who will write the rules governing AI. In theory, bipartisan. In practice, 82% of Anthropic employee donations since 2020 have gone to Democrats. Early Anthropic investor Dustin Moskovitz has donated $110 million to political causes, nearly all of it to the left. Anthropic board member Reed Hastings sent $20 million to Democrats, including $7 million to a pro-Harris super PAC.
The “bipartisan” framing faces an immediate credibility problem.
The Pentagon Fight That Explains the Timing
AnthroPAC arrives during a legal war between Anthropic and the Trump administration. The dispute started when the Pentagon wanted to use Claude without the ethical guardrails Anthropic insisted on. Anthropic pushed back. In February, War Secretary Pete Hegseth labeled Anthropic a “supply chain risk.” President Trump ordered federal agencies to stop using the company’s products. Anthropic filed two lawsuits.
A federal judge in California blocked the Pentagon from taking punitive actions against Anthropic last week, finding the government’s response likely violated the company’s First Amendment and due process rights. The Department of Justice filed an intent to appeal on Thursday. A second lawsuit is still pending.
The substance of the dispute is worth understanding because it is the best argument for AnthroPAC’s existence. Anthropic wanted contractual language requiring that Claude’s use in military contexts follow the company’s Acceptable Use Policy. The Pentagon wanted unrestricted access. That disagreement escalated from a contract negotiation to a “supply chain risk” designation to an executive order to two federal lawsuits in less than two months. Anthropic’s position, that an AI company should have a say in how its models are deployed by the government, is a genuine safety principle. It is also a business liability that requires political protection. AnthroPAC exists at the intersection of both.
Against that backdrop, AnthroPAC reads differently than a routine corporate PAC filing. Anthropic has a concrete, active reason to want allies in Congress. The company that refused to let the military use Claude without guardrails now needs legislators who will protect its right to set those guardrails. That is a defensible position. It is also a political position, and the leap from “we build safe AI” to “we fund campaigns” crossed a line that some in the safety community thought Anthropic would not cross.
The $300 Million Context
AnthroPAC does not exist in isolation. AI companies have poured more than $300 million into the 2026 midterm elections. Leading the Future, backed by OpenAI’s Greg Brockman and Andreessen Horowitz, raised $125 million. Anthropic separately donated $20 million to Public First Action, a bipartisan advocacy group focused on AI safeguards. The crypto sector’s 2024 spending was the closest prior comparison, and AI is already exceeding it.
What are they buying? Access to the committees that matter: Senate Commerce, House Energy and Commerce. These are the committees drafting liability frameworks, export controls on chips, copyright rules for training data, and immigration policy for AI talent. Every major AI company wants legislators who understand the technology and will not reflexively vote for restrictions. The $300 million is the cost of ensuring that the people writing AI law have heard the industry’s version of the story before they write it.
The regulatory pressure is real. Seventy-eight chatbot safety bills are alive in 27 states right now. Tennessee just signed a law prohibiting AI systems from representing themselves as mental health professionals. New York’s RAISE Act targets frontier models using more than 10^26 FLOPs of compute. California’s SB 53 requires safety documentation and whistleblower protections. The EU AI Act is moving from draft to enforcement posture. For a company like Anthropic that trains frontier models, these bills directly constrain what it can ship and how. A PAC that backs sympathetic legislators on those committees is a direct line of defense against regulation that could slow product launches.
Engineers I work with are watching this with a mix of resignation and alarm. Resignation because the political spending was always coming once AI revenue hit this scale. Alarm because the speed of escalation suggests the industry is less confident than it claims about surviving regulation on the merits of its technology alone. If your product is clearly beneficial, you do not need $300 million in political influence. You need customers who tell their legislators how much they depend on it. The spending says the industry does not trust its own customers to make that case.
What the Safety Community Actually Thinks
I will be direct about what I hear in conversations that do not happen on the record. The people doing alignment work, testing models before release, and participating in red-team evaluations are not surprised that Anthropic formed a PAC. They are processing what it means for the credibility of the safety argument itself.
The concern is specific and worth spelling out. AI safety already has a sycophancy problem. Models tell users what they want to hear. If the companies building those models are simultaneously funding the politicians who regulate them, the “safety-first” framing starts to look like a brand strategy rather than a technical commitment. Anthropic’s Dario Amodei wrote an essay in 2025 warning about existential risks from AI. Anthropic’s PAC is now spending money to influence the politicians who decide how seriously to take those warnings. Both things can be true simultaneously. But the appearance of conflict is enough to erode trust, and trust is the only asset a safety-focused company cannot buy back once it is gone.
I have sat in rooms where alignment researchers discussed whether Anthropic’s safety work was genuine or strategic positioning. Before AnthroPAC, the consensus leaned genuine. After AnthroPAC, the question reopened. That shift matters more than any individual campaign contribution, because the people doing the hardest technical work on making AI safe need to believe the companies deploying their research are acting in good faith. If that belief erodes, the talent pipeline from safety research into industry dries up. And then the companies lose the thing that made them credible in the first place.
The CFR piece published on April 1 noted that there are roughly 1,100 AI safety researchers worldwide. AI companies are spending $300 million on midterm elections. That ratio tells you where the resources are going. The research community is underfunded. The lobbying apparatus is not.
Where This Goes
The midterms will test whether AnthroPAC actually donates to both parties or gravitates toward Democrats, which is where 99.8% of Anthropic-affiliated political spending (a broader measure than employee donations alone) has gone since 2020. FEC filings are public. The donations will be visible. If the bipartisan framing turns out to be cover for partisan spending, the credibility cost will be immediate and permanent.
For Anthropic specifically, the calculus is clear. The company is acquiring biotech startups for $400 million, restructuring its pricing model, fighting the Pentagon in court, and preparing for a possible IPO. AnthroPAC is one more tool in an expanding political toolkit. The question the safety community keeps coming back to is whether a company can simultaneously build the world’s most capable AI, lobby the government to regulate it gently, and remain a credible voice on the risks that regulation is supposed to address.
That question is not academic. It determines whether the safety argument retains credibility with the public, with legislators, and with the researchers doing the actual technical work on alignment. If the answer is “companies cannot hold both positions without losing trust,” then the entire model of industry-led AI safety collapses. External, independent safety evaluation, the kind METR and ARC Evals do, becomes the only credible option. If the answer is “of course companies lobby while also doing safety work, that is how every regulated industry operates,” then Anthropic is simply growing up.
I do not have an answer to that question. The people I work with on alignment do not have one either. But the fact that we are asking it about Anthropic, the company that was supposed to make asking it unnecessary, tells you something real about where the AI industry landed in April 2026.