
OpenAI vs Anthropic: An Ethical Breakdown

openai vs anthropic · chatgpt vs claude · ai ethics
April 28, 2026

Both AI labs took the Pentagon's money. It went down in two very different ways.

Anthropic and OpenAI now hold defence contracts with the United States Department of Defense, each reported to be worth around $200 million. Each puts a frontier AI model, Claude or ChatGPT, onto classified networks for mission planning, intelligence work and cyber operations. The two contracts were signed within months of each other.

Why, then, did one of those companies spend the spring of 2026 fighting the Pentagon in court — and the other lobbying for its own ethics framework to become the industry standard?

The popular answer is downstream of branding. Anthropic is read as the safety lab; OpenAI is read as the speed-run. Both companies have invested in that reading. Both companies benefit from it commercially.

The public record tells a more specific story. In February 2026, the Department of Defense asked Anthropic to remove the autonomous-weapons restriction from Claude. Dario Amodei, the company's chief executive, refused. In March, the Department designated Anthropic a "national security supply chain risk" and ordered federal agencies to stop using the product. Anthropic filed a legal challenge. The same week, OpenAI signed its own classified DoD contract — and then publicly opposed the Anthropic designation, proposing that its own contractually-negotiated ethics framework be adopted across the industry.

One company drew a line at the product level and was punished for it. The other negotiated lines into the body of a commercial contract, took the contract, and lobbied against its competitor's lines.

This note documents that difference, and the four others that sit alongside it.

Mashinii scored OpenAI and Anthropic on eleven ethical dimensions against the public record — court filings, regulatory enforcement, peer-reviewed research, investigative reporting and NGO documentation. There are no survey responses in the methodology, no corporate self-disclosure feed, no paid placements. The full per-company profiles are at /score/OPENAI.P and /score/ANTHROPIC.P.

Headline result. OpenAI averages -17 across dimensions where evidence exists. Anthropic averages 0. Neither is positive. The comparison is between two compromised profiles, not between an ethical lab and an unethical one, and the gap concentrates in five specific places.


1. Defence-contract conduct

OpenAI maintained a blanket prohibition on military and warfare applications until 2024. By February 2026 the company had reversed the policy and signed a classified DoD contract worth up to $200 million. Its models are now deployed on classified networks with cleared personnel in the loop. The contract contains negotiated guardrails against fully autonomous weapons and high-stakes automated decision-making — but those guardrails sit inside the commercial agreement, not at the product level. They are a contract term, subject to renegotiation.

Anthropic's Responsible Scaling Policy codifies two product-level prohibitions: no fully autonomous weapons, and no mass domestic surveillance. When the Pentagon asked for the autonomous-weapons restriction to come out, the company refused. The Department's "national security supply chain risk" designation followed. Anthropic also terminated commercial access for firms linked to the Chinese military, forgoing what the company described as "several hundred million dollars" in revenue.

This is the difference the scores capture. On No War, No Weapons, OpenAI lands at -20 and Anthropic at 0. The gap is not whether you take defence money. The gap is whether the lines you negotiate hold under pressure.


2. Copyright and settlement exposure

In September 2025, Anthropic settled a class-action lawsuit for $1.5 billion. The complaint alleged the company had trained Claude on millions of pirated books sourced from unauthorised libraries. It is among the largest civil settlements in the technology industry's history. Anthropic resolved it in a single payment.

OpenAI has paid no comparable settlement to date. Its litigation surface is broader — more than twenty-five active lawsuits, including a centralised multi-district copyright proceeding and a $10 billion claim from the California Newspapers Partnership. Recent judgments have either favoured the company or yielded only nominal damages.

These are different shapes of the same risk. Anthropic absorbed a single concentrated hit and closed it. OpenAI carries an open litigation pipeline whose aggregate exposure is still being priced.

On Honest & Fair Business, Anthropic scores -20 and OpenAI -10. The OpenAI score reflects the evidence to date and may understate the eventual exposure, given the open caseload.


3. Whistleblower governance

In July 2024, current and former OpenAI employees filed a complaint with the Securities and Exchange Commission. The complaint alleged that the company's employment contracts contained non-disparagement clauses that did not exempt securities violations, required company consent before employees could disclose information to federal authorities, and forced waivers of whistleblower compensation owed under federal law. If enforced, those provisions would constitute systematic suppression of internal reporting.

Anthropic operates anonymous whistleblower channels under its Responsible Scaling Policy with escalation paths to the board. The company is structured as a Public Benefit Corporation governed by a Long-Term Benefit Trust — an arrangement designed to insulate board decisions from ordinary shareholder pressure.

Two governance structures, running in opposite directions. One has been documented restricting internal reporting. The other has been documented protecting it.


4. The labour record in the data-labelling supply chain

OpenAI contracted Kenyan workers, via the firm Sama, to label graphic and traumatic content for ChatGPT training. Workers were paid between $1.32 and $2.00 an hour. The local benchmark for receptionist work in Nairobi was $1.52. Workers reported being unable to save money or meet basic household needs. Time first published the findings in early 2023 and they have been corroborated repeatedly since.

Sama publicly framed the rates as aligned with its "living wage" mission. The workers and the reporters interviewing them disagreed.

No comparable record exists in the public domain for Anthropic. We do not read that absence as exoneration; it is absence of evidence, not evidence of compliance. But it is, for the moment, the actual signal.

OpenAI scores -10 on Fair Pay & Worker Respect; Anthropic 0.


5. Environmental disclosure

Neither company publishes Scope 1, 2 or 3 emissions. Neither has Science Based Targets validation. Both rely on broad sustainability statements and on the climate commitments of their cloud-infrastructure providers.

OpenAI's environmental profile is the worse of the two, and the difference is documented in primary sources. Its Stargate data-centre programme includes building "behind the meter" natural-gas generation alongside solar — a direct fossil-fuel investment to secure compute capacity. The company has lobbied for secrecy provisions to classify energy and water usage at its data centres as commercially confidential. Researchers have inferred water consumption per query (around 0.32 millilitres) because OpenAI does not publish it.

Anthropic claims to conduct annual carbon-footprint analyses and to purchase offsets for what it calls "net-zero climate impact" — without disclosing methodology, verifier or offset standard. There is no formal net-zero target year and no SBTi validation.

On Planet-Friendly Business, OpenAI scores -40 and Anthropic -20. Neither is acceptable; one is materially worse.


Where Anthropic genuinely leads

Across the eleven dimensions in our framework, Anthropic posts a positive score on only one: Safe & Smart Tech, at +40, the highest score either company received.

The substance is procedure, not branding. Anthropic's Responsible Scaling Policy, now in version 3.0, mandates tiered safety levels from ASL-2 to ASL-4 and assigns oversight to a Responsible Scaling Officer reporting to the independent Long-Term Benefit Trust. The company publishes a Risk Report every three to six months and a System Card for every model deployment. It runs ASL-3 security standards including egress-bandwidth controls and isolated cloud execution environments. Claude is evaluated by both the United States and United Kingdom AI Safety Institutes. Anthropic holds ISO/IEC 42001:2023 certification, the international standard for AI management systems. Published evaluations report a 99.78 per cent harmlessness rate across multi-language testing.

OpenAI carries the same baseline security certifications — SOC 2 Type 2, ISO 27001, ISO 42001 — and operates a multi-tier bug bounty programme, red-teaming exercises and a "Preparedness Framework" for high-capability model risk. The Italian Data Protection Authority, however, fined OpenAI €15 million in late 2024 for processing personal data without an adequate legal basis and failing to implement effective age verification for minors. ChatGPT was temporarily banned in Italy in 2023.

Both companies invest substantively in safety. Anthropic's investment is more codified, more externally audited, and more publicly disclosed. That is what +40 versus 0 reflects.


So, is OpenAI more ethical than Anthropic?

On Mashinii's framework — the same framework applied to more than 6,000 public companies — Anthropic averages 0 across substantive dimensions and OpenAI averages -17. By the data, Anthropic is the less compromised of the two. The qualifier matters.

Both companies underwrite supply chains and product decisions that score negative on environmental disclosure and copyright sourcing, and both have placed frontier models into dual-use military deployment. Only one of them has a documented labour-exploitation case in the public record: OpenAI. Both are routinely litigated; only one settled a single matter at $1.5 billion: Anthropic.

The shorthand "Anthropic is the safer, more ethical lab" is partially correct. It is correct on AI safety governance, where Anthropic's procedural lead is real. It is correct on labour, environmental disclosure and the willingness to forgo military revenue, where the gap is documented. It is not correct as a general statement — both companies are net negative.

The honest reading is that Anthropic is the less compromised of two compromised companies. Neither is an ESG portfolio's friend.


Aggregate scoring

Better Health, Fair Money, Kind to Animals and Zero Waste are scored not applicable for both companies — neither manufactures physical products, lends money or operates in healthcare. Averages are calculated only over dimensions where evidence exists; averaging across not-applicable dimensions would dilute the signal.
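
For concreteness, here is a minimal sketch of that aggregation rule in Python. The dimension names and individual scores are the ones quoted in this note; the aggregate function and the use of None for not-applicable dimensions are our illustration, not Mashinii's published code. Two scored dimensions are not itemised in this note, so the toy averages sit near, not exactly on, the published headline figures.

```python
from statistics import mean

# Per-dimension scores quoted in this note. None marks a dimension
# scored "not applicable"; two further scored dimensions from the
# eleven-value framework are not itemised here.
scores = {
    "OpenAI": {
        "No War, No Weapons": -20,
        "Honest & Fair Business": -10,
        "Fair Pay & Worker Respect": -10,
        "Planet-Friendly Business": -40,
        "Safe & Smart Tech": 0,
        "Better Health": None,
        "Fair Money": None,
        "Kind to Animals": None,
        "Zero Waste": None,
    },
    "Anthropic": {
        "No War, No Weapons": 0,
        "Honest & Fair Business": -20,
        "Fair Pay & Worker Respect": 0,
        "Planet-Friendly Business": -20,
        "Safe & Smart Tech": 40,
        "Better Health": None,
        "Fair Money": None,
        "Kind to Animals": None,
        "Zero Waste": None,
    },
}

def aggregate(profile: dict[str, int | None]) -> float:
    """Average only the dimensions where evidence exists."""
    scored = [s for s in profile.values() if s is not None]
    return mean(scored)

for company, profile in scores.items():
    print(f"{company}: {aggregate(profile):+.0f}")
# OpenAI: -16    (published headline over the full scored set: -17)
# Anthropic: +0  (published headline: 0)
```

The design point is the filter, not the arithmetic: treating not-applicable dimensions as zeros would pull every profile toward neutral and reward companies for simply not operating in a sector.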


The ESG coverage gap

None of this appears in your ESG screen. Conventional ratings — MSCI, Sustainalytics, As You Sow — do not cover OpenAI or Anthropic. Neither is publicly listed; both sit outside the data feeds that ESG investors and advisors rely on for the rest of their universe.

The exposure, however, is real. Private AI laboratories are now among the largest single allocations of strategic and venture capital in portfolios that are otherwise screened on ESG criteria. The rating is missing.

Mashinii covers both, on the same eleven-value framework, with the same evidence base, applied across public and private companies. The same logic now extends to xAI, Stripe, SpaceX, ByteDance, Shein and Anduril — the next six private companies in our coverage.

The two best-known AI labs both took the Pentagon's money. They did not take it the same way. If your portfolio holds them — directly, indirectly, or in a fund that does — that distinction is no longer cosmetic. It is the entire point of integrity scoring.


References and further reading