AI Stocks and Ethics: How 8 Companies Score
Every major AI company publishes responsible AI principles. Every major AI company also holds military contracts. The gap between the two is the story of this analysis.
In February 2025, Google rewrote its AI principles to remove its longstanding pledge not to pursue AI for weapons or surveillance that violates international norms. When employees protested the company's $1.2 billion Project Nimbus contract with the Israeli military in April 2024, Google fired 28 of them. That is one company. The pattern extends across the sector.
Mashinii scores over 6,000 public companies across 11 ethical dimensions using documented regulatory actions, court records, and investigative reporting. We examined the eight largest AI-exposed stocks. Every single one scores negative on No War, No Weapons. Most score negative on Safe & Smart Tech.
AI Companies: Scores Across Five Key Dimensions
| Company | No War, No Weapons | Safe & Smart Tech | Honest & Fair Business | Fair Pay & Worker Respect | Planet-Friendly Business |
|---|---|---|---|---|---|
| Alphabet (Google) | -80 | -70 | -30 | -40 | -30 |
| Palantir | -80 | +20 | -20 | +10 | -20 |
| NVIDIA | -70 | -50 | +20 | +20 | -30 |
| Meta | -50 | -70 | -60 | -30 | -10 |
| Microsoft | -50 | -40 | -20 | +30 | -30 |
| Salesforce | -50 | +30 | +40 | -20 | +10 |
| Amazon | -40 | -50 | -50 | -50 | -30 |
| Apple | -30 | +10 | -20 | 0 | -40 |
Scores range from -100 (worst) to +100 (best). Source: Mashinii integrity intelligence platform.
Published Principles vs. Documented Actions
Before examining each dimension, consider the gap between what these companies say and what the record shows.
Alphabet published AI principles in 2018 pledging not to develop AI for weapons or "technologies that cause or are likely to cause overall harm." In February 2025, it removed those commitments. It holds a $1.2 billion contract providing AI-based facial recognition and object tracking to the Israeli military through Project Nimbus.
Palantir states its software is designed to "protect civil liberties and safeguard privacy." It derives 55% of its revenue from government clients -- a substantial portion from defence and intelligence agencies -- and holds a $178 million contract for AI-enabled military trucks with the U.S. Army. Storebrand Asset Management divested its Palantir holdings over concerns its work for Israel might violate international humanitarian law.
NVIDIA commits to "trustworthy AI" in its corporate communications. Its Jetson microcomputers were found in Russian military drones in May 2024. Chinese military procurement records show attempts to acquire NVIDIA chips for weapons systems. The Justice Department accused Chinese nationals of illegally shipping tens of millions of dollars' worth of NVIDIA H100 chips to China for military use.
AI and the Military: How 8 Companies Score
Of the eight companies analysed, not one scores positive on No War, No Weapons. The scores range from -30 (Apple) to -80 (Alphabet and Palantir).
Alphabet (-80) shares in the $9 billion Joint Warfighting Cloud Capability contract with the Pentagon alongside its Project Nimbus work. The revision of its AI principles formalised what contract records had already shown.
Palantir (-80) has a strategic partnership with the Israeli Defense Ministry in addition to its U.S. military work. Its business model is built around government intelligence and defence applications.
NVIDIA (-70) also holds a partnership with MITRE for an AI supercomputer serving U.S. federal defence operations. Despite U.S. export controls, its most advanced chips have repeatedly reached sanctioned military end-users.
Microsoft (-50) derives an estimated 11.9% of annual revenue from defence contracts, including a $22 billion deal for HoloLens augmented reality visors for the U.S. Army. OpenAI, in which Microsoft is the largest investor, changed its terms of use to permit "national security use cases," removing a previous ban on military applications. Microsoft's own Responsible AI Transparency Report does not mention its military contracts.
Meta (-50) granted approval for U.S. national security agencies to use its Llama AI models for military applications and partnered with Anduril Industries for battlefield intelligence devices. Meta explicitly states it "will not have a say in how US agencies or its partners use the Llama technology." For more on Meta's broader record, see our boycott analysis.
Even Apple (-30), the highest-scoring company on this dimension, faces criminal complaints from the Democratic Republic of Congo alleging its supply chain uses conflict minerals.
For a deeper look at the defence sector specifically, see our ethical breakdown of defence stocks.
AI Data Privacy: Who Protects Your Information?
If the military dimension measures what AI companies do with governments, Safe & Smart Tech measures what they do with people's data. For a broader analysis of tech companies' privacy records, see our analysis of data privacy scores.
Alphabet (-70) settled a lawsuit for illegally tracking over 136 million U.S. users via Chrome's Incognito mode, requiring the purging of billions of files of personal data. The EU opened an investigation into Google's compliance with privacy laws during AI model development. Employees reported that the rush to release AI products led to ethical lapses and insufficient safety checks.
Meta (-70) carries a record of repeated data governance failures. A U.S. District Judge ordered Meta to face a class-action lawsuit alleging its Meta Pixel tracking tool captured and relayed healthcare data without consent. The pattern is documented across jurisdictions and years.
NVIDIA (-50) scored negatively due to a critical security vulnerability in its Container Toolkit that took 25 days to patch. The company piloted an AI Ethics Committee in 2024, but independent scrutiny of its data practices remains limited.
Amazon (-50) experienced a data breach in 2024 affecting 2.8 million records. Past issues include documented bias in its AI recruitment tool and racial and gender biases in its Rekognition facial recognition technology.
The relative bright spots: Salesforce (+30) has dedicated ethical AI leadership and achieved a 90% reduction in problematic outputs through red teaming. Palantir (+20) has recorded no significant data breaches and holds FedRAMP certification. Apple (+10) leads on consumer privacy with on-device AI processing and end-to-end encryption, though it paid a EUR 150 million fine in France over its App Tracking Transparency implementation.
Worker Pay, Climate, and Governance in AI Companies
Honest & Fair Business scores are low across most of the group. Amazon faces a $2.5 billion settlement for deceptive Prime practices. Alphabet paid a EUR 3 billion European Commission shopping fine and was fined $12.4 million by Indonesia's antitrust agency. Salesforce (+40) is the standout, having been recognised as one of the World's Most Ethical Companies for 16 consecutive years.
Fair Pay & Worker Respect varies substantially. Amazon (-50) scores lowest, with seven substantiated labour violations in three years and documented worker hardship across its warehouse network. Alphabet (-40) settled a $50 million lawsuit for systemic racial bias and a $28 million suit for racial favouritism. Microsoft (+30) reports verified pay equity across gender and racial/ethnic groups. NVIDIA (+20) reports a 2.5% voluntary turnover rate with a median employee compensation of $301,233.
Planet-Friendly Business is negative for nearly every company in the group. NVIDIA's total emissions increased 87% in fiscal year 2025, reaching 7.15 million tonnes CO2 equivalent. Alphabet's total emissions increased 16% in 2024 to 23.4 million tonnes. Amazon's total carbon emissions rose to 68.25 million tonnes in 2024, with Scope 1 emissions up 162% since 2019. Only Salesforce (+10) manages a positive score.
For a broader picture of how these scores compare against the wider market, see our S&P 500 ethical scores analysis.
What AI Ethics Scores Mean for Portfolios
The AI trade has been enormously profitable. The integrity data reveals dimensions that stock charts do not.
Three findings from our analysis:
- No AI megacap scores positive on No War, No Weapons. The militarisation of AI is documented across every company in this group. Alphabet and Palantir score -80. Even Apple sits at -30.
- Published AI principles diverge from documented conduct. Alphabet removed its weapons commitments. Meta disclaims responsibility for military use of its AI. Microsoft's transparency report omits its military contracts. These are policy positions recorded in corporate filings and public statements.
- The companies with the strongest scores are not the largest. Salesforce scores positive on three dimensions. NVIDIA scores positive on worker respect and governance. Meta and Amazon score negative on all five dimensions shown.
For advisors assessing AI-heavy client portfolios, Mashinii's data offers a basis for evaluating ethical exposure beyond what conventional ESG ratings capture. Learn how Mashinii supports advisors.
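To make "ethical exposure" concrete, a holdings-weighted score on a single dimension can be sketched in a few lines of Python. The portfolio weights below are hypothetical, the scores come from the No War, No Weapons column of the table above, and the simple weighted average is our illustration, not Mashinii's actual aggregation method:

```python
# No War, No Weapons scores from the table above (-100 worst, +100 best).
scores = {
    "GOOGL": -80, "PLTR": -80, "NVDA": -70, "META": -50,
    "MSFT": -50, "CRM": -50, "AMZN": -40, "AAPL": -30,
}

# Hypothetical AI-heavy portfolio weights (sum to 1.0) -- for illustration only.
holdings = {
    "NVDA": 0.30, "MSFT": 0.25, "GOOGL": 0.20, "AAPL": 0.15, "AMZN": 0.10,
}

# Exposure = sum of weight x score over all holdings.
weighted_score = sum(w * scores[t] for t, w in holdings.items())
print(f"Portfolio No War, No Weapons exposure: {weighted_score:+.1f}")
# -> Portfolio No War, No Weapons exposure: -58.0
```

Even this rough sketch shows the pattern the table makes numerically: because every name in the group scores negative on this dimension, any weighting of these holdings lands deep in negative territory.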
How We Score
Mashinii scores companies across 11 ethical dimensions using independently documented evidence -- regulatory penalties, court records, and investigative reporting. Every score is cited. No corporate self-assessments are used.
Learn more about our methodology
Audit My Portfolio | Search Any Company | View Rankings
Mashinii provides integrity data for informational purposes. This is not financial advice.