Mashinii

OpenAI.

OPENAI.P | Software publishing

OpenAI is a pioneering artificial intelligence research and deployment company that focuses on developing and distributing advanced AI technologies. Originally established as a non-profit in 2015, the organization transitioned to a 'capped-profit' model to facilitate large-scale capital investment.

Ethical Profile

Mixed.

OpenAI’s transition from a non-profit to a commercial entity has been marked by a July 2024 SEC complaint alleging the systemic suppression of internal whistleblowers through restrictive non-disparagement clauses. While the company maintains a clean record regarding regulatory fines, it faces over 25 active lawsuits, including a $10 billion copyright claim from the California Newspapers Partnership. Labor practices remain a concern, as reports confirm data-labeling contractors in Kenya were paid between $1.32 and $2.00 per hour. Environmentally, the firm lacks validated Science Based Targets and has lobbied to keep data center water and energy metrics confidential. However, OpenAI has codified ethical red lines in a $200 million military contract, explicitly prohibiting the use of its models for autonomous weapons or mass domestic surveillance.

Value Scores

Scores use a -100 to +100 scale; N/A means the value is not applicable to this business.

Better Health for All: N/A
Fair Money & Economic Opportunity: N/A
Fair Pay & Worker Respect: -10
Fair Trade & Ethical Sourcing: -10
Honest & Fair Business: -10
Kind to Animals: N/A
No War, No Weapons: -20
Planet-Friendly Business: -40
Respect for Cultures & Communities: -10
Safe & Smart Tech: 0
Zero Waste & Sustainable Products: N/A
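The headline claims drawn from this table (which dimensions score lowest, how many apply) can be reproduced with a minimal Python sketch. The scores are copied from the table above; the ranking logic is purely illustrative and is not Mashinii's actual scoring methodology:

```python
# Dimension scores from the table above; None marks N/A (not applicable).
scores = {
    "Better Health for All": None,
    "Fair Money & Economic Opportunity": None,
    "Fair Pay & Worker Respect": -10,
    "Fair Trade & Ethical Sourcing": -10,
    "Honest & Fair Business": -10,
    "Kind to Animals": None,
    "No War, No Weapons": -20,
    "Planet-Friendly Business": -40,
    "Respect for Cultures & Communities": -10,
    "Safe & Smart Tech": 0,
    "Zero Waste & Sustainable Products": None,
}

# Keep only the applicable dimensions, then rank them worst-first.
applicable = {name: s for name, s in scores.items() if s is not None}
worst_first = sorted(applicable.items(), key=lambda kv: kv[1])

print(worst_first[0])   # lowest-scoring dimension
print(len(applicable))  # applicable dimensions (7 of 11)
```

Ranking worst-first makes the "most controversial" answer in the Common Questions section mechanical: the first entries of `worst_first` are Planet-Friendly Business (-40) and No War, No Weapons (-20).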

Better Health for All

Not Applicable

Value not applicable to this business. OpenAI is a software publishing company focused on generative AI models; its core business does not involve the development of medical treatments, healthcare services, or public health initiatives.

Fair Money & Economic Opportunity

Not Applicable

Value not applicable to this business. OpenAI is a software publishing company focused on artificial intelligence and does not engage in financial services, lending, insurance, or the management of monetary assets.

Fair Pay & Worker Respect

-10

As a software company, OpenAI's core business model does not inherently advance or harm fair pay and worker respect; these outcomes are determined by the company's internal HR policies, compensation structures, and labor practices rather than by the nature of the software itself. OpenAI’s scoring is primarily driven by documented labor practices within its data-labeling supply chain and internal equity policies. Regarding living_wage_coverage, evidence from multiple articles confirms that workers in Kenya, employed via the contractor Sama to train ChatGPT, were paid between $1.32 and $2.00 per hour. While Sama claimed this aligned with its mission of providing a living wage, the rates were noted to be near or below local benchmarks for similar roles (e.g., $1.52/hr for receptionists), and workers reported being unable to save or meet basic needs. This reflects a tier of -60, as a significant portion of the essential data-labeling workforce received sub-subsistence wages. For insecure_contract_share, reports indicate that the data-labeling workforce operated under precarious conditions, with contracts ranging from monthly to daily durations. This reliance on short-term, outsourced gig labor for core safety training justifies a tier of -40. In terms of labor_violation_incidents, the company faced significant controversy regarding its internal equity and non-disparagement policies. Articles detail how OpenAI utilized "ultra-restrictive" agreements that threatened to claw back vested employee equity if departing employees criticized the company. While CEO Sam Altman later apologized and the company committed to releasing former employees from these obligations, the documented use of such "egregious and unusual" legal threats constitutes a significant labor-rights infraction, resulting in a tier of -30. No specific data was provided for CEO pay ratios, safety incident rates (TRIR), or collective bargaining percentages, though articles noted the formation of a content moderators' union in Africa and psychological trauma among workers.

Fair Trade & Ethical Sourcing

-10

As a software publisher, OpenAI's primary supply chain involves hardware infrastructure (servers/GPUs) and data labeling services. While not inherently harmful, the company's reliance on global, often opaque, third-party data annotation supply chains necessitates active management to prevent labor exploitation, making the value applicable but neutral until specific sourcing practices are evaluated. OpenAI’s scoring is based on its Supplier Code of Conduct and infrastructure procurement strategies. For **audit_frequency**, the company requires suppliers to conduct annual security audits and risk assessments, but its general supply chain policy only mandates cooperation with 'any inquiry or audit' without a defined cadence for on-site labor or sourcing audits, placing it in the tier for occasional reviews (Tier -80). Regarding **forced_child_labour_incidents**, there are no substantiated reports of violations in the provided evidence. The company maintains a strict Supplier Code of Conduct that 'unequivocally rejects' forced and child labor, supporting a Tier -10 score for proactive enforcement. In terms of **remediation_speed**, OpenAI has established clear timelines for technical and security-related violations, requiring critical vulnerabilities to be remediated within 60 days and credential revocation within one business day. While these are security-focused, they demonstrate a structured remediation framework (Tier -40). For **ethical_clause_coverage**, OpenAI mandates that all suppliers, subsidiaries, and subcontractors formally acknowledge and adhere to its Supplier Code of Conduct. This code includes binding terms regarding sanctions, anti-corruption, and labor rights. Given the broad application to its tier-1 and sub-tier base, it aligns with Tier 10 (~90% coverage with enforcement clauses). Other KPIs, such as **fair_trade_cert_share**, **traceability_coverage**, and **materials_risk_index**, were omitted due to a lack of quantitative data. While OpenAI has announced a $500 billion 'Stargate' project to onshore hardware manufacturing and secure critical minerals, these are forward-looking strategic goals rather than current performance metrics.

Honest & Fair Business

-10

As a software publisher, OpenAI's core business is neutral regarding 'Honest & Fair Business' by default. While the company faces ongoing scrutiny regarding data transparency, copyright, and the 'black box' nature of its models, these are behavioral risks rather than inherent features of software publishing. OpenAI’s performance in Honest & Fair Business is characterized by a complex transition from a nonprofit to a Public Benefit Corporation (PBC), marked by significant governance commitments but overshadowed by serious whistleblower and legal controversies. Regarding regulatory fines, there is no evidence of ethics-related fines paid in the last three years. While the company is involved in massive litigation, recent judgments (e.g., OpenAI v. Open AI, Inc. and Walters v. OpenAI) have largely favored the company or resulted in only nominal damages, supporting a tier of 10 for a clean fine record. However, the whistleblower policy is severely penalized (Tier -100): evidence from a July 2024 SEC complaint details systemic suppression of internal reporting. Allegations include employment contracts with non-disparagement clauses that fail to exempt securities violations, requirements for company consent before disclosing information to federal authorities, and forcing employees to waive whistleblower compensation. This indicates documented suppression of internal reports. Governance shows positive structural intent: following a 2025 recapitalization, the PBC board is mandated to be composed of a majority of independent directors (Tier 50), and the board currently includes seven independent directors alongside CEO Sam Altman. Furthermore, the Safety and Security Committee (SSC) has the authority to halt model releases regardless of pecuniary interests. OpenAI’s controversy index is tiered at -70 due to high-profile lawsuits: the company faces over 25 lawsuits, including a centralized MDL for copyright infringement and a $10 billion claim from the California Newspapers Partnership. These cases allege unfair business practices and unauthorized use of data, representing significant reputational and legal risk.

Kind to Animals

Not Applicable

Value not applicable to this business. As a pure-software and AI research company, OpenAI does not engage in animal testing, factory farming, or the use of animal-derived ingredients in its core business operations.

No War, No Weapons

-20

While OpenAI is not a weapons manufacturer, its AI models have dual-use potential for military applications, and the company has previously adjusted its usage policies regarding military and warfare-related tasks, creating an inherent tension with the 'No War' value. OpenAI’s scoring reflects a transition from a blanket ban on military use (2023) to active engagement with the U.S. Department of Defense (renamed Department of War) as of early 2026. Regarding **revenue_arms_contracts**, OpenAI signed a classified contract in February 2026 worth up to $200 million (with some estimates ranging higher). Given OpenAI's $110 billion valuation and significant subscription revenue, this likely represents <5% of total revenue, placing it in the -30 tier. For **dual_use_technology**, the company’s models are now deployed on classified networks for mission planning and intelligence, though the company maintains a case-by-case evaluation and restricts deployment to cloud-only environments to prevent use in edge-based autonomous systems (-50). OpenAI scores positively in **ai_military_safeguards** (20) and **surveillance_transparency** (30) due to its public disclosure of specific contractual guardrails. The agreement explicitly prohibits mass domestic surveillance, the direction of autonomous weapons without human oversight, and high-stakes automated decision-making. OpenAI maintains "personnel in the loop" (cleared engineers) and retains the right to terminate the contract if these red lines are breached. In **ethical_red_lines_coded**, the company receives a -10, as it has codified a ban on lethal autonomous weapons and mass surveillance within its military contracts, though it has moved away from its original 2023 policy, which prohibited all military and warfare applications. The company also publicly advocated against the government's "supply-chain risk" designation of its competitor, Anthropic, and requested that its own ethical framework be offered as a standard to all AI labs.
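The <5% revenue-share reasoning above can be made concrete with a small sketch. OpenAI does not publish audited revenue, so the annual revenue figure below is a loudly labeled assumption, not a reported number; the only robust point is that a $200 million contract stays under the 5% threshold for any annual revenue above $4 billion:

```python
contract_value = 200_000_000  # up-to-$200M military contract cited above

# HYPOTHETICAL revenue figure, for illustration only -- OpenAI does not
# publish audited revenue. Any value above $4B yields the same conclusion.
assumed_annual_revenue = 10_000_000_000

share = contract_value / assumed_annual_revenue
print(f"arms-contract revenue share: {share:.1%}")  # 2.0% under this assumption
assert share < 0.05  # below the <5% boundary used for the -30 tier

# Breakeven: the revenue at which the contract is exactly 5% of the total.
breakeven = contract_value / 0.05
print(f"breakeven annual revenue for a 5% share: ${breakeven:,.0f}")
```

The breakeven works out to $4 billion, so the tier assignment is insensitive to the exact revenue assumption as long as revenue clears that bar.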

Planet-Friendly Business

-40

While software publishing is generally low-impact, OpenAI's core business relies on massive-scale data centers that are extremely energy-intensive, creating a significant carbon footprint that inherently challenges environmental stewardship. OpenAI’s environmental profile is characterized by significant infrastructure scaling and a lack of direct corporate transparency, as evidenced by the provided articles. Regarding emissions and targets, OpenAI has not published its own Scope 1, 2, or 3 totals. While the AI sector generally targets net-zero by 2030, OpenAI lacks validated Science Based Targets (SBTi), with reports describing its commitments as vague or lacking clear methodology. Consequently, sbti_aligned_targets and net_zero_target_year are scored at -90. For renewable energy, OpenAI relies heavily on Microsoft Azure, which has been carbon neutral since 2012 and is transitioning to 100% renewable energy by 2025. However, OpenAI's own 'Stargate' initiative includes building 'behind the meter' natural gas projects alongside solar, indicating a mixed energy strategy. The tier for pct_renewable_energy is set at -20, reflecting the 100% green tariffs/goals of its primary infrastructure provider while acknowledging the company's direct investment in fossil fuel capacity. Supply chain transparency is a significant weakness: multiple sources highlight that OpenAI does not disclose total energy or water usage for its data centers and has lobbied for secrecy provisions to classify such metrics as confidential. This lack of disclosure regarding Tier 1-3 impacts results in a -100 for supply_chain_climate_transparency. While specific water usage per query is estimated (0.32 mL), there is no company-wide water-to-revenue data. Similarly, while GPT-3 training emissions were cited (502 tCO2e), this represents a single historical event rather than the current annual corporate footprint required for the emissions KPI.

Respect for Cultures & Communities

-10

As a software publisher, OpenAI's core business does not inherently harm or advance community rights; however, its massive computational infrastructure relies on hardware supply chains (semiconductors, data centers) that involve mineral extraction and energy consumption, which can impact frontline communities. OpenAI’s scoring reflects a transition from purely research-based operations to large-scale physical infrastructure and data extraction projects, which have introduced documented community concerns and procedural gaps. Regarding Indigenous rights and cultural heritage, the 'OpenAI to Z Challenge' in the Amazonia biome was flagged by the Society of Brazilian Archeology (SAB) for bypassing mandatory IPHAN approval processes and failing to adhere to ILO 169 Convention standards for Free, Prior, and Informed Consent (FPIC). The project utilizes indigenous oral histories and archaeological data under CC0 licenses that require contributors to waive all rights, which critics argue contradicts the CARE Principles for Indigenous Data Governance. This represents a documented deficiency in FPIC and heritage protection processes (indigenous_fpic_violations: -30; cultural_heritage_destruction: -30). In terms of water rights, while OpenAI has committed to closed-loop cooling for its 1.2-gigawatt 'Stargate' campus in Abilene, Texas, the project is situated in a region facing a 'water-energy nexus crisis.' General community concerns regarding competition for water have been noted, though no specific litigation or major conflict has yet materialized for OpenAI specifically (water_rights_conflicts: -10). On community engagement, OpenAI has launched a $50 million 'People-First AI Fund' and conducted listening sessions with 500+ organizations. However, the Amazonia initiative was criticized for lacking a structured mechanism for local communities to negotiate terms or receive reciprocal benefits. While a general Supplier Code of Conduct exists, the lack of a specific, functional grievance mechanism for these international data projects suggests variable resolution quality (community_grievance_resolution: -30). Positively, the company has demonstrated solid local economic programs in the U.S., including the 'OpenAI Academy' in Texas and a $175 million commitment from partners for local infrastructure and water restoration in Wisconsin (local_employment_and_procurement: 20).

Safe & Smart Tech

0

As a developer of frontier AI models, OpenAI's core business is directly central to the 'Safe & Smart Tech' value; however, its impact is currently neutral as it balances pioneering safety research and alignment efforts against the inherent risks of bias, hallucination, and data privacy concerns associated with generative AI. OpenAI exhibits a complex profile in the 'Safe & Smart Tech' domain, characterized by robust technical security frameworks offset by significant regulatory and data-handling challenges. On the positive side, the company maintains a comprehensive suite of certifications, including SOC 2 Type 2, ISO/IEC 27001, 27701, 27017, 27018, and the AI-specific ISO/IEC 42001. Its security infrastructure includes a multi-tiered bug bounty program (Security and Safety) and regular third-party audits. The company has also implemented user-centric privacy features, such as training opt-outs and data residency options. However, OpenAI has faced severe regulatory setbacks. In late 2024, the Italian Data Protection Authority (Garante) fined the company 15 million euros for GDPR violations, including processing personal data for model training without an adequate legal basis and failing to implement effective age verification for minors. This followed a temporary ban of ChatGPT in Italy in 2023. While OpenAI has since implemented 'enhanced transparency' and a six-month awareness campaign as remediation, the findings of 'unlawful' data collection and transparency breaches weigh heavily on its regulatory compliance and unauthorized data use scores. In terms of AI governance, OpenAI utilizes red-teaming, system cards, and a 'Preparedness Framework' to manage model risks. While these represent strong industry practices, the company's score is tempered by documented instances where transparency and data processing obligations were found lacking by European regulators. No significant data breaches were reported in the provided evidence, though the company remains a high-value target.

Zero Waste & Sustainable Products

Not Applicable

Value not applicable to this business. OpenAI is a pure-software company; as it does not manufacture physical goods, the principles of zero waste, circular product lifecycles, and physical packaging management are not applicable to its core business model.

Common Questions

Is OpenAI ethical?

OpenAI (OPENAI.P) received a "Mixed" ethics rating from Mashinii; the Ethical Profile section above summarizes the key findings behind that rating.

What is OpenAI most controversial for?

OpenAI scores lowest on Planet-Friendly Business (-40), No War, No Weapons (-20), and Honest & Fair Business (-10), based on court records, regulatory actions, and investigative journalism. These are the dimensions where the strongest negative evidence is documented.

How does OpenAI score across ethical dimensions?

OpenAI scores negatively on Planet-Friendly Business (-40), No War, No Weapons (-20), and Honest & Fair Business (-10). Each dimension is scored on a -100 to +100 scale using documented evidence rather than corporate self-reports.

How does Mashinii score OpenAI?

We score OpenAI across 11 ethical dimensions — including human rights, environmental damage, corruption, and labour practices — using court filings, regulatory actions, investigative journalism, and NGO reports. Our data is adversarial: it comes from sources companies cannot edit or suppress, not from corporate ESG disclosures. Each claim is cited. Read the full scoring manual for details.


AI-generated analysis based on publicly available data. Not financial advice. Ratings are expressions of opinion derived from automated models and may contain inaccuracies. See our Risk Disclosure for full details.