MASHINI

The Algorithm Problem: How Social Media's Core Innovation Became Its Ethics Trap

Insights
August 8, 2025

When Facebook abandoned chronological feeds for algorithmic ranking in 2009, it rewrote the internet's operating manual. What began as a fix for information overload has become an ethical trap: a system whose very architecture produces the harms now blamed on "bad actors" or "moderation failures."

The Architectural Shift That Changed Everything

Before algorithms, social media worked like a bulletin board: you saw what you followed, in order. Once platforms started ranking posts for "relevance," the economics flipped. The feed stopped being a neutral list and became a constantly re-optimised prediction engine, fine-tuned to whatever kept people scrolling.

Every major platform now runs on this architecture: Meta, TikTok, YouTube, X, LinkedIn. The design imperative is the same: maximise time on platform. The social consequences aren't accidental. They're the predictable outputs of a machine built to optimise for engagement above all else.
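The mechanics behind this architecture can be made concrete. The sketch below is purely illustrative, not any platform's actual code: the `Post` fields, weights, and scoring function are all hypothetical. The point it demonstrates is structural: every term in the objective rewards attention, and nothing in it measures whether the content is healthy or harmful.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_clicks: float      # model's estimate, 0..1 (hypothetical)
    predicted_watch_secs: float  # expected dwell time (hypothetical)
    predicted_shares: float      # 0..1 (hypothetical)

def engagement_score(post: Post) -> float:
    # Illustrative weights. Every term pays for attention;
    # no term asks whether the outcome is healthy.
    return (2.0 * post.predicted_clicks
            + 0.1 * post.predicted_watch_secs
            + 4.0 * post.predicted_shares)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Chronology is gone: order is purely predicted engagement.
    return sorted(posts, key=engagement_score, reverse=True)
```

Whatever content maximises the score, be it civic debate or corrosive extremism, rises to the top; the objective function cannot tell them apart.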

How Algorithms Create Systematic Ethics Violations

The Amplification Engine

TikTok can push misogynistic content to a new teen account within minutes, triggering what researchers call a "snowball effect" (BBC News, 2023). The problem isn't an intent to promote misogyny; it's that inflammatory content draws engagement, and the algorithm doesn't know the difference between civic debate and corrosive extremism.

Our scoring shows the same dynamic: Meta scores negative 50 out of 100 on Safe & Smart Tech precisely because the AI that predicts what three billion people want to see is indifferent to whether the outcome is healthy or harmful. Internal Facebook documents confirm that leadership knew Instagram harmed teen mental health, yet engagement metrics still won out (Wall Street Journal, 2021; New York Times, 2021).

The Death of Organic Reach

Once, small businesses could reach their followers for free. That ended when the feed became a pay-to-play marketplace. Facebook's pivot to Reels left many creators invisible overnight, not because their audiences disappeared, but because the algorithm stopped showing their content.

It's a structural choke point. Platforms control the distribution and set the tolls. Meta's 0 out of 100 score on Fair Money & Economic Opportunity isn't about blocking commerce; it's about inserting itself as the compulsory intermediary. The U.S. Department of Justice's settlement over discriminatory ad delivery shows how these systems can also quietly exclude entire groups (DOJ, 2022).

The Extremism Gradient

Algorithms don't just map preferences; they nudge them. YouTube's recommendation engine drives roughly 70% of watch time and is well documented as steering users from moderate videos toward more extreme material. A fitness query can end up in eating-disorder territory within hours (Center for Countering Digital Hate, 2022).

The logic is simple: extreme content produces higher engagement rates. Facebook's role in ethnic violence in Myanmar (Reuters, 2018) and Ethiopia (Kenyan High Court Ruling, 2025) wasn't a fluke. It was the engagement model functioning as intended.

The Regulatory Response: Symptom Management

The EU's Digital Services Act (European Commission, 2024) now requires non-algorithmic feed options, but they're buried behind settings menus. TikTok's EU feed toggle (TikTok Newsroom, 2024) and Meta's chronological view are technically available, but defaults keep users in the engagement loop.

Even billion-euro fines barely dent the business. Meta's €1.2 billion GDPR penalty was under 2% of annual revenue (Financial Times, 2024). In India alone, Meta removed 18.6 million policy-violating posts in one month (Tech Policy Press, 2024), posts first distributed by the very algorithm it claims to "fix."

The Financial Architecture of Attention

The pattern is clear: the more advanced the algorithm, the worse the ethics score.

  • Meta: Negative scores on 9 of 11 values (Better Health for All, Fair Money & Economic Opportunity, Fair Pay & Worker Respect, Fair Trade & Ethical Sourcing, Honest & Fair Business, Kind to Animals, No War & No Weapons, Safe & Smart Tech, Zero Waste & Sustainable Products)
  • X: Lost approximately 50% of ad revenue post-Musk after amplifying extremism (Reuters, 2023)
  • TikTok: €345 million GDPR fine over its handling of minors' data (European Commission, 2023)

Meanwhile, lower tech players like Sprout Social score positively on worker respect and transparency.

This is the scale paradox: Meta's resources allow it to invest billions in renewable energy and Indigenous language AI (Planet Friendly Business score: plus 40 out of 100), yet the same infrastructure continues to amplify content that corrodes civic health.

The Youth Mental Health Case Study

Instagram's algorithm doesn't just surface "aspirational" posts; it prioritises them because they keep teens engaged, even as they worsen body image for one in three teen girls (Wall Street Journal, 2021). The U.S. states suing Meta aren't claiming oversight failures; they allege harm is intentional, baked into the design (Attorneys General of 41 States, 2023).

The stakes are rising fast. Pending U.S. litigation, new EU enforcement powers, and ongoing African court cases mean 2024 to 2025 is a turning point: regulators now have the legal hooks to demand algorithmic change, not just content takedowns.

Why Change Is So Difficult

The engagement trap: Platforms that tone down optimisation lose users to those that don't. TikTok's rise was fuelled by pushing harder than anyone else.

The revenue lock-in: Meta's $134 billion in 2023 ad revenue depends on algorithmic targeting. Apple's 2021 privacy changes, which curtailed tracking, not ranking, cost Meta approximately $10 billion annually (Meta Earnings Call, 2022).

The technical debt: These systems are vast, brittle, and deeply embedded. Instagram Kids failed not because demand was absent, but because the core assumption of engagement-first design couldn't be unpicked without breaking the product.

Paths Forward

  • Regulatory circuit breakers: Automatic throttling when harm thresholds are met, for example, if suicide-related content spikes among teens. This shifts intervention from after-the-fact moderation to structural prevention.

  • Public algorithm options: User selectable feeds optimised for learning, connection, or community rather than default engagement. Making this a mandated default choice screen would force genuine competition between models.

  • Engagement taxes: Just as carbon taxes price in environmental damage, these would impose escalating costs based on harmful engagement metrics (e.g., session length for minors, extremist amplification rate). This aligns profit incentives with harm reduction.
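To make the first and third proposals concrete, here is a minimal sketch of how a circuit breaker and an escalating engagement tax could be expressed. Everything here is hypothetical: the function names, the spike threshold, the quadratic tax curve, and the base rate are illustrative assumptions, not a drafted policy.

```python
def harm_circuit_breaker(harm_rate: float, baseline: float,
                         spike_multiple: float = 2.0) -> bool:
    # Trip (i.e. trigger automatic throttling) when a monitored harm
    # metric, e.g. teen exposure to suicide-related content, spikes
    # above a multiple of its historical baseline.
    return harm_rate > spike_multiple * baseline

def engagement_tax(minor_session_hours: float,
                   amplification_rate: float,
                   base_rate: float = 0.01) -> float:
    # Escalating (here: quadratic) cost on harmful engagement metrics,
    # mirroring how carbon taxes price in externalities: the heavier
    # the harmful engagement, the steeper the marginal cost.
    return base_rate * (minor_session_hours ** 2 + amplification_rate ** 2)
```

Under these assumptions, a harm rate of 5% against a 1% baseline trips the breaker, and doubling a harmful metric quadruples the tax, which is the incentive alignment the proposal is after.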

Positive steps exist: YouTube's ban on unproven cancer cures (YouTube Blog, 2023), TikTok's teen privacy defaults, and Meta's Indigenous language AI projects. But history suggests change happens only under pressure.

The Investment Reality

Our scoring framework suggests that, today, financial success and ethical performance are often inversely correlated. That's not just about Meta; it's a structural feature of engagement driven platforms. Still, there are signals: X's revenue collapse after losing advertiser trust, rising regulatory costs in the billions, and talent drift toward safer brands.

Conclusion: The Algorithm Question

This isn't about bad moderation; it's about the feed itself. Engagement optimised algorithms will always tilt toward the inflammatory, divisive, and addictive because that's what keeps people hooked.

Without structural change through regulation, redesign, or replacement, we'll keep treating symptoms, not causes. The platforms know it. The documents prove it. And until the incentives change, the most aggressive algorithm will keep winning.


Based on analysis of regulatory filings, internal documents, court proceedings, and operational data from 2020 to 2025, using Mashini Investments' 11 value scoring framework.