
TikTok and Meta risked safety to win algorithm arms race, whistleblowers say
Key Takeaways
- Internal research showed outrage boosted engagement, fueling an algorithm arms race and safety risks.
- Whistleblowers allege decisions allowed more harmful content on feeds, including violence, sexual blackmail and terrorism.
- More than a dozen insiders provided testimony about these safety risks.
Whistleblowers on safety bets
BBC interviews with more than a dozen whistleblowers and insiders claim that Meta and TikTok chose growth over safety, exposing users to more harmful content.
They say internal research showed the platforms' algorithms fuelled outrage to boost engagement, prompting management to allow more 'borderline' content such as misogyny and conspiracy theories.

One Meta engineer recalled the reasoning given to staff: 'they sort of told us that it's because the stock price is down.'
A TikTok employee gave the BBC access to internal dashboards which, he said, showed political content being prioritized to 'maintain a strong relationship' with political figures and avoid regulation or bans, not because of risks to users.
TikTok dashboards show prioritization
Nick, a TikTok trust-and-safety staffer, showed the BBC an internal dashboard illustrating priorities: political-content cases were rated higher for review than several reports of harm to teenagers.
He said management instructed staff to 'maintain a strong relationship' with politicians and governments to avoid regulation, not because of user safety.

He described case volumes as unmanageable, and said cuts and a reorganisation, with some roles replaced by AI, had limited staff's ability to protect children.
TikTok rejected the claim that political content is prioritized over child safety, saying specialist workflows exist and that safety measures are robust.
The company noted teen accounts have more than 50 preset safety features and settings automatically turned on.
Meta Reels growth vs safety
Whistleblowers say Meta launched Reels in 2020 without sufficient safeguards as it raced to catch TikTok.
Internal research showed Reels posts had significantly higher harm: bullying and harassment 75% higher, hate speech 19% higher, and violence or incitement 7% higher than the main Instagram feed.
Another whistleblower, Motyl, said there was a 'power imbalance' because safety staff needed the Reels team's agreement to introduce safety features, and incentives favored engagement over safety.
He and others described how leadership, including Mark Zuckerberg, prioritized growth to appease investors.
Brandon Silverman recalled Zuckerberg's paranoia about competition and said safety teams struggled to hire the numbers needed while Meta expanded Reels.
A former Meta engineer, Tim, said that as competition with TikTok intensified, the focus shifted to revenue, with executives urging staff to 'do whatever we can to catch up', including by allowing more borderline content.
Internal documents described how content likely to trigger outrage drove engagement, with executives noting that the 'path to maximizing profits' often conflicted with users' wellbeing and that 'the current set of financial incentives our algorithms create does not appear to be aligned with our mission.'
Responses and broader context
UK counter-terror police specialists say there has been a normalisation of antisemitic, racist, violent and far-right posts in recent months.
Nick’s blunt advice to parents with children using TikTok is to 'Delete it, keep them as far away as possible from the app for as long as possible.'

Meta denied the whistleblowers' claims, saying 'Any suggestion that we deliberately amplify harmful content for financial gain is wrong,' and pointed to its extensive safety investments and Teen Accounts feature.
TikTok called the claims 'fabricated', saying it maintains strict recommendation policies and uses technology to prevent harmful content from ever being viewed.