Meta and TikTok knowingly turned ragebait into revenue, whistleblowers claim
A new BBC documentary reveals whistleblower claims that Meta and TikTok knowingly tolerated harmful “ragebait” content to boost engagement and compete with rivals. Insiders describe how algorithms surfaced conspiracy theories, deepfakes, hate speech, and borderline harmful posts, while moderation decisions allegedly prioritized political sensitivities over child safety. Internal data and testimonies suggest that both companies were aware of the risks but allowed problematic content to remain visible to drive growth. Meta and TikTok deny the allegations, stating they have implemented stronger protections and reject claims of political favoritism.
Read the full story on Cybernews.com →
Editor's Note: In the race for clicks and watch time, major social platforms routinely look the other way as algorithms amplify content that is offensive, manipulative, or simply mind-numbing. Their engagement-driven systems consistently push the most provocative and emotionally charged posts to the top, because outrage and shock are profitable. This dynamic turns low-quality “ragebait” and trivial entertainment into a revenue engine, often at the expense of user well-being and the overall health of the online ecosystem.