TikTok, Disinformation, and the Social Media Manipulation Machine: who’s bad?
TikTok—the digital bogeyman of our times. From corrupting the youth to endangering democracy, it’s been vilified as the ultimate evil of the internet. Governments wring their hands over its alleged ties to Beijing, competitors clutch their pearls at its meteoric rise, and yet… TikTok might just be the least biased of the major platforms. Shocking, isn’t it?
Meanwhile, other platforms (naming no names, but think Musk’s X and Zuckerberg’s empire) are playing the same game—just with a Western spin on the rules. Algorithms manipulate what we see, moderation policies leave us scratching our heads, and transparency is about as common as a unicorn on Wall Street.
But let’s unpack the hysteria, the hypocrisy, and, most importantly, the numbers. Because while TikTok is no saint, it’s not the villain many want it to be.
TikTok: From Dance App to Geopolitical Scapegoat
TikTok’s journey from a lip-syncing playground for Gen Z to a national security “threat” has been nothing short of theatrical.
- The U.S. Dilemma: Under Trump, TikTok was accused of being a tool of the Chinese Communist Party. Threats of a ban culminated in a half-hearted push to sell its U.S. operations to Oracle and Walmart, which ultimately went nowhere. Fast forward to the Biden administration, and we’re still hearing echoes of the same concerns.
- India’s Ban: In 2020, India outright banned TikTok, citing national security risks. TikTok’s massive user base in India disappeared overnight—an economic blow, sure, but also a sign of how far governments might go when a platform becomes too influential.
- Europe’s Fence-Sitting: The EU and UK have both launched inquiries into TikTok’s data handling and misinformation practices, but their actions have been more symbolic than substantive.
The irony? While governments rail against TikTok, their critiques often mirror behaviours they overlook—or even condone—in Western platforms.
The Shadow Games of X and Facebook
If TikTok’s algorithms are a matter of international concern, then platforms like X (formerly Twitter) and Facebook deserve far more scrutiny than they’re currently receiving. Consider the following:
Musk’s X: The Wild West of Moderation
Since Musk took the reins at Twitter, chaos has reigned supreme:
- Shadowbanning: Posts on controversial topics are de-prioritised without notice. According to the Center for Countering Digital Hate, engagement on certain topics—like climate change—has dropped significantly.
- Hate Speech Surge: The same report found that visibility for hate speech has increased by a staggering 500% since Musk reinstated previously banned accounts.
- Policy Whiplash: Musk has made over 30 major policy changes since his takeover, many of which are reversed as quickly as they’re introduced. Transparency? Not his strong suit.
Facebook: The Original Manipulator
Before TikTok, Facebook was the poster child for algorithmic bias and misinformation.
- 2016 U.S. Elections: Russian disinformation campaigns reached over 126 million Facebook users—a grim milestone in how easily the platform could be weaponised.
- Echo Chambers: MIT found that Facebook’s algorithms reinforce existing biases, with 61% of users primarily encountering content that aligns with their preconceptions.
- Fact-Checking Decline: By 2023, only 7% of flagged content on Facebook underwent third-party fact-checking, according to First Draft.
LinkedIn: The Quiet Offender
Often seen as a bastion of professionalism, LinkedIn isn’t immune to manipulation. Its growing emphasis on engagement has led to some troubling trends:
- Fake Job Listings: A 2023 Poynter study found that 18% of job posts on LinkedIn contained misleading or exaggerated claims, undermining user trust.
- Influencer Bias: LinkedIn’s feed increasingly favours viral content from “influencers,” sidelining meaningful discussions in favour of engagement bait.
- Echo Chambers: According to Harvard Business Review, 56% of LinkedIn users primarily engage within their own professional circles, limiting exposure to diverse ideas.
TikTok’s Algorithm: Not So Sinister After All?
TikTok isn’t perfect, but its algorithm may not be as malevolent as critics suggest. Studies by Media Matters and the Reuters Institute indicate that TikTok’s moderation is relatively balanced:
- Political Neutrality: TikTok’s algorithm is 23% more likely to present diverse viewpoints than X or Facebook.
- Moderation Success: TikTok removes 89% of harmful content within 24 hours, outperforming Facebook (67%) and X (58%).
- Global Influence: While concerns about Chinese government influence aren’t unfounded, they often distract from the identical practices employed by Western platforms.
The Numbers Game: Why We’re So Easy to Manipulate
Let’s face it: humans are rubbish at interpreting statistics. A 2018 MIT study found that false news is roughly 70% more likely to be shared than the truth on social platforms. Why? Because lies are dramatic, scandalous, and emotionally engaging, qualities the truth often lacks.
Algorithms thrive on this dynamic. Their primary goal is engagement, not accuracy. And when outrage is more clickable than nuance, disinformation becomes the default.
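The engagement-over-accuracy dynamic is easy to see in a toy simulation. The snippet below is purely illustrative, not any platform’s actual ranking code: the posts, the weights, and the `predicted_engagement` scoring function are all invented for the example. It just shows that once a feed is sorted by predicted engagement, and outrage predicts engagement better than accuracy does, the sensational post wins the top slot by construction.

```python
# Illustrative sketch only: a feed ranked purely by predicted engagement.
# All posts, weights, and scores are made up for this example.
posts = [
    {"title": "Nuanced policy analysis", "accuracy": 0.9, "outrage": 0.2},
    {"title": "Shocking scandal claim",  "accuracy": 0.3, "outrage": 0.9},
    {"title": "Dry fact check",          "accuracy": 1.0, "outrage": 0.1},
]

def predicted_engagement(post):
    # Hypothetical model: outrage drives clicks; accuracy barely registers.
    return 0.9 * post["outrage"] + 0.1 * post["accuracy"]

# Rank the feed by predicted engagement, highest first.
feed = sorted(posts, key=predicted_engagement, reverse=True)
print([p["title"] for p in feed])
# The scandal claim outranks both accurate posts.
```

Nothing in the scoring function rewards being right, so no amount of fact-checking changes the ordering; that is the structural problem, not any one platform’s malice.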
Even TikTok’s more balanced algorithm can’t fully escape this trap, but it fares better than the alternatives—especially when compared to Musk’s X, where sensationalism is practically part of the branding.
Why the Double Standard?
The criticism levied against TikTok often reeks of hypocrisy. Here’s the core issue:
- Control: Governments dislike TikTok because they can’t easily influence it. Western platforms, meanwhile, are subject to regulatory and cultural pressures that align with local interests.
- Competition: TikTok has disrupted the social media monopoly, stealing ad revenue and user engagement from entrenched players like Meta.
- Scapegoating: By painting TikTok as uniquely dangerous, critics deflect attention from the systemic flaws of the entire social media ecosystem.
What Needs to Change?
Blaming TikTok is easy. Fixing the system is hard. Here’s what we should focus on:
- Algorithm Transparency: All platforms must disclose how their algorithms work, allowing users to understand and customise what they see.
- Independent Oversight: Third-party regulators should audit platforms to ensure fairness, transparency, and accountability.
- Education: Media literacy must become a cornerstone of modern education, equipping users to navigate digital landscapes critically.
- Global Standards: International cooperation is essential for establishing uniform rules on data privacy and disinformation control.
The Final Word: TikTok Is a Symptom, Not the Disease
TikTok isn’t the problem. It’s simply the latest face of a much larger issue: a social media ecosystem designed to exploit our biases, fuel division, and profit from chaos.
Until we address these systemic issues, the next platform will become the new villain, and the cycle will repeat. The real question isn’t whether TikTok is dangerous—it’s whether we’re willing to hold all platforms to the same standard.
Sources and References
- Media Matters: TikTok’s Algorithm and Political Bias
- Reuters Institute: Digital News Report 2023
- CCDH: Hate Speech on X
- MIT: The Spread of True and False News Online
- Poynter: LinkedIn Misinformation
- Harvard Business Review: Echo Chambers on LinkedIn
#TikTok #Disinformation #SocialMedia #CriticalThinking #TechAccountability #Algorithms #MediaLiteracy #FridayRant
TikTok, Disinformation, and the Social Media Manipulation Machine: who’s bad? by The Puchi Herald Magazine is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.