New research suggests that much online hate speech stems less from malice than from a hunger for approval, with trolls continuing to post in pursuit of “likes” for their bile. Cornell University investigators found that users who receive praise for early toxic posts reliably return with amplified attacks, hooked on the viral attention rather than on their targets’ hurt.
“It now appears that the same dynamics that can make some online relationships intensely positive can also fuel friendly feelings among those who join together online in expressing enmity toward identity groups and individual targets,” said scholar Joseph Walther of Harvard.
Walther argues that digital bonds formed through shared hate emotionally reinforce trolls even as their targets suffer, driving them to impress fellow denizens with ever more shocking remarks. Sites like Gab and Parler emerged in part because users craved safe spaces in which to impress peers with racial barbs and conspiracy theories too spicy for mainstream platforms.
The Cornell findings support past research on “heavy users” who lose sight of accuracy and reason as they chase engagement highs. Likewise, modern hate movements often rely on insider symbols and coded hate speech recognized only within their circles rather than broadcasting openly.
Both trends suggest that social rewards and in-group status, not public harm, are the primary motivations. Yet the rise in reported US hate crimes indicates that online speech still correlates with real-world violence, leaving regulators struggling to balance free speech protections against visible spikes in racist and discriminatory attacks.
Some argue successfully muting hate online requires addressing root causes like economic instability or social alienation rather than simply blocking content. Others push for more assertive moderation by sites like Facebook and Twitter, arguing poison left in circulation inevitably spreads.
But the insight that trolls personally benefit from going viral helps explain why censoring posts alone often backfires.
“They get attention and garner social approval from like-minded social media users,” one expert told Phys.org. Simply removing messages or suspending accounts rewards trolls with evidence of their provocative talents. More holistic remedies that target those underlying motivations make sense.
Of course, insight alone halts no harms. With legislative fixes stalled, responsibility falls to media platforms and law enforcement to connect the dots between radicalizing online clashes and the streets. But a better understanding of why trolls troll puts weakened social fabrics under closer scrutiny. Perhaps the true toxin lies in bonds broken elsewhere long before.