Strict content moderation and censorship policies at major social media platforms are making people more isolated and polarized, fueling more dangerous extremism in society, a new study shows.
Tech giants such as Facebook, Twitter, and Google are facing all-time highs in hate speech and misinformation, with such content increasing twentyfold on Facebook between 2017 and 2021, suggesting that the company's approach to censorship doesn't work, according to a new study conducted by Daryl Davis, a race relations expert, and Bill Ottman, a free speech activist and CEO of the crypto social network Minds.
The study, which was edited by Dr. Nafees Hamid, a senior research fellow on radicalization at King's College London, among other academics, examined the effects of removing extremist content from large-scale social media platforms by looking at the behavior of extremist groups, among them white supremacists, neo-Nazis, and Islamic extremists.
“When you deny people the ability to express opinions and engage in cancel culture, the data shows you send them to nefarious platforms where much worse behavior occurs,” Ottman told the Washington Examiner.
“People who get canceled or deplatformed just move to somewhere with an echo chamber that reinforces their beliefs, and [that] leads to shootings at synagogues and mosques and what happened in Charlottesville,” he added.
He added that rampant censorship and the blocking of people on social media have caused more people to become “lone wolves,” which has directly resulted in a rise in radicalization.
The study also showed that censorship does not reduce hate speech or violence-inducing misinformation but merely moves it to other corners of the internet.
“When platforms like Facebook or Twitter limit hateful conversations and censor controversial content, this only moves it elsewhere. Big Tech says censorship is working, but really, it’s just hiding the problem,” said Davis.
Large tech platforms such as Spotify, Airbnb, and GoFundMe have also ratcheted up bans and censorship in response to public criticism.
Facebook and other social media platforms have routinely justified stricter content moderation and censorship by pointing to internal surveys that show that users do not like certain controversial content, including divisive political speech.
Davis and Ottman said that although they sympathize with the difficult position social media platforms are in, having to moderate large amounts of content to keep a diverse community of users satisfied, they expect more transparency from the companies.
“We’re not trying to take down the Big Tech platforms — we just want them to back up their content moderation policies with research and data,” said Ottman.
Ottman concluded, "We feel that our research justifies more of a First Amendment-based content moderation policy with more free speech that, in the long run, over years, would lead to less radicalization and violence."