The social media platform Reddit recently made headlines by taking down over 2,000 communities following a review of its content policy on hateful content, including /r/The_Donald, one of the largest pro-Trump communities online.
But this action has significance in a larger debate over the degree of freedom users should have to speak, versus the degree of protection that should be given to their audience. For years, arguments have been raised against one of the oldest values of the internet, absolute freedom of speech: citing continued scandals over misinformation, the abuse of users, and even the incitement of violence against particular demographics.
Reddit, long seen as a bastion of free speech, became the last of the large social platforms to scrap its commitment to absolute freedom of speech, following U-turns from the likes of Facebook and Twitter. As such, what we've seen is more than a policy shift at one website: it's the entrenchment of a new norm for how social platforms manage user content.
The end of a love affair with absolute freedom of speech
Much of the internet that emerged from Web 2.0 held onto the early values of the internet. These are often referred to as the 'hacker ethic': a strong belief in sharing, openness, and intellectual freedom that can only be facilitated in a space without limits on the exploration of ideas. A marketplace of free exchange driven by meritocracy and freedom of speech. Many American companies that supplied the early infrastructure of the web ingrained these values into the web and their services, one such example being Google's code of conduct: "Don't be evil".
"The Internet is a rare example of a true, modern, functional anarchy. There is no "Internet Inc." There are no official censors, no bosses, no board of directors, no stockholders. In principle, any node can speak as a peer to any other node, as long as it obeys the rules of the TCP/IP protocols."
Bruce Sterling, A Short History of the Internet (1993)
As a result, free-speech absolutism was ubiquitous across Silicon Valley, managing to fend off the challenges brought by moral outrages and multiple attempts to apply obscenity laws to the internet. Even when such (watered-down) law was put into place, websites, and later social media platforms, were granted immunity, enshrined in Section 230 of the Communications Decency Act, which stopped websites from being held responsible for the content of their users.
In effect, social media websites are not legally responsible for the content they host, and were thus free to continue their commitments to freedom of speech.
Until relatively recently, it was a value widely supported. Defenders of this position point to the idealistic dream of what the internet could be, how the ability to talk freely gives people democratic power, such as in the Arab Spring and pro-democracy movements, or how curtailing free speech would lead to a domino effect and the beginnings of a 1984-style state.
But with the normalisation of the internet and social media came a series of negative media reactions. In many ways, it was inevitable. After all, what could be expected when hateful people are given access to an audience and an unlimited degree of free speech?
Numerous reports emerged of how targeted online harassment of women and minorities pushed these groups out of online discussions. How those in public life, such as MPs, are targeted for abuse. How these platforms allowed content with negative impacts on mental health. Or how the absolute freedom of speech might even be enabling attacks on democracy, not facilitating it, due to misinformation campaigns by both domestic and state-based actors.
Social media websites themselves started to be implicated in these reports. Tumblr became embroiled in scandal when numerous blogs promoting self-harm to teens were found on the site in 2012. After a series of tweets by Milo Yiannopoulos attacking women, and likening rape culture to Harry Potter ("both fantasy"), surfaced, Twitter faced significant criticism for failing to protect women and people of colour. The platform was branded a 'toxic space', which ultimately led the company to review its hateful conduct policy. Meanwhile, Facebook disclosed that a Russian campaign of misinformation and electoral interference in the 2016 US presidential election had reached more than 126m users on the platform. These are only a small number of examples of the damning impact of a lack of content moderation and unlimited free speech.
No direct or single event ended the love affair with absolute freedom of speech. But it seems that around 2016 media pressure heated up for social media companies to do more regarding content regulation. By 2018, 56% of Americans said tech companies should take steps to restrict false information online, even if it limits freedom of information. A 2020 report by the Knight Foundation found that 54% of respondents said Section 230 had done more harm than good, and that social media companies should be partly responsible for dealing with issues such as harassment and misinformation, even if most still supported keeping the law.
The response by social media companies
In response to scandal and changing public support, social platforms that had previously made steadfast commitments to absolute freedom of speech have U-turned. In his 2018 book, Custodians of the Internet, Tarleton Gillespie suggests that this move is fuelled both by the desire of social platforms to keep their advertisers happy, and by the need to retain their audience, which has been known to vote with its feet when unhappy with a platform and provided with a viable alternative (anyone remember Myspace?).
It's easy to see when these platforms shifted gear. For Facebook, it was through a change in its mission statement in 2017: swapping out the previous line, "Facebook gives people the power to share and make the world more open and connected", for "Our new mission is to bring the world closer together", signifying a more active role in removing hate speech and fake news. The result has been far-right groups such as Britain First being banned from the platform, to generally positive public acclaim.
Likewise Twitter, previously an advocate of absolute free speech, changed its policy in a 2015 blog post announcing new rules on abusive behaviour and harassment. More recently it has announced steps to counter misinformation, and taken more symbolic actions against hate speech, such as flagging several of Donald Trump's tweets for glorifying violence.
But through all of this, Reddit, the 6th most popular website in the US, above Wikipedia, Instagram, and Twitter, had continued its commitment to free-speech absolutism, even when this approach embroiled the brand in scandal.
Following media attention, and the eventual removal of both the /r/jailbait community, which was dedicated to sharing sexually suggestive images of underage girls, and /r/TheFappening, a community which shared hacked private photos of celebrities, company representatives made continued commitments to free speech, suggesting that community removal only happens when communities violate the law.
"We uphold the ideal of free speech on reddit as much as possible not because we are legally bound to, but because we believe that you – the user – has the right to choose between right and wrong, good and evil, and that it is your responsibility to do so [...] We will try not to interfere – not because we don't care, but because we care that you make your choices between right and wrong."
Reddit Blog, 2014
As a result, Reddit gained a reputation as one of the last mainstream social media platforms on the internet to stand by unlimited freedom of speech. But June 2020's announcement put an end to that. The website bolstered its ruleset against hate and abuse, ending its commitment that anything was allowed on the site so long as it did not break the law, and conceding that, finally, it's the platform and not its users who should be entrusted with the final say over what is right and what is wrong.
The core principles of the '90s vision of the internet are dead. There, I said it. While some websites, such as Gab, Voat, or Parler, still advertise themselves as committed to freedom of speech in the manner of the old internet, they often do so under the guise of facilitating hate that has been ostracised from mainstream platforms. And these platforms have subsequently struggled to attract ad revenue, been dropped by their hosting providers, or seen user numbers stagnate.
Reddit was the last (mainstream) nail in the coffin for absolute free speech on the internet. But does that mean free speech is dead on the internet entirely? No. The UK Government's white paper on online harms argued that most citizens still place a value on freedom of speech, but at the same time suggested that what people want is a more considered approach to content moderation: one that respects people's freedom of speech, but also their freedom not to be harmed by others.