Lies are still shared further than the truth. So how do we fix it? A proposal to Twitter

People are easily confused, and here I speak as one who has spent centuries in courtrooms. Apparently, they say, a lie can run around the world before the truth has got its boots on.

Terry Pratchett, The Truth (2000).


The problem

Metaphorical maxims about the speedy dissemination of lies and the much slower pace of corrective truths have a long history. One of the first can be attributed to Jonathan Swift in 1710. Follow-up quotes have been attributed to the likes of Mark Twain, Winston Churchill, and Terry Pratchett. More than 300 years on from 1710, and we are still talking about the speed of truth and lies.

To a certain degree, it’s an issue that is more pressing than ever thanks to social media. Anyone can tweet truth or lies to the entire audience of a social network, with little or none of the fact-checking or editorial control you’d previously see in the press. You only have to look at the latest midterm elections in the United States. Areas of the virtual city that is Twitter were awash with fake news. One instance I can highlight is this tweet by @fintruthQ.

Example of misinformation: this video claims to be evidence of vote rigging

This video has made the rounds across a number of different social media networks and users. The first instance of it I could find was posted by an Instagram account belonging to a supposed ‘Red Piller’. It purports to be evidence of voter fraud or election rigging, showing that the user had voted for a Republican while the confirmation printout showed a vote for a Democratic candidate.

It has 6,344 retweets and has been liked 5,880 times.

You can probably see where I am going with this. The video is disinformation. The printer attached to the voting machine, rather than the voting machine itself, was broken, and the printout was of a vote cast earlier in the day. It was also later found that, in either case, a printout would not be produced until the voter had actually confirmed their vote – which you never see the person recording do.

A response by a local political reporter, Andrew Tobias (@AndrewJTobias), who followed up the case and found the truth, received only 12 retweets. He posted to his own feed and in the replies to the original video highlighted above. The original video reporting the supposed election fraud was still being retweeted, despite evidence to the contrary.

This is just one case, but it’s not an isolated one. Across the internet, viral tweets, images, and videos containing fake news are being shared while the truth moves too slowly to matter. The current situation can be summed up with a simple metaphor: there’s a shiny, attractive race car driving around dumping huge piles of shit everywhere, and the fact checkers have a busted-up Ford Orion and a very small garden trowel to pick it up.

What’s the cause of this phenomenon?

There are several political communication theories that can explain what’s happening. It’s an injustice to try to cover them all in a blog post, but let me try anyway.

Emotion: People online react to, and share, information in a pool of strangers. They don’t really know who they are talking to in the way they would in a pub, and there are no real pre-existing social relations. So, unless you know the person, you have a low probability of interacting with them. But there are ways to increase this probability, the first of which is emotion. According to Berger & Milkman (2010), one way to increase interaction with your content is to fill it with emotional arousal. People are more likely to react on social media when the topic is particularly emotive. News about crises, violence, deceit, and corruption has long made the most popular headlines – the same is true for social media.

If you see something that strikes you as bad and scary, that emotion lowers your usual reluctance to share something from a stranger that you otherwise wouldn’t. Social media has created an incentive to post more negatively charged content to get more likes.

Confirmation Bias: People have an innate dislike of being challenged. When we see information, part of our brain decides whether it’s good for us or bad for us. Generally, if the new information agrees with us, we are more likely to take notice of it (and go off to share and retweet). But if the information is contrary to what we know, our brain will try to discard it in order to maintain cognitive consistency. When information agrees with your beliefs, it takes no time to confirm it; when information disagrees, it takes many, many contrary facts before we even consider changing our minds. If you see a negative post about something you already dislike, such as the government, you are more likely to share it. A positive post about the government, shown to the same person at the same time, is more likely to be discarded.

This makes us vulnerable to false claims that confirm what is familiar but might be wrong.

Gatekeeping of the news: Fake news and disinformation aren’t new; an article by Allcott & Gentzkow (2017) highlights that conspiracy theories have long been prevalent in American society. One such example is from 1835, when the New York Sun published a series of articles on the supposed discovery of life on the Moon. However, they argue that one reason fake news has propagated so much in the last few years is the nature of the internet and social media itself. Previously, news had a high barrier to entry: to get people to buy your paper, you had to be reputable. As a result, publishers were discouraged from producing falsehoods.

Now anyone can create a website, put some advertising space on it, produce some lies with a catchy title, and still make a profit. Meanwhile, on social media anyone can say anything, either to draw clicks to a website or for a little bit of that viral fame. The economic incentives that previously encouraged fact-based reporting are not present on social media.

Resources: This is probably the most important reason why fake news and disinformation can propagate. The economic incentives for creating fake news are high, yet the economic incentives for countering it are low.

Teenagers from the town of Veles, Macedonia, managed to create a successful network of over 100 fake news websites that earned them tens of thousands of pounds (Subramanian, 2017). Similarly, a 24-year-old man from Eastern Europe currently manages Endingthefed.com, a website responsible for four of the ten most popular fake news stories on Facebook (Townsend, 2016). Another example is a US company called Disinfomedia that owns many fake news sites, including NationalReport.net, USAToday.com.co, and WashingtonPost.com.co; its owner claims to employ between 20 and 25 writers (Sydell, 2016). All of these websites are low-cost, high-profit enterprises.

Compare that to a fact-checking organisation. For example, FullFact.org, an independent charity that fact-checks statements made in the media and online, is reported to cost £865,000 a year to run (FullFact, 2018). Much of its funding comes from corporate or charity sponsors. Donations from the public are limited: a crowdfunding campaign during the EU referendum raised only £42,260, nowhere near enough to cover operating expenses during an emotionally charged referendum rife with falsehoods. Or compare it to the efforts of journalists who try to counter falsehoods; they don’t get anywhere near the number of social media shares or views.

The issue is simple: fake news can take seconds to think up, while countering it takes hours of gathering evidence, researching, or going through financial data. By the time you’ve fact-checked the first claim, another six have been produced. Even worse, for all that effort, you are still seen by only a tiny proportion of the public compared to the fake news story. This goes some way to explaining why the majority of fake news goes unexposed (Newman et al., 2018).

What is needed in a solution?

The first step in solving the spread of fake news on social media is to address what a solution needs to incorporate. Firstly, it goes without saying that no social media platform, and no public will, is behind any solution that involves silencing or deleting social media content. For instance, Twitter refused to ban far-right fascistic paranoiac Alex Jones even after he had been removed from iTunes and Spotify. Free speech is ingrained in the public’s psyche, and when you look at the actions of social media platforms (Twitter in particular), they are unwilling to delete disinformation unless it breaks the law. Jones was finally suspended from the service, for one week, after tweeting a call for his followers to get their “battle rifles” ready. He was only permanently banned after he continued to use the platform for harassment.

So what elements are needed in a solution?

- It needs to empower fact checkers and fact-checking organisations. Fact checkers do not get anywhere near the exposure to the audience that the original fake news account does.

- It needs to be bold, and it needs to be unavoidable. Any action should be prominent enough that people will notice it, above their confirmation bias.

- It needs to be verified. Social media sites need to work with credible third parties to deliver the majority of the fact-checking – after all, you can’t expect Twitter to fact-check everything. Letting anyone create fact-checking reports leaves the door open for the system to be politicised, so Twitter should only accept reports from trusted people or organisations. Even then, Twitter’s own team must verify that each fake news report is accurate, to ensure that no action is taken in bad faith.

- It needs to be transparent. Reports need to clearly display why the tweet is fake or disinformation and give ample evidence of why this is the case. In addition, it needs to be informative, educating users about why this piece of fake news is wrong and why disinformation is bad for society.

- It needs to be retrospective. Many people share content and forget about it. People should be made aware if they have interacted with a piece of fake news in the past.

What about a solution through education?

There have been many attempts by a range of actors to stop fake news by making the public more aware. Take, for example, the excellent work by DROG, a multidisciplinary team of academics, journalists, and media experts who have come together to build educational programmes that help people develop resistance to fake news (https://www.aboutbadnews.com/). They have created a game called Bad News (https://getbadnews.com/#intro) where you can learn how fake news operates. A number of other projects are attempting the same thing: the BBC has its own scheme to help schoolchildren spot fact from fiction, and The News Literacy Project does similar work.

All this education about disinformation is fantastic, and its importance cannot be overstated. But there is no silver bullet for the problem of fake news, and no solution can work in a silo.

What’s been tried to stop disinformation before:

The Washington Post in 2016 launched a browser plug-in that provided context to any tweet by Donald Trump that was incorrect, misleading, or outright false. It was a great initiative, and it heavily influenced the solution presented in this post. However, it had some issues.

First off, the plug-in was optional, and only really used by people who already followed The Washington Post or disliked Trump; you’re most likely to use it if you’re already sceptical of Trump to begin with. Secondly, most Twitter users access the service on mobiles or tablets, where browser plug-ins can’t be used. Thirdly, any solution should be native to the platform for maximum effect.

Facebook’s in-house solution: Facebook also created its own solution. Working with third parties, Facebook displayed a flag on dubious news links to better inform its users of who was making the news, and why the news item itself was disputed.

It also used this data to push items of dubious content away from your social feed entirely, going some way to restoring the economic incentives that once led newspapers to focus on the factual accuracy of the news.

However, Facebook stopped the programme after a while, despite finding it effective at stopping fake news. The flag did not provide context as to why an item was disputed, which led to a backlash from users and entrenched their views. Facebook went on to replace the flag with “related articles” to give more context. A little part of me wonders whether Facebook was growing concerned about losing users over stopping fake news and decided to tone down the programme to save face. Either way, the original solution doesn’t go far enough: it only reports based on the link being shared, rather than the content of viral messages.

Twitter Trial: Twitter once tested its own solution – letting users flag fake news. However, it never moved beyond a prototype, with (now ex-) spokesperson Emily Horne saying that “There are no current plans to launch any type of product along these lines”.

I suspect the reasoning for this is simple: if you democratise reporting, you also politicise it. Letting users file fake-news reports against things they might simply disagree with would flood Twitter with reports… probably too many to manually verify.

Behind the scenes at Twitter. One area that is more difficult to comment on is the work going on behind the scenes at Twitter. Reports from the latest US elections suggest that whatever they are doing behind closed doors, it’s working, with foreign and domestic bots both failing to gain any significant traction. This follows significant investment in systems that automatically mute bots or bot-like activity, which is to be applauded. But one piece of disinformation, an edited video of a news anchor laughing at the burning of a US flag, was still shared 4,000 times before it was caught. Twitter needs to somehow reach those potentially 4,000 people and alert them to the disinformation. Another report, from the Knight Foundation, found that 80% of disinformation accounts from the 2016 US election are still active today. So whatever Twitter is doing, it still has some work to do.

The solution mechanism presented

Bring in some friends: The solution starts with a bit of realism. Twitter doesn’t have the resources to police its platform alone, and while technology – machine-learning systems to detect fake news – has been touted as a solution, we have heard that before, and it still hasn’t really come to the rescue.

At the same time, there are teams, journalists, charities, and other trusted individuals already doing what needs to be done – fact-checking – and many of them do it for the exposure. The first step of my solution involves partnering with a network of people who are willing to fact-check posts made on Twitter, particularly those by accounts with large followings or tweets that have gone viral.

Flag false tweets: Rather than just flagging the link or website, the tweets that contain fake information should themselves be flagged. The flag needs to be bold and unavoidable: users must first understand why the tweet has been flagged before they can interact with it in any way. This will prime members of the audience to be readier to accept that what they are about to read, while it might agree with them, is false.
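To make the idea of an interaction gate concrete, here is a minimal sketch in Python. To be clear, everything in it (the Tweet type, the acknowledged set, the gating rule) is my own illustration of the idea, not a real Twitter internal.

```python
# A minimal sketch (not a real Twitter internal): interactions with a
# flagged tweet are blocked until the user has read and dismissed the
# flag overlay explaining the fact-check.
from dataclasses import dataclass


@dataclass
class Tweet:
    tweet_id: str
    flagged: bool = False  # set once a verified fact-check report lands


def can_interact(user_id: str, tweet: Tweet,
                 acknowledged: set[tuple[str, str]]) -> bool:
    """Return True if the user may like, retweet, or reply.

    `acknowledged` holds (user_id, tweet_id) pairs for flag overlays
    the user has already read and dismissed.
    """
    if tweet.flagged and (user_id, tweet.tweet_id) not in acknowledged:
        return False  # show the flag overlay first
    return True
```

The point isn’t the code itself, but the ordering it enforces: the explanation has to be seen before the retweet button works.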

Allow users to educate themselves: Each flagged tweet should come with a small report that indicates why the tweet’s content is false and provides a hint of the reality. The report also allows Twitter to be transparent about who reported and provided the information to Twitter, and how Twitter verified the report itself.
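As a rough illustration, here is what the fields of such a report might look like. Again, FactCheckReport and all of its field names are hypothetical; they simply mirror the verification and transparency requirements listed earlier in the post.

```python
# A sketch of the data a fact-check report might carry. Every name here
# (FactCheckReport, Verdict, the field names) is hypothetical, chosen to
# mirror the transparency requirements listed earlier in the post.
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from typing import Optional


class Verdict(Enum):
    FALSE = "false"
    MISLEADING = "misleading"
    MISSING_CONTEXT = "missing_context"


@dataclass
class FactCheckReport:
    tweet_id: str                     # the flagged tweet
    checker: str                      # trusted partner, e.g. "Full Fact"
    verdict: Verdict
    summary: str                      # plain-language "hint of the reality"
    evidence_urls: list[str] = field(default_factory=list)
    platform_verified: bool = False   # has Twitter's own team checked it?
    verified_at: Optional[datetime] = None
```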

Retroactive notification: Many people fire and forget when it comes to retweets, and much of the fact-checking may come after a tweet’s ‘viral hump’. Users should be notified if a tweet they have previously interacted with (commented on, liked, or shared) was fake news. The notification should also make it easy for users to review the interaction they had with the offending tweet and allow them to easily un-like or un-retweet the post.
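Here is a sketch of how that retroactive pass might work, under the assumption that the platform can enumerate past interactions with a tweet. get_interactions and notify stand in for hypothetical platform internals, and the report object follows the FactCheckReport sketch above.

```python
# A sketch of the retroactive pass, assuming the platform can enumerate
# past interactions with a tweet. get_interactions and notify stand in
# for hypothetical platform internals and are passed in as callables.

def notify_past_interactions(report, get_interactions, notify):
    """Alert everyone who liked, retweeted, or replied to a flagged tweet.

    `get_interactions(tweet_id)` yields objects with .user_id, .kind
    ("like" | "retweet" | "reply"), and .interaction_id.
    `notify(...)` delivers an in-app notification.
    """
    for action in get_interactions(report.tweet_id):
        notify(
            user_id=action.user_id,
            message=(
                f"A tweet you interacted with ({action.kind}) was found "
                f"to be {report.verdict.value} by {report.checker}: "
                f"{report.summary}"
            ),
            # Deep-link back to the interaction so un-liking or
            # un-retweeting is a single tap (hypothetical URL scheme).
            review_link=f"https://twitter.com/i/review/{action.interaction_id}",
        )
```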

Conclusion

Obviously, any solution should be one of a number of measures to help solve the issue of disinformation. In addition, there might be some very experienced staff who have already thought of this but not publicly explained why it hasn’t been put into place. But it’s becoming clear that Twitter does need to do something. Disinformation has had a negative impact on democracy and on some of the institutions that support it. Indeed, it is the press who have found themselves in the sights of many who create this fake news.

The approach here, while simple in some elements, hopefully also addresses most of the aspects that any solution requires, as highlighted earlier in the post. I expect that the major blockage to implementing a solution like this will not be technological, but rather political. Social media platforms are playing a balancing act between truth and profit, and in the end it’s democracy that ends up with a bloody nose.

If there is one thing I want this post to demonstrate, it is that solutions are available, and they are being presented. It’s now up to us, as social media users and as citizens, to start acting for them to be put into place.

Notes: This is a solution I previously wrote up in a much more condensed summary on Twitter. I’m thankful to the few people who spoke to me privately about the proposed solution.


Sources (cited and uncited in the body text):

Berger, J., & Milkman, K. L. (2010). Social Transmission, Emotion, and the Virality of Online Content. http://opim.wharton.upenn.edu/~kmilkman/Virality.pdf

Bump, P. (2016, Dec 19). Now you can fact-check Trump’s Tweets. The Washington Post. https://www.washingtonpost.com/news/the-fix/wp/2016/12/16/now-you-can-fact-check-trumps-tweets-in-the-tweets-themselves/?utm_term=.eb5b5b9a5524

DROG (n.d.). DROG: A good way to fight bad news. https://www.aboutbadnews.com

Dwoskin, E. (2017, June 29). Twitter is looking for ways to let users flag fake news, offensive content. The Washington Post. https://www.washingtonpost.com/news/the-switch/wp/2017/06/29/twitter-is-looking-for-ways-to-let-users-flag-fake-news/?utm_term=.c58fc8c114b1

Facebook. (2017, Dec 20). Replacing Disputed Flags with Related Articles. https://newsroom.fb.com/news/2017/12/news-feed-fyi-updates-in-our-fight-against-misinformation/

Facebook. (2018, June 14). Hard Questions: How is Facebook’s Fact-Checking Program working? https://newsroom.fb.com/news/2018/06/hard-questions-fact-checking/

FullFact. (2018). Find out about Funding. https://fullfact.org/about/funding/

Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2). https://www.aeaweb.org/articles?id=10.1257/jep.31.2.211

Knight Foundation (2018, Oct 4). Disinformation, ‘Fake News’ and Influence Campaigns on Twitter. https://www.knightfoundation.org/reports/disinformation-fake-news-and-influence-campaigns-on-twitter

McNair, B. (2017). Fake News: Falsehoods, Fabrication and Fantasy in Journalism. https://www.amazon.co.uk/Fake-News-Fabrication-Journalism-Disruptions/dp/1138306797/

Newman, N., with Fletcher, R., Kalogeropoulos, A., Levy, D. A. L., & Nielsen, R. K. (2018). Reuters Institute Digital News Report 2018. http://media.digitalnewsreport.org/wp-content/uploads/2018/06/digital-news-report-2018.pdf?x89475

Subramanian, S. (2017, Feb 15). Inside the Macedonian Fake-News Complex. Wired. https://www.wired.com/2017/02/veles-macedonia-fake-news/

Sydell, L. (2016, Nov 23). We Tracked Down A Fake-News Creator In The Suburbs. Here’s What We Learned. NPR. https://www.npr.org/sections/alltechconsidered/2016/11/23/503146770/npr-finds-the-head-of-a-covert-fake-news-operation-in-the-suburbs

Townsend, T. (2016). The Bizarre Truth Behind the Biggest Pro-Trump Facebook Hoaxes. Inc. https://www.inc.com/tess-townsend/ending-fed-trump-facebook.html