Within the Forde Report is a disclosure of a validation tool used by the Labour Party. Analysing what is known about it through the lens of content moderation, we can see how such a tool may well have increased levels of bias and discrimination.
I have been going through the much-awaited Forde Report. If you did not know, that is the independent report commissioned by the Labour Party, and conducted by Martin Forde QC, to examine the factionalism within the party’s efforts to combat antisemitism. By this point, it has been delayed by two years, dogged by a significant number of legal issues, and is surrounded by factional in-fighting. Considering the complex legacy of Jeremy Corbyn, this is a document which is undoubtedly going to be argued over throughout the upcoming days/weeks/months/years (delete as appropriate).
The results are bound to be contentious, and already political commentators from multiple factions are arguing that the report vindicates their side. I have even seen advocates for Proportional Representation voting systems saying the report only further gives momentum to their cause. I would love to see their workings on the matter…
It is safe to say I have been on the internet long enough not to wade into that. But something else in the report did catch my eye.
The ‘Validation Tool’
One of the findings that stood out was the disclosure of the Labour Party’s in-house tool to detect people applying for membership (or who were current members) who might be joining in bad faith ahead of upcoming leadership elections, based on their social media posts. Or ‘validation exercises’, as the party called them.
This tool matched applicants’ email addresses with Twitter and Facebook accounts, then undertook a keyword search for content containing a library of 1,959 phrases. These phrases supposedly indicated to the party that the person was ineligible for party membership (and thus unable to vote in upcoming leadership elections). A rough sketch of what this kind of matching looks like follows the list below. The phrases included those which:
- Indicated support for other political parties (“I voted Green”, “I voted Tory”);
- 35 abusive phrases which included words such as “Blairite”;
- A further 15 abusive phrases which included “Corbynite”, and another 15 which included “trot”;
- Other phrases containing the names or Twitter handles of specific MPs, almost all of whom were on the centre or right of the party.
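To make concrete how blunt this kind of matching is, here is a minimal sketch of a keyword-based “validation” check in Python. The phrase list is drawn from the categories the report describes, but the matching rule, data shapes, and flagging logic are entirely my own assumptions; the Forde Report does not describe the real implementation.

```python
# A minimal sketch of a keyword-based "validation" check.
# The matching rule and data shapes are assumptions for
# illustration; the Forde Report does not describe the real code.

FLAGGED_PHRASES = [
    "i voted green",
    "i voted tory",
    "blairite",   # one of the ~35 "abusive" terms reported
    "corbynite",  # one of the ~15 reported
    "trot",       # one of the ~15 reported
]

def flag_posts(posts: list[str]) -> list[tuple[str, str]]:
    """Return (phrase, post) pairs where a flagged phrase appears.

    Note the obvious weaknesses: simple substring matching, no
    context, and 'trot' will happily match 'foxtrot'.
    """
    hits = []
    for post in posts:
        text = post.lower()
        for phrase in FLAGGED_PHRASES:
            if phrase in text:
                hits.append((phrase, post))
    return hits

posts = [
    "Lovely day for a foxtrot",           # false positive on "trot"
    "I voted Tory in 2015, never again",  # flagged, context ignored
]
print(flag_posts(posts))
```

Even this toy version shows the core problem: a substring match has no idea whether “I voted Tory” is a confession, a regret, or a quote of someone else.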
There is a good reason for a system like this. It would go against the interests of the wider Labour membership to have people joining who have openly acted against the party or are visible members of opposition parties. However, the application of the system is troubling, as we will see.
In total, this system acted on 4,000 potential or existing members. It should be said the system was not relied upon alone: membership rejections had to go through Labour’s ruling committee, the NEC, before action was taken. But as the report highlighted, in the form described the system ultimately led to a bias against members intending to vote for Jeremy Corbyn. This is certainly a troubling finding, whether intentional or not.
But what fascinates me most is that this is a system which mixes a ‘people search engine’ (such as Pipl or Webmii) with a keyword filter. That sounds awfully like an extremely basic content moderation system, doesn’t it?
Take tools like Reddit’s AutoModerator, a system used to help moderate subreddit communities. It is not unheard of for moderators to apply keyword approaches to identify abusive words or phrases, used as a heuristic for other troublesome behaviour. Similar tools are also available to Facebook Group moderators. This approach might be fine for a community of a few thousand, but when applied to a political party of around half a million people it has far too many embedded structural issues which can result in further discrimination.
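The scale problem is easy to put rough numbers on. A back-of-envelope sketch, where the 1% false-positive rate is an invented figure purely for illustration (we have no published accuracy numbers for Labour’s tool):

```python
# Back-of-envelope arithmetic on how small error rates scale.
# The 1% false-positive rate is an invented figure; we have no
# published accuracy numbers for Labour's tool.

members = 500_000            # roughly the party's size at the time
false_positive_rate = 0.01   # assume the tool wrongly flags 1%

wrongly_flagged = members * false_positive_rate
print(f"{wrongly_flagged:,.0f} members wrongly flagged")   # 5,000

# In a community of 5,000 the same rate yields ~50 cases a human
# moderator can plausibly review; at party scale it swamps any
# committee-based review process.
print(f"{5_000 * false_positive_rate:,.0f} in a small community")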
What’s wrong with such a system?
Analysing the tool as disclosed through the lens of content moderation highlights a multitude of areas that allow for bias. Here is a quick round-up of a few of these, and the negative consequences when they are applied to any decision-making process on party membership, with the potential to create structural inaccuracies or discrimination at multiple stages:
Anonymisation
The most easily identifiable pitfall is the identification approach as described. It is not uncommon for people to have multiple online personas, many of them anonymous. Simply using a different email for social media and your day-to-day life admin (which is an advisable approach) might throw off the system. Likewise, most people – even in 2017 – knew that if you’re going to be an asshole on the internet, it’s prudent to distance your online activities from your real name. If this was Labour’s main approach to weeding out bad-faith actors, they were either getting lucky or catching idiots – not the majority of people setting out to cause the party harm.
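For illustration, the matching step probably amounted to something like an exact email lookup against a people-search index. This is a sketch under my own assumptions (the data, names, and matching rule are all invented), but it shows the failure mode:

```python
# A sketch of email-based account matching and its failure mode.
# All data and the matching rule are invented assumptions.

social_accounts = {
    "jo.bloggs@gmail.com": "@jobloggs_politics",
}

def match_member(application_email: str) -> str | None:
    """Exact-match an applicant's email to a known social account."""
    return social_accounts.get(application_email.strip().lower())

# Same person, different email: the lookup silently fails. The
# people most careful about separating their identities are
# precisely the ones who are never checked.
print(match_member("jo.bloggs@gmail.com"))  # "@jobloggs_politics"
print(match_member("jb.admin@proton.me"))   # None
```

The asymmetry matters: the system fails silently on exactly the population it was built to catch.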
Data Sources & Human Discretion
There is much more the report does not say about the system than it does; many of the details are thin on the ground. But what is stated raises issues of data. For example, when collecting data through Twitter’s standard API, the tool would only be able to search back around seven days (assuming Labour did not pay for expensive firehose access). Likewise, content from Facebook is quite difficult to collect and analyse. This raises questions as to whether the party collected a sufficient amount of data at all, or whether it had been collecting data outside of platform T&Cs.
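For a sense of what collection through the standard search endpoint looked like, here is a sketch using the tweepy library. The credentials and handle are placeholders, and endpoints have changed over the years; this mirrors the v1.1 standard search, which only indexed roughly the last seven days of tweets:

```python
# A sketch of collection via Twitter's standard v1.1 search API,
# using tweepy. Credentials and the handle are placeholders. The
# standard search endpoint only indexed roughly the last 7 days,
# so anything older was simply invisible without paid access.

import tweepy

auth = tweepy.OAuth1UserHandler(
    "API_KEY", "API_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET"
)
api = tweepy.API(auth)

# Returns recent tweets only; a member's posts from, say, the 2015
# leadership election would never appear in these results.
tweets = api.search_tweets(q="from:some_member_handle", count=100)
for tweet in tweets:
    print(tweet.created_at, tweet.text)
```

If the party wanted a member’s full posting history, this endpoint simply could not provide it, which is why the question of off-T&C scraping matters.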
If the system is not as automated as suggested, and instead involved staff manually going through a member’s social feed, then this is yet another point of failure for a “fair system”. Office staff might overlook some users (or assess others more harshly) based on demographic information visible in profile pictures, where available. Political Twibbons or Facebook profile frames, or apparent gender, age, or sex, could all feed discriminatory practices at this stage. Discretion might have been applied unfairly based on office staffers’ use of heuristics, without any system to mitigate this.
At some point most Twitter users will have come across a new account and thought, “ahh, that person follows someone I know, maybe they’re not so bad after all?” That is harmless if you’re deciding whether to retweet someone, but if there is the potential it happened in an office deciding membership verification, that’s a problem.
Abuse is subjective
As we found in our published paper on political abuse (Ward & McLoughlin, 2020), any classification of abuse is open to normative assessments, and this would certainly be true within a small London-based office making these decisions. Cultural and socio-demographic factors make some abusive words hurt more than others. For instance, we know that when abuse targeting a person’s veracity or intelligence is directed at women, it comes loaded with hundreds of years of sexism. A man might be able simply to brush off such an accusation with little damage (see our outgoing Prime Minister), while a woman’s personal standing might be damaged by the same accusation. Similarly, politically charged terms will mean different things to those of differing political persuasions.
This means there is huge potential for these 1,959 phrases to target members from one political side and ignore others. As a result, I think it is hugely important that the full list is published, so the bias at this stage can be analysed.
Indeed, the report highlighted that there were significant issues with how these keywords and phrases were deployed, and the potential for this to ingrain further factionalism.
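One way to surface this subjectivity, were the list ever published, would be to have coders from different backgrounds label the same posts and measure how far they agree. Here is a hand-rolled sketch of Cohen’s kappa, a standard inter-annotator agreement statistic; the labels are invented for illustration:

```python
# A sketch of measuring annotator agreement with Cohen's kappa.
# The labels are invented; the point is that two honest coders
# can disagree substantially on what counts as "abusive".

def cohens_kappa(a: list[int], b: list[int]) -> float:
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Agreement expected by chance, summed per label category.
    expected = sum(
        (a.count(k) / n) * (b.count(k) / n) for k in set(a) | set(b)
    )
    return (observed - expected) / (1 - expected)

coder_london = [1, 1, 0, 1, 0, 1, 0, 0]  # 1 = "abusive"
coder_other  = [1, 0, 0, 1, 1, 0, 0, 0]
print(round(cohens_kappa(coder_london, coder_other), 2))  # 0.25
```

A kappa this low would tell you the category “abusive” is doing a lot of unacknowledged normative work, which is precisely the worry with a list drawn up in one office.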
Keyword approaches are easy for users to avoid
Within content moderation, it is well known that language is one of the biggest challenges in detecting troublesome content, and nowhere is this more true than in online communication. Words are often misspelt, and purposefully obfuscated with substitutions to avoid detection. People talking critically about Elon Musk on Twitter will type his name as El0n or M*sk (and other derivatives) to avoid being targeted by his overactive fanbase. Slurs, too, are changed in ways which are perfectly understandable to us while reading but missed by software tools, such as replacing A with @, or the use of emoji.
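To see how trivially substitutions defeat a literal keyword match, here is a small sketch. The keyword and the normalisation map are my own illustrative guesses at what a mitigation looks like, and the point is that the map is always incomplete:

```python
# How trivial substitutions slip past a literal keyword match, and
# why normalisation is an endless game of catch-up. The keyword
# and substitution map are illustrative assumptions.

KEYWORD = "blairite"

SUBSTITUTIONS = str.maketrans({"0": "o", "@": "a", "1": "i", "3": "e"})

def naive_match(text: str) -> bool:
    return KEYWORD in text.lower()

def normalised_match(text: str) -> bool:
    return KEYWORD in text.lower().translate(SUBSTITUTIONS)

print(naive_match("Typical Bl@irite take"))       # False: evaded
print(normalised_match("Typical Bl@irite take"))  # True: caught

# But hyphenation, emoji, or a freshly coined slur still walk
# straight past the filter.
print(normalised_match("Typical B-l-a-i-r-i-t-e take"))  # False
```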
Likewise, language changes, and new slurs arrive which office staff in London might not have the cultural awareness to add to the filter list. Similarly, dog-whistles or ideograms are used by those seeking to speak to particular audiences without detection. An example of this is the number 110, which has been used as a call to “expel Jews” from a location. Is this covered by the 1,959 phrases?
The Human Impact
One issue raised time and time again when someone’s job is to go through social media content is the impact on their mental health. Forcing someone to actively seek out abuse on a day-to-day basis often results in psychological trauma, an issue explored in Sarah T. Roberts’ Behind the Screen: Content Moderation in the Shadows of Social Media (2019). So one of the points I will be raising with the party is whether the office staff who had to look at this content had any plan in place to mitigate this at all.
Ethics
Even if the tool served some purpose (flawed as the deployment was), another question has to be raised: was this ethical?
Looking through the current terms and conditions of the Labour Party, there is no evidence that members are told their contact details will be used in conjunction with other data sources (such as social media sites) for this purpose. But this wasn’t exactly the case in 2018, suggesting, potentially, that such checks did once happen but are no longer undertaken.
In 2018 the T&Cs stated that:
“By supplying personal data to register as an affiliated supporter you agree that the Labour Party may use the data you have provided to check your eligibility to participate in, and administer the ballot process for relevant internal elections. This includes any election for Leader or Deputy Leader of the Labour Party, and any other internal election in which affiliated supporters are eligible to participate in as determined by the Labour Party’s National Executive Committee.”
and
“This may include using information you provide as part of your application; information that the Labour Party otherwise has a lawful basis to process; and information from publicly available sources [emphasis mine] to confirm that you support the Party’s aims and values and meet the eligibility criteria set out in these terms.”
Earlier privacy policy statements, from April 2018, say that a user’s email and name are collected upon registration to “provide you with information to match your interests” and to alert users to campaigns. Again, it is not stated that this information is used to match a user’s social media accounts.
The later privacy policy does state that the party will collect information on users’ social media accounts, but only when they have interacted with the party’s channels, and again it does not outline that users’ emails or other information will be used to find their social media channels.
I genuinely hope someone takes a closer look at this, because normatively you should be telling people directly, and in plain English, if information given at signup is going to be used to collect further information about them.
So what?
This all points to my wider argument: viewed through a content moderation lens, the keyword approach taken here would have been imperfect. It is a tool potentially created with a phrase list rooted in the milieu of a small London office. Within it sits a system containing multiple stages where political and demographic biases could have been ingrained into decision-making. And finally, ethical questions have to be raised not only about the staff who had to scan this content, but also about the failure to give applying members proper notice of how their supplied information would be used.
To those working with scarce resources, the tool might have made sense: a cheap solution to an issue. But when systems like these are applied at scale, small biases ingrained within them quickly scale up. Hopefully this serves as a useful reminder of why content moderation and platform governance is such a (rightfully) huge and unwieldy beast.