2021 has so far seen tech platforms (and social media in particular) gather pace as hubs for communication, news, large-scale campaigns and, sadly, terrible abuse. The latter was perhaps most starkly illustrated in July 2021, shortly after the England football team lost in the final of the European Championships. Abhorrent racial abuse was directed at the team (almost exclusively via social media platforms) and most particularly at the three young Black players who missed penalties in the final.
By August 2021 there had been a significant number of arrests relating to this abuse. Rightly, a huge amount of police resource was used to enable this – resource which realistically does not exist for every instance of abuse, or for abuse in lower-profile situations.
A widespread problem
The open and unashamed abuse of public figures is not unusual. There were racist images circulated when the Duke and Duchess of Sussex had a child, and the MPs Diane Abbott and David Lammy regularly receive online racial abuse – with the latter often replying on Twitter to try to highlight and educate.
The abuse is by no means confined to race. It is not uncommon to see regular trolling of female comedians, sportspeople, politicians and authors, with Caroline Criado Perez, feminist author of the best-selling book Invisible Women, being a frequent target of threats of rape and murder. Margaret Hodge MP says she receives thousands of abusive and anti-Semitic tweets every month. And for anyone working with victims of image-based sexual abuse, social media is a common facilitator, whether through private or public channels.
When posts have an origin and an identifiable author, it is easier to pursue justice and restitution. Individuals can be banned from platforms, prosecuted for criminal offences or even punished by their employers. In the wake of this year’s UEFA EURO 2020 tournament, university applicants had offers revoked and a senior estate agent was suspended from his job at Savills for posting racist abuse online. But what about anonymous accounts using false names? Anonymity might be perceived to enable ultimate freedom of speech, but it also provides an untraceable public (or indeed private) platform from which to abuse others.
The legal and political landscape
In 2018 the Law Commission (“a non-political independent body, set up by Parliament in 1965 to keep all the law of England and Wales under review, and to recommend reform where it is needed”) considered reform for criminal law to protect victims from online abuse. The scoping report noted:
“The anonymity (real or perceived) and the disinhibiting effect of the internet […] can also contribute to people saying and doing things online that they might not do in person in a communication offline. This might include explicit hate speech.”
However, the report and recommendations did not focus on anonymous online abuse, but rather on potential changes to legislation that would make it easier to prosecute identifiable perpetrators.
Online Harms White Paper
In 2019 the Government released the Online Harms White Paper (updated in 2020), which recognised the need to “help shape an internet that is open and vibrant but also protects its users from harm”. It put forward a number of proposals for consultation relevant to companies which host or share user-generated content, including:
- End tech platforms’ ‘self-regulation’ and create a new system of accountability and oversight, within a regulatory framework overseen by an independent regulator.
- Introduce reporting requirements.
- Introduce a statutory duty of care making companies responsible for the safety of their users, with the onus on companies to provide clear terms and conditions, easy-to-access complaints procedures and timely responses, and to actively combat the dissemination of illegal content.
The paper specifically considered anonymous abuse, noting the need to assess whether current laws are sufficient to tackle the behaviour, but it made no mention of user verification. It did note that reporting online crimes will become easier through the online police force (the Digital Public Contact programme) under Policing Vision 2025 – however, it gave no specifics as to how online abuse will be addressed.
No doubt increased police powers and training will help locate the most serious perpetrators – this is to be welcomed. However, given the scale of the problem and the time it takes to track an anonymous social media user (especially if they are using a VPN), it is hard to imagine that the police will have the resources to make a meaningful impression on the current situation.
Petition for the introduction of online verification
In July 2020 the model Katie Price appeared before the Commons Petitions Committee to highlight the issue of online trolling (directed at her disabled son). She launched an online petition to have the issue debated (due to close on 5 September 2021). Her call was simple:
“Make it a legal requirement when opening a new social media account, to provide a verified form of ID. Where the account belongs to a person under the age of 18 verify the account with the ID of a parent/guardian, to prevent anonymised harmful activity, providing traceability if an offence occurs.”
The Government responded to Price’s petition in May 2021 stating:
“[…] restricting all users’ right to anonymity, by introducing compulsory user verification for social media, could disproportionately impact users who rely on anonymity to protect their identity. These users include young people exploring their gender or sexual identity, whistleblowers, journalists’ sources and victims of abuse. Introducing a new legal requirement, whereby only verified users can access social media, would force these users to disclose their identity and increase a risk of harm to their personal safety.”
The Government also noted that an estimated 3.5 million people do not have valid photo ID and that the online safety regulation framework will have measures to tackle anonymous abuse:
“Services which host user-generated content or allow people to talk to others online will need to remove and limit the spread of illegal content, including criminal anonymous abuse. Major platforms will also need to set out clearly what legal anonymous content is acceptable on their platform and stick to it.”
They stated that under the new online safety regulation framework there would be improved ways to report harmful content, fines issued by Ofcom (up to 10% of annual global turnover) and a range of powers to identify those who attempt to escape law enforcement through anonymity.
“Anonymity underpins people’s fundamental right to express themselves and access information online in a liberal democracy. Introducing a new legal requirement for user verification on social media would unfairly restrict this right and force vulnerable users to disclose their identity. The Online Safety legislation will address harmful anonymised activities online and introduce robust measures to improve the safety of all users online.”
Online safety bill
In May 2021, seven days after their response to Katie Price’s petition, the UK Government published the Online Safety Bill, which is currently being scrutinised by Parliament. Key features relating to social media companies include:
- Imposing duties of care on providers/companies towards their users, including children and vulnerable individuals.
- Providers will have to take into account the risks their sites pose to both adult and child users.
- Providers will have to take proportionate steps to minimise the presence of priority illegal content (which includes words, images, speech or sounds – Section 41), the length of time it is present and its dissemination, and to “swiftly take down” illegal content (Section 9(3)).
- Duties to protect users’ rights to freedom of expression and privacy, content of democratic importance, and journalistic content (Sections 12-14 and Section 23).
- A new criminal offence carrying a maximum sentence of two years’ imprisonment for senior managers for failing to comply with information notices from the regulator (Sections 72 and 73).
- New statutory codes of practice which companies will need to follow and which will be regulated by Ofcom, which will have the power to impose fines of up to £18 million or 10% of global turnover (whichever is higher).
While some of these are very significant changes (and to be welcomed), there is no mention of requiring user verification.
Why the reluctance?
The reluctance, by both government and the social media companies, to introduce user verification is – on the face of it at least – born out of a desire to protect freedom of speech.
The ideology of many social media companies could be seen to combine the First Amendment and the libertarian values of Silicon Valley. Put simply, their view is that the more speech the better, and that the truth and positive speech will drown out lies and abuse. User verification would, they say, lead people to self-censor, and would, for example, be of real concern to opposition activists living under authoritarian regimes, who may be fearful for their lives if they thought that their identity might be exposed.
This ideology is of course very convenient, as the more users there are, creating more content (especially controversial, divisive content), the more money social media companies make. If they were required to bring in user verification, it would likely deter users (and there would also be costs involved with the verification process).
The UK Government does not have a direct financial interest in this issue. It is however (rightly) concerned about protecting civil liberties. It likely also does not want to be seen by the Biden administration as seeking to undermine the business model of the social media companies, which are a major component of the US economy and a significant source of its soft power.
Instead, in the Online Safety Bill, the UK Government has sought to put the onus onto the social media companies to take greater responsibility for the harmful content that is posted. Assuming the Bill is passed in its current form, this would be significant progress, and may ultimately push the social media companies towards user verification. But it is not clear how this will play out. In the meantime, there is an urgent need to hold those responsible for harmful speech to account, and it is hard to see how that will happen without some form of user verification.
Is there a middle ground?
There are potential solutions under which users who were not prepared to identify themselves could still use social media. For example, those who have been through user verification procedures could be allowed wider use of a platform – perhaps only verified accounts could make public posts. Alternatively, any user found to have breached the platform’s rules could be required to identify themselves in order to continue using the platform, with no verification required of users who had not broken the rules.
These ‘middle ground’ options may nudge people in the right direction and send a clear message that there is no right to anonymously abuse another, without the need to introduce an absolute requirement for user verification (although ultimately this may be required).
It could also be the case that the user would not need to be publicly identified, and could continue to use an alias, but that the social media company would store their real name. There is, arguably, no need for other users to know a user’s identity.
The right to free speech is not absolute and there is no ‘right’ to commit hate crimes. At face value, there are a range of public order (and other) offences to prosecute this behaviour that can apply equally in the ‘real’ and digital worlds.
Further, anonymity is not the same as freedom of speech, especially in the UK where individuals are not in danger if they speak out against the Government. Social media companies would need to navigate the difficulties of being global platforms and how accounts work in different countries – but this should be perfectly possible for such technologically advanced and wealthy organisations.
There are a number of options whereby user verification could be introduced whilst protecting freedom of speech as best as possible. Clearly there would be tensions, and no solution will be perfect. However, the current situation is disheartening at best and appalling at worst, with limited deterrents against abusive online behaviour. The platforms (failing which, the Government) need to take serious steps to ensure that those who are posting this harmful content are held to account. In our view, this will almost certainly require some form of user verification.
Read more about the Legal Advice Centre at Queen Mary University of London.