
How does the legal system need to evolve in the age of social media?

Over the last couple of months, two 'celebrity trials' dominated the media headlines in the UK and USA, with Johnny Depp vs. Amber Heard and the 'Wagatha Christie' case between Coleen Rooney and Rebekah Vardy becoming the daily water-cooler topic for many.

But with both trials playing out across social media – and the Wagatha Christie case in particular originating through the publication of an Instagram post – is defamation law still fit for purpose?


Sofia Ashraf (LLM International Human Rights Law, 2008)

Barrister at St Philips Chambers

I’m asked to comment on how the legal system needs to evolve in the social media age. That’s a rather broad question and, notwithstanding the growing numbers of internet and social media users in this country alone[1], I query the necessity of the question. I suspect that, given the recent surge in defamation claims involving the use of social media and the recent high-profile matters heard in this jurisdiction involving celebrities, the answer appears obvious.


I confess that it is not at all obvious to me. Do we always make changes to the law or a legal process in response to the prevailing zeitgeist? Do we always need to? After all, the widespread use – and misuse – of social media is hardly akin to the increase in, say, knife crime, identity theft and/or online fraud. And we haven’t exactly rushed to introduce new legislation or make changes to existing legislation and/or legal processes regarding these concerning aspects of modern life.

I will focus this commentary on defamation claims involving social media. When considering such matters, I am firmly of the view that the applicable legislation should not be paternalistic – and that this approach should equally be avoided when considering whether the applicable legal processes need to evolve.

We are not required to use social media; the choice to do so is personal and, presumably, made of one’s own free will. Likewise, the choice to express an opinion or make a statement of fact via a social media platform.

Accordingly, individuals who use social media should be alive to the fact that anything they distribute via this medium – including, potentially, via a ‘share’ or ‘retweet’ – can be deemed a permanent, published statement capable of reaching the defamation threshold as with any statement they have published in, say, an article or a book. Equally, individuals should be prepared to take responsibility for all that they distribute via a social media platform.

I am not at all inclined to believe that the legal system should evolve any more than it already has to meet this situation.

The Defamation Act 2013, which came into force on 1 January 2014, removed the need for a trial by jury and introduced a “serious harm” threshold test for Claimants to meet in order to successfully bring a claim. Further, for claims issued after 5 April 2019, a successful Defamation Claimant can no longer recover a Conditional Fee Agreement success fee from the unsuccessful Defendant(s)[2].

Whilst many considered these interventions a bar to potential claims, the High Court reported a 60% increase in Defamation claims in 2014 following the introduction of the 2013 Act.

Indeed, in 2017, “The Media and Communications List” was established by the High Court in order to deal with the volume of defamation claims arising from social media. These claims are now subject to a new set of Civil Procedure Rules, a streamlined procedure and preliminary hearings to determine the meanings of the words and phrases used by the Defendant, and are dealt with by a specialist media law judge. Despite this, the volume of such claims continues to rise: the High Court reported a 70% increase in 2018.

I am entirely unsurprised by this. It seems to me that many consider their right to freedom of expression to be absolute, and/or believe that they can freely share and/or publish whatever they wish on their personal social media accounts – without checking its veracity or clearly labelling it as personal opinion – and without further recourse.

This may well be an issue of education. Perhaps the social media platforms need to be clearer with their users about what is likely to constitute defamation and the associated risks; perhaps it is a matter for the Government to address via a national public-safety-style campaign and/or through PSE-style teaching in schools.

Many of us were likely taught the old adage: ‘if you have nothing nice to say, don’t say anything at all’. More pertinently, I was also taught: ‘just because you can, doesn’t mean you should’. I wonder whether this should be the adage for the modern age. It certainly seems appropriate.

[1] 92% of adults in the UK are regular internet users (Internet Users, UK: 2020, ONS) and 84.3% of the UK population are regular users of social media (Active social media audience in the United Kingdom in February 2022, Statista)

[2] s.44 Legal Aid, Sentencing and Punishment of Offenders Act 2012

Abiodun Olatokun (Law, 2014)

Human Rights lawyer and Head of Public and Youth Engagement at The Bingham Centre for the Rule of Law

Social media is a marketplace of ideas that can represent either the very best or worst of humanity, and so we are met by a dilemma; a choice between championing freedom of speech on the one hand and minimising the spread of potentially harmful content on the other.

I believe that these aspirations are both vital to a healthy, functioning democracy, but we trade them off against one another all the time.


This became apparent to me when running an event on social media recently. I spoke with fellow travellers who taught me a lot, and I’d like to talk about three lessons I took from that for the law in the social media age.  

Dr Holly Powell-Jones of Online Media Law taught me about the lack of awareness many people have of the legal risks that can arise from use of social media, such as contempt of court, breaching the anonymity of complainants in sexual offence cases, defamation and parliamentary privilege. Her research has found that young people often blame themselves for things that go wrong on social media.

Some of the consequences that can result from these risks include custodial sentences and thousands of pounds in liability. More public legal education about the pitfalls of social media use would make a huge difference.

Seyi Akiwowo taught me how personally dangerous the online sphere can be. Her organisation Glitch has been working to protect vulnerable people from online abuse. Its work has found that women are more at risk of online abuse and that there are very rarely any punishments for those who harm women online.

It seems clear that paltry prosecutions and rare investigations don’t create a deterrent effect for perpetrators of online abuse. Platforms should ban abusive accounts more regularly and take a proactive role in encouraging respectful behaviour.  

Cllr Areeq Chowdhury reminded me that much of what I see on my computer screen is not real. His work on misinformation and fake news is groundbreaking, and I encourage readers to watch his ‘deepfakes’, videos that represent the subject doing or saying something that they have never done. Did Corbyn endorse Johnson, or did Johnson endorse Corbyn? 

In an increasingly sophisticated era where it is possible to show viewers anything the poster wishes them to see, tighter rules must be drawn around the use of deepfakes in the political arena. I need to know when a message has truly come from a government source or whether it is a fabrication in order to judge how I will act as a citizen; it is a matter at the very heart of our democracy.  

Social media is very difficult to regulate, and I’ve argued above that we’re in a difficult position: it is hard to influence the actors involved, whether individually or collectively. One development that gives me some optimism is the Online Safety Bill currently before Parliament, which addresses some of these concerns.

If I could wave a magic wand I would create a legally enforceable international treaty on freedom of speech and protection from online harm that would seek to balance these two competing aims. That’s not going to happen though, and so I believe as a starting point that every social media company should have an objective standard, a ‘harm threshold’ past which content should be deleted or the poster asked to mitigate the harm they have caused. Harm should be construed broadly to include political, personal or financial harm.  

With this principle in place I believe the legal system could evolve flexibly to create precedents for understanding the role of the law in relation to our social media.

Adam Kayani (Law, 2017)

Barrister at Harcourt Chambers and Director of Leducate

The internet offers an unparalleled opportunity for the frictionless free-flow of information all around the world. We are in a golden age of content creation which has morphed far beyond the traditional newspaper print media and broadcaster-led television content that we have been used to. It is now possible for academics to publish their papers and articles directly to their own blogs and websites, Courts to live-stream their proceedings on their own platforms and for Court judgments to be published and analysed directly by anyone without necessarily needing to visit a law library.


It has never been easier for people to access good quality, up-to-date information about what the law is and what their rights are and social media has made this even easier. Commentators are able to offer analysis on breaking stories or case law developments almost instantly, with the ensuing ‘threads’ being a treasure-trove of interesting insights – often from unique and varied angles. 

But this is a double-edged sword. The fact that literally anyone can produce content makes it more important than ever to be able to distinguish good-quality, reliable information from inaccurate, poorly put-together material or, more dangerously, outright disinformation. The recent defamation case involving Johnny Depp and Amber Heard and the plethora of American legal dramas whet people’s appetites for legal content, but can lead to fundamental misunderstandings about the differences between the American and British legal systems – for example, people wrongly thinking that we have the ‘right to remain silent’ (which we don’t!).

There are even more sinister problems than simple confusion. Disinformation is rife on the internet, amplified by social media, and is very difficult to identify. One particular legal myth is the commonly believed idea that by living together for a certain number of years you can enter into a ‘common-law marriage’ (which is not correct). And we all saw the chaos and loss of life that occurred in America when people were led to believe that Mike Pence (as Vice President) had the constitutional power to maintain Donald Trump’s position as President, under a manipulated explanation of the complex rules surrounding the electoral college system. There is also a growing body of people in the UK who subscribe to a mythological legal system, peddled on online forums, called the ‘Freemen on the Land’, which claims to be based on ancient concepts of English common law and has led people to erroneously believe that they do not need to follow certain laws or be bound by certain types of commercial agreement. This usually ends with those people either being incarcerated for their misbehaviour or having their homes or assets seized for non-payment of debts – all while believing they were in the right.

All of this highlights the importance of, and dire need for, good-quality and widespread education on what people’s legal rights and responsibilities actually are in this increasingly complex world. There are more laws now than ever before, governing almost every aspect of how we live our lives. Without a solid, foundational understanding of what the law is, how it applies to us and how we can use it to protect our fundamental freedoms, we are all in danger of being led astray in the cacophony of information swirling all around us.

Dr Klara Polackova Van der Ploeg

Assistant Professor, School of Law

Protection of human rights is a core component of the rule of law and, arguably, one of the main purposes of law as such. Social media has often played an important role in challenging oppressive governments and has given voice to certain marginalised groups.

However, social media has also proved to pose significant risks to human rights, not least by private business corporations creating and supervising socially profound but monopolistically run spaces of human interaction in which they regulate user-generated content.


Any business corporation may negatively impact the human rights of individuals and communities wherever it operates and, at a minimum, has the responsibility to respect human rights, as established, for example, in the United Nations Guiding Principles on Business and Human Rights. However, given the nature of social media, including its reliance on algorithms and artificial intelligence, the operations of social media companies have raised novel human rights issues and concerns, in particular in relation to freedom of expression or free speech, the right to privacy, and non-discrimination.

Due to the capacity of social media companies to define the parameters of public discussion and key political processes, such as elections, their actions also impact the institutional foundations that underpin human rights protection. Some states have already taken action to curb the power of social media companies, and social media companies themselves have sought to articulate voluntary industry standards, such as the Santa Clara Principles on Transparency and Accountability in Content Moderation, and to provide human rights-centred mechanisms, such as Facebook’s Oversight Board.

However, the existing legal frameworks are inadequate for the requisite protection of the human rights of social media users, employees and others, as the ‘business and human rights’ field exposes. Human rights protection requires limiting unchecked power, including that of business corporations, and states must fulfil their duty to protect human rights in the context of social media as well.

The field of 'business and human rights' explores the issues of accountability of business corporations for human rights and environmental harms. There is a dedicated 'business and human rights unit' within the Human Rights Law Centre at the University of Nottingham, led by Professor Robert McCorquodale. A highlight of the 2021/22 academic year was a conference on 'Impact of Business and Human Rights on International Law' in March.

Richard Hyde

Professor of Law, Regulation and Governance and Deputy Head of the School of Law (Education and Student Experience)

The Government’s Online Safety Bill has huge potential to change how we interact online. If passed into law, the bill will alter the content we see online and the way that we interact with each other in virtual spaces.

The bill functions to regulate user-to-user services (where content generated by one user may be encountered by another user). Familiar social media sites like Facebook, Twitter, Instagram and TikTok will be user-to-user services, as will video-sharing sites like YouTube. OFCOM will therefore become the regulator for large swathes of the internet, with only sites that permit no interaction falling outside the scope of the bill.


The bill places duties of care on user-to-user services to undertake risk assessments and take steps to manage identified risks, to put in place systems and processes designed to prevent users encountering certain types of content and to put in place systems which empower users to manage their risk of encountering certain types of content. Stricter duties apply to user-to-user services that are likely to be accessed by children. 

The bill, in its current form, will require more age verification on the internet. Businesses will need to assure themselves that their services are not “likely to be accessed by children” if they do not wish the more stringent obligations to apply. The bill will also impact the sorts of content that can be viewed online, with legal but harmful content falling within the scope of the risk assessment duties in the bill. Whilst a reduction in online harms may be applauded, the identification of harmful content that is permitted by law has the potential to inhibit freedom of speech (notwithstanding the duty to take freedom of speech into account) and to target the wrong types of content, particularly in circumstances where much of the assessment of content will be automated.

Therefore, as the bill passes through Parliament, legislators must be vigilant to ensure that the value, vitality and vision that can be found on some user-to-user services is not stifled, and that the Online Safety Bill doesn’t throw the baby out with the bathwater.

Continue the discussion

Join our School of Law LinkedIn group, open to alumni working in the legal sector.