The Great Troll War: Social Media & Harmful Content

INTRODUCTION

Social media is one of the greatest communications revolutions since radio and television. It is often praised for the benefits it has brought to society and to individual users.[1] Among its general benefits, it has widened our understanding of friendship and how we connect with one another,[2] to say nothing of the global platform it creates for all of its users.[3] However, this author will argue that while social media has brought many social advantages in terms of communication and connectivity, it has also brought many negatives that often muddy the waters.[4] This author will argue that the legislation afforded to victims is sufficient and that the current mechanisms adopted by the courts act as a system of checks and balances. Additionally, this author will highlight that social media companies have empowered many users to take matters into their own hands, rather than relying on the courts to remedy every instance of harmful communication. Therefore, the courts should not lower the threshold in a way that would allow an increase in the cases brought forward. This author will also highlight that the courts respect the right to freedom of expression but have installed a classification and hierarchy of speech, allowing a court to concentrate its efforts on the truly abhorrent content worthy of criminal proceedings.

THE LAW

Before diving into the main argument, it is best to set out the protections currently afforded under the law to someone who is subjected to harmful or aggressive content via social media platforms. With that in mind, two pieces of legislation govern harmful content:

1)     The Malicious Communications Act 1988 (“MCA”), and

2)     The Communications Act 2003 (“CA”).

In relation to the MCA, it is an offence to send to another person an electronic communication conveying a message that is indecent, grossly offensive, threatening or contains information that is known or believed to be false. Additionally, the guilty party must have sent it in order to cause distress or anxiety to the recipient or to any person at whom the communication was aimed.[5] In relation to the CA, a person is guilty of an offence if he or she ‘sends by means of a public electronic communications network a message or other matter that is grossly offensive or of an indecent, obscene or menacing character’. It is also an offence under this provision to send a message known to be false using a public electronic communications network for the purpose of causing annoyance, inconvenience or needless anxiety to another person.[6]

HOW THE COURTS INTERPRET THE LAW

With the legislation laid out above, one could argue that there is no clear path to determining whether comments made by one person to another via social media warrant a prosecution. Perhaps the focus needs to be on the harm claimed by the victim. One could argue that things said online do not actually cause harm in the traditional sense, since they do not amount to or cause physical harm to the victim.[7] Indeed, we are all human, and each of us will feel and perceive harm in different ways. In most situations, the harm caused in terms of embarrassment or anger does, for the most part, pass very quickly. Temporary harm to our feelings should fall outside the legislative mechanisms. The Crown Prosecution Service (“CPS”) therefore has several tools at its disposal with which to benchmark what is illegal and harmful.

In deciding to prosecute someone for making comments online, the CPS will follow its two-stage Full Code Test. First is the evidential stage, which requires there to be enough evidence to provide a realistic prospect of conviction.[8] Naturally, if a case does not have sufficient evidence, the CPS cannot progress the prosecution. The second limb of the test involves the CPS determining whether a prosecution is required in the public interest. This will involve considering, for example, the seriousness of the offence committed, the culpability of the suspect, and the circumstances of, and the harm caused to, the victim.[9] So long as these considerations are satisfied, the CPS can advance the case.

Regarding communications sent via social media platforms, the CPS has created four categories under which a defendant can be prosecuted:

1) communications that may constitute credible threats of violence or damage to property;

2) communications that specifically target an individual or individuals and may constitute stalking or harassment;

3) communications that may be a breach of a court order; or

4) communications that do not fall into any of the categories above but may be considered grossly offensive, indecent, obscene or false.[10]

For the purposes of this article, attention will be drawn to limb four, as it proves the most challenging category in terms of evidencing what is truly offensive.

THE FLOODGATE POSITION

The rationale for having such evidential stages and tests is that they prevent the courts from opening the floodgates to every type of claim put forward by someone offended or harmed by online content. Indeed, this author recognises the need for a check and balance so that when a case is presented in court, it carries a sufficient level of seriousness, so as to avoid a caseload that becomes unmanageable and unrealistic.[11] Indeed, the thinking from Westminster is that not all forms of unpleasant expression on social media platforms ought to be deemed illegal and, therefore, pursued in the courts.[12]

This author is of the opinion that the courts are right to impose such thresholds of harm. Indeed, there are tools afforded to social media users which allow them to control and resolve issues on social media where they find themselves subject to harmful content. For instance, Facebook allows its users to mute certain friends so as to limit the content they see. Facebook also offers other tools, such as a blocking function and the ability to report other users where behaviour or content is deemed inappropriate across a multitude of considerations. These tools are designed to let users monitor what they want to see; ultimately, where they are offended and harm is incurred, they are empowered to act accordingly and take matters into their own hands by utilising the tools afforded to them by social media companies. This author does not accept that this should be an opportunity for the courts to reduce the thresholds currently in place. Only the most relevant cases are presented to the court to act on, and notions that the conduct of certain social media users, and consequently the merits of a case, are being ignored are superficial.[13] A line must be drawn in the sand so that a court can rightfully and accurately address instances of genuinely harmful communication.

WHAT ABOUT SPEECH?

If the courts were to lower the threshold at which cases can be brought before them, this would have a damaging impact on the dissemination of information.[14] Users should be entitled to publish information that risks offending someone else; there has to be an element of debate and robustness. If the courts were to allow a lower threshold, this might deter social media users, and online users more generally, from expressing their opinions or beliefs, because many would be fearful that causing harm to another could lead to prosecution.[15] Indeed, social media is heavily used and has, to some extent, become part and parcel of our everyday lives. However, a court should not be seen to police what is harmful on a daily basis.

There have been instances where a court has scrutinised the sensitivity and publication of material, for example in R (ProLife Alliance) v British Broadcasting Corporation.[16] However, this is quite different from the situation where someone publishes something online which could then be deemed harmful. In ProLife Alliance, the topic of abortion clearly offends those from various religious backgrounds. The broadcast was for educational purposes, but considerations around who the audience would be, and the fact that the content would also be available to younger viewers, are factors that differentiate that case from the censorship of online activity via social media platforms. One could also argue that the case carried a politically charged dimension in terms of the ‘for and against’ abortion debate. This author also argues that society has moved on in the almost twenty years since the case was decided: programmes such as ’24 Hours in A&E’ broadcast graphic content comparable to what the BBC would have aired back in 2003.
It is also important to note that, as a broadcaster, the BBC has a heightened duty of care to consider the impact of what it broadcasts. The average social media user should not be expected to bear the same duty of care, but ought instead to show consideration towards other online users. To be clear, this does not invite the courts to monitor whether each online user has demonstrated such consideration.

FOCUSING ON THE AGE OF A USER

This author argues that a key factor in determining whether a prosecution should be brought ought to be the age of the supposedly guilty party. This is especially true where the social media user is below the age of eighteen. It is argued that users below this age do not have the mental maturity to appreciate the consequences of their actions when posting content which is potentially harmful.[17] To be clear, this author is not suggesting that young children should not be held accountable for their actions. Rather, whilst young children may be afforded more tolerance from the courts in terms of freedom of expression, surely there is a difference in appreciation of consequence between a fourteen-year-old posting something and a twenty-four-year-old doing so. Perhaps the old proverb ‘the pen is mightier than the sword’ has some relevance here which a young child may not appreciate. Indeed, the age of criminal responsibility in England and Wales is ten,[18] which supports this author’s view that younger users cannot avoid criminal prosecution entirely. In fact, this acts as a check and balance: young children may not appreciate, or intend, the harm caused to other social media users, but they do recognise the difference between right and wrong. Therefore, when content with a degree of criminality is published, the courts are entitled to prosecute accordingly.

FREEDOM OF EXPRESSION: CAN I SAY WHAT I WANT AND WHEN I WANT TO?

It has long been recognised that freedom of expression is an important right within a freethinking and forward-looking society.[19] This is nothing new within the United Kingdom, where the right has historically existed for quite some time. Article 10 of the European Convention on Human Rights, given domestic effect by the Human Rights Act 1998, effectively enshrined and centralised the ideals and rights borne out of post-war Europe in the 1950s, including the right to freedom of expression. This right affords a person a level of protection even if the views expressed put a few noses out of joint.[20] However, limits have been imposed, and we cannot merely say what we want, when we want.

There now exists a hierarchy of speech, in that the courts will distinguish between speech which is political, educational and artistic.[21] This classification and categorisation of speech has effectively curtailed freedom of expression.[22] However, this author argues that such a classification means the courts can properly assess what is truly harmful to the recipient. Indeed, the right to freedom of expression is afforded to every person within the United Kingdom, but context must be provided as to when this right cannot be relied on to escape criminal proceedings. We can all agree that matters of a political nature carry the utmost importance, since they attract a weighty public interest.[23] Therefore, what is considered harmful to the recipient ought to be disregarded as a matter of public interest, since political speech prevents the abuse of political power.[24] By the same standard, speech of an educational nature has also been recognised as an example where the courts will uphold the right to freedom of expression where harmful content is brought into question.[25] This author argues, in similar fashion, that where content shared online is educational in nature, criminal proceedings should not be progressed even if the content is harmful to certain recipients. In a freethinking society, information designed to educate the populace at large ought to be available; to censor this type of speech merely because it offends a few would not only be uncivilised but would also hinder the development of a more educated society. Beneath this lies speech related to commercial and/or artistic expression. This is not as clear cut as educational or political speech, and it is often viewed as attracting less protection.[26]

This leaves us with the remainder of speech, which is considered truly offensive, harmful and dangerous to all types of recipients. This author recognises that speech of this kind is often similar to everyday conversation: it has no added benefit to society, is often spontaneous and based on emotive reactions, and lacks the depth and research required for a fully informed assessment, meaning the content is highly opinionated and without academic backing.[27] In these instances, social media users should be subject to the Malicious Communications Act 1988 and the Communications Act 2003, as by their very existence these legal provisions aim to regulate this sort of speech. This, combined with the mechanisms and considerations imposed by the courts, allows the most genuine cases of criminality to be brought forward for resolution.

SOCIAL MEDIA: TO PROTECT OR TO ATTACK?

As previously discussed in this paper, social media companies have empowered many users to take matters into their own hands by regulating what they find offensive, harmful or distasteful. The tools afforded to all users allow the courts to focus their attention on acts that meet a genuine threshold of criminality. One of the most powerful tools afforded to social media users is anonymity.[28] This functionality allows many users to hide their real identity and assume different personas. Indeed, it is seen as a positive means of allowing certain people to exercise their right to freedom of expression.[29] This author notes that many people are afraid to share their ideas and opinions for fear of being ridiculed. This is especially true where the opinion or idea is a minority view and the majority are against such notions or concepts.[30] Anonymity therefore allows social media users to publish content which, whilst it may offend the recipient, gives rise to offence borne out of a disagreement of opinion from which the sender is protected.

However, where something is created with the intention of producing a positive outcome, there will be those who exploit it to achieve a negative one. This author therefore argues that a paradox has been created, in that this particular tool can be used as a double-edged sword. In more recent times, harmful content published by anonymous users has contributed to other social media users taking their own lives.[31] Indeed, if the recipient is able to identify the sender, some satisfaction can be drawn from the possibility of reconciling the situation in person, or at the very least from being able to attribute the actions to a named person. Where the sender is anonymous, however, this creates a stronger level of anxiety and frustration, because the recipient will often feel powerless, unable to reconcile the situation fully or partially.[32] It is submitted, though, that recipients in these scenarios are still able to remedy the situation themselves. They can choose to avoid what has been published and walk away by logging off to avoid further harm.[33] In similar fashion, if this situation were to occur in person, the recipient would have the option to say what they wanted to say and then walk away, entirely defusing the situation. If this makes recipients feel that they are making the ultimate sacrifice and, to a certain extent, being punished for the wrongdoing of the sender, they also have the ability to block the sender, or to report the sender under the platform’s community standards if they feel that is a better course of action.

CONCLUSION 

This author has demonstrated that the courts have created an appropriate system of checks and balances regarding cases that can be brought under the Malicious Communications Act 1988 and the Communications Act 2003. An appropriate level of protection has been afforded to freedom of expression. Here, the courts have rightfully categorised the different types of speech that ultimately determine whether a recipient of supposedly harmful content is entitled to rely on the aforementioned legal mechanisms as protection.

What the courts have rightly sought to do is create a sensible threshold whereby only the most genuine cases of criminality are brought before them. This author submits that this is the better position, since a court cannot be the only entity to regulate what is deemed harmful content. Considerations such as the age of the sender, and the potential lack of maturity and awareness in what someone below the age of eighteen is publishing, ought to remain key factors in establishing criminality. Indeed, many users have several aids at their disposal, given to them by social media companies. These aids allow all users to regulate online content and empower recipients to take matters into their own hands by blocking the sender, utilising the anonymity facility or reporting certain behaviours directly to the social media companies. Without these aids, and without the considerations and thresholds installed by the courts, this author argues that, given the size and nature of the online world, the courts would simply become overwhelmed in determining every instance of potentially harmful communication.

Written by Mr Jake Richardson LL.B (Hons)

LinkedIn Profile: www.linkedin.com/in/jake-richardson-a83570113

REFERENCES

[1] See generally Gavin Sutter, ‘Nothing New Under the Sun: Old Fears and New Media’ (2000) 8 International Journal of Law and Information Technology 338.

[2] Tony Fitzpatrick, ‘Critical Cyberpolicy: Network Technologies, Massless Citizens, Virtual Rights’ (2000) 20(3) Critical Social Policy 375, 382.

[3] See generally Peter Coe, ‘The Social Media Paradox: An Intersection with Freedom of Expression and the Criminal Law’ [2015] Information & Communications Technology Law 16.

[4] Maya Hertig Randall, ‘Freedom of Expression in the Internet’ (2016) 26 Swiss Review of International and European Law 235, 247.

[5] Malicious Communications Act 1988, s 1(1).

[6] Communications Act 2003, s 127(1)–(2).

[7] Martin H Redish, ‘Self-Realisation, Democracy, and Freedom of Expression: A Reply to Professor Baker’ (1981–82) 130 University of Pennsylvania Law Review 678.

[8] CPS, ‘About CPS’ <https://www.cps.gov.uk/about-cps> accessed 08 August 2021.

[9] For a full list of considerations, see CPS, ‘The Code for Crown Prosecutors’ <https://www.cps.gov.uk/publication/code-crown-prosecutors> accessed 08 August 2021, para 4.8.

[10] Director of Public Prosecutions, ‘Guidelines on Prosecuting Cases Involving Communications Sent Via Social Media’ (2013) CPS <http://data.parliament.uk/DepositedPapers/Files/DEP2013-1025/social_media_guidelines.pdf> accessed 29 July 2021.

[11] Director of Public Prosecutions, ‘Guidelines on Prosecuting Cases Involving Communications Sent Via Social Media’ (2013) CPS <http://data.parliament.uk/DepositedPapers/Files/DEP2013-1025/social_media_guidelines.pdf> accessed 30 July 2021, para 33.

[12] See generally, HM Government, Internet Safety Survey Green Paper (2017).

[13] See generally, Noam Gur, ‘Ronald Dworkin and the Curious Case of the Floodgates Argument’ (2018) 31(2) Canadian Journal of Law & Jurisprudence 323.

[14] See generally Peter Coe, ‘The Social Media Paradox: An Intersection with Freedom of Expression and the Criminal Law’ [2015] Information & Communications Technology Law 16.

[15] See generally, Bethany Stein, ‘A Bland Interpretation: Why a Facebook Like should be Protected First Amendment Speech’ (2014) 44 Seton Hall Law Review 1255.

[16] R (ProLife Alliance) v British Broadcasting Corporation [2003] UKHL 23, para [91].

[17] Director of Public Prosecutions, ‘Guidelines on Prosecuting Cases Involving Communications Sent Via Social Media’ (2013) CPS <http://data.parliament.uk/DepositedPapers/Files/DEP2013-1025/social_media_guidelines.pdf> accessed 29 July 2021, para [46].

[18] Crime and Disorder Act 1998, s 34.

[19] Reynolds v Times Newspapers [2001] 2 AC 127, [200].

[20] Handyside v United Kingdom (1976) 1 EHRR 737, [49].

[21] Campbell v MGN Limited [2004] UKHL 22, [148].

[22] Mark Elliot and Robert Thomas, Public Law (3rd edn, OUP 2017) 843.

[23] Lyon v Daily Telegraph [1943] KB 746, [752].

[24] J G Fleming, The Law of Torts (9th edn, Law Book Company 1998) 648.

[25] Campbell v MGN Limited [2004] UKHL 22, [29].

[26] Richard Clayton and Hugh Tomlinson, The Law of Human Rights (2nd edn, OUP 2000) [15].

[27] See generally, Jacob Rowbottom, ‘To Rant, Vent and Converse: Protecting Low Level Digital Speech’ (2012) 71(2) Cambridge Law Journal 355.

[28] Law Commission, Abusive and Offensive Online Communications: A Scoping Report (Law Com No 381, 2018) [3.68].

[29] Nadine Strossen, ‘Protecting Privacy and Free Speech in Cyberspace’ (2001) 89 Georgetown Law Journal 2103, 2106.

[30] Maya Hertig Randall, ‘Freedom of Expression in the Internet’ (2016) 26 Swiss Review of International and European Law 235, 247.

[31] Joe Shute, ‘Cyberbullying Suicides: What Will It Take to Have Ask.fm Shut Down?’ The Telegraph (London, 2013) <https://www.telegraph.co.uk/news/health/children/10225846/Cyberbullying-suicides-What-will-it-take-to-have-Ask.fm-shut-down.html> accessed July 2021.

[32] See generally, Nicole L Weber and William V Pelfrey, Cyberbullying: Causes, Consequences and Coping Strategies (LFB Scholarly Publishing 2014).

[33] Tony Fitzpatrick, ‘Critical Cyberpolicy: Network Technologies, Massless Citizens, Virtual Rights’ (2000) 20(3) Critical Social Policy 375, 382.

Disclaimer: This article (and any information accessed through links in this article) is provided for information purposes only and does not constitute legal advice.