Contributor - Jonathan Compton is a partner at City law firm DMH Stallard. Jonathan is a solicitor and barrister, holds a Master's degree in European Law and is a Member of the Chartered Institute of Arbitrators.
UK watchdog Ofcom will get greater powers to compel social media firms to act over harmful content.
Social media regulation may be censorship by another name

Later today (12 February 2020), the Government, through the Digital Secretary, Baroness Nicky Morgan, will announce new measures to regulate social media platforms. The purpose of this note is not to take sides in the debate but to ask some questions and sound a note of caution.
The powers will be exercised through Ofcom. The aim is to reduce “harmful content”. It is a truism that when people talk face to face they are more or less polite. When they get online, however, some lovely and perfectly inoffensive people turn into monsters…
As things stand, platforms like TikTok, Facebook, YouTube, Instagram, Snapchat and Twitter have largely been self-regulating. There is a debate to be had here. On the one hand are the important freedoms of speech and expression that we guard in a liberal democratic society. We cherish these freedoms. On the other, we have trolls, drawings and forums encouraging self-harm, and truly offensive hate speech. Where should freedom end and regulation begin? Does the freedom of speech include the right to shout "FIRE!!!" in a crowded auditorium? One man's "regulator" is another woman's "censor". Regulators and censors are the same thing, just framed in different language, and language is important here; indeed, it is the very thing we are talking about…
If a distinction can (for the sake of argument) be drawn between what is and what is not acceptable in terms of a set of definable principles, how do we deal with issues of practicality? Whom do we appoint as the "censor"? Presumably the government will appoint the officers of Ofcom? Maybe the civil service will appoint them? Who, then, will "guard the guardians"? Will there need to be an appellate body? If so, the members of this body will need to be appointed by someone. The "great and the good" may well have a very different view from us as to what is and what is not acceptable for you and me to see.
On top of these competing issues comes the sheer practical question of how the platforms can manage the billion or so posts they receive per day. Are we going to sanction or fine platforms that fail, innocently perhaps, to detect a neo-Nazi posting? Indeed, let us take a neo-Nazi post as an example. Neo-Nazism is, I think most people would agree, a distasteful thing. Most people would condemn neo-Nazism. Most people would seek to distance themselves from such views. But who decides what is neo-Nazi? On what basis? Many would consider the British National Party or Britain First to be of the far right. Some would disagree. Some would call such organisations neo-Nazi. Some would ban them. Others would say that in a free society, such views and organisations (however repugnant or not) must be heard. The same arguments may apply to Alternative for Germany (AfD). Now, let us take the US president. Many find his views utterly repugnant. Many will recall his mimicking of a disabled person at a televised election rally. Should that video be banned from social media? However reprehensible people find such views, would they ban the clip from social media platforms? Would they fine a platform that refused, or simply failed, to take the post down?

And so, put your imagination into gear and picture yourself as the decision maker at Facebook. You are getting a billion posts a day. You are alerted by a member of the public to what she considers an offensive post. You look at it. The views are offensive to you. The views are critical of Israeli policy in the West Bank. The post criticises what it calls "Israeli Zionism and expansionism in the West Bank". If you overrule the complaint and allow the post, are you committing Facebook to a stance on the conduct of Israeli policy in the area? Could Facebook be seen as anti-Semitic? If you ban the post, are you open to criticism from those who would see themselves as reporting in an unbiased fashion what they say is going on?
As the decision maker at Facebook, you will receive perhaps tens or hundreds of referrals from the staff who monitor complaints. You have maybe 40 seconds, or two minutes at the most, to deal with the complaint and make the decision before you have to move on to the next video or post.
In certain cases, the decision to ban will be instant and easy. If a post is urging vulnerable people to take their own lives and provides a “how to” guide to achieve this end (forgive me, I do not mean to be crass), I think most people would feel comfortable with removing the post. In other cases, the decision will be far more nuanced. I can see lawyers everywhere warming up their engines/keyboards. I write as one of them.
Even as I write this, I know that a barrage of criticism is heading my way. Fingers on keyboards everywhere are limbering up to give me "the benefit" of their wisdom. I am venturing into a minefield. But, dear reader, before you press "send" and I have to delete your email, all I am doing is setting the scene and asking some philosophical and practical questions. If you re-read what I have written above, you will see not one single opinion. Just questions from a questioning mind.
I accept that platforms have so far defended their own rules about taking down “unacceptable” content. I accept many say, per contra, that independent rules are needed to keep people safe. But what is “unacceptable” content? Where is the border to be drawn between what you and I are allowed to see and what we are not? Between the “offensive” we are allowed to see and the “offensive” we are not? Who decides what we are allowed to see? Who appoints them? What qualifications do they have? Why are these people more qualified than I am to decide what I should see and what I should not? Does that choice not rest with me?
It is not clear, as I write on the morning before this afternoon's announcement, what penalties Ofcom will be able to impose. My worry is that if Ofcom is given the same powers as the ICO (the Information Commissioner's Office), then we could be talking about fines of up to 4% of global turnover for defaulting platforms. With hundreds of millions of posts per day, penalties at that level could challenge even the financial might of Facebook. If the penalties are not significant, then these financial giants may simply shrug them off. Ofcom regulates television, but, with respect, does that equip it to deal with the social media model? TV shows are expensive to produce and are, by and large, put together and edited by professionals. The laws of defamation bear on broadcasters in a way that, in practice, they do not bear on social media platforms. TV and radio editors are therefore careful to get it right. And the volume of TV output is small compared to the uploads to social media platforms, which run into the hundreds of millions per day.
I accept that there has been a growing call for platforms to take more responsibility for their content, especially in light of the death of Molly Russell, who took her own life after viewing self-harm content on Instagram. But it is an old adage in law that "hard cases make bad law". I do not urge a "Wild West" approach to social media platforms. The purpose of this article is to urge caution. Let us regulate, or censor, however you prefer to term it, but let us consider carefully. Let us reflect coolly on some of the issues barely touched upon in this very brief note.