
Prosecuting Scoundrels & Impersonators
"Online communication is developing at such a fast pace, new ways of targeting and abusing individuals online are constantly emerging. We are seeing more and more cases where social media is being used as a method to facilitate both existing and new offences… Online abuse is cowardly and can be deeply upsetting to the victim.”                 Alison Saunders, Director of Public Prosecutions

This section explores the issue of prosecuting those who post 'fake news' or disinformation online, or who permit it to be carried on their platforms, knowingly or not. This is a hot topic, especially in relation to the regulation of social media companies. However, politicians disagree on how best to deal with the problem while ensuring that free speech and democratic accountability are protected: who should decide what is fake or untrue, and who should have the power to order that it be taken down? These are difficult questions.


Some countries are today imposing fines on internet providers and social media platforms if they don't remove fake websites or false and defamatory content within a set period. But there are growing concerns about possible unintended consequences... There is also discussion in Western democracies about how to deal with RT and other Russian media outlets, given their role as a mouthpiece for the Russian Government.
1   Legislation Around the World
In July 2018 Poynter published a useful guide which lists countries' attempts to legislate against online misinformation. It has entries on: Belarus, Belgium, Brazil, Croatia, France, Germany, India, Indonesia, Ireland, Italy, Kenya, Malaysia, The Philippines, Singapore, South Korea, Spain, Sweden, Tanzania, Uganda, United Kingdom and United States. [1] Here are some examples:
Germany’s law against hate speech is one of the most widely discussed efforts to quell potentially harmful content online. The law, 'NetzDG' (Netzwerkdurchsetzungsgesetz), came into effect on 1 Jan 2018. It doesn’t deal specifically with fake information, but it does force online platforms to remove “obviously illegal” posts within 24 hours or risk fines of up to €50 million.
The law targets social networks with more than 2 million members and requires them to publish a report every six months detailing how many complaints they have received and how they have dealt with them. It also provides for fines of up to €5 million for the person a company designates to handle its complaints procedure if the requirements are not met. Enforcement of the law has not, however, been without controversy: the satirical magazine Titanic published a piece containing insults and was banned from Twitter, and even the Minister of Justice (who helped author the NetzDG) had his tweets censored!
In France (in Jan 2018) President Macron told journalists that he would be presenting a new law to fight the spread of fake news during elections. He did so the following month, with legislation designed to give the authorities the power to remove fake content spread via social media and to block the sites that publish it. It also requires greater financial transparency for sponsored content for up to five months before election periods.
The law (which builds upon an 1881 law that outlaws the dissemination of ‘false news’) contains three major provisions:
•       France’s media regulator, the Superior Audiovisual Council, will be permitted to fight “any attempt at destabilization” by TV stations controlled by foreign states, and will have the power to suspend or revoke the broadcasting rights of outlets whose content is deemed to be false.
•       Platforms such as Facebook, Twitter and YouTube will be required to publish who has purchased sponsored content or campaign ads and at what price. [2]
•       Citizens will be able to obtain summary rulings from judges on what is and what is not fake news in order to stop its spread.
Following consultations the text was amended to target the “manipulation of information” rather than “fake news.” This is intended to protect satire from penalization under the regulation.
Italy has introduced a bill aimed at "preventing the manipulation of online information" with fines for whoever publishes or disseminates “fake, exaggerated or tendentious news regarding data or facts that are manifestly false or unproven.” However, this provision only applies to those publications available online that are not registered as ‘online newspapers’, and as a result, only ‘non-journalistic’ websites, blogs and pages on social media would be punishable.
The bill also punishes whoever fabricates and spreads online “rumours or fake, exaggerated or tendentious news that may provoke public alarm or acts in such a way as to damage public interests or mislead public opinion” with at least one year in jail and a fine of up to €5,000. The jail term and the fine are doubled if the activity consists of “hate campaigns against individuals or aimed at undermining the democratic process, even for political motives”.
In early 2018 the United Kingdom Government set up a National Security Communications Unit tasked with “combating disinformation by state actors and others.”[3] This is the latest office or agency with some say (direct or indirect) in what is published online. Others include:
•       The Information Commissioner's Office (ICO),
•       The Health & Safety Executive,
•       The Charity Commission,
•       The Advertising Standards Authority, and
•       The Crown Prosecution Service (CPS).
These agencies are responsible, inter alia, for regulating service providers and NGOs, upholding the law and standards of public decency, protecting intellectual property rights, and so on. For example, the ICO was set up to "uphold information rights in the public interest, promote openness by public bodies and data privacy for individuals." If someone has posted wrong or malicious information online about an individual, it can be reported to the ICO, which may in turn take the matter up with the CPS.
In March 2016 the UK's Crown Prosecution Service published new draft guidelines setting out how prosecutors should take tough action against anyone who attempts to humiliate or undermine someone else by publishing false or confidential information online. These guidelines cover cases where offenders set up fake profiles in others' names; they also advise prosecutors on the use of social media in new offences, such as revenge pornography and controlling or coercive behaviour in an intimate or family relationship.

Misuse of Electronic Media
There is a little-known provision in the UK’s Communications Act 2003 for prosecuting people who make ‘improper use of public electronic communications network’ by sending messages that they 'know to be false.' This is under Section 127 Para 2a of the Act, which states that “if, for the purpose of causing annoyance, inconvenience or needless anxiety to another, [someone] sends... a message that he knows to be false” and they are found guilty, they “shall be liable, on summary conviction, to imprisonment for a term not exceeding six months or to a fine not exceeding level 5 on the standard scale, or to both.” One wonders why this provision has not been applied to those who told blatant untruths in the 2016 UK Referendum debate. Both sides were guilty.[4]
The preceding paragraph of Section 127 (Para 1a) concerns people who send, by means of a public electronic communications network, a "message or other matter that is grossly offensive or of an indecent, obscene or menacing character." There have been many prosecutions under this paragraph, many concerned with violence against women and girls. [5]
Other prosecutions concern the sending of grossly offensive material. Here's an example: just three days after the dreadful Grenfell fire (in London in June 2017), a 43-year-old man, Omega Mwaikambo, was imprisoned under Para 1a for posting ghoulish images of one of the victims on his Facebook account. (He claimed he was in shock and thought it would help with identification.) He pleaded guilty at Westminster Magistrates' Court and was sent down for three months. It was his first offence.

We are struggling to find prosecutions for lying (under Para 2a), other than people making false 999 calls. Can you help?

2   Silencing Dissent
In many countries it is a criminal offence to insult royalty, rulers or religion, and prosecutions for such insults are on the rise; the number of journalists being imprisoned is today at its highest point in over two decades.

Legal specialists at Columbia have noted that “Many countries in the Arab region include this offence in their penal codes, and some in South Asia provide similar penalties. More of these laws are being adopted around the world. In 2013, The Gambia introduced a new offence of ‘spreading of false news against the government or public officials’ punishable by up to fifteen years in prison or a fine of sixty-four thousand euros. In China, a 2013 law established that it should be a crime to be a rumour-monger, defined as an individual who intentionally posts a false rumour that is reposted five hundred times or more, or viewed five thousand times or more. And in Qatar, a 2014 law makes it a ‘cybercrime’ to create a website that spreads false news in order to jeopardise the safety or ‘general order’ of the state. Egyptian prosecutors used a similar law in 2014 to prosecute the entire senior staff of Al Jazeera Television for broadcasting ‘false news... abroad regarding the internal situation in the country’ even though there was no evidence that any of the broadcasts contained material that was unproven or falsified.” Saudi Arabia has recently announced that it will punish online satire with up to five years in prison because it considers that satire 'disrupts public order'.

Public Hearings on Deliberate Online Falsehoods [Singapore]
This video is part of the public hearings on Deliberate Online Falsehoods held by the Government of Singapore. In the video Minister Shanmugam questions Ben Nimmo, Senior Fellow at the Digital Forensic Research Lab, about the possible forms and scope of legislation against deliberate falsehoods.

Ben's view is that legislation should be "the very last resort".  [15 Mar 2018; 16:40 mins]

3   Chilling Effect on Free Speech
Human rights activists and the companies affected by measures such as those described above say that this kind of legislation risks privatising the process of censorship and could have a chilling effect on free speech. In Germany the government has already softened its hate speech legislation following criticism that too much content was being blocked. These revisions now exclude email and messenger providers and allow users to get incorrectly deleted content restored; they also put pressure on social media companies to set up independent bodies to review questionable posts.

In Italy critics say that the measures, far from offering an effective solution to the problem of growing disinformation or hate-stirring propaganda, are incompatible with international standards because they discriminate between registered ‘professional’ online newspapers and other information platforms. Moreover, a blanket criminalisation of fake news (regardless of any specific criminal intent in disseminating it) is disproportionate and unnecessary in a democratic society. The obligation on private online platforms to monitor and promptly remove any controversial information posted by others is bound to have a chilling effect on what is allowed to be posted. How are people to judge whether information is truthful or correct?

Few would disagree with the need to legislate against email spam — you can find a useful list of legislation relating to this at SpamLaws.
4   The Right to Insult in International Law
Under international law, speech is primarily regulated by three treaties: the International Covenant on Civil and Political Rights, the Convention on the Elimination of Racial Discrimination, and the Convention on the Prevention and Punishment of the Crime of Genocide. What emerges from the wording of these legal provisions is that, at one end of the spectrum, speech that intentionally incites genocide must be criminalized, and speech inciting violence may be criminalized. At the other end of the spectrum, speech that is merely disturbing or shocking should not be a crime.

In between these two poles is insulting speech that incites hatred, hostility or discrimination, but not violence. This falls into a more controversial grey zone, but international human rights bodies currently allow states to imprison individuals for such speech. The article (from which these points are taken) argues that "international standards should be interpreted or amended to provide that such speech should not be criminalized. Instead, imprisonment for speech should be reserved for the most dangerous forms of insulting speech that intentionally and directly incite imminent or concretely-identified violence or criminal offences."
5   EU: Cautious on Legislating on Disinformation
In Nov 2017 the European Commission set up a high-level expert group (HLEG) “to help the EU craft policies to address growing concern about misinformation in Europe.” The group's report (published in March 2018) acknowledges that, while not necessarily illegal, “disinformation can nonetheless be harmful for citizens and society at large”[6], but it explicitly recommends against regulating misinformation.
The risk of harm from misinformation includes "threats to democratic political processes, including integrity of elections, and to democratic values that shape public policies in a variety of sectors, such as health, science, finance and more. In light of these considerations, the HLEG points out that disinformation problems can be handled most effectively, and in a manner that is fully compliant with freedom of expression, free press and pluralism, only if all major stakeholders collaborate. In addition, continuous research, increased transparency and access to relevant data, combined with regular evaluation of responses, must be permanently ensured.” It notes “this is particularly important as disinformation is a multifaceted and evolving problem that does not have one single root cause. It does not have, therefore, one single solution.”
The HLEG advises the Commission “to disregard simplistic solutions. Any form of censorship either public or private should clearly be avoided.” And it recommends a multi-dimensional approach involving:
•       enhancing the transparency of online news, involving an adequate and privacy-compliant sharing of data about the systems that enable their circulation online;
•       promoting media and information literacy to counter disinformation and help users navigate the digital media environment;
•       developing tools for empowering users and journalists to tackle disinformation and foster a positive engagement with fast-evolving information technologies;
•       safeguarding the diversity and sustainability of the European news media ecosystem; and
•       promoting continued research on the impact of disinformation in Europe to evaluate the measures taken by different actors and constantly adjust the necessary responses.

The intent is “to promote an enabling environment for freedom of expression by fostering the transparency and intelligibility of different types of digital information channels.” Accordingly, the HLEG asks public authorities, both at the EU and national level, “to support the development of a network of independent European Centres for (academic) research on disinformation. This network should be open to fact- and source-checkers, accredited journalists, researchers from different relevant fields and platforms.” Additional measures (aimed at strengthening societal resilience in the longer term) “need to be implemented in parallel… designed to support the diversity and sustainability of the news media ecosystem on the one hand (and on the other), to develop appropriate initiatives in the field of media and information literacy to foster a critical approach and a responsible behaviour across all European citizens.”



Notes
1      The guide notes that not every law relates specifically to fake news, but the issue is covered in some capacity.

2    This component takes a page from the United States’ Honest Ads legislation, which applies existing standards for TV and radio stations to social media.

3      Lawsuits against big tech companies are discussed on the videos page, including this one: consumer campaigner Martin Lewis launches UK High Court proceedings in a bid to sue Facebook for defamation. The MoneySavingExpert founder says at least 50 fake ads bearing his name have appeared on the social media platform, causing him reputational damage. Many of the adverts show his face alongside endorsements that he has not actually made, and they often link to articles carrying false information. Facebook says misleading ads are not allowed and that any that are reported are removed. View Lewis's video.
4      Whether it would be possible to prosecute those who told deliberate untruths during the UK's debate over Brexit is an open question. People would no doubt argue that their statements were not “for the purpose of causing annoyance, inconvenience or needless anxiety to another.” The fact that they did do these things, and in large measure, may not count... Another complicating consideration is uncertainty over the definition of 'public electronic communications network', which still needs clarification. [Those drafting the legislation didn't foresee the enormous changes there have been in the social media scene.] There may also be a three-year time limit on prosecution... With respect to the claim on the side of the Vote Leave Battle Bus, this was very carefully worded: "We send the EU £350 million a week let's fund our NHS instead." The way this slogan was subsequently presented in the media led most people to understand that if the UK left the EU the NHS would get the money (even though £350 million is a gross rather than a net figure for the UK's contribution).

5   The Open Rights Group has produced a useful guide to prosecutions under Section 127 of the 2003 Act.

6      Disinformation as defined in the Report includes “all forms of false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit. It does not cover issues arising from the creation and dissemination online of illegal content (notably defamation, hate speech, incitement to violence), which are subject to regulatory remedies under EU or national laws. Nor does it cover other forms of deliberate but not misleading distortions of facts such as satire and parody.”
