Web Thorns

"We can complain because rose bushes have thorns,
or rejoice because thorn bushes have roses." Alphonse Karr [2]

"There is no rose without thorns." Pam Muñoz Ryan
In the 2019 Richard Dimbleby Lecture[1] Sir Tim Berners-Lee talked engagingly about his aspirations for the world wide web, 30 years on from its launch. Amongst other things, he drew attention to some of the many 'roses' that had bloomed on the internet, but went on to describe the troublesome 'thorns' that have made their presence felt in recent years. This page looks at how Sir Tim's 'thorns' might be defined rather differently, and summarises his Contract for the Web.


In his Richard Dimbleby Lecture Sir Tim described the combination of factors that led to the web's creation and those that had influenced its development, and (not for the first time) called for 'a mid-term correction'. A Fourth Industrial Revolution is well underway: half the world is now online; the Internet of Things (IoT), machine learning/artificial intelligence (ML/AI) and virtual reality will soon be commonplace in our lives; and the web has turned from being a "fun luxury" into "something we have to think of as a human right." But as Sir Tim pointed out, such progress still leaves half the world unconnected, and there are fewer women than men online. So there's still much work to do...
And Sir Tim didn't hold back in expressing his concern at our collective behaviour online: he reflected on the fact that 10 years ago people would talk enthusiastically about the internet's 'roses' — he singled out Wikipedia, Open Street Map and the Open Source Movement as some of the finest blooms; but asked for their views today, he said, people are just as likely to bring up the internet's 'thorns' — concern over privacy, data rights, security, tracking and advertising.
Sir Tim lamented the fact that there is a lack of oversight on the web and insufficient transparency and accountability, and that millions of automated systems are today facilitating the spread of hate speech and misinformation. He was also critical of the growing number of governments that routinely block opponents’ content online, or shut down the network altogether when people take to the streets. [3]
1   Web Thorns
I’d like to propose we define the web’s ‘thorns’ rather differently to make it easier for ordinary folk to better appreciate the nature of the challenge that we now face and what needs to be done to neutralise or eliminate the threat. Here’s my shortlist:
Thorn 1:     False information on the internet helping undermine public trust in government, the media, business and civil society, damaging confidence and morale, and destabilizing the political process.
Thorn 2:     The problem of not knowing what's true anymore, especially when aspects of 'fake news' items are often correct, albeit with the information misleadingly presented.
Thorn 3:     ‘Fake news’ turning out to be ‘stickier’ and more toxic than real news — it can be produced anonymously and at little cost; and it spreads significantly faster, corrupting public understanding and provoking distrust, hatred and violence. And for the victims, it can be difficult, time-consuming and expensive to counter (‘mud sticks’).
Thorn 4:     Social media’s tendency to bring out the worst in us, and to attract trolls, crooks, perverts and other mendacious individuals. Bad bots and cyborgs now infest the web, capitalising on Big Tech’s attention-seeking algorithms, and promoting and amplifying ill-informed or malicious voices. 
Thorn 5:     The Tech Giants perfecting ‘surveillance capitalism’ — they use a multiplicity of (unregulated) black box algorithms and business models that involve profiting from our private data and biometrics. They have also shown themselves to be unable or unwilling to purge their platforms of fake, extremist or illegal material, and have become too powerful to control.
Thorn 6:     A vocal minority of conspiracy theorists disseminating a toxic mixture of fabricated content and misleading argument, often in pursuit of some ‘deeper truth’. This promotes polarisation and constrains society’s ability to tackle existential threats, not least threats to public health and the environment.
Thorn 7:     Malign actors, extremists and hostile foreign powers engaging in information warfare, using disinformation to poison social intercourse, damage markets and discredit open society, and in the process putting at risk peaceful coexistence — there is no consensus on when a cyber-attack or spreading malicious material becomes an ‘act of war’.
Thorn 8:     Failure to regulate/control online content and cybercrime and to protect people's data, privacy and security; and poor coordination between the agencies and organisations that are fighting fakery and seeking the truth.
Much progress has been made over recent years towards understanding these 'thorns'; tackling them, however, is another matter, as deeper problems or unintended consequences often emerge — fact-checking websites are now widely available, but too many of those who should be using them don't; and there is concern that measures to regulate hate speech (or politicians' lies) may suppress free speech or be used to solve political rather than technical problems. Indeed, some argue that measures like the General Data Protection Regulation (GDPR) — which was supposed to give us some protection — have failed to stop our data from being shared, exploited or compromised ('Thorn 5'). Efforts to tackle the 'thorns' are discussed here.
Worse to Come?
The sad reality is that we are still a long way from quantifying the social and political costs of the thorns — understanding the adverse impact of social media on the mental health of the young and of content moderators is a case in point. Sir Tim's 'Contract for the Web' (see below), published one week after his lecture, is a good start towards implementing the 'mid-term correction' he's calling for, but much more is needed, not least because some thorns are set to become even more problematic as a result of advances in AI and the rolling out of the Internet of Things/Bodies. And regulation is always behind the curve, sometimes by a large margin.
2   The Race to Get Ahead
In today's noisy, nervous and confusing world there is much greater public awareness of the threat posed by lies and fake information. This is encouraging. But it is difficult for anybody, even specialists, to keep abreast of developments across such a vast, complex and fast-changing digital piste — it's not simply the deluge of books and specialist reports that are appearing; it is evident that an unprecedented historical transition is taking place, on multiple levels. This increases the risk that important concerns will be overlooked in the race by entrepreneurs and nation states to gain advantage and get ahead. The general view is that as digital technology evolves and becomes more complex, so too do the social and political ramifications; and society's response is invariably behind the curve.
Indeed, a recent Chatham House paper explains how: "The development of governance in a wide range of digital spheres... is failing to match rapid advances in technical capabilities or the rise in security threats." This, it says, is "leaving serious regulatory gaps, which means that instruments and mechanisms essential for protecting privacy and data, tackling cybercrime or establishing common ethical standards for AI, among many other imperatives, remain largely inadequate." [4]
The authors also point out that efforts to strengthen the security of global information and telecommunications systems stalled in 2017, “primarily due to fundamental disagreements between countries on the right to self-defence and on the applicability of international humanitarian law to cyber conflicts... The breakdown in talks reflected, in particular, the divide between two principal techno-ideological blocs: one, led by the US, the EU and like-minded states, advocating a global and open approach to the digital space; the other, led mainly by Russia and China, emphasizing a sovereignty-and-control model...
No clear winner has emerged from among the competing visions for cyberspace and AI governance,[5] nor indeed from the similar contests for doctrinal control in other digital domains. Concerns are rising that a so-called ‘splinternet’ may be inevitable — in which the internet fragments into separate open and closed spheres and cyber governance is similarly divided.”
And one doesn't need to look far to find an example — China has imposed an Orwellian system of mass surveillance on the Uyghurs in Xinjiang; its Integrated Joint Operations Platform uses AI for "predictive policing", to decide if an individual's behaviour is 'suspicious' and s/he should be picked up and neutralised (i.e. questioned, interned and 'retrained').
Thankfully a lot of clever people are now taking the problem of lies and disinformation seriously. See here for more information on this.
3   The Contract for the Web
Sir Tim's Contract for the Web represents an important attempt to bring together governments, companies and civil society "to help shape a better future for the online world." Its focus is on 'access', 'openness', 'privacy & data rights', 'positive tech' — ensuring that the creation and use of technology is focused on human values and better experiences, and addresses online harms — and 'public action'.

With respect to this last item, the Contract contains very little detail. It simply notes that: “For the web to remain healthy and aligned with the public interest, we need to bolster public engagement. We need the citizens to remain the key driving force behind the web. As we reach a point in which 50% of the world is connected, this becomes more important than ever. We need the next 50% to join a space that empowers them and enables them to thrive.” (page 27)
The relevant Principles (7, 8 & 9) “are focused on developing an understanding of our digital citizenship, and the ways in which we can and should step up to the challenge of ensuring the web remains open for everyone. It includes components that aim to foster civic behaviour online, as well as mechanisms for people to develop a better understanding of how the web works and how they can fight for it.”

Currently, the Contract does not directly address the question of how the general public can better comprehend the impact that the world wide web is having on our life, and on politics and wider society; nor does it look at who’s doing what to clean up or protect it. 
No one can complain about a shortage of information on mis/disinformation, indeed, quite the reverse; we are overwhelmed by it. This risks 'reality apathy'.

There’s been a proliferation of specialist newsletters and a veritable avalanche of publications on 'fake news', post-truth, conspiracy theories, AI/big data and the like — I currently follow 10 specialist newsletters and am aware of another dozen, and I’ve logged well over 300 relevant books and reports published in the last 18 months or so, and that's just in English. (You can find copies of the publication covers here.) The problem for most folk is knowing how to keep up, who to follow, and what to believe.


Notes
1      The lecture was aired by the BBC from the Design Museum in London on 17 November.

2     This quote has been attributed to Abraham Lincoln, Alphonse Karr, B. Fay Mills, Roe Fulkerson, J. Kenfield Morley and Anonymous. Quote Investigator finds that "the earliest evidence for this type of saying appeared in French in a book by Alphonse Karr who declined to give an ascription. Hence, this saying has no known originator though Karr was the primal locus of popularization. The attribution to Lincoln occurred very recently and is unsupported."

3      Access Now reports 188 such incidents last year.

4    The authors note that the world’s evolving technological infrastructure is “not a monolithic creation. In practice, it encompasses a highly diverse mix of elements — so-called ‘high-tech domains’, hardware, systems, algorithms, protocols and standards — designed by a plethora of private companies, public bodies and non-profit organizations. Varying cultural, economic and political assumptions have shaped where and which technologies have been deployed so far, and how they have been implemented.”

5     It is germane to mention here Prof. Metzinger’s incendiary comments on the EU ethics guidelines for AI: he described the guidelines (published in April 2019) as ‘ethical white-washing’ — and he was a member of the expert group that drew up the report! “The underlying guiding idea of a ‘trustworthy AI’” he argues, “is conceptual nonsense. Machines are not trustworthy; only humans can be trustworthy (or untrustworthy). If, in the future, an untrustworthy corporation or government behaves unethically and possesses good, robust AI technology, this will enable more effective unethical behaviour.”