What if I have daughters who play rugby and who train at an adjacent rugby club that also wishes to expand? I may have joined the netball club with no intention of lobbying for increased resources for netball, but with a plan to lobby the local government with an alternative proposal for resources, directed instead towards my daughters' rugby club. This might seem like an underhanded trick, but it is a real one, and it can go even further than external actions, extending to plans to change the stated aims or rules of the organisation. If I can get enough other members of the rugby club to join the netball club, the netball club's constitution, if not sufficiently robust, may be vulnerable to a general vote to change the club's goals: to stay with its existing resources, or even to reduce the number of courts, ceding them to the adjacent rugby club.
Something a little like this began to happen in the UK around 2015, when animal rights campaigners demanded that the National Trust, a charity that owns large tracts of land, ban all hunting with dogs on its land. Hunting wild animals with dogs was banned in England and Wales (where the National Trust holds much of its land) in 2004, but some animal rights campaigners complain that trail hunting—an alternative in which a previously laid scent is followed—can be used as a cover for illegal hunting or can lead to accidental fox chases. The policy of the National Trust—at the time of writing—is that trail hunting is permitted on its land, given the appropriate licences, “where it is consistent with our conservation aims and is legally pursued”.45 Two years later, in 2017, the League Against Cruel Sports supported46 a campaign by those opposed to any type of hunting to join the National Trust as members, with the aim of forcing a vote that would change the policy of the organisation and lead to a ban on any hunting on its land. This takes the concept of “revolt from within” in a different direction, because the idea is to recruit enough members who are at odds with at least one of the organisation's policies to effect a change.
This is different to a single person working from within an organisation to try to subvert its aims or policies. Assuming that the employee was hired in good faith, their actions should be expected to align with the organisation's policies. If, instead, they work to subvert the organisation, they are performing actions that are at odds with the trust relationship the organisation has with them: this assumes that we are modelling the contract between an organisation and an employee as a trust relationship from the former to the latter, an issue to which we will return in Chapter 8, “Systems and Trust”. In the case of “packing” the membership with those opposed to a particular policy or set of policies, those joining are doing so with the express and stated aim of subverting that policy, so there is no break in any expectations at the individual level.
It may seem that we have moved a long way from our core interest in security and computer systems, but attacks similar to those outlined above are very relevant, even if the trust models may be slightly different. Consider the single attacker who is subverting an organisation from within. This is how we might model the case where a component that is part of a larger system is compromised by a malicious actor—whether part of the organisation or not—and serves as a “pivot point” to attack other components or systems. Designing systems to be resilient to these types of attacks is a core part of the practice of IT or cybersecurity, and one of our tasks later in this book will be to consider how we can use models of trust to help that practice. The packing of members to subvert an organisation, meanwhile, is extremely close, in terms of the mechanism used, to an attack on certain blockchains and crypto-currencies known as a 51% attack. At a simplistic level, a blockchain operates one or more of a variety of consensus mechanisms to decide what should count as a valid transaction and be recorded as part of its true history. Some of these consensus mechanisms are vulnerable to an attack in which enough active contributors (miners) to the blockchain can overrule the true history and force an alternative that suits their ends, much as enough members of an organisation can vote in a policy that is at odds with the organisation's stated aims. For at least some of these consensus mechanisms, the share required is a simple majority: hence the figure of 51%. We will return to blockchains later in this book, as their trust models are interesting, and many widely held assumptions about their operation turn out to be much less straightforward than is generally supposed.
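To make the mechanism concrete, here is a minimal sketch in Python of a longest-chain race of the kind that underlies proof-of-work consensus; the function name and parameters are hypothetical, and the model is deliberately simplified. Each new block is produced by the attacker with probability equal to their share of total mining power, and whichever chain is longer at the end is accepted as the “true” history. Real consensus protocols are considerably more subtle, but the sketch shows why a share above 50% lets an attacker out-build honest participants over time, much as packed members can out-vote an organisation's original membership.

```python
import random

def simulate_fork_race(attacker_share: float, blocks: int = 10_000, seed: int = 42) -> str:
    """Toy model of a longest-chain race between honest miners and an attacker.

    Each block is 'won' by the attacker with probability attacker_share;
    whichever side has built the longer chain after `blocks` rounds is
    treated as the accepted history. This is an illustration only: it
    ignores forks, confirmation depth, difficulty adjustment, and every
    other real-world detail of blockchain consensus.
    """
    rng = random.Random(seed)
    honest, attacker = 0, 0
    for _ in range(blocks):
        if rng.random() < attacker_share:
            attacker += 1
        else:
            honest += 1
    return "attacker's history accepted" if attacker > honest else "honest history accepted"

# With a 51% share, the attacker's chain almost always ends up longest;
# with a 30% share, the honest chain almost always wins.
print(simulate_fork_race(attacker_share=0.51))
print(simulate_fork_race(attacker_share=0.30))
```

The structure maps directly onto the membership example: replace blocks with votes at a general meeting, and the simple majority does the rest.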
The Dangers of Anthropomorphism
There is one last form of trust relationship from humans that we need to consider before we move on. It is not from humans to computer systems exactly, but from humans to computer systems that the humans believe to be other humans. The task of convincing humans that a computer system is human was suggested by Alan Turing,47 who was interested in whether machines can be said to think, in what has become known as the Turing Test (though he called it the Imitation Game). His focus was arguably more on the question of what the machine—we would say computer—was doing in terms of computation and less on the question of whether particular humans believed they were talking to another human.
Whether computers can—or may one day be able to—think was one of the questions that exercised early practitioners of the field of artificial intelligence (AI): specifically, hard AI. Coming at the issue from a different point of view, Rossi48 writes about concerns that humans have about AI. She notes issues such as explainability (how humans can know why AI systems make particular decisions), responsibility, and accountability in humans trusting AI. Her interest seems to be mainly in humans failing to trust—she does not define the term specifically—AI systems, whereas there is a concomitant but opposite concern: that sometimes humans may have too much trust in (that is, have an unjustified trust relationship to) AI systems.
Over the past few years, AI/ML systems49 have become increasingly good at mimicking humans for specific interactions. These are not general-purpose systems but are in most cases aimed at particular fields of interaction, such as telephone answering services. Targeted systems like this have been around since the 1960s: a famous program—what we would now call a bot—known as ELIZA mimicked a therapist. Interacting with the program—there are many online implementations still available, based on the original version—quickly becomes unconvincing, and it would be difficult for any human to consider that it is truly “thinking”. The same can be said for many systems aimed at specific interactions, yet humans can be quite trusting of such systems even if they do not seem to be completely human. In fact, there is a strange but well-documented effect called the uncanny valley: the more human an entity appears, the greater the affinity—and, presumably, the propensity to trust—that humans feel towards it, but only up to a certain point. Past that point, the uncanny valley kicks in, and humans become less comfortable with the entity with which they are interacting. There is evidence that this effect is not restricted to visual cues but also exists for other senses, such as hearing and audio-based interactions.50 The uncanny valley seems to be an example of a cognitive bias that may provide us with real protection in the digital world, restricting the trust we might extend towards non-human trustees that are attempting to appear human. Our ability to realise that they are non-human, however, may not always be sufficient to allow it to kick in. Deep fakes—a common term for the output of specialised ML tools that generate convincing, but ultimately falsified, images, audio, or even full video footage of people—are a growing concern for many: not least for social media sites, which have identified the trend as a form of potentially damaging misinformation, and for those who believed that what they saw was real. Even without these techniques, it appears that platforms such as Twitter have been used to put out messages—typically around elections—that are not from real people but that, without skilled analysis and correlation with other messages from other accounts, are almost impossible to discredit.
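As an illustration of how limited such targeted systems can be, here is a minimal sketch of an ELIZA-style responder in Python; the patterns and names are hypothetical and bear no relation to the original program's script. It simply matches keywords and reflects the user's words back as questions, which is enough to sustain a therapist-like exchange for a few turns but, as noted above, quickly becomes unconvincing.

```python
import re

# A handful of hypothetical keyword rules in the spirit of ELIZA's
# therapist script: match a pattern, reflect part of the input back.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT_REPLY = "Please go on."

def respond(user_input: str) -> str:
    """Return a canned, reflective reply based on simple pattern matching."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return DEFAULT_REPLY

print(respond("I feel anxious about work"))  # Why do you feel anxious about work?
print(respond("It's my manager, mostly"))    # Tell me more about your manager.
print(respond("The weather is nice today"))  # Please go on.
```

The design is pure surface imitation: there is no model of the conversation, only a lookup from patterns to templates, which is precisely why sustained interaction exposes the absence of anything we would call thinking.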
Anthropomorphism