that the payoffs are immediate, clear, and motivating.
Teach players to care about each other.
Teach reciprocity (rewarding positive actions—typically cooperation—and punishing negative actions—typically betrayal).
Improve recognition abilities (being able to recognise what the other party's strategy is).
Although proposed as a thought experiment, it turns out that the Prisoner's Dilemma can be used to model rather closely a number of different situations. Game theory arguably suffered, in the last years of the twentieth century and the early years of the twenty-first, from being the blockchain of its day: it was seen by some as the future foundation for all rules and types of societal interaction. As Heap and Varoufakis12 noted, people's motivations are typically more complex than the somewhat simplistic models provided by game theory and are affected by what they called people's social location: the cultures and societies in which they live and their relative positions in terms of wealth, power, etc.
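To make the single-game logic concrete, here is a minimal Python sketch of the dilemma's payoff structure; the sentence lengths are illustrative assumptions rather than figures from the text. Whatever the other player does, betrayal yields the shorter sentence, which is why the one-shot game is so bleak and why tactics such as those listed above are needed to foster cooperation.

```python
# One-shot Prisoner's Dilemma: payoffs are years in prison (lower is
# better); the specific numbers are illustrative, not from the text.
YEARS = {
    ("silent", "silent"): (1, 1),   # both cooperate: light sentences
    ("silent", "betray"): (3, 0),   # lone cooperator takes the long term
    ("betray", "silent"): (0, 3),   # lone betrayer goes free
    ("betray", "betray"): (2, 2),   # mutual betrayal: moderate sentences
}

def best_response(other_move: str) -> str:
    """My move that minimises my sentence, given the other player's move."""
    return min(("silent", "betray"), key=lambda me: YEARS[(me, other_move)][0])

# Betrayal dominates: it is the best response to either move by the other.
for other in ("silent", "betray"):
    print(f"Other stays {other!r}: my best response is {best_response(other)!r}")
```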
This does not mean, however, that game theory has nothing to offer us. What Axelrod's tactics to encourage cooperation seem to be promoting are ways to build some sort of trust between the two parties. Let us revisit our definition of trust and apply it to game theory:
“Trust is the assurance that one entity holds that another will perform particular actions according to a specific expectation”.
We need, of course, to consider our corollaries as well: how do they apply in this case?
First Corollary “Trust is always contextual”.
Second Corollary “One of the contexts for trust is always time”.
Third Corollary “Trust relationships are not symmetrical”.
The two entities are easily identified in this case: the two participants in the game. The context here is the game—one way of understanding the concerns of Heap and Varoufakis is that trying to extend the context beyond the game means we are extending the context too far. It is clear that time is a vital component of this relationship, given the impact of multiple games. As for the final corollary, although the trust relationships from each player to the other in this example are not necessarily symmetrical, the best outcome is achieved when they are. Most important, as the games proceed, each party is building an assurance that the other will perform certain actions—staying silent or betraying—when asked. It is interesting to note that in our definition of trust, there is no value associated with whether the outcome is positive or negative: each party can have an assurance that the other party will perform particular actions (always staying silent; alternately betraying and staying silent; staying silent in response to the first party's previous silence) without the outcome necessarily being positive.
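The effect of repeated play on this assurance can be sketched in code. The simulation below is illustrative only: it assumes the same sentencing payoffs as the earlier sketch, and tit-for-tat stands in for the reciprocity tactic (cooperate first, then mirror the other party's last move).

```python
# Iterated Prisoner's Dilemma with tit-for-tat reciprocity. Payoffs are
# the same illustrative sentences as before (years in prison; lower is better).
YEARS = {("silent", "silent"): (1, 1), ("silent", "betray"): (3, 0),
         ("betray", "silent"): (0, 3), ("betray", "betray"): (2, 2)}

def tit_for_tat(their_history):
    # Reciprocity: cooperate first, then mirror the other party's last move.
    return their_history[-1] if their_history else "silent"

def always_betray(their_history):
    return "betray"

def play(strategy_a, strategy_b, rounds=10):
    """Run repeated games, returning the total years served by each player."""
    hist_a, hist_b = [], []
    total_a = total_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        years_a, years_b = YEARS[(move_a, move_b)]
        total_a, total_b = total_a + years_a, total_b + years_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return total_a, total_b

print(play(tit_for_tat, tit_for_tat))    # (10, 10): sustained cooperation
print(play(tit_for_tat, always_betray))  # (21, 18): both fare worse than (10, 10)
```

Against a reciprocator, persistent betrayal leaves even the betrayer with 18 years against the 10 that sustained mutual silence would have earned, which is one way of seeing how repeated play turns cooperation into a rational expectation: in our terms, an assurance about the other party's behaviour.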
Reputation and Generalised Trust
The Prisoner's Dilemma is not the only type of game covered in the field of game theory. There are many others; most are two-player games, and most can also be extended to multiple participants (with no theoretical limit). The two-player games serve to give an example of how assurances about future behaviour—what we are referring to as trust relationships—can be formed between two participants.
What about the case for multiple participants? When I set about forming a trust relationship “from scratch”—with no prior interactions—to someone (let us call her Alice), I do so based on my expectations, biases, and interactions over time. If, on the other hand, somebody (we will call her Carol) asks me for information on Alice so that she can form an initial opinion, and then asks multiple other people who have also formed a trust relationship to Alice for the same or similar information, then something else is happening: Carol is finding out about Alice not first-hand but based on information from others.
The standard term for this is reputation, and it does not map directly from a trust relationship that Carol has to Alice but is a second-order construct. Carol cannot map my views on my trust relationship to Alice, alongside the views of others on their trust relationships to Alice, directly onto her own trust relationship to Alice: rather, she derives enough information to describe a reputation that she can relate to Alice and use to decide how best to form a trust relationship.
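The second-order nature of reputation can be illustrated with a small sketch. The names, numeric scores, and combining rules below are all assumptions made for illustration; the point is only that the reputation score is an input that Carol weighs against her own expectations, not a trust relationship she inherits.

```python
# Reputation as a second-order construct: Carol aggregates third-party
# reports about Alice into a reputation score, then uses that score,
# together with her own expectations, to seed a relationship of her own.
from statistics import mean

def reputation(reports: dict) -> float:
    """Combine third-party trust reports (each in 0.0-1.0) into one score."""
    return mean(reports.values())

def initial_trust(rep: float, own_bias: float = 0.5, weight: float = 0.6) -> float:
    """Carol's starting point: the reputation informs, but does not
    become, her trust relationship to Alice."""
    return weight * rep + (1 - weight) * own_bias

reports_on_alice = {"me": 0.8, "bob": 0.9, "dave": 0.6}
print(initial_trust(reputation(reports_on_alice)))  # ~0.66: a seed, not a verdict
```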
In the Prisoner's Dilemma example, we discussed the best strategic approach but also noted that many humans often end up taking a much more positive approach than theoretical analysis would predict. One reason for this may be that humans do not always act rationally—that is, in ways that suggest informed self-interest. One alternative to a self-interested approach is known as generalised trust. Rather than assuming that all trust relationships need to be formed from an initial position of distrust, generalised trust suggests that the default should be to trust, in the absence of any evidence to suggest it would be wise to do the contrary.13 Given our interest in trust for security within computing, this approach may not be a very sensible one: it is much easier to assess risk by starting from a position of no trust and building up a trust relationship based on known precepts than by reducing trust. Further, as Brian Rathbun points out,14 such trust relationships typically rely greatly on reciprocity; and given our focus on the asymmetry of trust relationships, we should be wary of relying overly on this approach.
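The difference between the two starting postures can be caricatured in a few lines of code; the scores, step size, and update rules are illustrative assumptions only.

```python
# Two starting postures for a new trust relationship. The scores, step
# size, and update rules are illustrative assumptions, not a real model.
def build_up(evidence, step=0.2):
    """Security posture: start from no trust; adjust on each observation."""
    score = 0.0
    for positive in evidence:
        score = min(1.0, score + step) if positive else max(0.0, score - step)
    return score

def generalised(evidence, step=0.2):
    """Generalised trust: start from full trust; only failures reduce it."""
    score = 1.0
    for positive in evidence:
        if not positive:
            score = max(0.0, score - step)
    return score

history = [True, True, False, True]   # observed behaviour over time
print(build_up(history))     # 0.4: every unit of trust has been earned
print(generalised(history))  # 0.8: one failure barely dents the default
```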
The reputation approach is interesting because rather than having to start her trust relationship to Alice from scratch or with only the tools that she has (her experience, biases, and some expectations), Carol starts with some specific information about Alice that she can use. We can think of this type of information as inputs to the idea of “trustworthiness” proposed by some of the writers we have mentioned when considering human-to-human trust.
What other types of information might we have up front? Another way of looking at this is to ask what pressures can be brought to bear on the trustee such that I, the trustor, can have high assurance that the trustee will act in ways consistent with my expectations. Bruce Schneier writes in detail about this in Liars & Outliers: Enabling the Trust That Society Needs to Thrive.15 He discusses societal pressures, moral pressures, reputational pressures, institutional pressures, and security systems, all in the context of society. Sanctions, punishments, and incentives all fit into this model of trust establishment and management, and reputation is one of the key concepts required to evaluate relationships.
We know from our day-to-day human interactions that reputations can be ill-formed or unfairly earned and also that they can change significantly over time. We will look at indirect trust relationships in Chapter 3, “Trust Operations and Alternatives”, and at the importance of time in Chapter 7, “The Importance of Time”. Another point arises from reputations, however: over the many years in which humans have grown into larger and larger groups, creating societies and forming organisations, reputations have become important for another type of trust relationship. This is the area on which Schneier spends much attention, as evidenced by his chapter headings on organisations, corporations, and institutions, and which we can broadly label institutional trust.
Institutional Trust
A great deal of the literature on trust revolves around our trust—and mistrust—in institutions: or, as we would clarify with our new understanding about these concepts, the trust relationships we have to institutions. This type of trust is related directly to our second case in Chapter 1: my trust relationship to my bank. Banks are, in fact,