Philosophy of Psychology. Lisa Bortolotti
electorate happier:
Program A is expected to reduce the yearly number of casualties in traffic accidents to 570 and its annual cost is estimated at $12 million.
Program B is expected to reduce the yearly number of casualties in traffic accidents to 500.
At what cost would program B be as attractive as program A?
Options 1 and 2 represent two different ways of eliciting people’s preferences for one of the two life-saving programmes. In option 1, participants are given all necessary information about the two programmes: how many lives they would save and at what cost. When preferences are elicited in this way (direct choice), two-thirds of participants express a preference for programme B (which allows more lives to be saved at a higher cost). In option 2, participants are told how many lives would be saved and the cost of programme A, but they are not told the cost of programme B. Rather, they are asked at what cost programme B would become as attractive as programme A. When the preference is elicited this way (price matching), 90% of participants provide values smaller than $55 million for programme B, thereby indicating a preference for programme A.
If we take the evidence concerning people’s responses to the Traffic Problem as ecologically valid and reliable, it tells us something interesting: people have inconsistent attitudes about what the Minister of Transportation should do concerning the Traffic Problem. They believe that the Minister should implement programme B to save the lives of 70 more people a year, even if the programme costs $43 million more than programme A. They also believe that the Minister should implement programme A, which would save fewer lives, unless programme B cost considerably less than $55 million. Depending on the method by which the preference is elicited, participants seem to attribute a different monetary value to human lives.
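The $43 million figure is simply the gap between programme B’s matching value and programme A’s stated cost:

```latex
\$55\ \text{million} - \$12\ \text{million} = \$43\ \text{million}
```

In other words, a participant who names any matching price below $55 million is implicitly valuing the extra 70 lives saved at less than $43 million.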
1.4 Pessimism about Rationality
Making Sense of the Results
Overall, these studies reveal ‘systematic and severe errors’ (Tversky & Kahneman 1974, 1124) in human reasoning that have a ‘bleak implication for human rationality’ (Nisbett & Borgida 1975, 935). They provide powerful empirical support for the pessimistic conclusion about human rationality that human agents systematically fail to reason in accordance with the rules of deductive reasoning, such as in the Wason selection task, or in accordance with the rules of probability theory, such as the conjunction rule (the Linda experiment) or Bayes’ rule (the Jack experiment). Moreover, participants violate basic principles of decision-making (procedure invariance and description invariance), and change their preferences depending on the methods by which their preferences are elicited and the way in which options are presented.
According to Kahneman and Tversky, systematic failures in human reasoning are due to the fact that human agents do not rely on the rules of logic, probability, and decision-making that would guarantee accuracy, but rather rely on heuristics. Heuristics are cognitive shortcuts, or cognitive rules of thumb, that ‘reduce the complex tasks […] to simpler judgmental operations’ (Tversky & Kahneman 1974, 1124). Heuristics are reliable in many cases, especially in those cases where heuristics and the rules of logic, probability, and decision-making deliver the same answer. However, in other cases, heuristics can lead to systematic errors and deliver different answers from those one would arrive at by applying the rules of logic, probability, and decision-making.
In making a heuristic judgment, you replace a difficult question with an easier one and answer that question instead. For example, the question ‘How far is the mountain over there from here?’ is a relatively difficult question, which, if you want to answer it in a canonical way, requires you to find a map, identify your place on the map, identify the mountain on the map, measure the distance between them on the map, and then calculate the actual distance while taking into account the scale of the map. Instead of answering the difficult question, you can substitute it with another question, ‘How clear does the mountain look to me?’, which is a lot easier to answer. When it looks very clear to you, for example, you can conclude that the mountain is very close to you. This substitution strategy works in many cases, but it inevitably leads to systematic errors in other cases: for example, distances are often overestimated when the contours of objects are blurry and are underestimated when the contours of objects are sharp.
Similarly, it is not easy to answer probabilistic questions such as ‘What is the probability of Linda being a bank teller?’ or ‘What is the probability of Jack being an engineer?’ However, instead of answering these questions, one can substitute them with questions of similarity, which are a lot easier: ‘How similar is Linda’s description to that of a stereotypical bank teller?’ or ‘How similar is Jack’s description to that of a stereotypical engineer?’ The substitution in this case is known as an application of the ‘representativeness heuristic’, in which ‘probabilities are evaluated by the degree to which A is representative of B, that is, by the degree to which A resembles B’ (Tversky & Kahneman 1974, 1124).
The representativeness heuristic works in many cases, but it inevitably leads to systematic errors in other cases. For instance, it leads to the violation of the conjunction rule when participants are asked to compare the probability of Linda being a bank teller with the probability of Linda being a feminist bank teller. When participants rely on the representativeness heuristic, they compare Linda’s similarity to a stereotypical bank teller with her similarity to a stereotypical feminist bank teller. Since Linda is not similar to a stereotypical bank teller at all, participants conclude that Linda is more likely to be a feminist bank teller than a bank teller, even though a conjunction can never be more probable than one of its conjuncts.
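The conjunction rule at issue can be checked mechanically: in any population, real or simulated, the feminist bank tellers are a subset of the bank tellers, so the conjunction can never be the more probable outcome. A minimal sketch in Python (the population and its proportions are hypothetical, invented purely for illustration):

```python
import random

random.seed(0)

# Hypothetical population of 1,000 people fitting Linda's description.
# Each person is (arbitrarily) tagged as a bank teller and/or a feminist.
population = [
    {"bank_teller": random.random() < 0.05, "feminist": random.random() < 0.90}
    for _ in range(1000)
]

p_teller = sum(p["bank_teller"] for p in population) / len(population)
p_feminist_teller = (
    sum(p["bank_teller"] and p["feminist"] for p in population) / len(population)
)

# Whatever the numbers, the conjunction cannot exceed the single conjunct:
assert p_feminist_teller <= p_teller
```

Whatever proportions are chosen, the final assertion never fails; judging the conjunction more probable, as participants in the Linda experiment do, contradicts this.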
Kahneman and Tversky claim that reasoning errors are systematic and severe, but this does not mean that they are unavoidable. One finding is that the conjunction effect is reduced by asking about the frequency of an event (how often the event occurs) rather than its probability (how probable it is that the event will occur). A study by Klaus Fiedler (1988) compared two versions of the Linda experiment: the original version (probability), which asks the ‘probability’ of Linda being a bank teller and of her being a feminist bank teller, and a new version (frequency), which instead asks how many people out of 100 who fit Linda’s description are bank tellers and how many are feminist bank tellers. Participants are much more likely to give the correct answer to the new (frequency) version of the task (78% correct answers) than to the original (probability) version (9% correct answers).
Similarly, other versions of the Wason selection task were devised based on the hypothesis that performance improves when the rule being tested is more concrete and refers to situations that participants experience in everyday life. Indeed, performance improved significantly. In one version (Wason & Shapiro 1971), the following statement was tested: ‘Every time I go to Manchester, I travel by car.’ Participants were presented with four cards that showed on their visible sides one of the following: ‘Manchester’, ‘Leeds’, ‘car’, and ‘train’ (Figure 2). On this occasion, two-thirds of participants were able to choose the right pair of cards to turn over to test the rule (the correct pair being the cards showing ‘Manchester’ and ‘train’). These performance improvements will be an important issue in our later discussion.
Figure 2. Wason selection task with concrete options
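The logic of the concrete task is the same as that of the abstract one: the rule has the form ‘if P (Manchester) then Q (car)’, and the only cards worth turning are those whose hidden side could reveal a counterexample, P together with not-Q. A small sketch in Python (the card encoding is my own, not from the original study):

```python
# Each visible side maps to what its hidden side could show:
# destination cards hide a mode of transport, and vice versa.
cards = {
    "Manchester": ["car", "train"],
    "Leeds": ["car", "train"],
    "car": ["Manchester", "Leeds"],
    "train": ["Manchester", "Leeds"],
}

def can_falsify(visible, hidden):
    # The rule 'every trip to Manchester is by car' fails only on a card
    # pairing 'Manchester' with 'train' (P true, Q false).
    pair = {visible, hidden}
    return "Manchester" in pair and "train" in pair

# A card must be turned over iff some possible hidden side falsifies the rule.
must_turn = [v for v, hiddens in cards.items()
             if any(can_falsify(v, h) for h in hiddens)]
print(must_turn)  # → ['Manchester', 'train']
```

The ‘Leeds’ and ‘car’ cards drop out because nothing on their hidden sides can contradict the rule, which is exactly why turning them is the classic mistake.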
Argument for Pessimism
The psychological experiments on reasoning seem to support the pessimistic conclusion that humans are vulnerable to systematic and widespread irrationality. According to the experimental results, people perform poorly on a range of logical, probabilistic, and decision-making tasks; the conclusion drawn is that people do not reason rationally in those circumstances.