Philosophy of Psychology. Lisa Bortolotti
a claim about the probability of an event (e.g., the probability of winning a coin toss is 0.5) is understood as talking about the frequency of the event relative to the relevant reference class (e.g., the frequency of winning relative to the total number of tosses of that particular coin). The Linda experiment asks the probability of Linda being a feminist bank teller. On a frequentist interpretation, this amounts to asking the frequency of Linda being a feminist bank teller. What does that mean? The frequency of Linda being a feminist bank teller, relative to the total number of cases in her life in which she is a feminist bank teller, would trivially be 1. Does that mean, then, that the probability of Linda being a feminist bank teller is 1?
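The frequentist idea can be illustrated with a short simulation (a sketch in Python; the fair coin and toss counts are illustrative assumptions, not drawn from the text):

```python
import random

random.seed(1)

def relative_frequency(n_tosses):
    """Relative frequency of heads in n_tosses simulated fair-coin tosses."""
    heads = sum(random.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

for n in (10, 1_000, 100_000):
    print(n, relative_frequency(n))

# As the number of tosses grows, the relative frequency settles near 0.5.
# For a single, unrepeatable event, however, there is no long run of
# repetitions to consult -- which is the frequentist's difficulty above.
```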
Gigerenzer’s objection is that asking the probability of a single, non-repetitive event – like Linda being a feminist bank teller – is an ill-formed question. Since the question is ill formed and meaningless, there is no correct answer to it. Thus, it is not a mistake to assign a higher probability to Linda being a feminist bank teller than to Linda being a bank teller.
The philosophical and statistical distinction between single events and frequencies clarifies that judgments hitherto labeled instances of the ‘conjunction fallacy’ cannot be properly called reasoning errors in the sense of violations of the laws of probability. (Gigerenzer 1994, 144)
Moreover, as we have already seen (Fiedler 1988), when questions are explicitly framed in the format of frequency with a clearly specified reference class (‘Among the 100 people who fit Linda’s description, how many are bank tellers and how many are feminist bank tellers?’), participants’ performance improves. This reveals that people make apparent ‘mistakes’ when the question is ill formed (asking the probability of a single event with no specified reference class), but do not make mistakes when the question is well formed (asking the frequency of something being the case among a specified reference class). It would seem, then, that people are far from being irrational.
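The logical point behind the frequency format can be made concrete by counting over a toy reference class (the population and attribute rates below are hypothetical, for illustration only; they are not Fiedler's data):

```python
import random

random.seed(0)

# Hypothetical population of 100 people fitting Linda's description.
# The attribute probabilities are made up for illustration.
population = [
    {"bank_teller": random.random() < 0.1,
     "feminist": random.random() < 0.8}
    for _ in range(100)
]

bank_tellers = sum(1 for p in population if p["bank_teller"])
feminist_bank_tellers = sum(
    1 for p in population if p["bank_teller"] and p["feminist"]
)

# In the frequency format the conjunction rule is transparent: every
# feminist bank teller is, by definition, also a bank teller, so the
# conjunction count can never exceed the single-attribute count.
print(bank_tellers, feminist_bank_tellers)
```

Whatever numbers the toy population happens to contain, the count of feminist bank tellers cannot exceed the count of bank tellers, which may be why the frequency framing makes the correct answer easier to see.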
What we call the ‘meaninglessness objection’ can be developed in at least two ways. The first version, the ‘factual version’, states that the frequency interpretation of probability is true as a matter of fact and, hence, that the probabilistic questions in the heuristics and biases experiments are nonsense as a matter of fact. The second version, the ‘psychological version’, states that frequentism is psychologically true, in the sense that information about probability is represented in a frequentist format in the minds of human agents. Hence, the probabilistic questions in the heuristics and biases experiments are nonsense psychologically (i.e., the participants cannot make sense of them).
The factual version of the meaninglessness objection is philosophically bold; it presupposes the truth of the frequency interpretation of probability. The interpretation of probability is a highly contested topic in philosophy, which goes beyond the scope of this book. But it is worth pointing out that the factual version involves a controversial argumentative strategy. After all, (the simplistic version of) frequentism has been criticized exactly because people think that they can meaningfully talk about the probability of a single event (cf. Hajek 2019). It is true that, according to frequentism, the probability of a single event does not make sense; for critics, however, this only shows that frequentism itself is problematic.
The psychological version of the meaninglessness objection is less bold than the factual version. Unlike the factual version, the psychological version is not committed to any particular claim about the interpretation of probability. The problem with the psychological version, however, is that it is not bold enough; if frequentism is not true as a matter of fact but is merely true psychologically, then the probabilistic questions do make sense as a matter of fact and, thus, reasoning biases are real errors as a matter of fact after all. The psychological version also faces an empirical problem: it is unlikely that the probabilistic questions in the Linda experiment are psychological nonsense. Participants evidently do not regard the questions as nonsense; after all, they provided meaningful answers to the questions in the experiments, rather than refusing to answer or demanding clarification (Samuels, Stich, & Bishop 2002).
The Ecological Rationality Objection
The next objection directly confronts the standard picture of rationality, which defines rationality in terms of the rules of logic, probability, and decision-making. This seems to be where Gigerenzer’s fundamental disagreement with pessimists lies. As Gigerenzer notes, his main disagreement with the pessimism in the heuristics and biases programme is that it ‘does not question the norms [of logic and probability] themselves’ and ‘it retains the norms and interprets deviations from these norms as cognitive illusions’ (Gigerenzer 2008, 6).
The standard picture is problematic, according to Gigerenzer:
Humans have evolved in natural environments, both social and physical. To survive, reproduce, and evolve, the task is to adapt to these environments, or else to change them. […] The structure of natural environments, however, is ecological rather than logical. (Gigerenzer 2008, 7)
The problem identified by Gigerenzer is that the standard picture neglects the role of environment, which is a crucial factor for biological success. As an alternative to the standard picture of rationality, Gigerenzer offers the ecological picture of rationality, which characterizes rationality in terms of cognitive success in the relevant environment. According to the standard picture, rationality requires a fit between the mind and the rules of logic, probability, and decision-making, while ecological rationality requires a ‘fit between structures of information-processing mechanisms in the mind and structures of information in the world’ (Todd & Gigerenzer 2007, 170).
When we evaluate human reasoning performance in light of the ecological picture rather than the standard picture, the pessimistic interpretation is no longer warranted. For example, the information about probability available in the ancient environment was represented in the frequency format (e.g., ‘3 rainy days in 10 days’ rather than ‘30% chance of rain’). As we have seen, human reasoning performance is relatively good when the questions are represented in the frequency format. Thus, human probabilistic reasoning seems to be ecologically rational; it worked successfully in the ancient environment in which probability was represented in the frequency format.
A similar argument can be made about the Wason selection task. One might speculate that information about cheaters (those who receive the benefit of cooperation without contributing to it) was particularly salient in ancient societies (Cosmides & Tooby 1992). Indeed, failing to detect cheaters is a serious challenge to the maintenance of altruistic behaviours (e.g., Trivers 1971). It turns out that the Wason selection task becomes less challenging when the content of the statement to be falsified is an example of a social exchange rule. Alternative versions of the selection task were devised to test the hypothesis that a participant’s performance improves when the statement tested is a cheater-detection rule (Cosmides 1989). For instance, Richard Griggs and James Cox (1982) asked participants to imagine a police officer checking whether people drinking in a bar respect the following rule: ‘If you drink alcohol, then you must be over twenty-one years of age.’ The cards had on their visible sides one of the following: ‘Beer’, ‘Coke’, ‘22 years’, and ‘16 years’. In this situation, the majority of participants (correctly) chose the ‘Beer’ card and the ‘16 years’ card. It could be argued, then, that human deductive reasoning is ecologically rational.
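The logic of the selection task can be sketched as a small check (the card encodings below are my own illustrative assumptions, not part of Griggs and Cox's materials): a card of the form 'if P then Q' can only be falsified by a case where P holds and Q fails, so the cards worth turning are exactly those showing P or not-Q.

```python
def must_check(card, shows_antecedent, shows_not_consequent):
    """A card needs turning only if its visible side shows the antecedent
    (P) or the negation of the consequent (not-Q)."""
    return card in shows_antecedent or card in shows_not_consequent

# Cards from the drinking-age version of the task.
cards = ["Beer", "Coke", "22 years", "16 years"]
drinks_alcohol = {"Beer"}     # visible sides making the antecedent true
under_21 = {"16 years"}       # visible sides making the consequent false

to_turn = [c for c in cards if must_check(c, drinks_alcohol, under_21)]
print(to_turn)  # ['Beer', '16 years'] -- the cards most participants select
```

The same P/not-Q logic governs the abstract version of the task; what changes in the drinking-age version is only the content, which is what makes the participants' improved performance notable.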
In effect, Gigerenzer introduces an alternative conception of rationality, namely the ecological conception of rationality, and argues that human reasoning meets the requirements of ecological rationality. Thus, we seem to have two conceptions of rationality that yield two different interpretations of the experimental results. Our conclusion is pessimistic (‘Humans are irrational’) when human reasoning performance is evaluated in light of the standard picture. However, our conclusion is optimistic (‘Humans are rational’) when human reasoning performance is evaluated in light of the ecological picture.
A problem with this ‘ecological rationality objection’ is that it seems to conflate biological adaptiveness and rationality. For instance, Stanovich writes:
Evolutionarily adaptive behavior is not the same as rational behavior.