Philosophy of Psychology. Lisa Bortolotti
that people make inferential errors. The errors seem to be due to lack of knowledge of certain inductive rules or an inability to apply them. If so, then people are not fully rational in that their inferences fall short of the best available normative standards. (Thagard & Nisbett 1983, 257)
We argue that the deviations of actual behavior from the normative model are too widespread to be ignored, too systematic to be dismissed as random error and too fundamental to be accommodated by the theory of rational choice and then show that the most basic rules of the theory are commonly violated by decision makers. (Tversky & Kahneman 1986, 252)
The argument for pessimism goes as follows. We provisionally answered the philosophical questions by adopting the standard picture of rationality, according to which to be rational is to reason in accordance with the rules of logic, probability, and decision-making (Section 1.3). Then we moved on to the psychological questions and saw that human reasoning systematically deviates from those rules (Section 1.4). Putting the philosophical and psychological discussions together, we reach a pessimistic conclusion about human rationality.
We now turn to the objections to pessimism, in particular the objections from the ecological rationality programme led by Gigerenzer. As Kahneman and Tversky note, Gigerenzer’s objections include two claims: ‘a conceptual argument against our use of the term “bias”’ and ‘an empirical claim about the “disappearance” of the patterns of judgment that we had documented’ (Kahneman & Tversky 1996, 582).
Gigerenzer’s ‘empirical claim’ is related to the psychological part of the argument for pessimism. Gigerenzer interprets the psychological studies differently, emphasizing the fact that reasoning performance can be improved when the problems are formulated in a different way. As we saw above, in the frequency version of the experiment run by Fiedler (1988), participants’ performances improved significantly. Notice that the proponents of the heuristics and biases programme and the proponents of the ecological rationality programme both recognize the possible performance improvements. Gigerenzer tends to stress the fragility of biases; he says that the biases ‘disappear’ (Gigerenzer 1991). Kahneman and Tversky, in contrast, tend to stress the robustness of the biases; they say that biases can be ‘reduced by targeted interventions but cannot be made to disappear’ (Kahneman & Tversky 1996, 589). But it is far from obvious that this is more than a difference in emphasis or rhetoric. Indeed, Kahneman and Tversky write: ‘There is less psychological substance to [Gigerenzer’s] disagreement with our position than meets the eye’ (Kahneman & Tversky 1996, 589; see also Samuels, Stich, & Bishop 2002).
A more substantial and philosophically interesting disagreement concerns Gigerenzer’s ‘conceptual argument’, which is related to the philosophical part of the argument for pessimism. He rejects the standard picture of rationality, or at least the way in which the standard picture is used in the argument for pessimism. He does not think that reasoning performance should be evaluated in terms of the rules of logic, probability, and decision-making. Among Gigerenzer’s conceptual or philosophical objections, we focus on three main objections, which we discuss in turn in the next section.
1.5 Objections to Pessimism
The Feasibility Objection
According to the first objection, which we call the ‘feasibility objection’, systematic failures to reason in accordance with the rules of logic, probability, and decision-making do not necessarily imply irrationality, because it is unfair and unrealistic to evaluate human reasoning performance in terms of such rules in the first place.
As we noted, the argument for pessimism is based on the standard picture according to which rationality consists in reasoning in accordance with the rules of logic, probability, and decision-making, such as the conjunction rule, Bayes’ rule, and the principle of descriptive invariance. But it would be unfairly simplistic and demanding to evaluate human reasoning in terms of the rules of logic, probability, and decision-making. As Gigerenzer notes, such an unfair and unrealistic evaluation ‘ignores the constraints imposed on human beings. A constraint refers to a limited mental or environmental resource. Limited memory span is a constraint of the mind, and information cost is a constraint on the environment’ (Gigerenzer 2008, 5).
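To make concrete what the standard picture demands, consider a worked instance of Bayes' rule, one of the rules just listed. The numbers below are purely illustrative (a hypothetical diagnosis-style case, not drawn from the experiments discussed in this chapter), but they show the kind of calculation the normative standard requires:

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
# Illustrative numbers only: a condition with a 1% base rate and a test
# with a 90% hit rate and a 9% false-positive rate.
p_h = 0.01                 # prior: P(condition)
p_e_given_h = 0.90         # P(positive test | condition)
p_e_given_not_h = 0.09     # P(positive test | no condition)

# Total probability of a positive test, then the posterior by Bayes' rule.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
posterior = p_e_given_h * p_h / p_e

print(round(posterior, 3))  # roughly 0.092: far below the intuitive 0.9
```

The gap between the computed posterior and the intuitively tempting answer is exactly the sort of systematic deviation the heuristics and biases programme documents; the feasibility objection asks whether it is fair to hold ordinary reasoners to this calculation at all.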
Indeed, not everybody agrees with the claim that standards of good reasoning should be derived from the rules of logic, probability, and decision-making. Some philosophers (e.g., Harman 1999) have argued that human thought should have independent normative standards that reflect human cognitive capacities and limitations. In their accounts, normative standards of rationality are not modelled on formal principles of logic, probability, and decision theory.
Let us consider an example. The standard picture would demand logical consistency among a person’s beliefs, which means that a person should not have beliefs that are logically inconsistent with one another. However, the task of maintaining a logically consistent belief system is extremely demanding given realistic computational constraints. The job requires greater computational resources than those available to human cognitive systems. As Stephen Stich puts it, evaluating human reasoning in terms of the standard picture might commit us to the ‘perverse’ judgment that ‘subjects are doing a bad job of reasoning because they are not using a strategy that requires a brain the size of a blimp’ (Stich 1990, 27).
The feasibility objection certainly raises a fair worry about the standard picture in its idealized form, namely the claim that rationality requires reasoning perfectly in accordance with the rules of logic, probability, and decision-making. The standard picture needs to be weakened in light of computational and other relevant constraints (although it is not easy to distinguish relevant constraints from irrelevant ones; see Stich 1990). In this vein, Christopher Cherniak argues against upholding logical consistency as an ideal of rationality for human belief systems and endorses a less demanding version of the standard picture that incorporates feasibility considerations, which he calls minimal rationality (cf. Cherniak 1990). According to minimal rationality, a belief-like state is minimally rational if it conforms to standards of correct reasoning that are feasible in light of the limitations of human cognitive capacities.
For our purposes, the normative theory is this: The person must make all (and only) feasible sound inferences from his beliefs that, according to his beliefs, would tend to satisfy his desires. (Cherniak 1990, 23)
However, the pessimistic argument cannot be avoided simply by taking computational constraints into account. Granted, it would be unfair to expect rational agents to maintain a perfectly consistent belief system. But even once the constraints are acknowledged, it remains fair to expect that rational agents avoid particular errors, such as assigning a higher probability to Linda being a feminist bank teller than to Linda being a bank teller. After all, in all of the experiments discussed above, some participants do offer the logically or mathematically correct answer, and they do not have ‘a brain the size of a blimp’.
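The conjunction rule at issue here requires no blimp-sized brain to verify: for any events A and B, P(A and B) can never exceed P(B), since every case in which both hold is also a case in which B holds. A minimal simulation makes the point (the probabilities are hypothetical and the two attributes are assumed independent purely for illustration; nothing in the original studies fixes these values):

```python
import random

random.seed(0)

# Hypothetical probabilities, for illustration only.
P_BANK_TELLER = 0.05
P_FEMINIST = 0.60

trials = 100_000
bank_teller_count = 0
conjunction_count = 0

for _ in range(trials):
    is_teller = random.random() < P_BANK_TELLER
    is_feminist = random.random() < P_FEMINIST
    bank_teller_count += is_teller
    conjunction_count += is_teller and is_feminist

# The conjunction can never be more frequent than either conjunct:
# every (feminist and bank teller) case is also a (bank teller) case.
assert conjunction_count <= bank_teller_count
```

Whatever the underlying probabilities, the assertion at the end can never fail, which is why ranking ‘feminist bank teller’ above ‘bank teller’ counts as an error by the standard picture.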
Stanovich (1999) examined this issue empirically by studying the relation between computational capacity (measured by SAT scores) and performance in reasoning tasks. He found a moderate positive correlation between the two but concluded that computational limitations are responsible for only part of the reasoning failures. If this is correct, then many of the reasoning errors found in the heuristics and biases research have little to do with computational constraints, and such constraints cannot fully excuse them.
The Meaninglessness Objection
The Linda experiment and other probabilistic tasks in the heuristics and biases programme ask participants to assess the probability of a single event (e.g., the probability of Linda being a feminist bank teller). However, according to a frequentist interpretation of probability, it is hard to make sense of the probability of a single