Arguments, Cognition, and Science. André C. R. Martins

was a 5 percent chance of winning $12, the other, a 5 percent chance of winning $14. There was also the complementary 90 percent chance of winning $96, identical in both games. Many people picked the first gamble, even though it is clearly the worse choice: the second game only splits the chance of winning $12 into the possibility of getting either the same $12 or a better prize, $14.
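      The structure of that choice can be checked directly. A minimal sketch follows; the payoff structure here is an assumption made only for illustration, taking the first gamble as a 90 percent chance of $96 plus a 10 percent chance of $12, and the second gamble as the same 90 percent chance of $96 with the $12 branch split into a 5 percent chance of $12 and a 5 percent chance of $14.

```python
# Illustrative comparison of the two gambles described above.
# The exact payoff structure is assumed, not taken from the original study.

def expected_value(gamble):
    """Expected payoff of a gamble given as (probability, prize) pairs."""
    return sum(p * prize for p, prize in gamble)

first = [(0.90, 96), (0.10, 12)]                  # $12 branch left whole
second = [(0.90, 96), (0.05, 12), (0.05, 14)]     # $12 branch split into $12 or $14

print(round(expected_value(first), 2))    # 87.6
print(round(expected_value(second), 2))   # 87.7, the split branch can only raise the expected payoff
```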

      The details of that gamble might make it seem a complicated case. However, in a much simpler scenario, involving bets with even chances of winning, Cohen and collaborators (1971) had already observed how people mistakenly think they are far more likely to win two consecutive games than the odds allow. The actual probability of winning twice is 25 percent, yet the subjects they observed estimated it, on average, at around 45 percent.
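      It is easy to check where the 25 percent figure comes from; the sketch below simply assumes two independent bets, each with an even chance of winning.

```python
# Chance of winning two consecutive, independent bets with even odds.
p_single = 0.5
p_both = p_single * p_single   # independence: multiply the individual chances
print(p_both)                  # 0.25, far below the roughly 45 percent subjects reported
```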

      It seems that we are not well adjusted to working with probability values. Indeed, Gigerenzer and Hoffrage (1995) proposed that we can do a better job if we do not receive probability estimates at all. Under some circumstances, we seem to make better judgments when the same information is presented as frequencies. It might be better to state that something had been observed fifty times in a hundred attempts than to say the estimated probability was 50 percent.

      Those errors do not fit well with the heuristics explanation. Even if our brains use heuristics, some of those cases seem to make no sense. Why perform extra calculations to alter probability values and, with that extra effort, get worse results? An important clue came from experiments on how well people estimate correlations from data. In those experiments, no probability values were stated. Instead, people observed only data containing two variables. The researchers then asked whether one of those variables seemed to be associated with the other or not.

      That situation was investigated by two teams of researchers in two different scenarios. In the first case, Chapman and Chapman (1967, 1969) tried to measure whether people would see correlations where none existed. They wanted to see what would happen when those false correlations were expected. They gave the volunteers pairs of words. Every pair was shown the same number of times, but the volunteers reported that the pairs that made sense, like lion and tiger, appeared more often than the pairs that seemed random, such as lion and eggs. When a relationship was expected, people seemed to perceive it even when there was none.

      Expectations also played a role in the reverse case, observed by Hamilton and Rose (1980). In their experiment, they used variables between which no correlation was expected; and, in agreement with those initial expectations, they observed that their subjects failed to notice any association in the data when it was weak or moderate. If the association was very strong, the volunteers did notice it, but even then they felt it was weaker than what the data showed.

      That initial opinions would have any effect on data analysis might seem wrong. And it is wrong, if the question is only about what the data says. But things are different if you are interested not only in the data but in the original question you wanted the data to answer. In that case, Bayesian methods (Bernardo and Smith 1994; O’Hagan 1994) provide a way to incorporate everything you know about the problem into the answer, by telling us how we should combine what we already know with the new information we receive. Suppose you want to know whether an association is real, not only in the data but in the real world. In that case, the correct way to answer is to use your prior knowledge as well as the new information. Normal people are far more interested in answering questions about the real world than in merely describing a specific data set.

      As a consequence, your final estimate might show some correlation, when you initially expected one, even if no correlation exists in the data. Bayesian rules state that your expected correlation should move toward what the data says. That means that, if you expect some correlation and observe none, your initial expectation must become weaker, but it should not drop to zero. The same is true when your prior knowledge says your variables should be independent. In that case, when it is combined with a dependency in the data, your final conclusion should be that there is some correlation. The final correlation estimate, however, will be weaker than what you’d get from the new information alone.
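      A toy sketch can make that kind of updating concrete. The model and every number below are hypothetical; they only illustrate the qualitative point that a prior belief about an association moves toward the data without being replaced by it.

```python
# Toy two-hypothesis Bayesian update: how strongly to believe an association is real.
# All numbers are hypothetical and chosen only for illustration.

def posterior(prior_assoc, lik_assoc, lik_indep):
    """P(association | data), from a prior belief and the probability of the
    observed data under 'association' and under 'independence'."""
    joint_assoc = prior_assoc * lik_assoc
    joint_indep = (1 - prior_assoc) * lik_indep
    return joint_assoc / (joint_assoc + joint_indep)

# Strong prior expectation of an association, data that looks independent:
print(round(posterior(prior_assoc=0.8, lik_assoc=0.2, lik_indep=0.4), 2))  # 0.67: weaker, not zero

# Prior expectation of independence, data that favors a dependency:
print(round(posterior(prior_assoc=0.2, lik_assoc=0.4, lik_indep=0.1), 2))  # 0.5: some belief appears
```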

      

      That does not mean that the volunteers were right. If the question was what was happening in the data, their answers were wrong. And if you are helping scientists in an experiment, it might not make much sense to bring your own initial expectations. We do not live in laboratories, however; in real life, we almost always have initial opinions. When there is some intuition that two things should be related, we may, at first, simply assume that they are. That is certainly much faster than considering all the available information about the situation. And being fast does matter.

      It seems we might reason in ways that approximate the results of a Bayesian analysis. Since we are not naturally born probabilists, the next question is how good we are at getting close to a Bayesian estimate. To some extent, we do seem to be naturally born Bayesians. We might get the numbers wrong, but, from a qualitative point of view, our reasoning seems to follow the basic ideas of Bayesianism (Tenenbaum et al. 2007). We take our initial opinions and we mix them with any new observation, obtaining a posterior estimate. That is exactly how Bayesian methods work. Experiments show that we reason following those general guidelines even when we are about twelve months old (Téglás et al. 2011).

      As we look at the numbers instead of only the qualitative description, though, the story seems to change. Sometimes, we ignore part of the information that is available to us. We make easy probability mistakes (Tversky and Kahneman 1983). We fail at estimating our chances in two-stage lotteries. We also make mistakes in the case of base rate neglect, first reported by Kahneman and Tversky (1973). That happens when we have information about the rate at which an event occurs in a population. When we add to that an uncertain observation that gives us clues about the probability of that event, we have a tendency to use only the observation. We simply ignore the base rate. Apparently, we only care about initial opinions that are genuinely our own, not about initial information we have not somehow internalized.

      One traditional example of how base rate neglect works, and how damaging its consequences can be, is the case of disease testing. Suppose there is a disease that is very serious but rare: only 1 person in 1,000 has it. There is also a reliable but not perfect test for that disease. It reports a positive result for a sick patient 98 percent of the time. And it reports a negative result for healthy patients, also 98 percent of the time. You are a medical doctor, and a new patient you know nothing about enters your office. She brings the results of that test, and she got a positive result, suggesting she might be ill. The question before you is how likely that woman is to actually have the disease. You have not examined her; the test result is all you know. The test might be right, but it might have failed.

      

      Most people estimate that the chance that woman has the disease must be around 98 percent. After all, the test works 98 percent of the time. But, while the test’s reliability is a very important piece of information for the final answer, it is not all we know. We also know the base rate in the population, where only 1 person in 1,000 is sick. Ignoring that part of the information can lead to very bad medical decisions. If we take the base rate as the initial probability, the initial opinion you should have, the actual chance that the patient is sick is about 4.7 percent. That number is obtained by using Bayes’ theorem.

      While surprising, it is not hard to understand why the probability is so low. As we started with an initial probability of 0.1 percent for the disease, we can see that the probability increased by a factor of 47. That is a huge increase. But, starting from a very small chance, we still end up with a probability that is less small yet far from large. More importantly, given that the result was positive, one of two things might have happened. The patient might actually be sick, and that had an initial chance of 0.1 percent, or the test might have returned a wrong answer, which happens 2 percent of the time. It is much more probable that the positive result was caused by something with a 2 percent chance of happening than by something with a 0.1 percent chance.
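      For readers who want to verify the figures, the calculation can be written out directly. The sketch below simply applies Bayes’ theorem to the numbers given in the text.

```python
# Bayes' theorem applied to the disease-test example in the text.
prior_sick = 1 / 1000           # base rate: 1 person in 1,000 has the disease
p_pos_if_sick = 0.98            # the test flags a sick patient 98 percent of the time
p_pos_if_healthy = 0.02         # and wrongly flags a healthy one 2 percent of the time

p_positive = prior_sick * p_pos_if_sick + (1 - prior_sick) * p_pos_if_healthy
p_sick_given_positive = prior_sick * p_pos_if_sick / p_positive

print(round(p_sick_given_positive, 3))                # 0.047, about 4.7 percent
print(round(p_sick_given_positive / prior_sick, 1))   # 46.8, the roughly 47-fold increase
```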

      The correct solution to the problem is similar to how our reasoning works. We start with an initial estimate and correct it using the new data. But, from the point of view of our minds, that is not how the question is presented. The initial rate, while technically equivalent to an initial opinion, plays no such role in our brains. In problems where

