Arguments, Cognition, and Science. André C. R. Martins


      When we have an initial opinion, we have a tendency to keep it. But how strong is that tendency? Phillips and Edwards (1966) tested how much we change our estimates when we get new data. Their experiment had two bags filled with chips, and the composition of each bag was known. They asked their subjects to estimate which bag a series of chips had been drawn from, assuming that, at first, both bags were equally likely. What they observed was that estimates did not change as much as they should have. When their subjects changed that initial estimate of 50 percent to around 70 percent, the correct new estimate, dictated by probability rules, should have gotten them close to 97 percent. Phillips and Edwards coined the term conservatism. In this case, conservatism means our tendency to change our opinions less than we should.
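
      To make the arithmetic concrete, here is a minimal sketch of the normative calculation. The 70/30 and 30/70 bag compositions are illustrative assumptions, not necessarily the exact proportions Phillips and Edwards used; the point is only how quickly a correct Bayesian update moves away from the initial 50 percent.

```python
from math import prod

# A minimal sketch of the normative Bayesian update behind the two-bag task.
# The 70/30 and 30/70 compositions are illustrative assumptions, not the
# exact proportions used by Phillips and Edwards.
P_RED = {"A": 0.7, "B": 0.3}   # chance of drawing a red chip from each bag

def posterior_A(draws, prior_A=0.5):
    """Probability that the chips came from bag A, given a sequence of draws."""
    like_A = prod(P_RED["A"] if d == "red" else 1 - P_RED["A"] for d in draws)
    like_B = prod(P_RED["B"] if d == "red" else 1 - P_RED["B"] for d in draws)
    return like_A * prior_A / (like_A * prior_A + like_B * (1 - prior_A))

# Four red chips in a row already push the correct estimate close to 97 percent,
# while subjects in this kind of experiment tend to report something nearer 70.
print(round(posterior_A(["red"] * 4), 3))   # 0.967
```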

      

      Peterson and DuCharme observed similar results and called this tendency the primacy effect (Peterson and DuCharme 1967). They also asked their volunteers to estimate which urn a set of chips had been drawn from. The difference between the two experiments was that Peterson and DuCharme rigged their draws. In their experiment, the total number of draws their volunteers observed favored neither urn, but the initial thirty draws seemed to come from urn A, while the next thirty draws reversed that effect. However, once the volunteers started thinking A was the correct urn, the same amount of evidence in favor of B was not enough to make them doubt it. Indeed, after thirty draws that favored A, it took fifty new ones favoring B to compensate for that initial opinion.
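
      For an ideal Bayesian observer, only the totals matter, not the order in which the draws arrive, so thirty draws favoring A followed by thirty favoring B should land the estimate right back at 50 percent. A toy check, again assuming hypothetical 70/30 versus 30/70 urns:

```python
# For an ideal Bayesian, order does not matter. With hypothetical 70/30 versus
# 30/70 urns, each draw multiplies the odds in favor of A by 7/3 (if it favors
# A) or by 3/7 (if it favors B).
odds_A = 1.0                    # even odds before any chip is drawn
odds_A *= (7 / 3) ** 30         # thirty draws that favor urn A
odds_A *= (3 / 7) ** 30         # thirty draws that favor urn B
print(round(odds_A / (1 + odds_A), 3))   # 0.5: exactly back where we started
```

      The extra fifty draws the volunteers needed are, in this normative picture, pure surplus: a measure of how sticky that first opinion is.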

      Conservatism is not observed only in laboratory experiments. It happens even when there is a lot of money to be made or lost, as, for example, in the world of corporate finance. Indeed, investors seem to react less than they should when they get new information (Kadiyala and Rau 2004). We seem to be too confident of our initial guesses. We might ignore data that we could use to form an initial opinion, as we have seen in the base rate neglect bias. But when the initial opinion is ours, we tend to keep it. It is as if we trusted our own evaluation more than external data, and we do that even when our opinion was formed recently from the same type of external data.

      That mistrust can actually work as an extra explanation for our biases. We trust our minds more than we trust other sources. In many circumstances, that makes sense. Others might have hidden agendas; some people might lie to us and try to deceive us. Outside sources are less reliable than we are, at least in symmetrical situations where both sides have had access to the same amount of information and training. When others know as much as we do, it is reasonable to trust ourselves more. People lie, after all.

      Taking outside information with a grain of salt makes sense. If you do that, changing your opinions more slowly than a naïve Bayesian analysis suggests might not be wrong at all. Indeed, to avoid deception, we should perform more rigorous calculations rather than putting our complete trust in the numbers provided in the experiments. We should include the possibility that others might be mistaken, lying, or exaggerating. Including those possibilities in a more realistic model of the world has the consequence of making us change our opinions less. In that case, we may be doing better work than assumed in those experiments, even if not perfect work. Ignoring base rates is ignoring relevant information. That is clearly an error, and our tendency to pay more attention to initial data than to what we get later is also wrong. But we might have heuristics that, although fallible, could be compensating for the fact that the real world is far more uncertain than the ideal situations in laboratory experiments.
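
      One way to see why a more realistic model updates less is to allow for the possibility that the source is simply noise. The numbers below are made up for illustration: a piece of evidence that would be 80/20 diagnostic if the source were reliable, heard from sources we trust to different degrees.

```python
# A toy model of discounting a possibly unreliable source. The 80/20 evidence
# and the trust levels below are made-up values, chosen only for illustration.
def posterior(trust, prior=0.5, p_if_true=0.8, p_if_false=0.2):
    """Posterior after a report that is informative only if the source is reliable.

    With probability `trust` the report behaves like real evidence; otherwise
    it is pure noise, equally likely under either hypothesis.
    """
    like_true = trust * p_if_true + (1 - trust) * 0.5
    like_false = trust * p_if_false + (1 - trust) * 0.5
    return like_true * prior / (like_true * prior + like_false * (1 - prior))

for trust in (1.0, 0.8, 0.5, 0.2):
    print(trust, round(posterior(trust), 3))
# 1.0 -> 0.8, 0.8 -> 0.74, 0.5 -> 0.65, 0.2 -> 0.56: the less we trust the
# source, the less a single report moves us away from the 50 percent prior.
```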

      

      Indeed, if we look more carefully at the way we alter the probabilities we hear, the same effect can be observed. In most real-life situations, that is, outside labs and classrooms, when someone tells us a probability, that is a raw estimate. That estimate is uncertain, and it is often based on few observations. That raw estimate is not an exact number; it is data. And we can use that data to help us infer what the correct value is. If we include those ideas in a more sophisticated Bayesian model, we get estimates for the actual probability values. Suppose your teenage son is trying to convince you to let him drive your car. He claims there is only one chance in a million he will cause any kind of damage to it. You know that is a gross exaggeration, even if, for the moment, we assume he is telling the exact truth about what he thinks. Part of his number comes from a faulty estimate on his part. As he only has data on a few cases of his own driving, he could never arrive at a figure of one in a million. His estimate also suffers from something called survivor bias. If he had crashed the car before, it is very likely he wouldn't make that claim, if only because it would be a poor argument, one you wouldn't believe as you reminded him of the incident. So, only lucky teenagers get to make that claim. Of course, in the real case, your son is also probably lying on top of his estimate, trying to convince you. The probability he provides is not the true value.

      Take a look at the lab experiments again. They were choices between gambles, and the gambles had exact probability values. Of course, the scientists are not your kid trying to get your car keys, but they are still people, and you will still use your everyday skills if you are a volunteer in the experiment. When someone tells you there is only a 5 percent chance of rain today, you should ask yourself how precise that is. That 5 percent becomes your data. If your initial opinion about the problem was weak, that would mean the chance of rain could be anything at first. The correct analysis uses that initial weak estimate and the 5 percent you heard. Your final estimate will be a probability somewhere between your uninformed initial guess and the 5 percent. If the source for the 5 percent figure is very credible, it should dominate your final guess; if not, the 5 percent should matter little.
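
      A rough sketch of that calculation, under stated assumptions: treat the quoted 5 percent as if it summarized a hypothetical sample of n observations made by the source, and combine it with a vague Beta(1, 1) prior over the chance of rain. The sample sizes stand in for how credible the source is.

```python
# A rough sketch: read the quoted "5 percent" as the frequency in a sample of
# n observations made by the source, combined with a vague Beta(1, 1) prior
# over the chance of rain. The sample sizes are hypothetical.
def updated_chance_of_rain(quoted_p, n, prior_a=1.0, prior_b=1.0):
    """Posterior mean for the chance of rain after hearing `quoted_p`,
    treated as the frequency observed in a sample of size n."""
    rainy = quoted_p * n                      # implied rainy observations
    return (prior_a + rainy) / (prior_a + prior_b + n)

for n in (2, 10, 100, 10_000):
    print(n, round(updated_chance_of_rain(0.05, n), 3))
# 2 -> 0.275, 10 -> 0.125, 100 -> 0.059, 10000 -> 0.05: the estimate sits
# between complete ignorance (50 percent) and the quoted 5 percent, and the
# more credible the source, the closer it gets to the number we heard.
```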

      In the original experiments, Kahneman and Tversky observed a curve that described what our brains seem to do when they hear a probability (Kahneman and Tversky 1979). They called that curve a weighting curve. A weighting curve is a function that gives the value w our brains might be using as the probability when they hear a probability value p. We do not know if our brains actually use weighting functions, though. All they observed was that they could describe our behaviors as if we did. If we use them, we can still understand some of our choices as if they obeyed decision theory, except that we use altered probability values. Is there any reason why we would do that?
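
      To make the idea of a weighting curve concrete, here is one commonly used parametric form from later prospect-theory work. It is shown only as an illustration; it is not necessarily the exact curve fitted in the 1979 paper, and the parameter value is just a typical one from that literature.

```python
# One commonly used parametric form for a probability weighting curve, taken
# from later prospect-theory work and shown here only to make the idea
# concrete; it is not necessarily the curve fitted in the 1979 paper.
def weight(p, gamma=0.61):
    """Inverse-S weighting: overweights small p, underweights large p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

for p in (0.01, 0.05, 0.5, 0.95, 0.99):
    print(p, round(weight(p), 3))
# Small probabilities come out inflated and large ones deflated, so choices
# look as if they followed decision theory with these altered values of p.
```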

      Back to the rain problem: if you want to know how close your final estimate will get to the 5 percent, you need information on how reliable that value is. Is it a wild guess? Is it the result of state-of-the-art models that processed a huge amount of accurate climate information? Each case gives a different weight to the rain estimate. As we are looking for what would work well in most daily scenarios, the question is what a typical person would mean by a 5 percent value. Evidence shows people usually make guesses with little information, using small samples (Kareev et al. 1997). It is natural that our brains would assume something like that.

      Put all those ingredients together and you can do a Bayesian estimate of the actual chance of rain. I was able to show that the shape of the curves estimated from those considerations matches the shape observed in the experiments (Martins 2005, 2006). Even the cases where we pick the worse gamble can be described by an appropriate choice of assumptions; in the model, that corresponds to a proper choice of parameter values.
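
      A toy version of that idea, and emphatically not the model in those papers: read each quoted probability as the frequency observed in a small hypothetical sample (eight observations here), combine it with a vague Beta(1, 1) prior, and look at the resulting estimate.

```python
# A toy illustration only, not the model in Martins (2005, 2006): read each
# quoted probability as the frequency in a small hypothetical sample (n = 8)
# combined with a vague Beta(1, 1) prior, and report the posterior mean.
def perceived(p, n=8):
    return (1 + p * n) / (2 + n)

for p in (0.01, 0.05, 0.5, 0.95, 0.99):
    print(p, round(perceived(p), 3))
# 0.01 -> 0.108, 0.05 -> 0.14, 0.5 -> 0.5, 0.95 -> 0.86, 0.99 -> 0.892:
# small chances are pulled up and large chances pulled down, qualitatively
# the same distortion the experimental weighting curves describe.
```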

      Heuristics still seem to be a part of this answer. But here, it seems we have some specialized ways to interpret information about chances. Even when we should consider those chances exact, we seem to think they are uncertain. It looks like we treat probabilities as the result of a guess by common humans. Common humans do not use probability or statistics for their guesses. Our ancestors had to make their guesses based on the inaccurate estimates of their peers. That is also how we receive information when we are kids. Whatever our brains are doing, they might be helping us function better in a normal social setting. That might also make us fail badly in cases where we can get exact probability values, but those exact values only exist in artificial problems, usually with little impact on daily life.

      There is an approximate description of human reasoning that has become standard terminology in the psychology literature (Kahneman 2011). It is only an approximation of how our reasoning works, but it is a useful way to understand how we are shaped. While it has been criticized, and is almost certainly nothing more than an approximation, it is still a useful terminology for illustrating some important points.

      That terminology claims our mental skills seem to work as if we had two types of systems in our

