Arguments, Cognition, and Science. André C. R. Martins

System 1 would be fast, based on heuristics, and work automatically with no conscious effort. System 2 would be activated when we need to solve complex problems that demand conscious effort, such as calculations. It would work much more slowly, and we tend to associate it with mental tasks that involve decision and agency. It would be related to our conscious thought.

      A similar division was also proposed among artificial intelligence (AI) researchers, although they had a different question in mind. Moravec (1998) observed that creating artificial systems that reproduce our higher cognitive functions is actually much easier than creating systems that perform tasks we consider trivial, such as walking or recognizing faces. Those trivial tasks are the ones we share with most animals.

      The division between the two systems, however, is not perfect. First of all, they interact: one system obviously communicates with the other. Also, some tasks can move between systems. If you learned to drive long ago, you can relate to that. The first time behind the wheel of a car, you had to make a genuine effort to remember all the details and to coordinate everything you had to do at the same time. That is, you were basically using your System 2. After months of practice, things became more natural; and after a time, you no longer noticed all the individual decisions your brain was making. Driving became natural, instinctive, System 1–like. Some harder tasks outside your comfort zone, such as parking a car you have never driven before into a small space, might require you to go back to using System 2. You will need to pay attention and make conscious decisions, and you will feel you require more from your brain.

      And yet, System 1 usage is far from being effortless or easy; we are just not aware of it. Even if multiplying two three-digit numbers might feel like a far harder task than walking, it is not. The number of calculations our brain has to perform to keep you moving and to keep you from falling is staggeringly larger. Old, very simple calculators can easily do the multiplication. State-of-the-art robots are still learning how to move as effectively and gracefully as we can.

      Indeed, the parts of our brain that we might call System 1 can do many very hard tasks so fast and, most of the time, so precisely that we are not even aware of their workings. Most of us are incredibly good at detecting when someone is angry from very few clues. We can understand our native language even when the sound is horribly mixed with other sources and noise. We recognize faces well, often even when the person has made significant changes, such as changing their hairstyle, adding glasses, putting on makeup, shaving their beard, and so on. System 1, based on whatever heuristics our brain uses, is a very efficient system.

      Sometimes, however, it fails; and when we recognize a specific failure of System 1, as the terminology goes, we can activate System 2. We can stop, wonder what went wrong, analyze scenarios, and conduct whatever mental calculations we consider necessary. And try to learn, if possible. That is not a tale of incompetence. It is a tale of a system with some remarkable and efficient characteristics. It is not a perfect system—far from it—but we can still feel some pride in it. We must, however, learn how to use our brain better, understand its limitations, and recognize when our natural reasoning can be trusted—or not. While the terminology is not perfect, it highlights how we might use fallible heuristics and still not be considered a failure.

      

      Too Confident

      Probability and logic are not skills with which we are born—nothing new there. If our brains were only compensating for the natural uncertainty of the world, we could stop here. The lesson would be that we only had to trust our instincts less and to trust our complete calculations—when they do exist—more. But the assumption that our brains are only trying to do their best with a complicated situation has some implications, and we can check whether that is the case.

      Assume all that mattered was finding the best answers with the least effort. In that case, having a sense that we might be making mistakes would be useful. Even if we did not try to estimate how likely those mistakes were, simply being aware that they might happen would be good. It would make it easier to reevaluate our opinions when we noticed the world seemed to be at odds with them. That is, it is reasonable to expect we would be cautious and not too confident about our estimates. That is something we can verify: measuring our confidence and comparing it with how accurate we are is something researchers have been doing for a while now.

      To test how well professionals know how accurate they are, Oskamp performed a series of experiments (Oskamp 1965). The group of subjects was composed of clinical psychologists with several years of experience, psychology graduate students, and advanced undergraduate students. They received only written data about a certain Joseph Kidd. From that data, they had to evaluate his personality and predict his attitudes and typical actions. Extra information was introduced at each of four stages; that way, it was possible to see how the subjects' opinions evolved as they received more information. Surprisingly, the subjects did not get better at answering the questions as they got more information. Their accuracy, the percentage of questions they got right, oscillated a little but stayed in the 23–28 percent range. Notice that, as they were answering a multiple-choice test with five alternatives per question, that is only a little better than the random-guessing rate of 20 percent. But they became more confident about their answers. At first, they thought they had gotten about 33 percent of the questions right. After the fourth stage, they believed they had answered a little less than 53 percent of the questions correctly.
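
      As a quick worked check on those figures, here is a minimal sketch in Python; the per-stage values are representative points chosen inside the ranges reported above, not Oskamp's raw data:

```python
# Representative first- and final-stage figures drawn from the ranges
# reported for Oskamp (1965); illustrative points, not the raw data.
chance = 0.20  # random-guessing baseline on a five-alternative test

first = {"accuracy": 0.26, "confidence": 0.33}
final = {"accuracy": 0.28, "confidence": 0.53}

for label, stage in (("first stage", first), ("final stage", final)):
    edge = stage["accuracy"] - chance               # how far above guessing
    gap = stage["confidence"] - stage["accuracy"]   # overconfidence gap
    print(f"{label}: {edge:+.0%} over chance, "
          f"confidence exceeds accuracy by {gap:+.0%}")
```

      In these representative numbers, accuracy barely moves above chance while the gap between confidence and accuracy more than triples between the first and the last stage.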

      New information made the subjects feel more confident, but it did not help them answer more correctly. That tendency was confirmed in other studies. In another area of expertise, predicting the results of basketball games, people became more confident with new data, but at least one type of extra information seemed to make predictions worse. When Hall et al. informed their subjects about the names of the teams, their estimates got worse (Hall et al. 2007).

      

      Of course, not all information is damaging. Tsai and collaborators (2008) asked their subjects to predict the outcomes of games as well. They slowly added more information on performance statistics, but they provided no names. For the first six cues, the ability to predict the outcomes did improve. After that, and up to thirty different cues, accuracy no longer improved, but confidence kept increasing.

      When we think our performance was much better than it actually was, we can say we are badly calibrated. In other words, we are overconfident. In the experiments I described, as people got more information, they became more and more overconfident. Even professionals seemed to think their work was better than it was. People trust their own opinions and evaluations when they should not. That happens to teachers (Praetorius et al. 2013), law enforcement officers (Chaplin and Shaw 2015), and consumers (Alba and Hutchinson 2000).
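
      To make the notion of calibration concrete, here is a minimal sketch, with made-up data, of the comparison such studies perform: collect a stated confidence and the actual correctness for many answers, then compare the averages.

```python
# Minimal calibration sketch. Each pair is (stated confidence that the
# answer was correct, whether it actually was). The data is made up.
answers = [
    (0.90, True), (0.90, False), (0.80, True), (0.70, False),
    (0.99, True), (0.99, False), (0.60, True), (0.80, False),
]

mean_confidence = sum(conf for conf, _ in answers) / len(answers)
accuracy = sum(correct for _, correct in answers) / len(answers)

# A positive gap means more accuracy was claimed than delivered
# (overconfidence); a negative gap would be underconfidence.
gap = mean_confidence - accuracy
print(f"mean confidence {mean_confidence:.0%}, "
      f"accuracy {accuracy:.0%}, gap {gap:+.0%}")
```

      A well-calibrated subject would show a gap near zero; the studies above consistently find a positive gap that grows as extra information arrives.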

      Overconfidence does not happen all the time. When accuracy reaches high values, overconfidence tends to diminish. Lichtenstein and Fischhoff (1977) described that in their experiments: when subjects scored higher than 80 percent, they usually became underconfident. Confidence did seem to increase with accuracy, but it did not increase as much as it should have as individuals became more competent.

      Yet, it is not true that high confidence corresponds to high accuracy. Highly accurate people do tend to report high confidence, but those who score poorly can also feel confident about abilities they don't have. When individuals estimated they were 99 percent sure they had answered correctly, Fischhoff and collaborators (1977) observed they were right between 73 percent and 87 percent of the time. Even near certainty, expressed as only one chance in a million of being wrong, was not real. When people were that sure, the researchers still observed an error rate between 4 percent and 10 percent. Remember how we adjust the probability values we hear toward less extreme values. When everyone states far more certainty than their competence warrants, those corrections make perfect sense.
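
      A quick calculation with the figures just quoted shows how large that miscalibration is:

```python
# Stated vs. observed error rates, using the Fischhoff et al. (1977)
# figures quoted above.
stated_error = 1e-6             # "only a chance in a million" of being wrong
observed_errors = (0.04, 0.10)  # observed error rate: 4 to 10 percent

for observed in observed_errors:
    ratio = observed / stated_error
    print(f"observed {observed:.0%} vs stated {stated_error:.4%}: "
          f"errors {ratio:,.0f} times more common than claimed")
```

      Mistakes were tens of thousands of times more common than the subjects' stated odds implied, which is exactly why listeners discount such extreme claims.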

      Dunning and colleagues have suggested that part of the problem might come from incompetence itself. In several areas, estimating one's competence is only possible if you are competent. Incompetent people might not have the required competence to know how accurate they are (Dunning et al. 2003). But whether that effect is real or not, it is at most one cause of the problem. It is also likely that the social context provides important incentives for overconfidence. In many situations, we choose experts based on how confident those experts are. When we do that, we are encouraging overconfidence.

