Arguments, Cognition, and Science. André C. R. Martins

      Figure 3.1 Representation of the figure shown by Asch to the subjects of his experiment.

      Influence toward a wrong answer could have bad consequences, even deadly ones, as we are witnessing with the comeback of fatal diseases brought on by anti-vaxxer campaigns. That makes it important to figure out when groupthink is more likely to happen. Experiments do show that not all influences work equally. Surprisingly, the level of expertise of the source of information might have little effect on how much she can influence others. That was suggested by experiments performed by Domínguez et al. (2016). They observed that brain activity did not increase when influencers were experts, but stronger activation was observed when we had a history of agreeing more with the source of information. People we tended to agree with more frequently produced stronger responses in our brains than those with whom we disagreed more often. We seem to care less about disagreements with people we did not agree with much in the first place.

      As an example, De Polavieja and Madirolas (2014) also studied ways to get better estimates in social contexts. Their study suggested it might be better to consider only the opinions of very confident people, that is, to get average estimates only from those who did not change their minds under social influence. Their suggestion was an attempt to recover the initial range of opinions. On the other hand, that strategy might cause us to only pay attention to the most extreme opinions.
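
      To make that strategy concrete, here is a minimal sketch in Python of the kind of filtering it implies, using made-up numbers. It is only an illustration of the idea, not the actual procedure used by De Polavieja and Madirolas; the function, the tolerance threshold, and the data are assumptions introduced for this example.

# Illustrative sketch: average only the estimates of people who did not
# change their answer after seeing the group's opinions.
def confident_average(initial, revised, tolerance=0.0):
    """Average revised estimates from people whose answer moved by no
    more than `tolerance` under social influence."""
    firm = [r for i, r in zip(initial, revised) if abs(r - i) <= tolerance]
    if not firm:            # nobody stood firm: fall back to everyone
        firm = revised
    return sum(firm) / len(firm)

# Hypothetical estimates of some quantity, before and after discussion.
initial_estimates = [90, 120, 100, 250, 110]
revised_estimates = [105, 118, 100, 250, 112]

print(confident_average(initial_estimates, revised_estimates))  # 175.0

      Note that, in this toy example, the only people who held firm include the most extreme estimate (250), so their average pulls the group answer toward that extreme, which is precisely the drawback mentioned above.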

      The composition of the group also seems to be an important factor, as we have seen in the Wikipedia problem. Answers are also influenced by framing effects: they often depend on how the question is asked or framed. While studying those effects, Ilan Yaniv observed that the problems caused by framing tended to be smaller when the group was more diverse (Yaniv 2011). Homogeneous groups, on the other hand, were too susceptible to influence. They actually performed worse than the individuals in the group did on their own.

      In many of those studies, social influence happened between equals. There were often no positions of power nor any known differences in expertise among the volunteers. In real life, however, we often find ourselves in groups where a hierarchy exists. Positions of power add a new element we must account for if we want to understand how groups reason.

      Influence under a hierarchy was the subject of a very famous (and also infamous) experiment by Stanley Milgram (1963). Milgram was interested in understanding how it was possible that Nazism could have dominated Germany. Most Germans were not murderous psychopaths. To answer that question, he prepared an experiment in which one scientist was in the room controlling the situation. Two other people, the ones who were supposedly being tested, were assigned specific roles. The first one was tied to a chair. That chair was connected to a machine that could be turned on to administer electric shocks to the sitting person. The task of the second individual was to press the button that caused the shock, when told to do so.

      The first subject was asked questions. When the subject answered those questions correctly, nothing happened. However, each incorrect answer was to be punished with a shock, starting at the small voltage of 15V. Each error made the shock 15V stronger than the previous one, up to a final shock of 450V. The researchers described the experiment to the subjects as being about the effects of nervousness on people's accuracy in answering questions, but that was not the actual setup. Unknown to the person who inflicted the shocks, no real shock was being applied. The person answering the questions was an actor instructed to act as if the shocks were real. The actor would get some answers right and some wrong, showing some discomfort at first. Eventually, the actor would beg for the experiment to stop. He showed very clear signs of distress and pain, but the scientist would instruct the second subject to continue with the shocks, despite those pleas.

      Many among the tested people showed clear signs of extreme stress while hearing the cries of pain from the actor. Despite that, Milgram reported that 65 percent of them kept obeying the scientist up to the largest voltage. That experiment had several problems. It can be criticized in many ways, including the serious ethical problem of the horrible psychological pain it caused the people who kept pressing the button. It also seems there were a number of problems with how well the experiment script was followed (Perry 2013). The 65 percent figure is probably an inflated estimate. But the experiment still showed how a trusted authority figure can make people act even against their best judgment. Milgram was not looking at possible changes of opinion. The actions he observed were not what we would expect from normal, thinking human beings.

      Group cognition can be an important asset. Evidence suggests one way to make group cognition better is to diminish the influence individuals have on one another, but that is not always possible. We often have to make decisions on subjects outside our expertise. To do that properly, we need the help of others. At the very least, we need information provided by others. Nowadays, that is not hard to find. We live in a time when information is easily available in overwhelming quantity. So, another important question is how well we make use of that easy access.

      Once again, we seem to use reasonable strategies; and, once again, while those strategies make sense, they also carry bad consequences. Given the amount of available information, we need ways to separate what is relevant and trustworthy from piles of garbage. Identifying reliable sources becomes fundamental, but if we are not experts in a field of knowledge, we might simply not have the skills to estimate who is reliable. If every possible expert agreed, it would make sense to listen to them. Too often, however, there is someone who disagrees, and we are left with the problem of deciding who is right. The disagreeing person might not even be an expert; maybe she just claims she is. But we might not know how to identify expertise.

      Under those circumstances, we need to guess how reliable our sources are when all we have to go on is our own opinions. We may start with no opinion on the matter, but as we collect information, some opinion starts taking shape. Either from the beginning or at some later point, we tend to find ourselves favoring one side of any debate. Since one point of view seems more likely to be true to us, we estimate that people who agree with us know better. They sound more trustworthy.

      That is a well-documented bias called confirmation bias (Nickerson 1998). When looking for data or arguments, we look for those pieces of information that agree with our views; we do not look for disconfirming evidence. The problem that behavior can cause should be evident. If we are lucky enough to start with a correct opinion, good. We will reinforce it and nothing too bad happens. But if we are not that lucky, we will only reinforce our erroneous point of view. As we avoid opinions that could show us we are wrong, confirmation bias can compromise our ability to learn and correct ourselves. It also seems confirmation bias might have an important effect on our overconfidence. By looking for reasons we are right, we only make ourselves more confident; we do not improve the quality of our estimates.

      Koriat and collaborators performed a series of experiments on the relationship between overconfidence and our tendency to consider only one idea. They posed questions to their volunteers and measured both confidence and accuracy (Koriat et al. 1980). The volunteers’ overconfidence tended to diminish when they received specific instructions to look for reasons why their answers might be wrong. The same was not true when the volunteers were told to give reasons why they might be right or when the instructions were to give reasons both for and against their answer. In this last case, people seemed to look only for weak negative reasons that they could easily counter-argue. They performed the task, but in a way that made their favorite answer look good.

      Confirmation bias does not happen only when we look for sources of information, though. It, or something similar, can also be found in the ways we reason. Taber and Lodge observed that when we receive arguments about a political issue, we do not treat them equally (Taber and Lodge 2006). When an argument supports our political views, we accept it at face value, but when we hear arguments that are in conflict with those views, we show proper skepticism and we analyze their merit. We look for reasons that would show why those arguments might be wrong, something we do not do when we agree with the conclusions.

      Indeed, it seems we even fail at tasks we are capable of performing, if that helps support our beliefs. Dan Kahan and collaborators (2013) have reported results showing that mathematically educated people make serious errors when analyzing data that conflicts with their personal opinions. In a control scenario, when the same data was presented as being about a neutral problem, people with better numeracy skills performed better.

