Arguments, Cognition, and Science. André C. R. Martins




       Groups and Ideas

There is one more aspect of cognition we need to discuss. A huge part of what we believe we know, of our opinions and preferences, comes from the influence of others. Social influence and social cognition have an important impact on what we do and on what we claim to know as a species. Individual ants, for example, follow very simple rules, yet their colonies can make sophisticated decisions. It is reasonable to expect that we, too, can do more as groups than as individuals. That leads to two questions: How do we influence each other? And how do our opinions combine into social effects?

More than a century ago, Francis Galton visited the West of England Fat Stock and Poultry Exhibition. There he observed a competition in which contestants had to guess the weight of a fat ox. Galton was surprised to find that, when he combined the estimates of all the participants, the median estimate (1,207 pounds) was very close to the actual weight (1,198 pounds; Galton 1907). The average of the guesses was even closer, at 1,197 pounds. That suggested groups might be much better at reasoning than individuals.
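The aggregation effect behind Galton's observation is easy to illustrate with a toy simulation. This is a sketch, not a reconstruction of his data: I assume each guess is the true weight plus independent individual error, and the crowd size and error spread are invented for illustration.

```python
import random
import statistics

random.seed(42)

TRUE_WEIGHT = 1198   # pounds, the ox's actual weight in Galton's account
N_GUESSERS = 800     # assumed crowd size, roughly the number of tickets Galton analyzed

# Assumption: each guess is the truth plus independent individual error.
# Galton's real data were skewed, but the aggregation logic is the same.
guesses = [TRUE_WEIGHT + random.gauss(0, 60) for _ in range(N_GUESSERS)]

median_guess = statistics.median(guesses)
mean_guess = statistics.mean(guesses)

# How far off a typical individual is, versus how far off the crowd is.
typical_individual_error = statistics.mean(abs(g - TRUE_WEIGHT) for g in guesses)
crowd_error = abs(mean_guess - TRUE_WEIGHT)

print(f"median of crowd: {median_guess:.0f} lb")
print(f"mean of crowd:   {mean_guess:.0f} lb")
print(f"typical individual error: {typical_individual_error:.0f} lb")
print(f"crowd (mean) error:       {crowd_error:.0f} lb")
```

Because the individual errors partly cancel when pooled, the crowd's mean and median land far closer to the truth than a typical individual guess does. The cancellation is exactly what disappears when, as discussed below, errors stop being independent.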

The observation that group cognition can sometimes outperform individual reasoning is known as the “wisdom of crowds” (Surowiecki 2005). It has been partially confirmed by several other studies. The evidence that groups can, most of the time, reason better than the average individual seems solid (Hill 1982), but group reasoning does not fare so well against the reasoning of the most competent member of the group. Nor does it do well against statistically pooled responses or mathematical models.

On the matter of biases, there seems to be no clear winner (Kerr and Tindale 2004). Overconfidence may diminish when we use groups (Sniezek and Henry 1989), but the details of how the group is allowed to interact can have an important impact on the correctness of group estimates. Some questions elicit deep emotional reactions, and when those reactions are brought into the group, interaction between members can become very detrimental to accuracy. The desire to agree (or not to disagree) with the group seems to be important. That desire can undermine our critical thinking. Janis (1972) called this effect groupthink. We can observe this kind of phenomenon in many circumstances where group belonging is important, such as political and religious discussions or communities of sports fans. When groupthink happens, the group might reason worse than its best members would on their own. The group can even perform much worse than the average individual would if left alone.

In a series of experiments, Lorenz et al. (2011) explored how much social influence can damage group wisdom. They asked people factual questions. When there was no interaction between individuals, the group did show improved answers, confirming the wisdom-of-the-crowd effect. After that initial round, the researchers showed their subjects information about others’ answers and asked whether they wanted to revise their own. What they observed was a decrease in the diversity of answers. In some cases, the correct answer ended up lying outside the range of the revised answers. Despite the decrease in the quality of reasoning, confidence seemed to increase.
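The collapse of diversity can be sketched with a minimal opinion-updating model. This is an assumption-laden toy, not the model Lorenz et al. used: each person repeatedly moves a fixed fraction of the way toward the current group mean, and I assume the initial estimates share a common bias, which is what makes convergence harmful.

```python
import random
import statistics

random.seed(7)

TRUE_VALUE = 100.0
N_PEOPLE = 50
ROUNDS = 5
WEIGHT = 0.5   # assumed: fraction of the gap to the group mean closed each round

# Assumed initial estimates: a shared upward bias of 15 plus individual noise.
# The shared bias means the crowd converges toward a wrong consensus.
estimates = [TRUE_VALUE + 15 + random.gauss(0, 30) for _ in range(N_PEOPLE)]
initial_spread = statistics.stdev(estimates)

# Social influence: everyone shifts toward the current group mean.
for _ in range(ROUNDS):
    group_mean = statistics.mean(estimates)
    estimates = [e + WEIGHT * (group_mean - e) for e in estimates]

final_spread = statistics.stdev(estimates)

print(f"spread before influence: {initial_spread:.1f}")
print(f"spread after influence:  {final_spread:.1f}")
print(f"truth inside final range? {min(estimates) <= TRUE_VALUE <= max(estimates)}")
```

In this linear model the group mean never moves, so accuracy does not improve at all, while each round of influence shrinks the spread of answers by a factor of (1 - WEIGHT). The answers quickly cluster tightly around the biased consensus, and the truth can fall outside the shrunken range, mirroring what Lorenz et al. observed.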

There are, of course, ways to diminish the problem. Evidence from the collaborative writing of articles on Wikipedia suggests that keeping the team as diverse as possible can improve the quality of the entries (Lerner and Lomi 2018). Indeed, models inspired by data on the interaction of editors suggest the outcome might be of lower quality if one blocks the contributions of people with more extreme, potentially problematic points of view (Rudas et al. 2017). Preserving the diversity of opinions seems to matter for getting better results.

That diversity, however, can be easily destroyed by trivial social interaction. Solomon Asch (1955, 1956) performed a series of now-classic experiments on social influence. They highlight how thoroughly that influence can destroy the expected ability of groups to reason better. He asked his volunteers to look at a version of the picture in Figure 3.1.

The question he asked the volunteers was this: Which of the three lines in the right panel, A, B, or C, has the same length as the line in the left panel? The question was put to two groups. In the control condition, there was no social influence, and 99 percent of the subjects answered correctly: it was line C. The volunteers in the treatment group were subject to social influence. Before answering, they listened to others claiming A was the correct choice. The people who provided the wrong answer were actors. They were not being tested; their role was to see whether they could influence people to answer wrongly, and they did. When at least three people answered A before the actual volunteer did, up to 75 percent of the volunteers went along with the group and picked the wrong choice at least once.

From that study alone, it is not clear why people made so many mistakes. They might still have thought C was the correct answer but said A to fit in with the group; they might have changed their perception to match what everyone else was saying; or perhaps a little of both happened. Thanks to functional magnetic resonance imaging (fMRI), more recent experiments have given us clues as to what might be going on. Eisenberger et al. (2003) observed that when we experience rejection, our brain activity is similar to that caused by real physical pain. And Klucharev et al. (2009) noticed that our tendency to conform to the opinion of a group engages the learning mechanisms of our brain. That suggests, though not conclusively, that at least to some extent we really do change our opinions.

