The Research Experience. Ann Sloan Devlin
in the United States the week of June 1, 2015, according to Nielsen (http://www.nielsen.com/us/en/top10s.html). The show ended in 2019 after 12 seasons.
Hasty generalization: Reaching conclusions before the evidence warrants them; a form of faulty induction.
Faulty induction: Reasoning from the premises to a conclusion that is not warranted.
Hasty generalizations are a problem in many steps of the research process. We can consider the problem of hasty generalization when we talk about how much data are needed before conclusions are warranted. We can also include hasty generalization when we talk about sampling (see Chapter 11). Because humans are limited information processors and pattern seekers, we are eager to take information and package or categorize it; this process makes the information more manageable for us, but it may lead to errors in thinking.
A second kind of logical problem in thinking that Shermer lists is “overreliance on authorities” (pp. 56–57). In many cases, we accept the word or evidence provided by someone we admire without carefully examining the data. In the domain of research, we may have an overreliance on the published word; that is, we assume that when we read a published article, we should unquestioningly accept its data. Unfortunately, as we increasingly observe in academia, we should be far more skeptical about what has been published. Instances of fraud are numerous. Consider the case of fraud involving a graduate student, Michael LaCour (and Donald Green, the apparently unknowing faculty mentor), who published work in Science (LaCour & Green, 2014) showing that people’s opinions about same-sex marriage could be changed by brief conversations (http://retractionwatch.com/2015/05/20/author-retracts-study-of-changing-minds-on-same-sex-marriage-after-colleague-admits-data-were-faked/). LaCour apparently fabricated the data that were the basis of his article, and the story of how this came to light reinforces the idea that findings must be reproducible. Two then–graduate students at the University of California–Berkeley, David Broockman and Josh Kalla, identified the anomalies in LaCour’s data when they tried to replicate the study. This revelation quickly led to the identification of other inconsistencies (e.g., the survey research firm that was supposed to have collected the data had not; no Qualtrics file of the data was ever created).
Overreliance on authorities: Trusting authorities without examining the evidence.
Qualtrics: Online platform for survey research.
Reproducibility Project: Project in which researchers are trying to reproduce the findings of 100 experimental and correlational articles in psychology.
The broader issue of reproducibility has been in the news recently with what is known as the Reproducibility Project (https://osf.io/ezcuj/), in which scientists are trying to reproduce the findings of 100 experimental and correlational articles in psychology published in three journals. The results (Open Science Collaboration, 2015) have been less than encouraging, as many replications produced weaker findings than the original studies did. The authors emphasize that science needs both tradition (here, reproducibility) and innovation to advance and “verify whether we know what we think we know.”
Simply because an article has been published does not make it good science. Even well-known researchers publish articles that contribute little to the literature. In Chapter 2, you will see the need to take into account the standards of particular journals (e.g., their acceptance rates, the scope of research they publish, and the rigor of their methodology) rather than treating the work in all journals as equal. Relying on authority without questioning the evidence leads to mistakes, for example, repeating what may have been weak methodology. As Julian Meltzoff (1998) stated in his useful book about critical thinking in reading research, we should approach the written (here, published) word with skepticism and always ask, “show me.” Meltzoff went on to say, “Critical reading requires a mental set of a particular kind,” and he believed this mental set can be “taught, encouraged, and nurtured” (p. 8). The value of a particular argument has to be demonstrated with evidence that stands up to rigorous questioning. In regard to the research process, being willing to challenge authority by asking questions is an essential skill.
Psychological Problems in Thinking: Problem-Solving Inadequacy
The last category Shermer offered is “Psychological Problems in Thinking.” Among the problems identified is the idea that we exhibit “problem-solving inadequacy” (1997, p. 59) when we don’t seek evidence to disprove, only to prove. We discussed this issue earlier in the context of the Wason Selection Task, where people rarely thought that turning over the 7 was necessary. We invariably turn over the E (that is, look for evidence to confirm the hypothesis).
Problem-solving inadequacy: When we do not seek to disprove hypotheses, only to confirm them.
Consider the sobering evidence that “most doctors quickly come up with two or three possible diagnoses from the outset of meeting a patient…. All develop their hypotheses from a very incomplete body of information. To do this, doctors use shortcuts. These are called heuristics” (Groopman, 2007, p. 35). The word heuristics is familiar from material covered earlier in this chapter and, unfortunately, in the current context! Once we develop our hypotheses, we tend to stick with them; relinquishing them is difficult.
Doing Science as Tradition and Innovation
When we think about how science advances, we can talk about the social and behavioral sciences broadly as a combination of tradition and innovation. As the work of Kahneman and Tversky (and others cited here) has shown, tradition is easier than innovation. It is much easier to operate within an existing framework and harder to figure out how to head in new directions. Most of the time we hope to master the tradition through a review of the literature, and then we take a small step toward innovation by figuring out how we can advance the discipline with this small step. We have to write a literature review or summary of the work in the field that shows our knowledge of past work; at the same time, we have to propose research that goes beyond the existing literature in some way. We should be able to answer the question, “What’s new here?” If views to everyday nature enhance recovery for surgical patients (Ulrich, 1984), why not see whether substitutes for nature, such as representational paintings of nature, have beneficial effects like pain reduction? That use of “manufactured nature” would be a step forward. Researchers have done this work, and such representational paintings of nature do in fact reduce stress (Hathorn & Nanda, 2008; see Figure 1.4).
In your work, the problem of being governed by a paradigm or way of thinking about a research topic directly affects the kinds of research questions you are willing to ask. In an influential book written in 1962 titled The Structure of Scientific Revolutions, Thomas Kuhn describes how normal science proceeds. While he concentrates on scientific revolutions in physics, astronomy, and chemistry (e.g., Aristotle and Galileo, Ptolemy and Copernicus, Lavoisier and Priestley), the basic messages he provides in this book about how knowledge accumulates can be applied to the social and behavioral sciences. He states that scientists “whose research is based on shared paradigms are committed to the same rules and standards for scientific practice” (p. 11). Kuhn uses the term paradigm in an overarching way to describe scientific practice; its components include “law, theory, application and instrumentation together” (p. 10). As he notes, “normal-scientific research is directed to the articulation of those phenomena and theories that the paradigm already supplies” (p. 24, italics added).
Paradigm: In science, an overarching approach to a field of inquiry that frames the questions to be asked and how research is conducted.