Everyday Bias. Howard J. Ross
How can we have good intentions, engage in so many of the right kinds of behaviors, and still not get it right? In fact, much of the research that has been done on bias shows that biased behavior is shockingly normal. Let’s look at a few research examples that show how this tendency, present in all of our minds, shows up all around us.
Adrian North, David Hargreaves, and Jennifer McKendrick, members of the music research group in the psychology department at the University of Leicester in the United Kingdom, decided to find out whether background music could influence people’s choices when shopping.[1] They stocked a shelf of an ordinary supermarket with eight different wines, four French and four German, matched for cost and sweetness. The wines were alternately displayed in different positions on the shelf, to ensure that shelf placement would not affect the experiment, and flags of their countries of origin were positioned near the bottles. On alternate days, French accordion music or German Bierkeller music was played in the background of the store.
The results of this experiment were startling. When the French accordion music was playing, 76.9 percent of the wine sold was French. When the German Bierkeller music was in the background, 73.3 percent of the wine sold was German! Interestingly enough, when the forty-four shoppers involved in the experiment were questioned after their purchases, only 14 percent said they had noticed the music, and only one said it had influenced their purchase.[2] In similar studies, researchers have found that classical music playing in the background, as opposed to Top 40 popular music, can encourage people to buy more expensive wine and spend more money in restaurants.[3]
How fair are NBA referees? Justin Wolfers, an assistant professor of business and public policy at the Wharton School of the University of Pennsylvania, and Joseph Price, a Cornell graduate student in economics, decided to find out. They studied more than six hundred thousand foul calls in games played over the twelve-year period between 1991 and 2003, working carefully to control for a large number of non-race-related factors in the way referees called fouls. What did they find?
White referees called fouls at a greater rate against black players than against white players. Black referees showed a corresponding bias, calling more fouls against white players than against black players, although the effect was statistically weaker than the one observed for white referees and black players. The researchers argued that the difference in foul-call rates is large enough that a team’s probability of winning is noticeably affected by the racial composition of the refereeing crew assigned to the game. Wolfers and Price also studied data from box scores, taking into account a wide variety of factors including players’ positions, individual statistics, playing time, and All-Star status. They reviewed how much time each group spent on the court, and also considered differentials between home and away games.
In addition, the researchers reported statistically significant effects on performance, measured in points, rebounds, assists, and turnovers, when players appeared in games where the officials were primarily of the opposite race. “Player-performance appears to deteriorate at every margin when games are officiated by a larger fraction of opposite-race referees,” Wolfers and Price noted. “Basically, it suggests that if you spray-painted one of your starters white, you’d win a few more games,” Wolfers said.
David Berri, a sports economist, professor of economics at Southern Utah University, and a past president of the North American Association of Sports Economists, was asked to review the study. “It’s not about basketball,” Berri said. “It’s about what happens in the world. This is just the nature of decision making, and what happens when you have an evaluation team that’s so different from those being evaluated. Given that your league is mostly African American, maybe you should have more African American referees—for the same reason that you don’t want mostly white police forces in primarily black neighborhoods.”[4]
Jo Handelsman is a Howard Hughes Medical Institute professor of molecular, cellular, and developmental biology at Yale University, and the associate director for science at the White House Office of Science and Technology Policy. Curious about the dynamics that might account for the generations-long disparity between the performance of men and women in the sciences, Handelsman and several colleagues designed a relatively simple experiment to find out whether gender plays a role in scientific hiring. She reached out to science professors at three private and three public universities and asked them to evaluate a recent graduate applying for a position as a laboratory manager. All of the professors were sent the same one-page candidate summary, which intentionally described the applicant as promising but not extraordinary. The applications were identical in every respect but one: some of the applicants were named John, and some were named Jennifer.
A total of 127 professors responded to the request. The results were both fascinating and troubling. Asked to evaluate the applicants on a scale of 1 to 7, with 7 being the highest score possible, the professors gave candidates named John an average score of 4 for perceived overall competence; “Jennifer” received 3.3. Asked how likely they would be to hire the candidate, the professors rated John not only as the candidate more likely to be hired, but also as the one they would be more willing to mentor.
The professors also were asked to propose a potential starting salary for the candidates. Candidates named John were thought worthy of $30,238 per year. The Jennifer applicants would get $26,508.
Perhaps most surprising of all, responses from female professors were virtually the same as those of their male counterparts![5]
We are sometimes led to believe that scientists are especially rational, but in light of these results, one might ask whether scientists are any more rational than the rest of us. This particular study does not seem to indicate that they are.
David Miller, an associate professor of internal medicine at the Wake Forest University School of Medicine, decided to explore whether medical students’ responses to patients were affected by the extent of the students’ bias about obesity. Between 2008 and 2010, Miller and his colleagues tested 310 third-year students. The students came from twenty-five states within the United States and from twelve other countries. A total of 73 percent of the students were white and 56 percent were men.
The students were tested for their reactions to people of different weights using the Implicit Association Test (IAT), a computer-based testing system developed by researchers at Harvard University, the University of Washington, and the University of Virginia that I will discuss at greater length later in this book. The particular IAT that Miller and his colleagues used asked the students to pair images of heavier people and thinner people with negative or positive words, using a computer keyboard in a timed exercise.
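For readers curious about what such a test actually measures: the IAT compares how quickly a person can sort stimuli when the categories are paired one way versus the other, and the raw response times are condensed into a single score. The short Python sketch below uses invented latencies and a simplified version of the commonly used D-score (the difference in average latency between the two pairing conditions, divided by the standard deviation of all trials); it is only an illustration of the scoring idea, not the actual instrument Miller’s team used.

```python
# Illustrative sketch of IAT-style scoring. Latencies (in milliseconds)
# are invented for this example.
from statistics import mean, pstdev

# Block A: one pairing of categories and evaluations
# (e.g., thin + positive / heavy + negative)
block_a = [620, 650, 590, 700, 640, 610]
# Block B: the reversed pairing
# (e.g., thin + negative / heavy + positive)
block_b = [780, 810, 760, 850, 790, 820]

# Simplified D-score: difference in mean latency between the two blocks,
# divided by the standard deviation of all trials combined.
pooled_sd = pstdev(block_a + block_b)
d_score = (mean(block_b) - mean(block_a)) / pooled_sd

# A larger positive score means faster responses under the Block A pairing,
# which is read as a stronger implicit association; conventional cutoffs
# treat roughly 0.35-0.65 as "moderate" and above 0.65 as "strong."
print(round(d_score, 2))
```

Because this hypothetical respondent sorts the Block A pairing much faster than the reversed pairing, the sketch yields a score well above the conventional "strong" threshold.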
The race, age, and gender of the students made no difference in their responses. According to the IAT results, 56 percent of the students tested had an unconscious weight bias characterized as either moderate or strong. A total of 17 percent of the students demonstrated bias against people who were thin, and 39 percent demonstrated bias against people who were heavy. And yet two-thirds of the students with anti-fat bias believed themselves to be neutral, as did all of the students with anti-thin bias.
Miller remarked in a Wake Forest University news release that “because anti-fat stigma is so prevalent and a significant barrier to the treatment of obesity, teaching medical students to recognize and mitigate this bias is crucial to improving the care for the two-thirds of American adults who are now overweight or obese.”[6]
Ironically, researchers at the Bloomberg School of Public Health at Johns Hopkins University in Baltimore, Maryland, also studied the impact of weight on the doctor-patient relationship, but from a different angle. They found that overweight patients tend to trust doctors more when the doctors also are overweight, and that patients with normal body mass indexes tend to trust overweight doctors less.[7] “Our findings indicate that physicians with normal BMI more frequently reported discussing weight loss with patients than did overweight or obese physicians,” said Sara Bleich, the study’s lead author and an assistant professor