Provoke. Geoff Tuff
about how to respond. (The egocentric bias also tends to result in people overestimating their contributions to a group and underweighting the contributions of others, but, from the point of view of what causes people to miss trends, we are less interested in that aspect of the bias.1)
The egocentric bias may have developed because the human brain is better at encoding information into memory when individuals believe that information will have an impact on them. At some point in our evolutionary history, this may have conferred a survival advantage. Now it undermines our ability to succeed, because we are less able to incorporate data that is not obviously connected to our current worldview.
Affect heuristic bias. This bias suggests that people base their judgments on the affect they feel toward what they are judging. Here, affect refers to the intensity of the emotional response (either good or bad) associated with the stimulus. Essentially, the affect heuristic is a “gut” response triggered when we have strong feelings associated with the subject.2
The reason the affect heuristic is partially responsible for people's systematic blindness toward new – or distant – trends is that small trends are unlikely to provoke any emotional response. In our example in Chapter 1, our executive was unconcerned about a 1.75% segment: it triggered no emotional response because it was overwhelmingly small compared to his overall market share. We also see this in people who try to get healthy. For many, the immediate response to exercising is that it “hurts” (a strong negative emotion), while the health benefits (a source of strong positive emotion) don't materialize for a long time.
Each of the preceding three biases contributes in part to the inability of individuals to see trends that are at the “if” stage, or even the early “when” stage. They may not be in the available array of data that leaders assess, or they are discounted because they don't conform to their views, or they don't elicit an emotional response because of how distant they are. Taken together, “if” issues tend not to get raised within an organization until they might trigger some emotional response in someone (usually labeled an alarmist within the organization). Although we can't say this definitively, our strong hypothesis is that by the time something is triggering an emotional reaction, it's highly likely that the trend is at the far end of the “when” stage, when options for influence are limited.
The challenge of not seeing trends is further exacerbated by human tendencies that prevent action against those trends, including several well-known biases:
Status quo bias. This is a pretty straightforward bias: a preference for the status quo over change. One explanation is that people perceive a deviation from the status quo as “losing” something – and humans are quite loss averse. Another is that the status quo requires less cognitive effort to comprehend and maintain than thinking about change does.3
The status quo bias is key when applied to organizational challenges. In our experience, management teams exhibit a pervasive behavior rooted in the status quo. Imagine a management team meeting to evaluate a new product for launch. They will rightly weigh all the risks associated with the move against the potential upside. In most cases, though, they implicitly compare it to a baseline defined by the status quo. For instance, consider the following typical risks that one might hear in a management meeting:
“It may not work as we anticipate, and our competition will gain share.”
“Our customers may not give us brand permission.”
“Our channels won't want to stock it.”
“There's no way sales will go for it.”
“The lawyers will just say ‘no.’”
Of course, all of those are distinct possibilities, but the comparison is implicitly to a status quo that is riskless. Management teams almost never take the status quo and assess all the risks associated with not launching the product – risks such as a competitor launching something first, or losing customers in the future because we held back while our competitors moved. The way human beings tend to think about the status quo naturally positions any deviation from it as a “loss.” In other words, it makes the status quo a “stock” value (measured at one point in time – the present), rather than a “flow” value (measured over time).
Overconfidence bias. Another bias that makes action difficult is overconfidence in one's likelihood of being correct. People overestimate the likelihood that they are judging a situation correctly and underestimate the chance that they are wrong. Several studies have demonstrated this bias by asking people to answer questions – how to spell words, or true/false statements on general knowledge topics – and then assessing their confidence in their answers. Systematically, people overestimate their chance of being correct: on questions where they say they are 100% certain they are right, they are correct only, say, 90% of the time, and on questions where they feel 80% certain, they are correct less than 80% of the time.4
Couple the overconfidence bias with the availability heuristic – in which people don't see possibilities they are not intimately familiar with – and the failure to adequately assess the risks of the status quo, and it is easy to see how human beings systematically misjudge the potential impact of emerging trends that are not yet pervasive in their world. They miss and/or dismiss them simply by being typical human beings.
It would be great if organizational behaviors tended to correct for these human biases but, sadly, they don't. They do the opposite, reinforcing them and increasing the likelihood that people fall prey to these tendencies. Several ways that human biases are reinforced in organizations include the following:
Embarrassment in meetings. How many meetings have you been in in which you had something important to say that disagreed with the consensus but you held your tongue just in case you were wrong? Or how many times did a disagreement start to develop when someone interjected to suggest “taking it offline”? Taking it offline is the widespread phenomenon that supposedly “saves” people from having to discuss challenging topics in groups. A successful meeting is one in which everyone agrees and people leave feeling good – or at least the boss is happy. One of our very close friends was once brought to a meeting as a summer intern to keep the boss from yelling, because the team surmised that the boss wouldn't yell in the presence of an intern.
We treat meetings as something to get through while saving face, rather than as a setting to discuss and debate important topics. This is corporate theater, not real discussion. Everything is prewired and socialized so that nobody has to disagree in the presence of others. Frankly, the two of us wish we had lived in the time of Alfred Sloan, who once famously said, “I propose we postpone further discussion of this matter until our next meeting to give ourselves time to develop disagreement and perhaps gain understanding of what the decision is all about.”
If management teams are unable to create meeting space where legitimate disagreements can be raised – not only because people might be embarrassed but because successful meetings are defined as the kind where people don't disagree – then they will be increasingly unable to see and debate emerging threats.
Fear of embarrassment is a form of loss aversion on an organizational scale. People don't want to be seen to be wrong in meetings because organizational culture tends to deem being wrong as a loss of status.
Cognitive bandwidth of leadership. There is a demonstrated bias in psychology called the scarcity effect, which makes people value things that are scarce above things that are plentiful. It used to be that only the most senior executive leadership had calendars characterized by wall-to-wall meetings. Now it's