seen as having more explanatory potential. Finally, theory should be structured in such a way that it can be tested. We noted earlier that a theory can never be proven true or accurate. It is possible, however, to prove a theory false. This characteristic of falsifiability is a critical component of any theory intended to generate research.
Conclusion
Theory and theory building are expressions of our natural inquisitiveness and creativity. Humans have an instinctive drive to explain and understand; in this sense, we are all theory builders and users. People who have experienced a crisis often feel an intense need to ensure that such an event never happens again, and explanation and understanding are part of that process. Interestingly, so is communicating the experience: sharing the story of a crisis helps others learn and make sense of the event. Crises, however, are anomalous events that generate high levels of uncertainty about what is happening, why, and what should be done. Theory is particularly well suited to informing decisions and actions in these contexts. Beyond this, theory helps build a more comprehensive understanding of crises: how they develop, what role they play, and how they can be managed.
3 Theories of Communication and Warning
Both scholars and practitioners have sought to understand how crisis managers and the public receive information about an immediate and impending threat, how that threat is interpreted and understood, and how it may affect individual decisions and actions. One result is a set of relatively specialized theories and models that address crisis detection, evacuations, shelter-in-place responses, and recalls of potentially dangerous products, such as contaminated food. While related to the more general theories of risk perception and communication presented in Chapter 8, these approaches are distinct in addressing the specific problem of how to inform the public about an imminent threat and motivate self-protective action. Warnings are important because they are, along with promoting preparedness, the principal means of reducing harm.
In this chapter we describe the general process of issuing warning messages as well as the contexts of such warnings. Some of the fundamental tensions of warning systems, including the duty to warn, are described, along with variables such as channel, audience characteristics, context, and timing. Warnings vary widely in channel (e.g., sirens, text alerts), specificity (e.g., a Department of Homeland Security [DHS] color-coded alert of “elevated risk,” a hurricane evacuation order, a Centers for Disease Control and Prevention message about social distancing), and source (e.g., neighbors, media, a government agency). A significant body of literature has sought to describe these variables in warnings.
We review several functional theories of communication and crisis warning, including Mileti and Sorensen’s Hear-Confirm-Understand-Decide-Respond model, Lindell and Perry’s protective action decision model (PADM), and the integrated food recall model. We describe several warning systems, including the Emergency Broadcast System, the DHS alert system, and the National Hurricane Center’s cone of uncertainty. The development of mobile alert systems such as Wireless Emergency Alerts (WEA) is also described.
Detection of Risks
The detection or identification of risk is a communication process that may be understood as signal detection or, as described in Chapter 4, a trigger event. A trigger event signals a significant discrepancy between the current and desired state. Organizations and institutions survey their internal and external environments through an ongoing process of scanning to assess risks and threats. New risks constantly present themselves, and old threats reemerge. Signals about impending risk can come from news reports, warnings from scientists, automated warning systems, activists, government regulatory bodies, or interpersonal sources, among others (Kasperson et al., 1988). To issue a warning, decision makers must recognize and agree upon the threat. The development of a crisis usually involves a failure to recognize, receive, interpret, or attend to a threat signal. Mileti and Sorensen (1990) suggest that “[t]he ability to recognize the presence of an impending event is determined by the degree to which an indicator of the potential threat can be detected and the conclusion reached that a threat exists” (p. 4). Missed warnings, ineffective communication about a perceived threat, failed interpretations, and/or failure to act upon warnings, then, are typically associated with the development of a crisis (Seeger et al., 2003).
COVID-19, for example, emerged in Wuhan, China, with the first signals appearing on December 30, 2019, when Dr. Li Wenliang, an ophthalmologist at Wuhan Central Hospital, warned his colleagues of a new respiratory illness. Reports soon emerged on social media, and the World Health Organization issued its first warnings in early January 2020. The Chinese government was slow to react and even tried to silence Dr. Li, who later died of the illness. Most other governments were also slow to react, discounted the threat, and generally failed to take decisive action. In some cases, existing pandemic preparedness plans were not activated.
Turner (1976) included these forms of failure in his larger failures of foresight model, noting that the failure to perceive a risk may involve a variety of signal features as well as general problems in reception, detection, and interpretation (Table 3.1). Seeger et al. (2003) observed that signals and messages associated with threats are often faint, subtle, or not easily detected, and that they are frequently misinterpreted. Such signals typically involve novel, non-routine information that lacks well-defined audiences, channels, interpretive schemes, or clear routine responses. The strength, frequency, and urgency of the message, along with the credibility of the source, are important determinants of a response, including the likelihood that a larger, more general warning message will be issued.
Table 3.1 Limiting Factors on Threat Recognition.
1. Weak or subtle crisis signal.
2. Presence of strangers as distractors.
3. Source of crisis signal not viewed as credible; that is, from an outside source or from a whistleblower.
4. Inadequate channels for communicating risk or threat.
5. Signal of threat embedded in other routine messages.
6. Risk/threat messages systematically distorted.
7. Organizational or professional norms against communicating risks and warnings.
8. Risk/threat messages discounted because of inconsistency with dominant beliefs.
9. Signals do not coalesce, are not compiled, or do not reach appropriate receivers.
Source: Adapted from Turner (1976).
Weick’s (1988, 1993) theory of sensemaking (described in Chapter 7) outlines the ways information is collectively interpreted and the ways this process may collapse, mislead, or fail to recognize a risk. Sensemaking is a collective process for creating plausible meanings and involves the “bracketing of cues from the environment, and the interpretation of those cues based on salient frames. Sensemaking is thus about connecting cues and frames to create an account of what is going on” (Maitlis & Sonenshein, 2010, p. 552). Three factors are identified that may influence enactment, precipitate failed interpretations, and lead to crises: commitment, capacity, and expectations. Commitment is associated with public statements reifying a specific interpretation; a strong public commitment from leaders to a particular interpretation may limit the ability of other interpretations to emerge. Capacity concerns having a sufficient volume and diversity of sensemaking resources; if managers are distracted by other issues and demands, they may not have the capacity to receive and interpret cues about impending risks. Finally, collective expectations may create blind spots that lead to missed cues. Many crises may be attributed in part to failures in enactment, including the Challenger shuttle disaster (Gouran et al., 1986), the Flint water crisis (Nowling & Seeger, 2020), and the Bhopal Union Carbide disaster (Shrivastava, 1992). The failure to take decisive and rapid action in response to the initial warnings of COVID-19 can be explained in part by elected officials’ ongoing efforts to downplay the risk and by distractions from other issues and conflicts.