messages intended and framed as opinion pieces or editorials (Allcott & Gentzkow, 2017). Others have attempted to distinguish “serious fabrications,” “large scale hoaxes,” and “humorous fakes” such as stories in The Onion (Bondielli & Marcelloni, 2019). There are, however, gray areas among these. For example, a politician’s false statements that are reported without any critical concern for their veracity (i.e., reported as a priori factual or potentially factual), or conspiracy theories that contain or rely upon verifiably false claims, may well overlap with fake news, especially when news reporting itself is duped by such false forms of information. Alternatively, conspiracy theories have been typologized by the extent to which they reflect (i) general versus specific content and structure, (ii) scientific versus non-scientific topics, (iii) ideological versus neutral valence, (iv) official versus anti-institutional agendas, and (v) alternative explanations versus denials (Huneman & Vorms, 2018).
Another example of a gray area in such typologies is conspiracy theories that are not disprovable at a given point in time and that may be plausible and feasible yet do not meet professional standards of veracity. For example, the rumor that the SARS-CoV-2 virus originated in a laboratory appears plausible to approximately a third of the US population, with 23% believing the virus was engineered and 6% believing it escaped accidentally from a laboratory; another 25% indicate they are unsure of its origins (Schaeffer, 2020). As these narratives fit with certain political agendas of rhetorical scapegoating, and given that the contrary narrative of natural zoonotic infection (Calisher et al., 2020; CDC, 2019) is merely the relative consensus of scientists, it is difficult to know precisely how to categorize such “news.”
Technologically adapted forms of dismisinformation present a complicated category. For example, one “category of social bots includes malicious entities designed specifically with the purpose to harm. These bots mislead, exploit, and manipulate social media discourse with rumors, spam, malware, misinformation, slander, or even just noise” (Ferrara et al., 2016, p. 98). The role of machines (Scheufele & Krause, 2019), bots, algorithms, AI, and “computational propaganda” (Bradshaw & Howard, 2018) increasingly needs to be included in typologies of misinformation: the logics may be intentional, but the information to which such logics are applied may or may not be intentionally fake, or may be intended more to sow chaos or political division than to mislead per se.
Such malign uses of bots have already been employed for political purposes. A study of tweets about the 2016 US presidential election and the subsequent State of the Union address found that polarizing and conspiracy tweets were almost twice as likely to involve amplifier accounts (bots) (27.8%) as professional news outlets (15.5%) (Bradshaw et al., 2020). It is unsurprising, therefore, that bots are beginning to play a role in disease outbreaks and the public response to those outbreaks. For example, bots are often created with political purpose and intent and algorithmically designed to engage in trend hijacking, a tendency to “ride the wave of popularity of a given topic … to inject a deliberate message or narrative in order to amplify its visibility” (Ferrara, 2020b, p. 17). In one large social media dataset, bot accounts were substantially more likely than human accounts to be carriers of alt-right conspiracy theories (Ferrara, 2020a).
“Though spam is not always defined as a form of false information, it is somehow similar to the spread of misinformation” in that it facilitates or promotes “the ‘inadvertent sharing’ of wrong information when users are not aware of the nature of messages they disseminate” (Al-Rawi et al., 2019, p. 54). Graham et al. (2020) identified a bot cluster of tweets containing misinformation and disinformation regarding mortality statistics in Spain and Mexico, many of which included graphic images of people with body disfigurements and diseases. Yet there was no immediately discernible malicious intent or objective to the tweet stream. In other instances, the distinction between disinformation and routine political polarization or identity politics may be difficult to ascertain. For example, in the Graham et al. (2020) data, one bot cluster of tweets constituted a positive message campaign for the Saudi government and its Crown Prince, along with Islamic religious messages, aphorisms, and memetic entertainment techniques used as click-bait. Another bot cluster was more extreme in its partisanship, representing tweets critical of Spain’s handling of the epidemic and hyper-partisan criticisms and complaints suggesting the government was fascist. Thus, the role of “intention” becomes problematized in operationalizing fake news, misinformation, and conspiracy theory in software-based mediation contexts.
The importance of this particular form of dismisinformation is suggested by Al-Rawi et al.’s (2019) study of 14 million tweets sent by over 2.4 million users. The authors found that mentions of CNN were a dominant theme and that there was “not a single positive attribute associated with CNN in the most recurrent hashtags,” indicating “that conservative groups that are linked to Trump and his administration have dominated the fake news discourses on Twitter due to their activity and use of bots” (Al-Rawi et al., 2019, p. 66). Another study of 14 million Twitter messages found that “social bots played a disproportionate role in spreading articles from low-credibility sources. Bots amplify such content in the early spreading moments, before an article goes viral. They also target users with many followers through replies and mentions” (Shao et al., 2018, p. 1). Gallotti et al.’s (2020) analysis of 112 million messages about COVID-19 across 64 languages estimated that approximately 40% of the online messages were produced by bots. Another study, of 43.3 million English-language tweets, found that “accounts with the highest bot scores post about 27 times more about COVID-19 than those with the lowest bot scores” (Ferrara, 2020b, p. 8).
Other message clusters may be designated as intentional forms of shaping or reinforcing tactics rather than explicitly false information. For example, part of the Russian Internet Research Agency (IRA) campaign was designed to amplify certain stances by increasing the flow of false posts to selected audiences as if they were from real persons (Lukito et al., 2020; Nimmo et al., 2020), but such efforts can simply machine-replicate actual persons’ posts with the intent to drown out competing messages or to reinforce or polarize differences in opinion. Such messages might not be explicitly false; they are simply amplified through replication and distribution and then targeted in ways that alter the appearance of the vox populi, not unlike traditional forms of mass communication.
Indeed, many of Russia’s IRA-generated tweets were able to “zoonotically” cross the barrier between social media and traditional media and make their way into traditional news stories. Lukito et al. (2020) identified 314 news stories, from 71 of the 117 media outlets searched, that quoted tweets generated by the IRA between January 1, 2015 and September 30, 2017. These tweets generally expressed opinions posed as if they derived from everyday American citizens. One exemplar of an opinion tweet referenced the 2017 Miss USA pageant: “New #MissUSA says healthcare is a privilege and not a right, and that she’s an ‘equalist’ not a feminist! Beauty and brains. She is amazing!” (Lukito et al., 2020, p. 207). Of those IRA tweets that were primarily informative in nature, “contrary to some popular discourses about the functions and effects of the IRA disinformation operation, the preponderance of IRA tweets drawn on for their informational content (119 of 136 stories, 87.5%) contained information that was factually correct” (Lukito et al., 2020, p. 208). The exemplar was a tweet reporting that “Security will be dramatically increased at Chicago’s gay pride parade” (Lukito et al., 2020, p. 208). In either case, there is little in the content of the individual tweets that appears insidious or malevolent. However, to the extent that they alter the appearance of the actual vox populi, they may function to shape the collective discourse and public opinion predicated on or reinforced by such perceived norms of opinion and attitude.
A Proposed Typology of Mediated Dismisinformation
A long-standing assumption of communication and rhetorical theory is that “all rhetorical interaction is manipulative in that communicators intend messages and are strategic in their choice of causes, selection of materials, design of compositions, and style of presentation” (Fisher, 1980, p. 125). From such an assumption, and integrating much of the foregoing, a tentative typology of dismisinformation is proposed in Table 2.5. This typology is not likely to properly situate certain forms of online deception beyond