Communication and Misinformation (eBook)

Crisis Events in the Age of Social Media

Kevin B. Wright (Editor)

eBook Download: EPUB
2024
476 pages
Wiley-Blackwell (Publisher)
978-1-394-18496-5 (ISBN)

€53.99 incl. VAT
  • Download available immediately

Exploring the influence misinformation has on public perceptions of the risk and severity of crisis events

To what extent can social media networks reduce risks to the public during times of crisis?
How do theoretical frameworks help researchers understand the spread of misinformation?
Which research tools can identify and track misinformation about crisis events on social media?
What approaches may persuade those resistant to changing their perceptions of crisis events?

Communication and Misinformation presents cutting-edge research on the development, spread, and impact of online misinformation during crisis events. Edited by a leading scholar in the field, this timely and authoritative volume brings together a team of expert contributors to explore both the practical aspects and research implications of the public's reliance on social media to obtain information in times of crisis.

Throughout the book, detailed chapters examine the increasingly critical role of risk and health communication, underscore the importance of identifying and analyzing the dissemination and impact of misinformation, provide strategies for correcting misinformation with science-based explanations for causes of crisis events, and more.

Addressing multiple contexts and perspectives, including political communication, reputational management, and social network theory, Communication and Misinformation: Crisis Events in the Age of Social Media is an essential resource for advanced undergraduate and graduate students, instructors, scholars, and public- and private-sector professionals in risk and crisis communication, strategic communication, public relations, and media studies.

KEVIN B. WRIGHT is a Professor of Communication at George Mason University, where he teaches classes on health communication, crisis communication, and social media. With more than 20 years of experience as a communication scholar, Dr. Wright is the author of 8 books, 115 scholarly journal articles and book chapters, and 120 papers presented at national and international conferences.

1
Characteristics of Crisis Misinformation Messages on Social Media


Christopher M. Dobmeier, Jessica A. Zier, and Nathan Walter

School of Communication, Northwestern University

Crisis and misinformation go hand in hand in what some may call a marriage made in hell. To be sure, during times of crisis, risk and uncertainty are elevated, while the quality and veracity of information tends to drop, contributing to an atmosphere ripe for rumors, misinformation, and outright deceptions. Adding to this mix is a plethora of social media platforms that tend to favor sensational and emotion‐laden content, a combination that compounds the problem considerably.

In discussing crisis misinformation and social media, it is important to put some hard truths on the table. First, misinformation is as old as its more responsible sibling, information, so it is important to firmly foreground any discussion on misinformation in a historical context. Second, because the label of “misinformation” is used to cover a wide array of content, from minor inaccuracies and harmless hoaxes to vicious propaganda and conspiracy theories, the need to define and distinguish different types of misinformation is a challenge in and of itself. Third, that for millennia individuals, groups, and entire societies have fallen victim to misinformation strongly suggests that humans may have a collective blind spot when it comes to accepting untruths. Fourth, whether one believes that technology fundamentally changes humans or that it simply allows humans to change, misinformation has an undeniably strong relationship with information technology, most recently social media.

Broadly speaking, the structure of this chapter corresponds with these four hard truths. We begin with a brief historical review showing how our understanding of misinformation both is affected by and transcends social and technological evolutions. This review is followed by an attempt to define misinformation, which examines both its general features and its common taxonomies. The bulk of the chapter, however, outlines some of the psychological, sociological, and technological factors that perpetuate humans' susceptibility to misinformation. Then we focus on a series of very different case studies to illustrate how and why misinformation spreads on social media. The chapter concludes with a cautiously optimistic prognosis for social recovery.

A (Very) Brief History of Misinformation: From Hunter‐Gatherers to Russian Bots


Human beings are the animal that cannot become anything without pretending to be it first.

—W. H. Auden (1907–1973)

While a comprehensive review of the history of misinformation or every lie ever told is beyond the scope of this chapter, it is difficult to grasp the role played by misinformation in times of crisis without some historical context. Consider the astonishing fact that humans have spent nearly 99% of their history as hunter‐gatherers, living in small groups and dividing their time between fighting and fleeing from predators or other competing groups. This lifestyle highlighted two very important but often scarce resources—food and shelter. Indeed, there are good reasons to suspect that much of human communication in ancestral times revolved around securing these resources. When food and shelter are the two main concerns an individual must grapple with, knowingly or unknowingly misinforming their group is likely to lead to severe consequences. For instance, if someone told their tribe that a poisonous berry was edible, that sabretooth cats were docile, or that there was an elephant near the lake but omitted to add that hyenas as big as bears also gathered there, the result was likely to be swift and bloody. This may explain why estimated murder rates within hunter‐gatherer societies were often over 10% (Rosling, Rosling, & Rosling, 2018). Simply put, misinformation had no place during these times.

Over the 10 millennia from the end of hunter‐gatherer society to the rise of ancient Greece, misinformation evolved considerably. As information gained functions well beyond mere survival, misinformation also served more purposes. One such purpose was found in the ancient Greeks' art of storytelling, and there was no better storyteller than Herodotus (484–425 BCE), who was known as “the father of history.” Yet depicting Herodotus as a historian is both a misnomer and a paradox: a misnomer because Herodotus' accounts included more fiction than fact, and a paradox because the English word “history” owes its origin to Herodotus' travelogues, called Histories, which are an entertaining cocktail of exaggerated encounters and fanciful tales, garnished with a tiny drizzle of facts. From camel‐eating ants in Persia to a 300‐foot‐thick wall in Babylon, Herodotus had a penchant for not spoiling a good story with facts. Although this part‐historian, part‐fantasist was criticized by his contemporaries for telling lies (Baragwanath & de Bakker, 2012), he was also celebrated as a brilliant storyteller who had an immense influence on generations of orators, particularly those embracing the gray area between embellished truth and deception.

Although early misinformation was characterized by playful and even comical humbuggery, it eventually took a dark turn in the twentieth century, ushering in a new era of far more serious and consequential deceit. As global tensions grew in the lead‐up to World War I, misinformation took on a new meaning as European countries began devoting considerable resources to voluntary military recruitment. The result of these efforts was the first large‐scale and modern attempt at propaganda (from the Latin propagare, to disseminate), which demonized enemies and justified the government's cause. The scale and magnitude of the propaganda machine during World War I ensured that falsehoods about the enemy traveled far and wide, and with great consequences.

Over the next two decades, the traditional propaganda tools of World War I, such as newspapers, leaflets, and full‐color posters, were supplanted by more sophisticated media technologies, including radio and film. Although propaganda had become virtually unavoidable during World War I, its influence and impact somehow intensified during the ensuing decades. After Adolf Hitler took over the reins of the national government in 1933, for instance, one of his first political moves was to establish the Ministry of Propaganda and Public Enlightenment, which he placed in the hands of Joseph Goebbels. This meant that for the first time in history a country at peace would have a propaganda ministry, or a lie factory, to glorify its ideology and dehumanize its enemies. Meanwhile, in the West the propaganda machine was picking up steam as well, with award‐winning directors such as Frank Capra and John Huston being recruited to create “orientation” (a fancy word for propaganda) films for the US Department of War. One such orientation film series, Why We Fight, had been viewed by at least 54 million Americans by the end of the war (Rollins, 1996). While empirical evidence on its impact in swaying public opinion was inconclusive, its cultural significance in offering a coherent and memorable rationale for a total war is undeniable.

Since the emergence of social media, the misinformation playbook has only become more complex. While social media platforms have been associated with the democratization of information and mass communication, they have also opened a Pandora's box of potential threats to democracy. These matters came to the fore when Russia was found meddling in the 2016 US presidential election by employing bots—spam accounts that post autonomously using preprogrammed scripts—to spread misinformation and even hijack civic conversations across social media, sowing distrust in electoral procedures and polarizing constituents (Howard, 2018). In an instant, this so‐called computational propaganda became an international hazard, as countries near and far grappled with the new cyberreality (Woolley & Howard, 2018). As 2016 drew to a close, Oxford Languages offered the perfect epitaph by declaring its 2016 Word of the Year to be “post‐truth”: “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief” (Oxford Languages, 2016).

These new agents of misinformation, social media bots, are just the tip of the iceberg. Artificial intelligence (AI) and machine learning, for example, underlie new misinformation technologies such as the often playful but sometimes nefarious deepfakes, programs designed to manipulate pictures, videos, and audio to pass as authentic primary sources of information. As these technologies become more advanced, the misinformation they spread may become more convincing and harder to detect and to deter.

The picture that emerges from this brief overview illustrates that misinformation has been one of the true constants throughout human history from hunter‐gatherers to bots and AI. Although cultural, social, and technological environments change, the hold that misinformation has on humans has remained. So, rather than searching for a magical algorithm or a technological fix that would rid the world of misinformation, it is time to look in the mirror and understand what...

Publication date (per publisher): 9.12.2024
Series: Communicating Science in Times of Crisis
Language: English
Subject areas: Social Sciences / Communication & Media; Social Sciences / Politics & Administration
Keywords: crisis communication misinformation • crisis communication textbook • crisis misinformation social media • health communication social media • risk communication social media misinformation • risk communication textbook • social media crisis events
ISBN-10: 1-394-18496-4 / 1394184964
ISBN-13: 978-1-394-18496-5 / 9781394184965
Format: EPUB (Adobe DRM)
Size: 1.1 MB

