ONLINE SEARCHER: Information Discovery, Technology, Strategies

Creating a Playbook to Combat Fake News
By Amy Affelt
Volume 43, Number 4 - July/August 2019

Whether the headline was “Dewey Defeats Truman” (1948) or “Hillary Clinton Adopts Alien Baby” (1993), fake news is not new. Librarians and information professionals have been concerned about its creation and dissemination for a long time. Judging from mentions in the popular press, fake news became a “thing” around 2016; a search of all publications on Factiva from 1984 to 2015 results in around 12,000 documents that mention the phrase. A search from 2016 to 2018, however, uncovers more than 200,000.

Interestingly, the meaning of “fake news” has evolved and changed in lockstep with its growth and proliferation. Prior to 2016, the term “fake news” was used to describe outright falsehoods—fabricated stories that are devoid of facts and totally wrong, regardless of agenda. The meaning of “fake news” since 2016 has taken on a nuanced complexity. Massachusetts Institute of Technology researchers Soroush Vosoughi, Deb Roy, and Sinan Aral, who undertook the largest study of the phenomenon to date, use the term “false news” rather than “fake news” in their research, stating that the latter term “has lost all connection to the actual veracity of the information presented, rendering it meaningless for use in academic classification” (“The Spread of True and False News Online,” Science, v. 359, issue 6380, pp. 1146–1151; science.sciencemag.org/content/359/6380/1146). Others, such as Neil Irwin of The New York Times, describe it as “an all-purpose insult for news coverage a person doesn’t like” (nytimes.com/2017/01/18/upshot/researchers-created-fake-news-heres-what-they-found.html).

Regardless of definition (and the definition of “fake news” remains constantly in flux and subject to interpretation), it is not an overreach to suggest that fake news is a complete insult to what librarians and information professionals hold dear and sacrosanct. Our reputation relies on our being curators, purveyors, and stewards of reputable news and information. We pride ourselves as upholding the gold standards for veracity and truth. However, in the current climate, when even those of us in the information business are finding it harder and harder to separate the real from the fake, we must remain constantly vigilant, continuously upgrading our information evaluation skills. We also need updated tools and resources to further our education in source evaluation.

WHY THE FAKES?

Before we take up the challenge of combating fake news, it is important to understand why people create it. Recently, much of this false content has been blamed on bots, which are essentially computer programs that fabricate stories, post, and share them automatically. This belief permeates the media. A search of all publications on Factiva since 2017 reveals more than 100 articles with “news” and “bot or bots” in the headline, with titles such as “Worried About Fake News? Then It’s Time We Talked About Social Bots” (Deutsche Welle, January 2017), “Twitter Bots Manipulating Stock Markets as Fake News Spreads to Finance” (Telegraph UK, March 2018), and “Twitter Bots Rampant in News, Porn, and Sports Links, Pew Finds” (USA Today, April 2018).

However, Vosoughi, Roy, and Aral’s study of viral news on Twitter found that bots retweeted real and fake news at equal rates, and that bot accounts that tweet fake news have fewer followers and send fewer tweets than accounts that tweet real information. In addition, accounts that tweet factual news have been on Twitter longer than their fake-news-tweeting counterparts, and most are verified accounts—giving the truth “a running start” but allowing inaccuracies “to somehow still win the race.”

Further diminishing the role of bots is the fact that the techniques used to spread fake news that result in widespread beliefs and support for causes are uniquely labor-intensive. Proponents of these campaigns use SEO techniques to harness specific keywords (usually those used in the fake stories’ real news counterparts) so that the fake articles appear when people search for truthful information. These content creators also infiltrate conversation groups online, and use screenshots and images with text messages to escape detection from online moderators, according to Joan Donovan, director of the Technology and Social Change Research Project at the Harvard Kennedy School, as quoted by Christina Caron in a March 7, 2019, article in The New York Times (“Facebook Announces Plan to Curb Vaccine Misinformation”; nytimes.com/2019/03/07/technology/facebook-anti-vaccine-misinformation.html).

Another reason for creation of fake news is that false content creators may be motivated by money; these articles often command a fee. In the February 2017 issue of Wired, Samanth Subramanian wrote about the town of Veles, Macedonia, and how some of its residents, many of them high school students, were creating websites containing fake news about U.S. politics. These websites were immensely popular and garnered heavy traffic, enabling the creators to profit through Google AdSense; one teen reported earning $16,000 in just 3 months (“The Macedonian Teens Who Mastered Fake News”; wired.com/2017/02/veles-macedonia-fake-news).

It is also possible that, in the ultra-competitive media landscape, reporters may inadvertently publish inaccurate information in the rush to land an exclusive. CBS News seems particularly susceptible to the urge to rush to publish; it reported the deaths of both musician Tom Petty and former first lady Barbara Bush while they were still clinging to life. Adding to the confusion, CBS News has long been an authoritative source (reporting by its chief anchor during the Vietnam War, Walter Cronkite, was responsible for the conflict being labeled “The Living Room War”), reinforcing the fact that source vetting alone is not enough to guarantee accuracy.

MEDICAL MISINFORMATION

Sensationalized content is a way to make people angry or excited in order to unite them behind a particular candidate or cause. The anti-vaccination movement has been particularly adept at exploiting this phenomenon. In 2000, the Centers for Disease Control and Prevention (CDC) declared that measles had been eradicated in the United States. Fast-forward to 2019, and measles outbreaks in the Portland, Ore., suburbs and in Rockland County, New York, prompted public health officials to declare states of emergency.

Writing in The Atlantic, Nat Gyenes and An Xiao Mina pinpointed a single article linking the measles vaccine to autism as being responsible for this widely believed but false connection. Although the author’s medical license was revoked and the article retracted after revelations of “serious financial conflicts of interest” (the author was in the process of filing a patent for an alternative measles vaccine), “unethical data collection” (the author paid guests at his son’s 10th birthday party to provide blood samples), and fraud, the damage was done once the article’s precepts became known (“How Misinfodemics Spread Disease”; theatlantic.com/technology/archive/2018/08/how-misinfodemics-spread-disease/568921).

It is unlikely that the general public read that article in a medical journal; rather, they probably saw it on CNN iReport, a CNN website that allows anyone to upload stories that are “not edited, fact-checked, or screened before they post.” By October 2014, the article and a follow-up to it, also on CNN iReport, had “more than 786,000 views and 256,000 shares on social media,” and viral YouTube videos featuring the key points of the discredited article had more than 124,000 views. Can we really blame the public for believing something that seemed to be from CNN? It would have taken more than a cursory glance at the details of the source to determine that its content was crowdsourced. Is it unrealistic to expect general news consumers to undertake source vetting? Probably—and luckily for us, that means job security.

According to a Dec. 15, 2016, Pew Research report, 14% of people admit to having knowingly shared fake news (“Many Americans Believe Fake News Is Sowing Confusion,” Amy Mitchell, Jesse Holcomb, and Michael Barthel; journalism.org/2016/12/15/many-americans-believe-fake-news-is-sowing-confusion). Why would they do that? The most obvious motivation may be that someone is trying to entertain and inspire an eye roll or a chuckle. However, former Dartmouth College government professor Brendan Nyhan believes that the novelty of falsehoods and the emotional responses they elicit are to blame. “False information online is often really novel and frequently negative. Those are two features of information generally that grab our attention as human beings and that cause us to want to share that information with others” (“The Grim Conclusions of the Largest-Ever Study of Fake News,” Robinson Meyer; theatlantic.com/technology/archive/2018/03/largest-study-ever-fake-news-mit-twitter/555104).





Amy Affelt is director, Database Research, Compass Lexecon and author of The Accidental Data Scientist: Big Data Applications and Opportunities for Librarians and Information Professionals (Information Today, 2015).


