What Happens Now?
Fake News, the Gross Internet, and What to Do About Information Literacy
Text and illustrations by Mark Roquet
[The following is based on a speech Mr. Roquet delivered at ITI’s Internet@Schools conference last October in Monterey, Calif. —Ed.]
IN THE DISORIENTING DAYS after the 2016 election, pundits and analysts grasped for an explanation—how did Trump win, given his weak national polls, compounding scandals, and overt bigotry? Among myriad other culprits, many analysts pointed to “fake news,” a catch-all term for the noxious clickbait articles and memes that spread disinformation virally across social media. It was later established that some of this material, which exploited racism, sexism, and the bottomless public appetite for scandal, was funded by Russian interests with the twin goals of destabilizing American democracy and putting Trump in office.
School libraries and our focus on information literacy instruction soon drew attention as a potential corrective for this new social ill. Just weeks after the election, The Verge declared that “school librarians have a huge role to play” in “the war on fake news.” In March of 2017, VICE begged librarians to “save us from fake news,” and in July, USA TODAY celebrated librarians who “teach CRAAP [Currency, Relevance, Authority, Accuracy, Purpose] to fight fake news.” Librarians delighted in this new attention; we aren’t often recognized by the mainstream media. While I believe deeply in the value and mission of school libraries, I don’t think this is any time for celebration. In fact, this moment should prompt critical reflection on the efficacy of our work.
We also must reassess our information landscape and where it’s headed. Too much of what we teach as information literacy prepares students for an information landscape that no longer exists. Librarians tell students to click on About pages, to assess sites based on their design and advertising, to visit only reputable news sites, and to trust legacy sites and paywalled content over social media and Wikipedia. Some of these suggestions are flat-out wrong, while others are unrealistic and are not followed by most news-literate adults. We find ourselves in rough information seas at this fraught moment in history. We must jettison old and useless ideas before we sink the civic ship.
Legacy journalism has experienced a small renaissance in the tumultuous days since the presidential election. Painstaking investigative reporting into the new administration’s scandals was rewarded with record subscription increases for major newspapers. However, there is no reversing the larger seismic shifts in how we consume news. The majority of us access much of our news from social media. This news is filtered by the opaque algorithms built into monetized platforms which privilege sponsored content, links similar to ones we’ve previously clicked, and shocking or sensationalistic material. Additionally, we only see links and opinions shared by accounts we follow—often people who hold similar opinions to ours. The result is a personalized news ecosystem that reinforces our existing beliefs and biases and makes it difficult to make smart decisions about what information we ingest.
Outside of social media, many of us use tools like Google News or the iOS News app, which track our behavior and aggregate content that reflects our opinions and interests. Many of these apps feature editorial content alongside hard news, with no distinction between mainstream and radical sources. Even The New York Times’ homepage is now customized. There is no going back. The tools we have taught students to use to assess news sources, such as the CRAAP test, are largely incompatible with this new ecosystem.
We increasingly live in a post-truth society, not because the truth doesn’t matter to us, but because my lived experience of truth may be different from yours. Much of the internet is invisibly customized to appeal to our personal demographic identifiers, opinions, and interests. Search results are different for each user; homepages are customized. In 2015, the Ted Cruz campaign ran topical web ads targeting attendees at a conservative Jewish conference at the Venetian hotel in Las Vegas. A conference attendee’s experience of the campaign was substantively different than someone across the street at the Mirage. Is it a stretch to think news articles may themselves soon be customized, given that some articles in The Washington Post are now written by bots? All of this has ramifications for information literacy. If my internet looks significantly different than yours, are we interacting with the same truth? How can I cite a source, when it reads differently on your device? “Post-truth” is often used derisively to describe the primacy of emotion over truth, but in fact, “the truth” is, increasingly, whatever publishers and aggregators think we want it to be.
For too long, many librarians have told students to avoid information on Wikipedia and social media and to be critical of websites with advertising, but to trust information on government websites and library-funded databases. Many of the guidelines we’ve given to kids about how to use the internet don’t hold up to common-sense or realistic everyday practice. Wikipedia, while imperfect, is more reliable, and certainly more up-to-date, than many reference sources. Meanwhile, the White House page on LGBT issues disappeared hours after Trump took the oath of office, and the EPA scrubbed climate change data from its website. To the extent they ever could be, federal government sites can no longer be expected to provide reliable information.
Social media, meanwhile, can be a wonderful research source. During the election, I found myself following the Twitter feeds of many journalists who were embedded in the campaigns. I found their timely updates and analysis incredibly useful. Similarly, scientists, authors, and artists increasingly use social media to communicate with each other and the public. Does this mean every social media account publishes legitimate information? Of course not. We’ve already seen how misinformation on social media influenced the outcome of the election. But if we are telling students that every social media account is equally untrustworthy, we are perpetuating rather than correcting the problem.
It has become very easy to make serious-looking websites filled with absolute nonsense. A recent Stanford study found that most students ranked the website of the American Academy of Pediatrics as a less reliable source than the august-looking homepage of the American College of Pediatricians, a homophobic fringe group that broke off from the mainstream Academy in 2002. Advertising has also too often been used as a metric for judging a site’s authority. This doesn’t hold up to scrutiny. Advertising defrays the cost of writing and hosting content; it can mean that a site supports professional writers rather than content sponsored by industry or advocacy groups.
Another reality librarians need to seriously grapple with is that the internet is often an awful place. While the web has created new access to voices that have often been marginalized by traditional media, much of internet culture is still dominated by “trolls”—often straight, white, cisgender men who, against all available evidence, feel that the world is stacked against them and spend their free time harassing and threatening anyone who disagrees. This community, which coalesced around 4Chan, Reddit, and white nationalist websites, also drove much of the online energy behind Trump’s campaign. Reward structures for this behavior are built into the software that undergirds our internet lives. Technology companies have an overrepresentation of white men who don’t feel the effects of the harassment on their platforms; many of these male technology workers engage in this sort of harassment themselves, creating work environments that are hostile to women and people of color and that perpetuate inequities in our digital society.
Librarians and other adults have often developed habits that protect us from some of the ugliness of our digital society. We know to avoid Comments sections. We know that vitriolic opinions we see online don’t always reflect public opinion. We know that big talk is often a cover for anxiety and low self-esteem. Our students don’t know these things; they have little context for what they are seeing. Starting early in elementary school, teachers and librarians send students out into the internet to complete school assignments. Teachers or parents who have watched young students navigate the internet know painfully well how few clicks it takes to unintentionally find offensive content online. We must take some responsibility for the hate and ugliness our students encounter online and equip students with the critical skills and orientations required to fix internet culture rather than fall for its worst lies.
The previously mentioned Stanford study found that professional fact-checkers use a technique that eludes many students and even professors: verifying information horizontally instead of vertically. In other words, rather than clicking an About link or assessing a page’s design, these fact-checkers opened a second tab in their browser to validate or invalidate information using Google and often Wikipedia. We must teach students this skill. The beauty of horizontal fact-checking is that it is platform-neutral; I can quickly verify or debunk information that I find on The New York Times, Twitter, or Infowars. It’s an approach that’s far better suited to our current information ecosystem than the CRAAP test.
Our fawning glorification of Silicon Valley entrepreneurship (think nine seasons of Shark Tank and children’s biographies of Elon Musk) obscures our painfully real digital economy, in which safety and labor regulations are side-stepped by companies such as Uber and TaskRabbit, creating large ranks of underpaid and unprotected workers who have been “disrupted” out of other careers. It also distorts the bargain implicit to many social media sites, which ask us to give away our photographs, ideas, and experiences in exchange for attention, a sense of connection, and perhaps a remote shot at fame. We must work to help students understand the full infrastructure of our online society. It’s vitally important that we teach students basic programming, not so that they can embark on profitable careers in Silicon Valley, but because it’s one of the literacies that will be vital for understanding and impacting social change in the coming years. I care less about whether my students can afford Teslas when they are adults and more about whether they can prevent our society from becoming a hyper-stratified, techno-dystopian hellscape by the time I retire.
It’s time to stop telling kids the internet is great in order to protect them from how awful the internet is. Too often, librarians act as cheerleaders for technology. We enthusiastically share cool apps and tools, and we teach “digital citizenship” lessons, developed by Silicon Valley companies, that encourage kids to share the right personal content online to cultivate a positive “digital footprint.” Instead, we should model critical thinking about technology. We should encourage students to unpack and examine the platforms and algorithms that dominate their lives, and make sure they have both the tools and license to push back against services they feel aren’t serving them well. We must develop student agency and engagement to solve the internet’s many problems.
It shouldn’t be news to anyone, particularly after the last 16 months, that bigotry and inequality are systemic problems in American society. It’s easy for librarians to believe they are part of the solution, axiomatically making the world a better place by virtue of their good intentions. However, an uncritical library instruction program, or one that ignores issues of race and identity, perpetuates inequity. In the specific context of information literacy, we’ve seen that people cannot make good information choices when manipulated by anxieties about race or gender. We also know that instruction and curricula that are “color-blind” reinforce existing inequalities and exclude marginalized groups from the educational experience. Knowing that unconscious bias is developed through repeated stereotypical representations of marginalized identities, we must mercilessly weed and craft our collections and curricula to reflect the rich variety of human identities and experiences. We need a radical new information literacy that, instead of preparing students for our current world, imagines a saner, more humane, more equitable future and equips students with the information skills they will need to get us there.