





Vol. 43 No. 5 — June 2023

FEATURE

Media Literacy Moves Front and Center
by Robert Berkman


Librarians, information professionals, journalists, teachers, and others in the information and media fields have long lamented the paucity of attention that media literacy has received in the United States. It was rarely part of the K–12 curriculum and was not a common part of public discourse. Little recognition was given to the harms that can occur when citizens do not have the ability to decipher, analyze, and make sense of their media ecosystem.

But this is changing—and quickly. During the last few years, I’ve seen a growing discussion about the urgent need for media literacy skills. This critical issue is, at last, getting the attention it deserves.

Among the many developments and activities that illustrate this change is the increasing number of school districts, such as those in New Jersey (nj.gov/governor/news/news/562023/approved/20230104bb.shtml), that now require media literacy in the curriculum, along with more funding for teacher training so that teachers have the knowledge and capacity to teach these classes. The legislation (S588), signed into law in January 2023, was the result of a bipartisan effort to promote information literacy under the implementation of the New Jersey Student Learning Standards. Additionally, a slew of recent articles in the popular press, including The New York Times, USA Today, and CNN, report on how media literacy is becoming more mainstream in U.S. classrooms.

WHY NOW?

Why this recognition now? It may well have started with discussions about the actual injuries incurred by those misled by online disinformation during the 2016 presidential election and the ensuing recognition of, and reporting on, the massive amounts of political misinformation appearing on social media, specifically on Facebook. Then there was the literally life-or-death matter of health-related misinformation spread during the COVID pandemic. The final blow was the attempted coup d’état of Jan. 6, 2021, spurred largely by those who became enraged over disinformation about the integrity of the 2020 election that they believed to be true.

It has, then, become plain that the matter of online propaganda and malicious disinformation is not a trivial one: It has real and harmful effects on individuals’ health and safety, the country’s ability to govern itself, and even the functioning of our democracy.

Technology adds a new wrinkle to media literacy efforts. Gary Marcus, writing in The Atlantic, asks “Why Are We Letting the AI Crisis Just Happen?” (theatlantic.com/technology/archive/2023/03/ai-chatbots-large-language-model-misinformation/673376). He worries about the misinformation that generative AI systems such as ChatGPT and Bard can so easily create and spread, content that may sound true but isn’t: “Bad actors could seize on large language models to engineer falsehoods at unprecedented scale.” This opens up whole new vistas for media literacy.

GLOBAL MEDIA EDUCATION SUMMIT

I attended the Global Media Education Summit held at Simon Fraser University, Vancouver, Canada, in early March 2023 (bournemouth.ac.uk/research/centres-institutes/centre-excellence-media-practice/global-media-education-summit-2023), where scores of scholars, researchers, teachers, and librarians from around the world presented their latest theories, findings, and practices in media education and literacy. The conference highlighted the evolution of, and current trends in, media literacy education. In more than one session, presenters delved into the importance of teaching students about algorithms and shared actual classroom exercises designed to make these systems less opaque and more understandable for students.

In “Developing Children’s Algorithmic Literacies Through Curatorship as Media Literacy,” Amanda Levido, Annette Woods, Michael Dezuanni, and Aleesha Rodriguez from the Queensland University of Technology (Brisbane, Australia) demonstrated how they created a curation exercise in which students were asked to take photographs based on a specific prompt (“See me use technologies to learn”) and display their photos to their classmates by hanging them on a clothesline.

The class then discussed how the images were similar and how they were different. Individual students shared why they chose one picture or another for the assignment as well as what other students should “know” about them (their likes, interests, etc.) to understand why they selected what they did. This small curation exercise helped the students—who, rather amazingly, were only 6–9 years old—get a taste of what it means to choose something for others based on a criterion, a first step toward understanding how algorithms operate.
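
That parallel between curation and computation can be made concrete. As a minimal sketch (mine, not the presenters’; all photo titles, tags, and interests here are hypothetical), a few lines of Python show the same basic move a recommendation algorithm makes: score items against what is known about a viewer and surface the best matches.

    # A toy sketch, not the presenters' exercise: choosing items for
    # someone else based on a criterion, the same basic move a
    # recommendation algorithm makes. All data here is hypothetical.
    photos = [
        {"title": "Tablet in class", "tags": {"school", "technology"}},
        {"title": "Soccer practice", "tags": {"sports", "outdoors"}},
        {"title": "Coding club", "tags": {"technology", "friends"}},
        {"title": "Library corner", "tags": {"school", "reading"}},
    ]

    def curate(items, interests, k=2):
        # Score each photo by how many of the viewer's interests it
        # matches, then keep the k best: curation as a simple algorithm.
        scored = sorted(items, key=lambda p: len(p["tags"] & interests),
                        reverse=True)
        return [p["title"] for p in scored[:k]]

    # A classmate known to like technology and reading gets a tailored pick.
    print(curate(photos, {"technology", "reading"}))
    # -> ['Tablet in class', 'Coding club']

The point, as in the clothesline exercise, is that any such selection encodes assumptions about the viewer, which is exactly what the students were being asked to notice.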

In another session, Diana Maliszewski, a teacher-librarian from the Toronto School Board, shared the questions she asks her students to help them understand how algorithms operate when they use social media sites like YouTube, which offers up suggested videos. Among her questions: “Where do these suggestions come from?” “Who is collecting the information?” “What are they doing with it?” “What do you think companies are doing with this data?” After that initial discussion, she turns the conversation to deeper related topics, such as, “What do you think the impacts of algorithms are on your life?”

The summit included other key themes:

  • Media literacy as a national security issue: Countries that share a border with Russia confront disinformation regularly, so being media literate is seen as a vital national security concern. A presentation by Martin Priedols of the University of Latvia (Riga) discussed this matter in his country.
  • The value of librarian-teacher collaborations for teaching media literacy
  • The need for media literacy to be discussed and taught outside of the walls of schools
  • The importance of critical media literacy, which examines and unpacks the intersection of media literacy with matters of equity, inclusion, and social justice

Further Resources

An excellent selection of scholarly and practical texts and resources was displayed or noted during the event. The number of media literacy textbooks appears to be increasing, with the leading instructional text likely being Renee Hobbs’ Media Literacy in Action (Rowman & Littlefield, 2021). Hobbs runs the Media Education Lab at the University of Rhode Island’s Harrington School of Communication and Media. For scholarly works, Routledge is clearly the leading publisher, with a special series devoted just to media literacy: Routledge Research in Media Literacy and Education (routledge.com/Routledge-Research-in-Media-Literacy-and-Education/book-series/RRMLE).

Here are some other resources that stood out as potentially being of particular value:

  • Guidelines for Teachers and Educators on Tackling Disinformation and Promoting Digital Literacy Through Education and Training, published in 2022 (op.europa.eu/en/publication-detail/-/publication/a224c235-4843-11ed-92ed-01aa75ed71a1/language-en). This short but information-packed free guide offers hands-on guidance for teachers and educators, including practical tips, activity plans, insights on relevant topics, and cautionary notes grounded in what works in digital literacy education and training.
  • The Algorithm & Data Literacy Project (Digital 2030, the Canadian Commission for UNESCO and UNESCO). This project offers a series of clear, easy-to-understand, and engaging animated videos that explain and unpack how algorithms work (algorithmliteracy.org).
  • The Global Village Square for Digital Media Literacy (McLuhan Foundation, Toronto). This is a new online hub where media educators can discuss teaching practices and share their work and lessons. Interested participants need to request to join: the-global-village-square.mn.co.

It’s good to know that the tide is turning and that media literacy is, finally, moving front and center as a concern not only for librarians but also for educators, scholars, and researchers.


The Role of ‘Prebunking’ for Media Literacy

Robert Berkman’s Q&A With Sander van der Linden

In digging into the status of and trends in media literacy, I took a somewhat different tack by interviewing Sander van der Linden, professor of social psychology in society at the University of Cambridge (U.K.) and director of its Social Decision-Making Lab. The Q&A took place in late March 2023 via Zoom. His research on the use of “prebunking” to inoculate people against disinformation has received wide popular attention, as has his just-published book, Foolproof: Why Misinformation Infects Our Minds and How to Build Immunity (HarperCollins/W.W. Norton).

Below is an edited summary of the conversation.

Where did the term “prebunking” come from?

It wasn’t until 2016 that I, along with John Cook, a researcher in Australia, began using the word. We both thought it would be a good term because it contrasts nicely with debunking. We were doing a lot of work in parallel, and in 2017, our studies on this came out and replicated each other’s findings. Then around 2018–2019, the term started taking off. We had previously been using the word “inoculation,” but journalists and fact-checkers used “prebunking” often, and it began getting more traction.

Is prebunking actually being utilized anywhere for teaching media literacy?

We’ve been doing it in classrooms since 2018, and it is being used in college education too. The News Media Literacy project has used our own Bad News game for teaching media literacy in a variety of environments, including prisons and at-risk communities.

Prebunking is predicated on knowing what disinformation to expose people to ahead of time, so they are less likely to be fooled when they encounter it for real in the future. But as methods, technologies, and strategies for creating disinformation change so quickly, how do you know what to prebunk for?

Well, we don’t focus just on prebunking against specific issues, but try to demonstrate general themes and strategies, whether it is what a conspiracy theory might “look like” or how people are often impersonated, and these can serve as a kind of umbrella that includes a range of potential future deceptions. You do need a lot of examples, though. We also found that exposing people to one form of disinformation (e.g., polarization techniques) will actually provide some protection when they encounter others (e.g., conspiratorial messaging).

The other point is that many of the disinformation techniques we see today have been used for a very long time. In fact, some of the anti-vax scare techniques of today are the same ones that have been used as far back as the 1800s! There are entirely predictable tropes being repeated. 

Is it tricky though to figure out what is currently disinformation vs. just an outlier opinion that eventually is accepted? In one of your scholarly articles from a few years ago, when examining conspiracy theories, you cited as an example people who believed that COVID-19 emerged from the virology lab in Wuhan, China. While that theory was initially widely dismissed, more recently, it has received credibility from certain U.S. government agencies as the most probable cause, though still with low confidence.

The problem here was that the real possibility that the virus came from a lab became wrapped up with the conspiratorial belief that it was bioengineered by China as a weapon. [Note: Taking something potentially true, or with a grain of truth, and then adding false information on top of it is a common disinformation technique that can be quite effective. —RB]

There is a larger issue here too—the wider communications from the scientific community and WHO [World Health Organization] have been problematic: They did not inform people about the range of possibilities and natural accidents that can happen. As a side effect, this can make people overly skeptical of official accounts. It raises the interesting question of what the right amount of skepticism is. In our instruction and prebunking activities, with feedback, we can actually turn these levels of skepticism up or down. So, if people are overly skeptical of things—not everything is a conspiracy—we can help them turn that down.

When it comes to figuring out the right level of skepticism, which is a good question at the moment, I’d say that the optimum attitude and orientation is what we’d call “actively open-minded thinking”—where one can hold multiple hypotheses, be open, flexible, and leave room for uncertainty.


So much of your work, and by others in this field, naturally focuses on matters like applying rational thinking, logic, probability, critical thinking, and other cerebral approaches. But so often the way people react to information is driven by emotion, not rational thinking. How, if at all, are you able to deal with this?

One way is to look at the impact of social contagion, where people react to how others respond to a piece of information. Ideally, we would want to take our inoculation theory to its logical conclusion and try to get it to reach the equivalent of herd immunity. Much as with biological viruses, psychological beliefs can be passed from one person to another as a form of social contagion, and people can have group immunity so beliefs do not get passed along so much. We also use humor and other, less cerebral modes to help bridge divides.
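
[Note: The herd immunity analogy can be made concrete with a toy simulation. The sketch below is mine, not van der Linden’s model, and every parameter in it is an illustrative assumption: a false claim spreads through random contacts, but “inoculated” people resist it and block onward transmission, so a high enough inoculation rate protects even the unprotected. —RB]

    import random

    # Toy model of belief contagion (illustrative assumptions throughout):
    # believers try to pass a false claim to random contacts; inoculated
    # people never adopt it, cutting off chains of transmission.
    def spread(population=1000, inoculated_share=0.0,
               contacts=8, p_transmit=0.25, seeds=5, rounds=20):
        random.seed(42)  # fixed seed so runs are reproducible
        inoculated = set(random.sample(range(population),
                                       int(population * inoculated_share)))
        believers = set(random.sample(range(population), seeds)) - inoculated
        for _ in range(rounds):
            new = set()
            for b in believers:
                for _ in range(contacts):
                    target = random.randrange(population)
                    if (target not in inoculated and target not in believers
                            and random.random() < p_transmit):
                        new.add(target)
            believers |= new
        return len(believers)

    # As the inoculated share rises, the claim reaches far fewer people,
    # including people who were never inoculated themselves.
    for share in (0.0, 0.3, 0.6, 0.9):
        print(f"inoculated {share:.0%}: "
              f"{spread(inoculated_share=share)} believers")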

Could malevolent entities also create and spread maliciously created prebunks to predispose people to doubt true information that is circulating online?

Yes, evil actors can also use this for nefarious purposes. The problem is particularly acute when the population is in a closed system such as in China or Russia and people cannot easily access other information. But this is not new either. For example, cults have been doing this for a long time—telling people to distrust others. 

And with deepfakes getting more sophisticated, is spotting fake images and videos getting harder?

They are getting so much more sophisticated and realistic. We can no longer rely on some of the cues that used to give fakes away—things like eyes or hair looking odd.

So instead, what we want to teach people is to be more aware of the context in which they are viewing the content. What are the surrounding cues? Those are typically also misleading when the images and videos are fake, so we can inoculate people about what kinds of contextual cues are tip-offs to disinformation.

The latest concern over misinformation centers around generative AI systems like ChatGPT. It seems that prebunking against this powerful form of human-like communication would require a different set of instructions and approaches. What might you advise here?

We have been working on papers about AI and misinformation for years. One finding was that these systems are good at creating misinformation headlines that sound very real. But we have also found that AI can work well for our own training purposes: for example, creating psychological scales to test people and even mass-producing prebunks that can be used in our games and instruction.

U.S. Rep. Jim Jordan of Ohio and certain other Republican congresspersons have sent letters out to the University of Washington and other schools and research centers that have been studying disinformation, asking for documents. They allege that this work on disinformation is an effort to support a “pro-censorship Biden regime” and to oppose conservative principles. What is your reaction?

This is super-disturbing. What these people are doing is trying to use the principles of free speech to legitimize the spread of misinformation. The issue is that although misinformation can and does come from all parts of the political spectrum, there is an asymmetry: More disinformation has been coming from superspreaders on the extreme right side of the spectrum, so they will, in fact, be called out more often. But then this gets spun into a “You’re targeting us” message.

One difference between the U.K. and the U.S. is that here in the U.K., conservatives and liberals are both likely to oppose extreme misinformation and will even suspend or sanction those who spread it. But in the U.S., a lot of prominent people in Congress are endorsing it or staying silent, and then there are no social sanctions.

Robert Berkman (berkmanr@newschool.edu) is the editor of The Information Advisor’s Guide to Internet Research and faculty member of the School of Media Studies at The New School (New York, N.Y.).

Comments? Email Marydee Ojala (marydee@xmission.com), editor, Online Searcher.