
VOICES OF THE SEARCHERS
Don't Slip on AI Slop
by Marydee Ojala
While online searchers have found many exciting and valuable uses for generative AI (gen AI), we’ve also encountered a dark side to the technology—AI slop. Merriam-Webster chose “slop” as its 2025 Word of the Year, but its description clearly refers to AI slop: “digital content of low quality that is produced usually in quantity by means of artificial intelligence” (merriam-webster.com/wordplay/word-of-the-year). It also took pains to inform us that the decision was made by “human editors.”
AI slop started with images. Some were so blatantly bad that they became excellent fodder for information literacy exercises. Did that celebrity really have seven fingers? Did that pope really wear a puffer jacket? Is that whale really surfing? Altered images are not a new phenomenon. Photographs purporting to show a recent incident could easily be exposed as older images of something else entirely. AI-generated images took this to a new level. They didn't alter anything. Instead, they created new images. Now that AI-generated videos are incredibly easy to create, reports of AI slop videos on YouTube are increasing.
AI slop also exists as books and articles. Reema Saleh describes public librarian Sondra Eklund’s encounter with AI slop in an American Libraries article (americanlibrariesmagazine.org/2025/09/02/books-by-bots). Eklund bought a children’s book about rabbits that turned out to be completely written by AI—and not particularly well-written. The sentences were worded strangely; it included incorrect information (rabbits do not make their own clothes); and identical clip art appeared repeatedly.
Then there’s romance author Coral Hart, who’s written hundreds of romance novels under her own name and multiple pen names. She claims she can churn them out in hours. She’s real, but the books are generated, edited, and self-published with massive help from gen AI tools. Amazon’s Kindle Direct Publishing reacted to criticisms that it stocked AI slop by limiting authors to three book uploads per day, an eye-rolling cap that is hardly a deterrent to AI slop.
Did computer-generated news begin with the introduction of gen AI? Well, no. It was in 2015 that the Associated Press announced its stories about U.S. quarterly corporate earnings reports were automatically generated by Automated Insights. Several years earlier, it was big news that Narrative Science provided computer-written sports stories to several media outlets and publications, including automated recaps and updates of football and basketball games for the Big Ten Network. Neither revelation was met with unbridled enthusiasm, but the latter was criticized more heavily.
The outcome was different when, in 2023, Sports Illustrated was accused of cheating (a no-no in sports) when it was revealed that it had published AI-generated articles complete with fake bylines and fake author photos and biographies. Although its publisher, The Arena Group, removed the content, the magazine never fully recovered from the scandal.
Then there’s Fortune’s business reporter, Nick Lichtenberg, who brags about how he uses gen AI to publish multiple news stories daily. As he explained to the Wall Street Journal on March 26, 2026, Lichtenberg uploads press releases or analyst notes to an AI tool, asks it to write the article, then edits and publishes it.
Is everything written with the help of AI “AI slop”? No. Using a spelling and grammar checker, most people would agree, is just fine. Improving writing, not creating it wholesale, is the key distinction. Using gen AI for brainstorming, as a collaborator, to initiate research (but not to finalize it, since a human must verify that hallucinations have not crept in), and as a reality check on whether a sentence conveys the meaning the author intends all fall within the parameters of permissible use.
From the online searching perspective, if we retrieve a Lichtenberg-authored article, and the facts are correct, does it matter if he used AI to help write it? If, however, we find books “written” (and written badly) by AI, we should not add them to our collection. But do we know if an aggregated ebook collection contains AI slop? That’s more difficult, given the hundreds of thousands of potential titles to examine. Agentic AI raises new AI slop possibilities. When Wikipedia, in March 2026, banned AI-contributed entries—ones where the AI decided on its own to write the entry—an AI agent complained about the policy.
As technology changes, opinions change. What was once unthinkable becomes the norm. What remains important, though, is transparency. Hiding the use of AI in writing invites criticism. Explaining human involvement promotes credibility. Skepticism remains a critical info pro skill.