
ONLINE SEARCHER: Information Discovery, Technology, Strategies

Chaos in Scholarly Publishing?
By William Badke
November/December 2021 Issue

Within the space of a week, I found three popular stories in my favorite news aggregator, and each made me uneasy. The first described pushback from the scientific community against a preprint article suggesting that COVID-19 vaccines were less effective against two COVID virus variants (“Evidence for Increased Breakthrough Rates of SARS-CoV-2 Variants of Concern in BNT162b2 mRNA Vaccinated Individuals”; medrxiv.org/content/10.1101/2021.04.06.21254882v1; medrxiv.org/content/10.1101/2021.04.06.21254882v2). The second reported on a preprint study titled “The Autodidactic Universe” (arxiv.org/pdf/2104.03902.pdf), which suggested that the universe could be a machine-learning computer evolving over time. The third, “Compositional Perturbation Autoencoder for Single-Cell Response Modeling” (biorxiv.org/content/10.1101/2021.04.14.439903v1; biorxiv.org/content/10.1101/2021.04.14.439903v2), announced a machine-learning tool intended to speed up research on various diseases; it, too, was a preprint, one expected to go into peer review.

What is the common element in all these studies? Not machine learning. Not scientific advancement. It’s the fact that all three are preprints, not peer-reviewed, yet their results are being widely reported to the public. Plus, they are already available in full text in Google Scholar. You know—Google Scholar, the people’s academic database.

Our confidence in academic scholarly literature faces several challenges. Let’s consider some of them.

The evolution of preprints

Scholars for decades have shared their manuscripts with one another prior to peer review to get feedback. This was generally done in a closed loop so that the material rarely reached the public. Today, with the web offering opportunities for massively expanded sharing, any number of preprint databases have arisen. Think arXiv.org, bioRxiv.org, and so on. For a reasonably comprehensive list, see the Wikipedia entry on preprint repositories (en.wikipedia.org/wiki/List_of_preprint_repositories).

The advantages are fairly obvious. Since getting an article through peer review and formal publication can take a few years, posting to a preprint database can get crucial information out quickly, something that has been a boon in COVID times. And since peer review can be brutal, posting to a preprint site opens up the possibility of informal review from colleagues, so that significant bugs can be worked out before the manuscript enters formal peer review. A preprint site can also help scholars raise their standing in the scholarly community.

As an element of scholarship as a conversation, the preprint offers a lot of good possibilities, but also one obvious problem: Preprints are not peer-reviewed, yet they are available and easily found via a Google Scholar search; my recent search turned up 36,900 items explicitly labeled as preprints. Most preprint sites clearly state the lack of peer review, but this is routinely ignored or downplayed by news organizations and by students.

Lack of replication

Getting funding is difficult these days, and those ho-hum studies that reproduce earlier ones to verify their findings tend not to get the attention of funders, journals, or tenure boards. Yet replication is crucial. Take this rather banal example: A 2014 study by Pam A. Mueller and Daniel M. Oppenheimer, “The Pen Is Mightier Than the Keyboard: Advantages of Longhand Over Laptop Note Taking” (journals.sagepub.com/doi/abs/10.1177/0956797614524581), claimed that “students who took notes on laptops performed worse on conceptual questions than students who took notes longhand.” This had a lot of play in education circles until several replication studies found that there was no such effect (“A Popular Study Found That Taking Notes By Hand Is Better Than By Laptop. But Is It?”; edsurge.com/news/2021-04-27-a-popular-study-found-that-taking-notes-by-hand-is-better-than-by-laptop-but-is-it).

What if no one had replicated it? That is the situation of many innovative studies, which stand alone and unchallenged for years even though further investigation would have cast doubt on their findings. Peer review didn’t catch the initial problems. What is more, there is no consistent program for replication and, as a result, no sure way to challenge research results. This leaves us vulnerable to accepting research conclusions without the requisite additional fact-checking.

Shoddy research

In years past, I would have entitled this section “Predatory Publishing,” but I’ve concluded that while there are true predators out there—publishers who use shabby methods to get academic papers from which they can generate sizeable author fees—there is a lot of simply ineffectual research, regardless of motive. Although we often assume that standards of peer review are the same in every environment, this is not the case.

It would be easy to accuse non-Western academics of inferior methods, but this is not a nuanced conclusion. The fact is, standards of peer review vary across cultures, and no publisher is immune from slippage in rigor. Following Retraction Watch (retractionwatch.com) is instructive. Consider these headlines from Retraction Watch: “Elsevier Journal to Retract Widely Debunked Masks Study Whose Author Claimed a Stanford Affiliation”; “Journal Retracts Paper by ‘Miracle Doctor’ Claiming Life Force Kills Cancer Cells.” The journals in these cases were published by Elsevier and Springer Nature.

True, shoddy research can sometimes be found out and retracted. But that is not the point. It’s that so much of it is out there, unchecked and available.

The heart of my concern

I will readily admit that I am deeply concerned about a bulwark of our society: scholarly research. It’s not that standards are dropping. They are not. It’s that the sharing of incomplete or inferior research, once an in-house activity among scholars, is now a feature of academic publication, unofficial but public. And replication is not happening at the level it should.

I get it. We need venues for sharing findings in order to spark further research and to build quality through widespread review. With traditional publishing timelines stretching to delays of 2 or even more years, getting preliminary results out within weeks or even days of their being written up speeds the cycle of innovation as others pick up on those results. It also opens the possibility of multiple voices speaking to the quality of those results, creating a kind of super peer review.

So, where is the downside? I see two significant problems. First, academia is already under attack by those who don’t understand it or whose views of “evidence” make them intensely suspicious of the scientific establishment. Widespread publication of research that has not been peer-reviewed means that we have many more contradictory conclusions than we used to. Study A says one thing, but study B, 3 months later, tells us that A is wrong. We’ve seen it with COVID-19 research, but the problem extends well beyond that.

There may well be good reason for research to take 2 years to publish. This gives publishers time to ensure that the results will survive beyond the next news cycle. True, delayed findings mean a delayed ability to use those results, but with a public increasingly distrusting scholarship, being more certain may well be more important than being fast.

Second, when all research—preprints, the shoddy stuff, and properly peer-reviewed documents—lives in the same ecosystem (I’m looking at Google Scholar here), the nuances get lost. If it has references and looks scientific, it must be scholarly, right? I ask you, how is the average undergraduate, or even a beginning graduate student, supposed to look at an article that has all the scholarly features and determine, on the face of it, whether the article is worthy or not? Few of the students I know will investigate the level of publication it represents, nor will they check out the kind of review it received.

A course correction?

Maybe we should call for a reset: an agreement that the scholarly publication environment will include only peer-reviewed resources, with all the preprint sites taken down and the shoddy stuff banned. That would work, right? No, it wouldn’t. Can you imagine how something like that could be accomplished without a dictatorship and an amazing amount of surveillance? What is more, all the advantages of preprints would be lost.

Let me suggest some possible correctives:

  1. Compel students to use only academic databases outside of the Google environment.
  2. Insist that students annotate all references in research projects with information regarding their peer-review status.
  3. Penalize student papers that use non-reviewed resources.
  4. Stop having students do any research at all, because they don’t understand the scholarly information landscape.

I can imagine my readers thinking, “Badke has gone off the rails. After 80-plus columns, he has finally lost it.” But notice that I said these are “possible correctives,” not probable, nor even feasible. The challenge with each of them is that they miss the point. My “correctives” are intent on herding students into one corral with an electric fence around it: Do your research here, and in this way, to protect yourself from the ephemeral or downright bad stuff outside the fence. But those same students live on the open range, not in a corral. They have to learn how to thrive in the midst of information confusion.



William Badke is associate librarian at Trinity Western University and the author of Research Strategies: Finding Your Way Through the Information Fog, 7th Edition (iUniverse.com, 2021).


Comments? Contact the editors at editors@onlinesearcher.net
