ONLINE SEARCHER: Information Discovery, Technology, Strategies

The Research Reputation-Industrial Complex: Corporate Influence on the Research Ecosystem
By Elaine Lasda
Volume 46, Number 4 - July/August 2022

I braved airline travel in COVID America to attend the fall 2021 meeting of the Coalition for Networked Information (CNI), which took place Dec. 13–14 in Washington, D.C. (cni.org/events/membership-meetings/past-meetings/fall-2021). At the opening plenary session, the forward-looking head of CNI, Clifford Lynch, talked about a topography of emerging advances and trends he sees in the broad terrain of research and scholarship. While the talk was interesting, during the Q&A portion, a question about research impact indicators particularly piqued my interest because the questioner used the term “research reputation-industrial complex” to describe the overreliance on simplistic metrics and indicators in research evaluation processes.

As it happened, the questioner using that phrase was none other than library luminary Lorcan Dempsey, a prodigious thinker in the information profession with a longtime career at OCLC. Dempsey was gracious enough to give me some of his time to discuss issues related to the generation and refinement of research impact metrics by for-profit entities, namely Elsevier, Clarivate, and Digital Science. Dempsey suggested that info pros and librarians should question the motives of commercial players and their roles in research evaluation. Dempsey made it clear to me that he is merely an observer and by no means an expert in scientometrics or research evaluation, yet I thought his point merited pondering.

BIAS IN INDUSTRY-FUNDED RESEARCH

Concerns of bias in the outcomes or results of industry-funded/sponsored research are not new. Does sponsorship of a research project serve to reinforce the priority, aims, and goals of the sponsor? A 2018 scoping review in the American Journal of Public Health analyzed 36 studies and concluded, “Industry-sponsored studies tend to be biased in favor of the sponsor’s products” (Fabbri, Alice, Alexandria Lai, Quinn Grundy, Lisa Anne Bero, “The Influence of Industry Sponsorship on the Research Agenda: A Scoping Review,” American Journal of Public Health, v. 108, no. 11; Nov. 1, 2018; pp. e9–e16; doi.org/10.2105/AJPH.2018.304677).

If industry funding can taint the outcome of research studies, how does the fact that it is for-profit entities providing impact metrics affect the perception of value of a given research output (in the form of a peer-reviewed journal article)? We know most research impact indicators are insufficient for research evaluation purposes when used as standalone metrics. Many such indicators are misused, misunderstood, or misapplied in a variety of contexts. Web of Science, Dimensions, and Scopus are competing products that Clarivate, Digital Science, and Elsevier are trying to sell us as the best means of demonstrating research impact.

WHAT FLAVOR IS YOUR METRIC?

Citation databases draw from different publication datasets and calculate different values for the same metric (for example, citation count or h-index). Likewise, each vendor has slightly different metrics meant to serve identical purposes. I have sometimes joked about this phenomenon of citation metrics tool preference being based on brand recognition. Prior to Dimensions coming on the scene, I would liken the tools to cookies: Web of Science is the Oreo, Scopus is the Hydrox, and Google Scholar is the generic store brand of citation databases. Carrying the analogy further, everyone has their preferred cookie and chooses it based on necessity or values. Is an Oreo really worth the extra price over the Hydrox or generic cookie? You get a higher cookie count from the store brand package, but are the cookies of the same quality as the brand-name cookie counterparts?
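The divergence is easy to see with the h-index, since the formula itself never changes; only the underlying citation counts do. The sketch below uses invented citation counts for a hypothetical author as two databases with different coverage might report them; the counts are illustrative, not drawn from any real product.

```python
def h_index(citations):
    """h = the largest h such that h papers each have at least h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cites, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical counts for the same papers from two indexes:
# a curated index sees fewer citing sources than a broad-coverage one.
narrow_db = [45, 30, 12, 8, 5, 3, 1]
broad_db = [60, 41, 20, 11, 9, 6, 4, 2]

print(h_index(narrow_db))  # 5
print(h_index(broad_db))   # 6
```

Same author, same formula, two different h-indexes: the "generic store brand" with the bigger package delivers a higher count.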

You get the idea. The value in citation metrics can sometimes merely be the perception of brand-name characteristics advertised or promoted by the vendor that created the metric. Characteristics like cited reference quality, high citation count, or curated journal coverage come to mind.

Is the Article Influence Score (AIS) from Clarivate’s Journal Citation Reports “better” or more informative than Scopus’s SCImago Journal Rank (SJR)? Both journal-level metrics are ratios that seem to be outgrowths of the journal impact factor (JIF). They both use eigenvector analysis to weight the citation counts in the numerator to derive the score (jcr.help.clarivate.com/Content/glossary-article-influence-score.htm; scimagojr.com/files/SJR2.pdf). The denominator calculations differ, but both are based on publication count. The time frames are different (5 years for AIS, 3 for SJR), and the datasets are different.
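The shared idea can be sketched in a few lines. This is a toy illustration, not the actual AIS or SJR algorithm (both use far larger datasets, self-citation rules, and their own normalizations): a journal's citations count for more when they come from journals that are themselves influential, and the resulting influence score is divided by article output to yield a per-article ratio. All matrix entries and publication counts below are invented.

```python
import numpy as np

# Toy cross-citation matrix: C[i, j] = citations from journal i to journal j.
C = np.array([
    [0, 3, 1],
    [2, 0, 4],
    [5, 1, 0],
], dtype=float)

# Row-normalize: T[i, j] = fraction of journal i's references that go to j.
T = C / C.sum(axis=1, keepdims=True)

# Power iteration with damping, PageRank/Eigenfactor-style:
# a journal inherits influence from the journals that cite it.
d, n = 0.85, C.shape[0]
v = np.ones(n) / n
for _ in range(200):
    v = d * (T.T @ v) + (1 - d) / n

# Divide eigenvector influence by article output to get a per-article
# ratio -- the general shape shared by AIS and SJR.
pubs = np.array([120.0, 60.0, 90.0])  # hypothetical article counts
per_article = v / pubs
print(per_article)
```

The vendors' real recipes differ in the damping, windows, and denominators, but structurally both scores are this kind of eigenvector-weighted ratio.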

The categorical rankings of journal titles do shift somewhat under each calculation, but generally, the AIS and the SJR are highly correlated. So are we getting chocolate-and-vanilla-flavored metrics, providing complementary flavor combinations? Dropping the cookie analogy for ice cream flavors, to me it seems more like the difference between Philadelphia-style vs. French vanilla. You may have a preference for one over the other, but in the end, they are still relatively interchangeable in terms of flavor. The difference is nuanced, and I am not sure that this nuance is substantive for most evaluative purposes.

PROFIT MOTIVE AND METRIC DEVELOPMENT

The point is that the metrics from Clarivate, Elsevier, and Digital Science are essentially developed for competitive, not collaborative or complementary, purposes. And the competition between the database vendors is based on pursuit of our organizations’ subscription dollars. This leads me to wonder how genuinely concerned these vendors are in relation to misuse of their own proprietary blends of impact metrics. Dempsey raised a good point about popular institutional rankings: Times Higher Education uses Elsevier data (get a load of this PR video: youtube.com/watch?v=-_Lj4KgEqqE). U.S. News and World Report uses data from Clarivate’s InCites (usnews.com/education/best-global-universities/articles/methodology). I bet these news sources pay big money for the privilege of using the data supplied by the vendors.

Dempsey and I also discussed a 2001 book about business success by Jim Collins called Good to Great: Why Some Companies Make the Leap ... and Others Don’t (HarperCollins). One of the key thrusts of this book is to make certain you have the right metrics by which to gauge an organization’s success. Dempsey noted that this only applies to profit-driven, not socially driven, organizations. Interestingly, Collins himself acknowledges this in a 2004 article in MIT’s Forum for the Future of Higher Education (library.educause.edu/resources/2004/1/getting-from-good-to-great-in-higher-education). In it, he states that economic-based metrics are not the best indicators of research institution greatness. Instead, Collins says, evaluation of socially driven organizations should be framed within the context of that organization’s particular mission.

His thematic overview of how these metrics should be approached includes some points on metrics of impact, framed with these questions: “[T]o what extent can an institution marshal evidence that it has had a unique impact on the world it touches? That is, if the institution didn’t exist, how would the world it touches be different?” Generic metrics like the h-index and JIF do not come close to the nuanced metrics needed to get a grasp on these concepts; neither would most research impact metrics.

A crystal-clear understanding of the organization’s research mission is needed to begin to create a meaningful metrics dashboard. Even then, the metrics developed would likely be institution-specific, and often would only measure the byproducts of impact. Comparing impact is squishy and may at times be highly subjective. Apples-to-apples comparisons of one-size-fits-all metrics applied to institutions (or researchers or laboratories) can be as misleading as a comparison of the JIF for journals in differing disciplines.

Most vendors have affiliated think tanks that ponder and study these issues and propose solutions to research impact quandaries. For example, Clarivate resurrected the Institute for Scientific Information (ISI) and repurposed it as a scientometrics and research impact thought center (clarivate.com/webofsciencegroup/solutions/isi-institute-for-scientific-information). Elsevier has a competing entity: The International Center for the Study of Research (ICSR; elsevier.com/icsr). Digital Science doesn’t have a separately branded institute, but it, too, has a collection of thought pieces and research about, well, research on a subsite of its web presence titled: “Challenges” (digital-science.com/challenges).

I generally find the papers and reports from each of these entities helpful. They seem to me to be diligent efforts, as opposed to public relations pieces. Nonetheless, reports such as “Profiles Not Metrics” from ISI, one of my favorites (clarivate.com/webofsciencegroup/campaigns/profiles-not-metrics), are designed to move the impact metric discussion forward while preserving the integrity and reputation of their sponsor companies.

And here I find myself back to the point about corporate-funded research arriving at conclusions that validate the efficacy, safety, and success of that corporation’s output. No wonder smart observers are characterizing the whole can of worms as a research reputation-industrial complex.





Elaine Lasda is coordinator for scholarly communication and associate librarian for research impact and social welfare, University at Albany, SUNY.

 

Comments? Email the editor-in-chief: marydee@xmission.com
