
ONLINE SEARCHER: Information Discovery, Technology, Strategies

The Research Reputation-Industrial Complex: Corporate Influence on the Research Ecosystem
By Elaine Lasda
July/August 2022 Issue

RESEARCHERS RESIST, SOLUTIONS PROPOSED

It comes as no surprise that declarations (DORA), manifestos (Leiden), and reports (Metric Tide) have been published that seek to counter the misuse of metrics for research evaluation and to establish best practices for the creation and use of research impact metrics and indicators, as I wrote about in the September/October 2020 issue of Online Searcher (“Manifestos, Declarations, and Metrics, Oh My”; v. 44, no. 5, pp. 42–44).

Some organizations have developed dashboards or schemas to capture the impact of mission- and value-based outcomes more effectively. Dempsey pointed me to Utrecht University in the Netherlands, which offers a Recognition and Rewards program favoring a commitment to open science over impact indicators such as JIF (uu.nl/en/research/open-science/tracks/recognition-and-rewards), and to the University of Waterloo in Ontario, where the Working Group on Bibliometrics, a partnership of the Libraries, Office for Research, and Institutional Data, is currently updating its 2016 white paper, “Measuring Research Output through Bibliometrics” (uwspace.uwaterloo.ca/bitstream/handle/10012/10323/Bibliometrics%20White%20Paper%202016%20Final_March2016.pdf?sequence=4&isAllowed=y), a comprehensive effort at education and best practices regarding the measurement of research impact.

I will also throw in the Becker Medical Library Model from Washington University in St. Louis, Mo. (becker.wustl.edu/impact-assessment/model) as another interesting approach to getting at the true impact of organizational research activity. Although mainly geared toward medical research, Becker Library’s schema has aspects that are applicable to any type of research. It favors a multidimensional approach to measuring impact, developing indicators based on five components: advancement of knowledge, clinical implementation, community benefit, legislation and policy, and economic benefit. None of these solutions is perfect. However, they are meaningful proposals for change. The other point to consider is that none of these approaches was developed by an organization whose primary motivation is profit; rather, they came from social-mission organizations, namely research universities.

Some may make the argument that evaluative schemas like these, developed from within the research ecosystem, are just as suspect as those from ISI, ICR, and Dimensions’ research arm. Wouldn’t self-evaluation always put an entity in the best possible light? Are these alternatives giving research organizations a pass on impact? I would posit the answer is no. Collecting and analyzing the data needed to develop an impact statement or profile under each of these approaches is more time-consuming and more nuanced than pulling together a series of out-of-the-box metrics for a report. Do not get me wrong: I am not trying to bash Clarivate, Elsevier, or Digital Science for their research efforts; as noted, I find their reports very helpful. We simply need to consider the source when reviewing any proposal or approach for evaluating research more effectively.

WHITHER THE INFO PRO?

Whether the product is pharmaceuticals, research impact metrics, cookies, ice cream, or what you will, there are roles in this arena for us as info pros and librarians. First, there is an information literacy aspect in play: We can help educate our clients/patrons/users in how to scrutinize and evaluate the information presented to them about various impact indicators and the differences among them.

Second, we are researchers and experts in our own right, and, as such, we can help our stakeholders identify which indicators truly reflect the organizational mission. Vendors might provide such indicators; they could also be internally generated or derived from other sources. Perhaps there is an opportunity for hybrids, in which vendor-generated metrics are synthesized with internally generated data to create more informative knowledge unique to a research organization’s mission and values. Some of us are likely already involved in such efforts.

Finally, we ourselves can keep in mind that our favorite citation database is a for-profit competitor in the research impact marketplace and take the time to thoughtfully question the significance (or lack thereof) of the differences in datasets, calculations, and approaches to research impact.

(For the record, I favor ice cream made with Madagascar vanilla, and I enjoy any chocolate sandwich cookie, regardless of brand.)



Elaine Lasda is coordinator for scholarly communication and associate librarian for research impact and social welfare, University at Albany, SUNY.

 

Comments? Contact the editors at editors@onlinesearcher.net
