ONLINE SEARCHER: Information Discovery, Technology, Strategies

Staying Relevant With Measures of Scholarly Impact
By Elaine Lasda and Richard P. Hulser
January/February 2019 Issue

[The contents of this article are based on presentations given by the authors at the Special Libraries Association 2017 Annual Conference and the 2018 Computers in Libraries Conference. –Ed.]

Scholarly metrics are measurements or indicators that can be used to tell the story of a body of research. Many librarians and information professionals are familiar with bibliometrics or citation-based metrics. The measurements found in Web of Science, originally created by Eugene Garfield as ISI Citation Indexes in the late 1960s and now published by Clarivate Analytics, have long been considered the gold standard of bibliometric indicators [1].

We now have the ability to collect and analyze a wider variety of data. In turn, this allows us new and interesting ways of measuring scholarship and research. Before we start digging into some key metrics and their strengths and limitations, we’ll address the basics: What exactly are impact metrics? Why do we use them? How do they actually demonstrate “impact,” anyway?

Scholarly metrics are traditionally considered an evaluative tool to determine if an academic researcher should be granted promotion and tenure within a college or university setting. Increasingly, however, they are used to measure and evaluate the impact, performance, or reach of not only researchers and journals, but also research institutions, groups of scholars in a given research specialization, and geographic units—the list is ever-expanding.

Measurements of impact are used to assess unit performance. Funders use them to evaluate grant applications, and they can inform strategic planning decisions within organizations. From impact metrics, we are able to glean what the “hot topics” are in a given field. Decision makers, whether part of an academic campus, research think tank, or government agency, increasingly use impact metrics when choosing how to allocate resources.

Reputation metrics

One way to look at different types of impact metrics is to determine whether the metric reflects the reputation of an entity (author, journal, etc.) or the utilization/citation of the entity’s research in other works (generally, scholarly and peer-reviewed). Approaches for identifying research impact based on reputation include word of mouth from colleagues or peers, endorsements of publications from disciplinary professional associations, identification of the bibliographic databases which index the research, and the composition of a journal’s editorial board. All are reputational means of evaluating the quality of a publication.

Sometimes within disciplines or sub-disciplines, a scholar will undertake a survey to identify and rank journals based on the perceptions of peers or experts within the specialization. When available, these survey/ranking articles provide useful information, but there is no systematic surveying of disciplines or specializations that provides reliable rankings across time. Instead, these surveys tend to be one-offs that provide a snapshot of publication reputation at a given point in time. After a few years, the hot topics in a discipline may shift, rendering the survey ranking out-of-date. Many survey rankings and peer recommendations also rely heavily on a single citation metric, the Journal Impact Factor (JIF), as the best way to generate an initial list for ranking. However, relying on JIF could overlook journals in emerging subfields as well as newer journals not indexed by Journal Citation Reports.

Citation metrics

Evaluators of various stripes consider citation metrics the empirical gold standard for assessing impact. Citing scholarship does not necessarily constitute endorsement of it; a citation still counts as a measure of impact even when the reference is made to discredit the work. Citation metrics are reproducible, increasingly cheap to compute, quickly provide basic information, and are perceived by some as objective.

Research using text mining methodologies to perform sentiment analysis on how cited works are presented in the scholarly literature may reveal nuances in the supposed objectivity of citation rates. A common concern about citation-based metrics is that evaluators do not understand them well. Those who do not clearly understand the strengths and limitations of a given metric may use it to make inappropriate and, some say, irresponsible assessments and faulty decisions.

Journal-level metrics

JIF is the most common citation-based metric applied to journals. It is found in InCites Journal Citation Reports, a component of Web of Science. JIF is a ratio: the number of citations received in a given year by items a journal published during the preceding 2 years (or 5 years, for the 5-year variant), divided by the number of citable items the journal published in that same window. Because the citation count is divided by the number of articles published, a journal that publishes many articles in a time frame can be compared to a journal in the same discipline that published fewer articles in that same period.
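
To make that ratio concrete, here is a minimal sketch of the 2-year JIF-style calculation, using hypothetical numbers for a journal evaluated in 2018:

    # Hypothetical figures; real JIF values come from Journal Citation Reports.
    citations_2018_to_2016_2017_items = 1200  # citations received in the JCR year
    citable_items_2016_2017 = 400             # articles and reviews published in the window

    impact_factor = citations_2018_to_2016_2017_items / citable_items_2016_2017
    print(impact_factor)  # 3.0 -> on average, each item was cited 3 times in 2018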

JIF measures the citation rate of a journal taken as a whole. JIF is not necessarily a reliable quality indicator for a single article or an individual scholar, although it is frequently misapplied in this manner. Additionally, it is not cross-disciplinary: there are no apples-to-apples JIF comparisons between fields with different frequencies and patterns of citation. For example, a journal title in a social science may have a lower citation rate and a slower time to citation than a journal title in the hard sciences, mathematics, or computer science. A researcher doing computational behavioral science might consider publishing in journals covering both psychology and computer science. In this case, look at the percentile and quartile rankings of the impact factors to identify journals of roughly equivalent prestige within each discipline.
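
The sketch below, using hypothetical JIF values, illustrates that within-category comparison; the category lists and the percentile function are invented for this example rather than taken from Journal Citation Reports:

    # Compare journals by where they fall within their own subject category,
    # rather than by raw impact factor across disciplines.
    def percentile_rank(value, category_values):
        below = sum(v < value for v in category_values)
        return 100.0 * below / len(category_values)

    psychology_jifs = [0.8, 1.2, 1.9, 2.4, 3.1, 4.0, 5.6]   # hypothetical category A
    computer_science_jifs = [1.5, 2.2, 3.0, 4.8, 6.5, 9.1]  # hypothetical category B

    print(percentile_rank(3.1, psychology_jifs))        # ~57th percentile in its category
    print(percentile_rank(4.8, computer_science_jifs))  # 50th percentile in its category

Here the journal with the lower raw impact factor actually ranks higher within its own category, which is the point of normalizing by discipline rather than comparing raw numbers.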

Another issue with JIF is that each article counts as “1” in the ratio used to compute it. Another metric, called Eigenfactor, uses eigenvector analysis to weight the citations a journal receives when computing its “influence.” Eigenvector analysis weights a citation more heavily when it comes from a journal that is itself highly cited and influential. In other words, the difference between an Eigenfactor score and a JIF score is that some citing sources are considered more influential than others when the score is computed. Unlike JIF, the Eigenfactor score will tend to be higher if a journal publishes more articles, but, like JIF, Eigenfactor is not comparable between fields because citation rates vary from discipline to discipline.
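
As a rough, hypothetical illustration of eigenvector-style weighting, the sketch below runs a power iteration over a tiny journal-to-journal citation matrix. It is not the official Eigenfactor algorithm, which among other things excludes journal self-citations, uses a 5-year window, and applies further normalization; it only shows how citations from influential sources can count for more:

    import numpy as np

    # C[i, j] = citations from journal i to journal j (hypothetical data).
    C = np.array([
        [0, 10, 2],
        [8,  0, 4],
        [1,  3, 0],
    ], dtype=float)

    # Each citing journal distributes one "vote" across the journals it cites.
    P = C / C.sum(axis=1, keepdims=True)

    # Power iteration: a journal's influence is the vote-weighted influence
    # of the journals citing it, iterated until the scores stabilize.
    influence = np.ones(len(C)) / len(C)
    for _ in range(100):
        influence = P.T @ influence
        influence = influence / influence.sum()

    print(influence)  # relative influence scores for the three journals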

There are many newer citation-based metrics from other sources, such as CiteScore in Scopus, Source Normalized Impact per Paper (SNIP), and SCImago Journal Rank. These metrics are freely available on the web (scopus.com/sources); however, the free web version lacks the trend data and comparative functionality available to subscribers of the Scopus database. Google Scholar also publishes journal rankings (scholar.google.com/citations?view_op=top_venues&hl=en). Taken together, these indicators can provide a more robust picture of the influence, prestige, and impact of a journal title.

The nature of citation-based metrics raises other concerns. Within a given discipline, it takes time for ideas to filter through the researcher community. Is the 2- or 5-year window that JIF and Eigenfactor use to capture citing references long enough? The importance of research results may not be recognized until much later.

The responsible use of metrics has recently attracted attention. The San Francisco Declaration on Research Assessment (sfdora.org) was developed by editors and publishers at a 2012 conference in San Francisco. The Declaration outlines many concerns with journal-based metrics (most notably JIF) and best practices for various entities using scholarly metrics. As of this writing, almost 600 organizations and 12,000 individuals have signed the declaration.

Gaming the system?

There are also dubious knockoff metrics with names that look or sound misleadingly similar to the ISI/Clarivate metrics, such as the Global Impact Factor and the Universal Impact Factor.

There are other ways of trying to game impact scores. Networks of spurious journals have cropped up that all cite each other to artificially inflate citing reference counts. News reports of scholars forming citation “cartels” to boost the citation rates of the group’s publications are also troubling. Although these, and no doubt other, dubious or ethically suspect practices exist to inflate citation counts, none are generally considered significant threats to the integrity of citation-based indicators.

Author-level metrics

Some metrics applied to journals are more informative when applied to individual authors. Common author-level citation metrics include a simple citation count (sometimes adjusted to remove self-citations) and the h-index and its related variations.

Citation count is the raw number of times an author has been cited. Citation counts can span an entire scholar’s career or a specific time period. What about authors who cite their own previous work? While some feel these self-citations can be used to “game” an author’s citation count, others feel they do not inflate scores to the point of being misleading. It could be argued that, if new research builds on a researcher’s prior work, it is appropriate to self-cite. All things considered, moderation is the rule of thumb when it comes to self-citing.
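
As a minimal, hypothetical sketch of a citation count adjusted for self-citations, the example below treats a citation as a self-citation when the author of interest also appears among the citing paper’s authors; the data and the name-matching rule are illustrative only:

    author_of_interest = "Doe, J."

    # Each citing paper is represented here only by its author list (hypothetical).
    citing_papers = [
        {"title": "Paper A", "authors": ["Smith, K.", "Doe, J."]},   # self-citation
        {"title": "Paper B", "authors": ["Jones, L."]},
        {"title": "Paper C", "authors": ["Nguyen, T.", "Patel, R."]},
    ]

    total_citations = len(citing_papers)
    self_citations = sum(author_of_interest in p["authors"] for p in citing_papers)

    print(total_citations - self_citations)  # 2 citations after removing the self-citation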

The h-index is an integer that relates citation count to the number of articles an author has published. For example, an h-index of 5 means that a researcher has at least five papers that have each been cited at least five times. The h-index, which takes the author’s complete oeuvre into account, will change as the scholar publishes new works and as citation counts increase across time. By juxtaposing publication count with citation count, the h-index provides information that differentiates it from a straight citation count.

Primarily, it rewards the scholar who has a sustained record of significantly cited works over the “one-hit wonder” researcher who has only one or two highly cited articles alongside a number of publications with low or no citation counts. Again, differences in citation and publication rates make this metric difficult to use in a multi- or interdisciplinary context. Furthermore, it may not be fair to compare the h-index of a new researcher, with fewer publications and lower citation counts, to that of an established scholar who has more publications, more citations, and therefore most likely a higher h-index simply by virtue of a longer research career.
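
A minimal sketch of the h-index calculation described above, using a hypothetical list of per-paper citation counts:

    def h_index(citation_counts):
        # Largest h such that at least h papers have been cited at least h times each.
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([25, 17, 9, 6, 5, 3, 1, 0]))  # 5: five papers with at least 5 citations each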

Creating author profiles on Google Scholar, Web of Science, and Scopus can also provide trend data and other useful information to round out a scholar’s publication record. It is worthwhile to set these up for all of the bibliometric databases. This can be made easier by assigning an author identifier, such as ORCID (orcid.org), to the scholar/researcher’s record.

With any citation-based metric, there will be variations depending on which database is used for the raw citation information. Although there is content overlap in Web of Science, Scopus, and Google Scholar, each will return unique citing references. There is no single source for a comprehensive citation count of a given journal, author, or article.

When you are asked for comprehensive citation data, be mindful that metrics and indicators do not reflect the entire universe of citation data; rather, they point to (or indicate) the relative utilization of the research in question. Thus, an author’s h-index calculated in Web of Science should not be compared to that of another author whose h-index was calculated from Scopus or Google Scholar data.

Article-level metrics

Metrics that apply to authors, such as citation counts and citation trends across time, can also provide useful information about individual articles. Now that most journal articles are available online, some consider web-based usage metrics more desirable for determining article impact. For example, the number of abstract views, PDF downloads, and other data captured by the web analytics of journal publisher websites, aggregator databases, and other platforms provide interesting information. If the research also has a version available in an OA repository, usage is likely to increase significantly.

Attempts to develop meaningful metrics from the capture of journal usage data, such as MESUR (mesur.informatics.indiana.edu) and Johan Bollen and Herbert Van De Sompel’s Usage Impact Factor [2], have encountered challenges not only due to the lack of a universal standard for usage metrics, but also because it is incredibly difficult and time-consuming to gather the data from the many proprietary and OA platforms where scholarly research may appear.

Alternative metrics

Altmetrics, as alternative metrics are known, measure impact beyond the citations in scholarly, peer-reviewed journal articles. For most altmetric scores, a web crawler scans for mentions of scholarship in sources such as social media, news, blogs, citation managers, and patents. Altmetrics provide an interesting and useful supplement to traditional bibliometrics.

Altmetrics also provide recognition of research in venues other than traditional peer-reviewed publications. The dissemination of scholarly ideas through primarily online outlets happens much more rapidly than through the peer-review system of scholarship. Altmetrics can be provided in a much more timely way, within weeks or even days after scholarship is put forth, instead of the years it can take for a publication’s scholarly citation count to accumulate.

Altmetrics enable broader awareness of online discussions about research efforts. The information provided by altmetric indicators helps to round out the story about scholarship and research that citation indicators begin. All of the various altmetric scores are subject to frequent enhancements and changes as the creators of these indicators develop new and interesting ways to capture and analyze information about the spread of scholarly ideas.

The Altmetrics Manifesto, published by Jason Priem et al. in 2010, outlines certain challenges with the existing system of impact measurement and calls for new metrics to capture new uses of scholarship (altmetrics.org/manifesto). A Jan. 10, 2017, JAMA article notes, “Altmetrics are increasingly recognized tools that aim to measure the real-time reach and influence of an academic article” (Vol. 317, No. 2, pp. 131–132).

Critiques of altmetrics

Complicating the value of various altmetric indicators, scholars in different disciplines congregate on different platforms on the web. Some disciplines have a high presence on Twitter; others prefer to showcase their activity on SlideShare, GitHub, or blogs. If an altmetric indicator does not favorably weight activity on a discipline’s preferred platform, or gives more favorable weight to platforms that discipline underutilizes, it may not be a particularly reliable indicator of impact.

With references from altmetric sources, we can’t always tell who is referencing the research. Traditional scholarly, peer-reviewed citing references are documented in standard citation styles: we know exactly who cited a given piece of scholarship, their affiliation, and where the citers themselves published. With altmetrics, extra digging is required to see whether it is members of the public, policymakers, academics, or other entities who are spreading information about a particular scholarship item. With platforms such as Twitter, we may be able to tell the geographic location and other source data of a referencing tweet, but we may lack the tweeter’s real name, affiliation, and so forth.

Critics of altmetrics sometimes point to a lack of standardization of how impact scores are determined. The means by which the altmetric was developed is sometimes a proprietary “secret sauce” that is not transparent to those trying to understand the metric.

Automated “bots” exist in social media that inflate mentions and therefore altmetric scores. This is often touted as a major concern about the reliability of altmetric indicators, and some naysayers discount altmetrics solely on this issue. However, these bots tend to produce canned metadata with patterns that make them easy for altmetric crawlers to omit from their calculations. As previously noted, gaming the metrics is not a problem unique to altmetrics; there is fraud with citation metrics as well. Removing false references, whether in an altmetric or a citation metric source, remains an area for continued vigilance.

Dimensions (dimensions.ai) is a new tool that uses linked datasets and modern data science principles to provide integrated and comprehensive information about scholarly research from a number of platforms. This very robust tool can help map research inputs, such as grant funding, to research outputs, such as publications and patents. It aggregates OA versions of publications where possible and provides citation and impact data. This tool is free for individual users, and there is also a subscription-based institutional version.

Roles for librarians and info pros

Libraries, archives, and museums remain challenged to demonstrate their value to various constituencies. What can librarians and info pros do to ensure the responsible and effective use of scholarly impact metrics? We can promote what the authors will term “impact literacy” by understanding how the underlying statistics inform the metrics, and we can help users understand the strengths and limitations of various indicators. We can provide context for, and promote the responsible use of, metrics both within and outside traditional academic environments. We can encourage use of metrics that follow the Leiden Manifesto (leidenmanifesto.org). This manifesto identifies best practices for using scholarly metrics, including these:

  • Favor metrics with transparent methodologies.
  • Use altmetrics as a supplement to other measures of impact and quality.
  • Contextualize the metric within the greater schema in which the scholarship exists.

“Impact literacy” can help librarians stay current and relevant in the research and scholarly publications process. In-depth knowledge of scholarly metrics is a highly valued skill set that supports the research mission of a variety of institutions beyond the academic sphere. Newer citation metrics and most altmetrics are still relatively unfamiliar to many. By remaining impartial to the outcome of a metric while being expert in what a given indicator truly means, librarians and information professionals can demonstrate meaningful value that gets at the core mission of their research centers and organizations.

Endnotes

1. For an excellent history of ISI, see Bonnie Lawlor’s chapter, “The Institute for Scientific Information: A Brief History,” in The Future of the History of Chemical Information. Washington, DC: American Chemical Society, 2014, pp. 109–126; DOI:10.1021/bk-2014-1164.

2. Bollen, J. and H. Van de Sompel, “Usage Impact Factor: The Effects of Sample Characteristics on Usage-Based Impact Metrics,” Journal of the American Society for Information Science and Technology, Vol. 59, No. 1, January 2008, pp. 136–149; DOI:10.1002/asi.20746.




Elaine Lasda is coordinator for scholarly communication and associate librarian for research impact and social welfare, University at Albany, SUNY.

Richard P. Hulser is president, Richard P. Hulser Consulting.

 

Comments? Contact the editors at editors@onlinesearcher.net
