
ONLINE SEARCHER: Information Discovery, Technology, Strategies

Set Your Cites High: The Value of Quality Citation Information
By Amy Affelt and David Pauwels
Volume 40, Number 5 - September/October 2016

Web of Science

Is Web of Science “the world’s most trusted citation index”? Thomson Reuters states that it is, and, given that its antecedents date back 50 years, it makes a good case. With a collection of 12,700 journals, some with records back to 1900, Web of Science has become something of an industry standard.

The foundation of the Web of Science platform is the Core Collection, which consists of eight citation indices that include the Social Science Citation Index, Science Citation Index, and Conference Proceedings Citation Index. This collection can be supplemented with more focused subject-area indices or regional and language-specific databases. With options ranging from Food Science and Technology Abstracts to the Chinese Science Citation Database, you can tailor Web of Science to your needs by adding these supplementary modules to your Core Collection, paying only for collections relevant to your field. The additional upside is that searching across these diverse collections remains consistent thanks to standardized indexing.

Like Scopus, Web of Science makes it very clear that it maintains a rigorous standard for inclusion in the database. Thomson Reuters claims that of the 3,000–4,000 journals it reviews each year, only 10%–12% make the cut. Each included journal is indexed from cover to cover. And, unlike Google Scholar, Web of Science is happy to let you know that the database remains “free from proprietary involvement.” In Thomson Reuters’ view, this allows for more objective research, and it goes hand in hand with human curation to ensure quality results.

Search functionality is straightforward. You can choose “all databases,” the Core Collection, or any individual index to which you have access. There is an option to select Cited Reference Search in the main query box. You can limit the results with focused criteria, such as author affiliation or subject area. Results include a Citation Network for each article that provides numbers for cited references and citations within other Web of Science databases. Finally, you can generate reports that include number of citations, average citations per item and year, and an h-index score: the largest number h such that h of a scholar’s articles have each been cited at least h times, a measure of both productivity and citation impact.
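That h-index definition is simple enough to compute yourself from a list of per-article citation counts. As a minimal sketch (the function name and sample numbers here are illustrative, not drawn from any Web of Science report):

```python
def h_index(citations):
    """Return the largest h such that h articles have at least h citations each."""
    # Sort citation counts from highest to lowest.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the rank-th most-cited article still has >= rank citations
        else:
            break
    return h

# Example: five articles cited 10, 8, 5, 4, and 3 times.
# Four articles have at least 4 citations each, but not five with 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

This also illustrates why the h-index rewards sustained impact: one blockbuster article cited thousands of times cannot, by itself, push the score above 1.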

What we don’t know about Web of Science is how it might change under new ownership, now that its acquisition by private equity firms Onex and Baring Asia is in progress. [Read Richard P. Hulser’s article “Private Equity’s Effect on Information Professionals and Their Institutions” in the November/December 2014 issue of Online Searcher for more information on private equity firms’ acquisition of information companies. —Ed.]

Best Practices for Citation Research

Despite the focus on algorithms as the latest and greatest way to tabulate and analyze research results, the human element remains paramount when deciding whether to use cited references in research. There is no substitute for actually reading the references to parse out their connection to the article being cited. We have to be completely certain that the references reinforce the point we are trying to make.

In litigation or other high-stakes situations, it is probably best to use a cited reference database with a completely transparent methodology. When giving an exact number of citations, we need to be able to stand behind these numbers and provide a complete list of those citations comprising that number if questioned. Having said that, it is possible the only database containing citations to your article is one that estimates numbers and is unable to point to the underlying articles that cite your main article. In this case, be sure to explain how the database works, and issue a strongly worded caveat that the number is just an estimate.

There are, however, ways to make this sound more rigorous, if needed. For example, if you have a high number of citations, you can say something like, “Even though the numbers are estimated, the database states that this article has been cited more than 12,000 times, so it is definitely widely read and used,” or, “Even though the numbers are estimated, there are only three citations. This article is not widely circulated.” Also, when using an estimation database that contains many citations to an article, be sure to scan at least the first 25 or so for context. You want to make sure you are not holding up an article as a gold standard when it really is being cited repeatedly in the context of shoddy research.

So, which is it? Are cited reference algorithms cool, or should you stick with old school? VHS tapes or streaming video? As with most research tools, the answer is both. Algorithms can massage the data on citations quickly and easily, providing the user with a broad picture of the quality of an article or the lack thereof. When attention to detail is critical, however, and every single reference makes an argument stronger or weaker, it is very important to understand exactly how the database works and how the citation counts are tabulated. Perhaps in the future, Google Scholar and other wide-swath resources will become more nuanced and carefully documented. We can only hope. Set your cites high!


Amy Affelt is director, Database Research, Compass Lexecon and author of The Accidental Data Scientist: Big Data Applications and Opportunities for Librarians and Information Professionals (Information Today, 2015).

David Pauwels is senior information professional, Compass Lexecon.

