Magazines > Searcher > December 2011
Vol. 19 No. 10 — December 2011
SEARCHER'S VOICE
The Second Story
by Barbara Quint
Editor, Searcher

The National Library of Medicine (NLM), for decades a model for producing, maintaining, and expanding the reach of quality data, has always had a full array of MeSH subject headings allotted to tracking the erroneous. Clearly, it tracks literature on scientific and medical misconduct and research ethics. But NLM does more than that.

Reportedly, the NLM has an end-of-year exercise that applies caveat markings to the previous year’s slip-ups. This extends the inquiry beyond articles discussing publication and research ethics, e.g., those tagged with a subject heading such as “Retraction of Publication as Topic.” It actually brands articles in the Publication Type field as “Retraction of Publication” and “Retracted Publication.”

The advantage to this approach is twofold. First, it alerts users who have found content based on their own subject interests that individual articles may look good, but watch out. Second, it lets scholars of research ethics, wary sponsors of research, and even muckraking journalists find the stinkers. I mean that you can create a search strategy that will find the dirty linen so you can look for patterns, identify miscreants, and maybe — just maybe — prevent such misconduct from occurring again. Wouldn’t that be nice? Particularly in a world and a subject area where so many nonexperts are searching on their own these days and making major, even life-threatening decisions based on their findings.
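To make the point concrete, here is a minimal sketch of such a search strategy against PubMed via NCBI's E-utilities. It only constructs the query URL; the subject term and the result limit are illustrative assumptions, while the `"Retracted Publication"[Publication Type]` restriction is the actual MeSH-era branding the column describes.

```python
from urllib.parse import urlencode

# NCBI E-utilities esearch endpoint (public, no key needed for light use).
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def retraction_query_url(subject_term: str) -> str:
    """Build an esearch URL restricted to retracted articles on a subject.

    Combines the searcher's own interest with the Publication Type brand
    NLM applies to retracted papers.
    """
    term = f'({subject_term}) AND "Retracted Publication"[Publication Type]'
    return ESEARCH + "?" + urlencode({"db": "pubmed", "term": term, "retmax": 20})

print(retraction_query_url("stem cells"))
```

Paste the printed URL into a browser and the XML response lists only PMIDs of retracted articles on that subject — the "dirty linen" search, in one line of query syntax.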

But how are you going to do those things if error — whether malicious or just incompetent — is not traceable? The motivation to cover up an error is intense. In the case of research scholarship, the primitive, cover-thy-ass compulsion has an easy and somewhat true rationalization available, namely the need to eliminate the dissemination of error. And the technology to perform such laudable laundering is absolutely in place. When Science magazine got hoaxed by some South Korean stem cell researcher in 2005, it jumped onto the database archive and stripped out the falsehood-bearing article.

This is very efficient when it comes to shutting down error dissemination. You couldn’t do as well in print. In printed sources, notices of retraction or disavowal or simple corrections all appear in future issues, which the reader may or may not see, particularly since the notices are usually so much smaller than a full article. When a printed source goes online, only a rigorous policy of attaching such notices to each display of the original article will do the job properly. And that takes some programming.
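The "some programming" in question is not exotic. A toy sketch, with invented record fields and sample data: index retraction notices by the article they point at, then attach the notice to every display of the original.

```python
# Hypothetical records: each has an id and a title; a retraction notice
# additionally carries a "retracts" field naming its target article.
records = [
    {"id": "A1", "title": "Original stem cell article"},
    {"id": "N9", "title": "Notice of retraction", "retracts": "A1"},
]

# Build the notice index once, at load time.
notices = {r["retracts"]: r for r in records if "retracts" in r}

def display(article_id: str) -> str:
    """Render a record, automatically flagging it if a notice targets it."""
    article = next(r for r in records if r["id"] == article_id)
    line = article["title"]
    if article_id in notices:
        line += "  [RETRACTED -- see: " + notices[article_id]["title"] + "]"
    return line

print(display("A1"))
```

The original article stays in the record, warts and all, but no reader can pull it up without seeing the warning, which is exactly what a future print issue's buried notice cannot guarantee.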

But it might be well worth the effort if you want scholarship to have a full and accurate historical record, the kind of record that humanity has relied on academic libraries to maintain. If you want those research ethics scholars to find evidence easily enough and quickly enough to study how errors, and even frauds, occur, you need a diligently and thoroughly maintained record. If you want those wary research sponsors to avoid providing money and positions to people who have failed flamboyantly in the past (sometimes the blame game is the right game to play), the evidence of past failures must be as traceable as past successes. If you want muckraking journalists to give you something exciting to read with your breakfast cereal, something that can remind you and others of the need to check and recheck, to verify the research they gather, something that can frighten ersatz scholars who might be considering an act of chicanery or just a sloppy, incomplete research effort, you’ve got to make evidence of such acts available. In the case of the journalists, the verification process has got to be solid and visible; otherwise, they might turn to rumor mills for a “good” story and end up generating more embarrassment than truth.

The thing about today’s information technology is that it’s given everyone the tools to publish whatever they want to publish. The pecking order of quality assessment now occurs not so much before publication as after. No longer can editors and peer reviewers and publishers block the appearance of content, as they could in the days of print. Clearly, there are advantages to society in this expansion of content flow, but the disadvantages are just as clear.

The problem is that the ability of librarians, particularly academic librarians, to stand as bastions of the full historical record of scholarship is rapidly being eroded by the reality of digital licensing. In this issue, Chuck Hamaker, a long-time and battle-weary bastion himself, identifies the troubles being created by the publisher-driven collections of ebooks. Most of the problems have already hit and infested journal literature.

It’s not just budgets that are in danger, it’s the historical record itself. Most licenses carry clauses that allow the publisher to change and alter individual items at will. Electronic content “in” libraries these days usually doesn’t exist “in” the library at all. It exists on a publisher’s or a vendor’s machines, with library licenses allowing library clients access. This gives the publisher lots of control, but it also exposes the publisher to pressures that could push it into practices librarians would not approve of, even practices librarians would raise a stink fighting. If the government of a country in which a publisher does business wants something changed, who can say what a publisher might do, particularly if the change were easy and almost invisible? If an author wanted a mistake corrected with no publicity, why not? Once again, it’s just making the piece more accurate — at least that’s what the author told the publisher. And if a publisher tells a third-party vendor to make content changes with the vague but omnipresent threat of pulling content, what do you think the database aggregator is going to do, particularly if he cannot even go to his clients to help counter the publisher?

Is there any way to stop this threat to scholarship? Well, let’s think. We could insist on downloaded copies of licensed content, copies that could be checked against current content versions for changes. Academic libraries might join together to demand and even fund a central collection of licensed material to conduct such verification on a regular basis. It might become another Underwriters Laboratories label of approval. Hmm. This kind of program might even handle open access sources and give some of them a degree of verified authenticity that currently only the high-priced products of scholarly publishers can claim. And that might serve two goals: It could guarantee the quality of specific open access content in a world where only open access seems to offer any long-term relief for academic libraries from murderous pricing. And the guaranteeing of open access sources might put more pressure on scholarly publishers to create — or succumb to — guarantees themselves.
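The checking-for-changes part of that proposal is a solved problem in software: a cryptographic digest of the archived copy, compared against a digest of whatever the publisher serves today. A minimal sketch, with invented article text standing in for the licensed content:

```python
import hashlib

def fingerprint(text: str) -> str:
    """SHA-256 digest of a document; any silent alteration changes it."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# The copy the library downloaded at licensing time...
archived = "Conclusion: the effect was not statistically significant."
# ...versus what the publisher's server delivers today.
served = "Conclusion: the effect was statistically significant."

if fingerprint(archived) == fingerprint(served):
    print("matches archived copy")
else:
    print("ALTERED since licensing -- investigate")
```

The digests are cheap to store and compare at scale, so a consortium could sweep its whole licensed collection on a schedule; what still takes policy, not programming, is deciding what to do when the digests disagree.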

But one way or another, we’ve got a problem to solve here, and we need solutions in place fast.


— bq

Barbara Quint's e-mail address is bquint@mindspring.com.
