
Computers in Libraries
Vol. 43 No. 7 — September 2023

METRICS MASHUP

Metrics or Expert Judgment? The Case of Clickbait
by Elaine M. Lasda


I recently read All the News That’s Fit to Click: How Metrics Are Transforming the Work of Journalists, by Caitlin Petre (Princeton University Press, 2021), a book recommended to me a long time ago by my esteemed editor, Marydee Ojala. She urged me, back when I started writing this column, to look at metrics more broadly than only those pertaining to scientific and academic research. I have avoided doing so. Research impact is my comfort zone.

According to my library card record, I requested Petre’s book on Dec. 13, 2021. I then tossed it in a pile and ignored it. Perhaps the downside to the new auto-renewal policy at my library is this decided lack of urgency to return anything, which only encourages my procrastination habit. So here we are, mid-2023, and I am finally reading the book. It is fascinating, and it fits in with a body of work on the increasing reliance of various sectors of society on data- or metrics-driven decision making and the accompanying perils and pitfalls of that reliance.

PERVASIVENESS OF METRICS

I have referenced The Tyranny of Metrics, by Jerry Z. Muller (Princeton University Press, 2018), before in this column. In his book, he considers a wide swath of data-driven sectors, such as education, medicine, business, government, police/military, and charitable giving. He discusses cases in which metrics are misconceived, misapplied, or misunderstood. Muller invokes Goodhart’s Law: “When a measure becomes a target, it ceases to be a good measure” (en.wikipedia.org/wiki/Goodhart%27s_law). Think of it this way: If a process or activity is rewarded by how it is measured, it will be subject to gaming strategies.

Gaming in this context has nothing to do with video or tabletop games; it means gaming the system by manipulating the metrics to achieve a desired outcome. This can be done in a variety of ways, some shadier than others. Cheating, omitting data from reports, and falsifying data are on the shadier end. Cherry-picking cases, teaching to the test, and co-citation agreements may be less shady, but that is debatable. The point is that bad metrics, rather than supporting the sound, nuanced, beneficial decisions that come from professional judgment and autonomy, can oversimplify complex processes, impose standardization on situations that do not standardize well, and crowd out assessment by those with subject matter expertise in a field.
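To make gaming concrete, here is a small Python sketch of cherry-picking, the mildest trick on the list. The scenario and numbers are invented for illustration; nothing here comes from Muller or Petre.

    import statistics

    # Hypothetical case-resolution times, in days, for a unit whose
    # performance is judged by its average resolution time.
    all_cases = [1, 2, 2, 3, 4, 9, 14, 21]

    # Honest report: every case counts toward the metric.
    print(statistics.mean(all_cases))   # 7.0 days

    # Cherry-picked report: the four slowest cases are quietly
    # reclassified as "out of scope" and dropped from the data.
    kept = sorted(all_cases)[:4]
    print(statistics.mean(kept))        # 2.0 days

No data points were falsified, yet the reported average improved by a factor of 3.5. The measure hit its target and stopped describing the underlying work, which is Goodhart’s Law in miniature.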

THE CASE OF JOURNALISM

Many of Muller’s themes are echoed in Petre’s journalism-specific tome. Petre, an assistant professor of journalism and media studies at Rutgers University, focuses on three entities in journalism: Chartbeat (chartbeat.com), a metrics platform for online news sites; Gawker (gawker.com), a highly metrics-driven source for online information; and The New York Times (nytimes.com), the venerable and long-established “newspaper of record.”

According to its website, Chartbeat offers “real-time and historical dashboards, in-depth headline testing and in-page optimization tools, robust reporting, and more [that] help the world’s leading media organizations understand, measure, and build business value from the attention earned by their written and video content” (chartbeat.com/company).

Gawker Media, founded by Nick Denton, was once considered an innovator in digital publishing, presaging today’s online news environment. Gawker later became the target of well-funded lawsuits that succeeded in significantly devaluing it as a business. It declared bankruptcy and shut down in August 2016 (nytimes.com/2016/05/26/business/dealbook/peter-thiel-tech-billionaire-reveals-secret-war-with-gawker.html and nytimes.com/2016/08/25/technology/gawkers-gone-long-live-gawker.html).

Bryan Goldberg’s BDG (Bustle Digital Group; bdg.com) acquired Gawker in 2018. BDG’s strategy has been to aggressively acquire content providers such as Scary Mommy, NYLON, Inverse, Elite Daily, and other quasi-news, quasi-entertainment websites (bloomberg.com/news/features/2019-06-10/bryan-goldberg-would-like-to-buy-your-dying-online-media-property). BDG shut down the new incarnation of Gawker in 2023 (variety.com/2023/digital/news/gawker-shutting-down-1235509262).

DATA-DRIVEN JOURNALISM

At the time of Petre’s research, Gawker’s editorial strategy was deeply integrated with its Chartbeat dashboard metrics. I presume BDG sites are also Chartbeat data-driven, but Gawker was one of the first to provide its reporters and content providers with the Chartbeat metrics, which update in real time. This created several issues that are echoed in both Muller’s research and texts about impact metrics in the academic and scientific research enterprise.

There was no real science behind predicting which online posts would generate the most traffic. Petre describes the commonly circulated strategies for generating high click counts or web traffic as “folk theories,” because even with the Chartbeat metrics in hand, the data offered no reliable way to predict which content would go viral and which would not. She partly attributes this to an absence of metrics that capture causality. Thinkers in the big data world have claimed that vast amounts of data render causality irrelevant. As a result, folks at Gawker, journalists and editors/managers alike, really did not have a sense of what the numbers were truly reflecting. (For a worthwhile opinion piece by Srinath Perera refuting the irrelevance of causality, go to medium.com/making-sense-of-data/understanding-causality-and-big-data-complexities-challenges-and-tradeoffs-db6755e8e220.) Again, like Muller, Petre points to the de-skilling of management through overreliance on metrics without deep subject matter knowledge: stakeholders are making decisions based on data they don’t understand.

REAL-TIME COMPULSION

This sounds familiar to me. We wouldn’t have DORA (the Declaration on Research Assessment), the Leiden Manifesto, or columns like this one about the responsible use of research impact metrics without the backlash over bibliometrics and altmetrics being used by entities that don’t understand the metrics’ limits. For example, scientists in some fields are increasingly rejecting the h-index (blogs.lse.ac.uk/impactofsocialsciences/2014/03/31/four-reasons-to-stop-caring-so-much-about-the-h-index).
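For readers unfamiliar with it, the h-index is the largest number h such that an author has h papers cited at least h times each. A minimal Python sketch, using invented citation counts, shows both the definition and its bluntness:

    def h_index(citations):
        # Largest h such that h papers each have at least h citations.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
        return h

    # Five hypothetical papers: one blockbuster, four modest performers.
    print(h_index([90, 6, 4, 3, 1]))   # -> 3

Note that the 90-citation paper counts no more toward the result than a paper with 4 citations would; that flattening is one reason critics consider the h-index a blunt instrument.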

A feature of Chartbeat that is not present in, say, Web of Science or Scopus is real-time updating. Compared to online publishing, academic research is disseminated at a glacial pace; in science, we usually examine metrics covering a period of years, while Chartbeat gives up-to-the-minute counts of clicks, pageviews, and more. This led some rank-and-file writers at Gawker to become compulsive about checking the dashboard and their own personal statistics. Writers downloaded the app to their phones and checked it repeatedly. In Petre’s view, this compulsion to constantly check numbers you don’t fully understand, metrics that provide neither context nor causality for the data they render, eventually erodes autonomy and judgment and de-skills decision makers. Sound familiar? Only … maybe worse? Yeah, me too.

CLICK DATA IS POWER

In her ethnographic studies of media organizations, Petre had the least access to The New York Times; a predisposition to restricting access is apparently part of the culture there. For example, the Chartbeat dashboard was not available to all journalists at the Times, ostensibly because Times editors and management wanted journalists to be free to cover the stories they felt were important, whether or not those stories generated heavy clicks and pageviews.

However, management and editors had full access to the dashboards and used the metrics to make higher-level decisions. This created an additional power dynamic and a barrier between the editorial and reporting staffs. Petre notes situations in which editors produced numbers during employee reviews to argue that a reporter was not up to par, numbers the reporter had never been able to see. So, while Times reporters were not addicted to Chartbeat’s real-time metrics, they lived under the shadow of metrics seen primarily by management.

STANDARDIZATION VS. AUTONOMY

The comparison of Gawker and the Times shows us an interesting dichotomy. Gawker writers felt they had autonomy because they could write whatever they wanted, provided their numbers were “good,” whereas Times reporters felt they had autonomy because their editorial team afforded them leeway without being beholden to the metrics. Petre quotes one Times reporter likening having an article placed prominently on the Times website to being validated by one’s peers. (Hmm. A peer review of sorts?)

The discussion in Petre’s book boils down to the same issue in Muller’s book and the same issue for scientific researchers: Certain functions in society require judgment and autonomy, and there are limits to how these functions can be standardized. Shoehorning research or teaching or medical practice or journalistic reporting solely into a standard set of metric “boxes” does nothing to improve the quality of these functions. In fact, in many cases, it is a downright harmful practice.

Am I saying that having no metrics is best in certain fields? Am I saying autonomy trumps standardization every time? No. If that were the case, in research writing, we would not have style manuals or a commonly accepted format for research articles. Some standardization can provide a level of quality control, and some metrics can reflect that level of quality. But there needs to be a balance. Autonomy, reputation, and judgment matter but do not tell the whole story. Data, metrics, and indicators also provide valuable information and likewise do not tell the whole story.

The two approaches need to be used in tandem. Our society’s current overreliance on metrics may be devaluing expert opinion, deep subject matter knowledge, and decisions based on wisdom garnered from years in the field. As with so many situations, the so-called middle path gives us the best information and understanding of content, be it clickbait, news of record, or scientific research.

Elaine M. Lasda
(elasda@albany.edu) is coordinator for scholarly communication and associate librarian for research impact and social welfare, University at Albany, SUNY.

Comments? Email Marydee Ojala (marydee@xmission.com), editor, Online Searcher.