ONLINE SEARCHER: Information Discovery, Technology, Strategies

Local Isn’t What It Used to Be: Finding Community News in the Algorithmic Era
By Amy Affelt
July/August 2020 Issue

GOING NEXT DOOR

Nextdoor (nextdoor.com) claims to be “the neighborhood hub for trusted connections and the exchange of helpful information, goods, and services.” Prior to the start of the coronavirus crisis, it seemed not unlike a local online garage sale combined with over-the-fence discussions of neighborhood crime and other developments. However, in the nascent stages of the crisis, as panic-buying and hoarding led to a nationwide shortage of toilet paper, Nextdoor became a vital source of critical information, as local stores and helpful neighbors would post when new shipments arrived. It can also be a source of little-known local news in its preliminary stages and serve as “a window into daily life around the country,” claims Ian Bogost in The Atlantic’s July/August 2018 issue (“What Petty Nextdoor Posts Reveal About America”; theatlantic.com/magazine/archive/2018/07/nextdoor-american-communities/561746).

According to information Bogost gleaned from Steve Wymer, Nextdoor’s vice president of policy, 30% of Nextdoor discussions involve service requests and recommendations, 20% involve real estate, and 10% revolve around crime and safety. More than 180,000 communities, including 90% of the 25 largest cities in the U.S., have a presence on Nextdoor, making it reasonably representative of the nation as a whole. The resulting conversations may serve as a litmus test of the breadth of informal discussion in America.

Is this useful for local community research? In order to join a Nextdoor community, you have to prove that you actually live there (by entering an access code delivered by text), so it would be hard to research a neighborhood in which you do not live. However, if you are a librarian or information professional who needs information on your own community, it could serve not only as a good barometer of local concerns but also as a guide to other sources.

ALGORITHMIC CONTENT

An additional trend, both in local reporting and in news in general, is the rise of algorithmically produced content. Essentially, computer programs analyze data, look for patterns, and produce narrative text that can serve as a real-time news story. According to research by the Tow Center for Digital Journalism at Columbia University, “Automated journalism is most useful in generating routine news stories for repetitive topics for which clean, accurate, and structured data are available” (Andreas Graefe, “Guide to Automated Journalism,” Tow Center, Columbia University, Jan. 7, 2016; cjr.org/tow_center_reports/guide_to_automated_journalism.php). However, if data quality is poor or sparse, the resulting stories may have biases or errors. Therefore, computer-generated content is most helpful when news consumers need quick answers generated from reliable data sources. Human journalists will still be needed when in-depth analysis and long-form narrative are required.
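
To make the mechanics concrete, here is a minimal sketch, in Python, of the template-filling approach such systems use: structured data is slotted into prewritten sentence patterns to yield a routine story. The company name, figures, and template below are invented for illustration and do not come from any news organization’s actual system.

EARNINGS_TEMPLATE = (
    "{company} reported quarterly revenue of ${revenue:.1f} million, "
    "{direction} {change:.1f}% from the same quarter a year earlier. "
    "Earnings per share came in at ${eps:.2f}."
)

def earnings_story(record: dict) -> str:
    """Turn one row of structured earnings data into a short narrative."""
    change = record["revenue_change_pct"]
    return EARNINGS_TEMPLATE.format(
        company=record["company"],
        revenue=record["revenue_millions"],
        direction="up" if change >= 0 else "down",
        change=abs(change),
        eps=record["eps"],
    )

if __name__ == "__main__":
    # Fabricated sample record, for illustration only
    sample = {
        "company": "Example Corp.",
        "revenue_millions": 412.3,
        "revenue_change_pct": 6.4,
        "eps": 1.87,
    }
    print(earnings_story(sample))

The quality of the output is only as good as the data feed behind it, which is why the Tow Center ties automated journalism to “clean, accurate, and structured data.”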

The Bloomberg News agency was one of the earliest adopters of AI-produced content. Many Bloomberg stories are essentially reports of company-released earnings and revenue forecasts that are fed into Bloomberg’s Cyborg program, resulting in an automated story. The Washington Post has used algorithms to report election results as they are tabulated, and The Los Angeles Times uses AI to turn U.S. Geological Survey earthquake data into warning bulletins. What was once a spreadsheet or list of data is now formatted as a news story with basic text added along with the numbers (Nicole Martin, “Did a Robot Write This? How AI Is Impacting Journalism,” Forbes, Feb. 8, 2019; forbes.com/sites/nicolemartin1/2019/02/08/did-a-robot-write-this-how-ai-is-impacting-journalism/#a42377477957).
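
In the same spirit, a rough Python sketch of an earthquake bulletin generator might look like the following. The field names (mag, place, time) mirror the public USGS GeoJSON feed, but the code is only an illustration of the general approach, not the Los Angeles Times’ actual system.

from datetime import datetime, timezone
from typing import Optional

def quake_bulletin(feature: dict, min_magnitude: float = 3.0) -> Optional[str]:
    """Turn one earthquake record into a one-sentence bulletin, or skip it."""
    props = feature["properties"]
    mag = props.get("mag")
    if mag is None or mag < min_magnitude:
        return None  # too small to report
    # USGS timestamps are expressed in milliseconds since the epoch
    when = datetime.fromtimestamp(props["time"] / 1000, tz=timezone.utc)
    return (
        f"A magnitude {mag:.1f} earthquake was reported {props['place']} "
        f"at {when:%H:%M} UTC on {when:%B %d, %Y}, according to the "
        "U.S. Geological Survey."
    )

if __name__ == "__main__":
    # Fabricated sample record in the shape of a USGS GeoJSON feature
    sample = {"properties": {"mag": 4.2,
                             "place": "10km NW of Exampleville, CA",
                             "time": 1593600000000}}
    print(quake_bulletin(sample))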

It is easy to understand why journalists might see this as a threat to their livelihoods, but as the Martin article goes on to describe, news executives have stated that their purpose is to give assistance to, not replace, journalists. If computer programs and templates can be used to write quarterly company earnings announcements and sports box scores, reporters will have more time to analyze company activity and strategy and produce in-depth profiles of athletes. One would hope, however, that fact-checking and research always remain central tenets of the work of content creators, whether they are machines or humans. “The work of journalism is creative, it’s about curiosity, it’s about storytelling, it’s about digging and holding governments accountable, it’s critical thinking, it’s judgment—and that is where we want our journalists spending their energy,” Lisa Gibbs, the director of news partnerships for the Associated Press, told Martin.

SEARCHING LOCAL

Local news faces many difficulties, financial and otherwise. This contributes to the frustration of reference librarians asked to “search regional papers.” You can’t search what isn’t there. The need for creativity, and for looking at alternative sources such as social media and university newspapers, has never been greater. It’s not just politics that is local; news about businesses, the economy, health statistics, crime, schools, and a host of other issues is local as well. If local newspapers go out of business, are replaced by imposter sites, or suffer greatly diminished news coverage, librarians must be receptive to new sources and diligent in exposing false ones.

WHO OWNS ALGORITHMIC NEWS?

Some of the thorniest issues for journalists, news organizations, and information consumers regarding automated content are those of responsibility and ownership. For example, who is responsible for errors in computer-generated news stories? The computer programmer who wrote the algorithm? The journalist who used the algorithmic content within the body of a story? The publisher? Would publishers blame their AI content-generation software and possibly sue the company that sold it to them?

The debate surrounding ownership centers on the question of whether or not content generated by AI is protected by copyright. Currently, the U.S. Copyright Office has a “human authorship requirement” and will only register original works created by human beings. Its most current compendium states, “The copyright law only protects ‘the fruits of intellectual labor’ that ‘are founded in the creative powers of the mind.’ Because copyright law is limited to ‘original intellectual conceptions of the author,’ the Office will refuse to register a claim if it determines that a human being did not create the work” (copyright.gov/comp3/chap300/ch300-copyrightable-authorship.pdf).

However, law surrounding AI is rapidly changing, and in August 2019, the U.S. Patent and Trademark Office published a Federal Register notice (tinyurl.com/ycme98vx) asking experts and the public to weigh in on “whether new forms of intellectual property protection are needed.” The notice asked respondents to consider a list of questions, these among them:

  • What is an AI invention and what does it contain?
  • What are the different ways that a natural person can contribute to conception of an AI invention and be eligible to be a named inventor?
  • Do current patent laws and regulations regarding inventorship need to be revised to take into account inventions where an entity or entities other than a natural person contribute to the conception of an invention?
  • Should an entity or entities other than a natural person, or company to which a natural person assigns an invention, be able to own a patent on the AI invention?
  • Are there any new forms of intellectual property protections that are needed for AI inventions such as data protections?

CANARIES IN THE AI COAL MINE

Oren Etzioni, CEO of the Allen Institute for AI and a University of Washington professor of computer science, identifies “canaries in the coal mine of AI” in the Feb. 25, 2020, issue of MIT Technology Review (“How to Know if Artificial Intelligence Is About to Destroy Civilization”; technologyreview.com/s/615264/artificial-intelligence-destroy-civilization-canaries-robot-overlords-take-over-world-ai). His canaries are tasks that, once a machine can accomplish them, signal that an AI breakthrough is near. Interestingly, one of the canaries discussed is “substantive dialogue on a rich topic,” since at present, AI has “a limited understanding of language and the world.” To illustrate the point, Etzioni asked Alexa: “My trophy doesn’t fit into my carry-on because it is too large. What should I do?” Alexa responded, “I don’t know that one.” The problem is that Alexa cannot tell whether “it” refers to the trophy or the carry-on. Once she can, we have a dead canary, and perhaps, higher-quality automated content.



Amy Affelt is director, Database Research, Compass Lexecon and author of The Accidental Data Scientist: Big Data Applications and Opportunities for Librarians and Information Professionals (Information Today, 2015).


Comments? Contact the editors at editors@onlinesearcher.net
