
ONLINE SEARCHER: Information Discovery, Technology, Strategies


Artificial Intelligence and Online Searching
July/August 2018 Issue

Artificial intelligence (AI): It’s a universal concept, yet its meaning is highly personal. Episode one of Netflix’s Lost in Space series involved a robot who, depending on the character with whom it interacts, is alternately portrayed as quasi-human, a gadget, a rival, or a weapon. The role that we each assign to a robot reveals much about our individual viewpoints regarding AI and what it means, not only for us personally, but also professionally.

It is true that a lot of us think of robots when we hear the term “AI,” but in an information industry context, that’s not enough. “Big Data” became a huge buzz phrase a few years ago, referring to the collection of massive amounts of data in myriad formats. Eventually, the discussion of Big Data evolved into the “Internet of Things,” or data collected from deliberately placed sensors located everywhere, from medical equipment and vending machines to highways and human beings. In a natural progression, AI is the trendy term now, and it can be thought of as the algorithm or computer code that manipulates this collected data and combines it with machine learning to answer questions and try to solve problems.

We know that AI affects our daily lives in countless ways, but we now hear warning signals that it will replace our jobs. Variations of the headline “A Robot Will Take Your Job” have run in The New York Times, Business Insider, Fortune, and many other publications. It seems as if no profession is immune. If Alexa can tell us the weather, who needs television meteorologists? If Siri can play “Walking on Sunshine” after being asked, we can ditch that ’80s radio station and the annoying commercials it airs. Computers can read CT scans, so bye-bye radiologists. An algorithm can quickly peruse legal documents and flag non-standardized language, so start packing, paralegals. Right? Not so fast.

These warnings remind me of “The sky is falling” scenario we heard repeatedly in the mid-1990s—if anyone could search the internet, why would we need librarians? The short answer was that sure, anyone can search the internet, but we do it professionally. We know which keywords to look for and which to eliminate. We know which publications are credible and which are fake news. The high-quality, citable information that we provide comes from sources that may never appear in a list of Google results. We add value to the research that we find by summarizing key concepts and writing reports that analyze information. We create visuals by transforming data into a single image that depicts hundreds of concepts. Just like insurance and travel agents, whose demise seemed likely in an internet world, librarians marketed and showcased their expertise and survived and thrived.

When I first started thinking about the possibilities of AI-assisted research, I wondered whether it was much ado about nothing. AI enables users to query a database: Haven’t librarians been doing that since the days of acoustic couplers and dial-up modems? But what if we take the emphasis off what is entered and focus on what is delivered? Is there a way to use AI to improve database results?

The issue of flexibility in search queries can be characterized by a phrase coined by Mary Ellen Bates—“squishy Boolean” (“Squishy Boolean,” Online, March/April 2005: p. 84). Straight Boolean logic is very exacting, whereas squishy Boolean allows more room for interpretation. Bates calls this “the third way.” For example, a strict Boolean search string might include options for “must include” and “must not include.” Squishy Boolean allows for “preferably includes.” A natural progression is the development of squishy AI, with more analysis by the algorithm before results are received by the researcher. I began thinking about the limitations and frustrations I run into with the databases I currently use. How could AI be used to program databases to interpret initial search results and deliver more-targeted content?
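The distinction between strict and squishy Boolean can be made concrete. The sketch below is a hypothetical illustration, not any vendor's actual ranking code: “must include” and “must not include” remain hard filters, while “preferably includes” terms merely boost a document's rank without excluding anything. The function name, toy documents, and term lists are all invented for the example.

```python
# A minimal sketch of "squishy Boolean" scoring over a toy corpus.
# Strict Boolean clauses filter; the "prefer" clause only ranks.

def squishy_search(docs, must, must_not, prefer):
    """Return docs satisfying the strict clauses, ranked by how many
    'preferably includes' terms each one contains."""
    results = []
    for doc in docs:
        words = set(doc.lower().split())
        if not all(term in words for term in must):
            continue  # strict: a required term is missing
        if any(term in words for term in must_not):
            continue  # strict: an excluded term is present
        score = sum(term in words for term in prefer)  # squishy: soft boost
        results.append((score, doc))
    return [doc for score, doc in sorted(results, reverse=True)]

docs = [
    "hotel industry revenue and financial viability",
    "hotel reviews and travel recommendations",
    "hotel industry labor costs",
]
hits = squishy_search(docs, must=["hotel"], must_not=["reviews"],
                      prefer=["financial", "viability"])
print(hits[0])  # the financially focused document ranks first
```

The design point is that the third clause changes ordering, not membership: a document with no “prefer” terms still appears, just lower in the list, which is the extra “room for interpretation” that strict Boolean lacks.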

I do a lot of legal docket research, but I don’t have a J.D., so I constantly run into roadblocks when I need to parse through extremely long case files and interpret filings. If AI were combined with machine learning, it might be possible for me to pull up a docket with a notation stating, “You usually end up downloading the latest amended complaint; it is document number x.” In an ideal AI research world, I would be able to ask the database pinpointed legal questions that need specific answers—and receive them. I could ask if the defendants filed a motion to dismiss a complaint and be given a yes or no answer with a link to the filing. I could query databases with the full text of public laws that have been amended, changed, and overridden multiple times and find out quickly and easily which permutation of the law is current. The database would understand my field, so a search on the hotel industry would return results about financial viability, not recommendations and reviews. Many times when I call database help desks, the representative states, “I know exactly what you are trying to do.” It would be great if there were a way to query the database itself so that it knew exactly what I was trying to do. If the old TARGET command on Dialog represented squishy Boolean (it returned results with some but not all of the user’s search terms), then a database that understands what I am trying to pinpoint represents squishy AI.

I used to laugh about statistics such as the fact that 41% of virtual assistant owners feel as if they are talking to a friend, or that more than 50% say “Please,” “Thank you,” and “Sorry” to their devices. However, an assistant that actually helps me do my work faster and more efficiently would be a friend indeed. But a replacement? Nah. Research robots are just a means to an end and a way for us to deliver a better product. That delivery is the key. The robot is not delivering research results; we are. It is merely helping us find more relevant results more quickly. We will also still be needed as a safeguard for the times when the algorithm is incorrect (do a Google search for “death by GPS” for some extreme examples). But once we are in the database of content that we need and have an applicable research challenge, squishy AI can’t hurt, and it might help. To paraphrase Ralph Waldo Emerson, if the database companies can build a better robot, our requestors will beat a path to our door.

Amy Affelt is director, Database Research, Compass Lexecon and author of The Accidental Data Scientist: Big Data Applications and Opportunities for Librarians and Information Professionals (Information Today, 2015).
