
ONLINE SEARCHER: Information Discovery, Technology, Strategies


I Did the Search, Now What?
September/October 2018 Issue

Challenges with matching the search to the need

Here are a few more insights. First, even when users are not trying to discover their goal through their search results, a lot of searches are done with a minimum of thought. Even with a clear goal in mind, a sloppy search without carefully planned terminology is like throwing mud at a wall in the hope that it will spell out your name. Sloppy, broad searching assumes, I think, that the search engine will figure out what you want and deliver the goods. Two problems exist with that assumption. First, search engines can’t think. No matter how much AI you throw into them, they can’t read your intention or even give a best guess, at least not yet. Second, sloppy searching may seem like an easy way to start, but it demands too much when it comes to evaluating the results.

Searchers often seem troubled by the challenge of uniting two concepts in a single search, for example, the education of homeless children, polar bears and climate change, or The Great Gatsby and crime. I find that students tend to separate out the searches so that the concepts are never brought together but are searched individually—first a search on homeless children, and then one on education. It seems so evident to me that this is a very bad idea that I am left to wonder how searchers can believe the best results are found in each concept on its own rather than where the concepts overlap. But it happens.

A related two-concept problem is the struggle with determining which of the two concepts is dominant. This is subtler. If I search on homeless children, possibly one issue that will come up is their education. If, however, I search on education, there is a minimal chance that the problems of homeless children will come up. If I search on polar bears, their plight due to climate change will emerge. But a search on climate change is less likely to bring up polar bears. Understanding which is the dominant concept can often help a searcher to shape the search.
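The difference between searching concepts separately and searching them together can be sketched as a set operation. This is a toy illustration with invented document IDs, not a model of any particular search engine:

```python
# Toy illustration: a combined two-concept search (e.g.,
# "homeless children" AND education) retrieves the intersection
# of each concept's result set, not everything from both.
# Document IDs are hypothetical.
results_homeless_children = {1, 2, 3, 4, 5}  # matches "homeless children"
results_education = {4, 5, 6, 7, 8}          # matches "education"

# Two separate searches force the searcher to read both lists:
separate = results_homeless_children | results_education  # union: 8 documents

# One combined search returns only the overlap, where both
# concepts appear together:
combined = results_homeless_children & results_education  # intersection

print(sorted(separate))  # → [1, 2, 3, 4, 5, 6, 7, 8]
print(sorted(combined))  # → [4, 5]
```

The overlap is small but it is where the relevant material lives; the union just multiplies the evaluation work.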

Searches that are too broad or too narrow can mess up results. So can the choice of ambiguous terminology. In fact, any number of challenges will ensure that search results are mainly destined for the trash bin. That is why we need to counsel searchers to think out their search strategy and stop throwing mud at the wall.

Using the tools available

Google has taught us that a single box, a few keywords, and a click on the search button form the totality of finding information. You have to live with what you get. Our students, however, have access to many filters that will nuance results and shape them into something they can actually use. But it’s hard to get them to use advanced features. Students tell me, “It’s too complicated”; “It takes too much time”; “You shouldn’t have to fix your search results if the search engine is any good.”

I don’t know how many times I’ve had to explain to searchers that communicating with a database is nothing like ordering a pizza: “I want a deep-dish ham and pineapple, light on the pineapple.” It’s more like, “Want stuff climate change polar bears.” The engine clumsily pulls up results and then you have to use filters to get less of this, more of that, and none of those. Search engines are not intelligent, which is why they need filters.
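The "less of this, more of that, and none of those" workflow can be sketched as successive filters over a result list. The records and field names below are invented for illustration; real databases expose these as facet or limiter controls rather than code:

```python
# Hypothetical search results; titles and fields are invented.
results = [
    {"title": "Polar bear habitat loss", "year": 2021, "peer_reviewed": True},
    {"title": "Arctic shipping routes",  "year": 2015, "peer_reviewed": True},
    {"title": "Polar bears in zoos",     "year": 2022, "peer_reviewed": False},
]

# Each filter narrows the clumsy initial result set one step at a time.
recent = [r for r in results if r["year"] >= 2018]     # less of the old material
reviewed = [r for r in recent if r["peer_reviewed"]]   # none of the unreviewed

print([r["title"] for r in reviewed])  # → ['Polar bear habitat loss']
```

The point is the iteration: the engine does not know which records matter, so the searcher applies limiters one at a time until the set is usable.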

We can imagine that AI will one day intuit what we are seeking. The grand illusion fomented by a lot of search engines is that this day has arrived. It hasn’t. We have to manipulate results, tinkering with them until they reveal what we were seeking from the beginning.

Recognizing that results may not correspond fully with the question

Google has created a generation of answer-seekers. Common search engines are answer machines. I type in a query, and, chances are, the direct answer to my question will pop up. We expect search terms and search results to match up.

Scholarly searching is nothing like that. You are likely not looking for answers but for discussions, investigations, reviews of literature, facts, and opinions. Search results don’t provide “the answer.” They provide the evidence that can help the searcher move toward an answer.

The burning issue that a searcher thinks is so important may not be a priority to most researchers in academia. I often tell students, “Nobody is obligated to do research or write about your issue.” Students seem to have an uncanny knack for addressing obscure questions that no one else is interested in.

In fact, the guiding question may be such that there simply is no data, even if lots of research is being done in the surrounding field. A colleague encountered a student using the research question, “Are people who use a lot of profanity less intelligent than other people?” While there may well be research data on use of profanity, the connection between profanity and intelligence is not a reasonable area of study. Smart people swear, and so do less smart people. When a searcher assumes that a phenomenon or connection exists when it doesn’t, there is bound to be disappointment.

A searcher, finally, can’t expect that search results will be uniformly relevant. For those people used to choosing the first five results, it can be dismaying to discover that, out of the 50 or 100 citations in a result set, even after a carefully constructed search, only 20 are pertinent to the research goal.

No simple answers

The basic rule should be that you analyze search results with reference to your goal. But the big mistake many students make is looking only for those results that appear to answer their research question, thus ignoring the nuances and debates in the conversation. A good research question is not answered simply. If there is one single answer universally held by scholars, you are not dealing with a genuine research problem.

Research problems lead to more than one possible answer, even when the researcher may ultimately choose one among the options as more plausible than the others. Scholars are seekers in difficult terrain. The evidence available to them varies, as does the influence of their own biases and beliefs. They interpret evidence differently and come to different conclusions. The more complex the problem, the more the variation in proposed solutions, even in hard science.

It’s futile to hope that difficult problems will be greeted with consensus. There is always a broader scholarly conversation that must be recognized and understood. Having a goal—a research question or thesis—does not mean that we will find an absolute answer. We will find a conversation, and our search results must reflect that conversation.

Those search results that look like such a tangled mix of views may be exactly what the searcher needs. Once the results are in the territory of the question, it’s time to discern the parameters of the ongoing conversation. This makes the results relevant even when they contradict the searcher’s presuppositions about the “right” answer.

Teaching search result discernment

It’s important for infolit instructors to teach students how to make sense of their hot mess of search results. This means either that we help them search better initially or we teach them the skills needed to make sense of the results of a good search. Search may be strategic, but so is the unravelling of what comes out of a search.

This is a tough job for students who have been thoroughly Googleized, who are used to believing that search is easy, and good enough results are good enough. Helping them do better is important, even if they resist learning the skills they need. When you’ve done the search, the “Now what?” will mean the difference between mediocrity and solid productivity.


William Badke is associate librarian at Trinity Western University and the author of Research Strategies: Finding Your Way Through the Information Fog, 7th Edition (2021).
