Computers in Libraries, Vol. 43 No. 4, May 2023

VOICES OF THE SEARCHERS

Geocaching and the Turing Test
by Mary Ellen Bates

A friend of mine recently introduced me to the joys and frustrations of geocaching. It is perhaps the quintessential tech-enabled outdoor activity. For those of you who have not yet been introduced to this, er, phenomenon, it involves people hiding small containers in outdoor locations and then publishing the GPS coordinates on geocaching websites. Fellow geocachers locate the cache, annotate a logbook inside the container, and then put it back for the next person to find.

What can be frustrating is that all you have are the coordinates and whatever clues the geocacher chooses to include in their listing. I found myself just down the road from my house the other day, staring at a tangle of dead tree branches and overgrown weeds and wondering where the dickens the hidden geocache was. I had the latitude and longitude coordinates, but that only got me so far. Now I had to look up from my GPS and figure out where in 3D physical space the cache was. It’s not on the ground … Maybe it’s up in that dead tree? Or under that boulder? As I was wandering around in circles, looking up, down, and under, I had a moment of déjà vu.

That familiar feeling came from all the searches I have done where I was not quite sure what I was looking for, but I knew I would recognize it when I found it. I often have the general coordinates when I start my research. When trying recently to understand how “digital twin” technology impacts vocational education, I knew I needed to start with a database of trade and professional publications. But until I started noodling around and doing a few searches, I did not know whether I should be looking at IT and data science sources, at the literature on manufacturing and architecture, or at articles on new developments in vocational education. The keywords—and even the likely presence of consistently applied controlled vocabulary—vary widely from discipline to discipline. Peer-reviewed literature is often well-indexed but not full-text searchable, while articles in trade publications tend to be shorter and full-text searchable but inconsistently indexed by subject.

Using my new geocaching skills, I realized that I first needed to decide whether I would be searching metaphorically on the ground, up high in the trees, or somewhere in between. Where should I initially focus my search? Should I start by peering inside all the nearby tree stumps, or, in the case of my digital twinning project, in the computer science literature on the assumption that IT industry observers would be the first ones talking about this trend?

As I reflect on the impact of generative AI such as ChatGPT, I see the searcher equivalent of the Turing test, famously used to assess a computer program’s ability to conduct a natural language conversation indistinguishable from one between two humans. In my research on digital twinning and vo-tech education, I started by parsing out the various stakeholders and innovation drivers I should be looking for, knowing that a scattershot approach would be no more effective in online searching than it is in geocaching.

When I asked ChatGPT for resources to learn about the impact of digital twinning technologies on vocational education, it responded with a list of various types of sources—academic journals, market research reports, professional conferences, and websites of nonprofits. That response, while useful, did not pass my Turing Searcher test. While it accurately identified some of the resources likely to have information on my topic, it was not able to differentiate among the underlying content sources. (Human) info pros know that you need to consider both the type of content—white paper, government report, conference poster session—and the content itself—intended audience, level of expertise, author perspective.

As we info pros face an infoscape newly “enhanced” by generative AI, I am reminded of the early days of the web, when librarians were asked why users needed professional online services now that everything was available for free online. We responded then by adding value to our search results and proactively offering to teach our users how to get even better results through more sophisticated search tools and techniques. We can take the same approach today; we can look for what would distinguish our deliverable from that of ChatGPT. We cite our sources; we ask our clients what the intended outcomes of their projects are to make our results more actionable; we proactively ask clients if they will need an update on the project later. Our job is to ensure our deliverables always pass the Turing Searcher test.

Mary Ellen Bates (mbates@BatesInfo.com, Reluctant-Entrepreneur.com) now checks for nearby geocaches when she travels.

Comments? Email Marydee Ojala (marydee@xmission.com), editor, Online Searcher