Magazines > Computers in Libraries > December 2023

Vol. 43 No. 10 — December 2023

FEATURE

AI and the Library Community: The Experts Are In
by Amy Affelt

As AI in general and ChatGPT in particular dominate the news cycle and become a catalyst for promise and peril in just about every aspect of our lives, it is abundantly clear that they will have a serious impact on many of the key tenets of librarianship:

  • Their implementation in our own daily work tasks
  • The training of the next generation of professionals in our library schools
  • The task of providing guidance to our constituents as they navigate the information landscape
  • The learning of new skills and the assumption of new roles in our own organizations and beyond

As is always the case with budding, rapidly developing technology, there is a risk in trying to predict the future, since knowledge about its development, capabilities, applications, and potential uses is continually evolving. To get answers to some of our most common questions and concerns, I went straight to the preeminent experts in the library community. I asked them about their hopes, dreams, fears, and, yes, visions for the future regarding the advancement of AI and its effect on our work and the profession at large.

I expected a wide range of differing opinions and opposing views. However, I loved the imagery of an initial commonality. When asked what immediately comes to mind upon hearing the term “AI,” several colleagues mentioned that they picture robots in general: Robby the Robot, the (nameless) robot from the old Lost in Space television series, or HAL 9000 from 2001: A Space Odyssey. Point taken. As M Ryan Hess, digital initiatives manager at Palo Alto City Library, mentioned, AI tools such as the widely heralded ChatGPT popularized what was previously the niche field of machine learning in a way similar to the 1950s, when the field of robotics became more mainstream. Or, as Mary Ellen Bates, principal of Bates Information Services, explains it, an AI tool is “something that has been fed lots of facts and strategies and whose thought processes are not understandable or accessible to humans.”

Everything Old Is New Again

In an article in the September 1987 issue of Online: The Magazine of Online Information Systems, titled “Artificial Intelligence (AI) and Expert Systems for Information Professionals—Basic AI Terminology,” Donald T. Hawkins, then at AT&T Bell Labs, asked: “If AI techniques are less precise, why bother to use them?” Answering his own question, he opined it was because “tasks are getting more complex, workloads are increasing, development times are lengthening, jobs are becoming generally more complicated, and people are being asked to do more with less.”

Sound familiar? Remember, it was 36 years ago when Hawkins implored information professionals to take AI seriously and familiarize themselves with AI jargon. As he presciently wrote, “it will soon be significant in their lives.” In a January 1988 follow-up article, “Applications of Artificial Intelligence (AI) and Expert Systems for Online Searching,” Hawkins concluded that the systems being unveiled were in experimental stages and that “online searchers will have to continue using their own skills and perfecting their searching techniques.”

It is now 2023, and we are still waiting. AI tools such as ChatGPT are being rolled out in nascent form. As of September 2023, Google has an “experiment” caveat on its Bard chatbot webpage, and OpenAI CEO Sam Altman has stated that it will take 1 1/2 to 2 years for ChatGPT’s hallucination (misinformation) problem to get to a “much, much better place” (“Tech Experts Are Starting to Doubt That ChatGPT and A.I. ‘Hallucinations’ Will Ever Go Away: ‘This Isn’t Fixable.’” Fortune, Aug. 1, 2023; fortune.com/2023/08/01/can-ai-chatgpt-hallucinations-be-fixed-experts-doubt-altman-openai). Analyzing AI tools in order to determine future uses and potential benefits continues to be a task required of librarians. Are today’s AI systems as rudimentary as those of 1988? I reached out to Hawkins to get his thoughts on what has transpired in the intervening decades.

Back in 1988, Hawkins reviewed the capabilities and limitations of searchable databases involving both advanced and natural language capabilities. I was struck by his conclusion that “there is still a long way to go before a major AI-based information retrieval system appears.” He distinguished between the mechanical and intellectual capabilities of AI, writing: “The mechanical aspects have been well automated … but those systems do not treat the intellectual aspects well. Automation of this area will require the power of AI.”

In 2023, do these systems now treat the intellectual aspects well? Hawkins isn’t completely convinced. While there are attempts to do so, and there will be many, many more to come, he believes that we will always need to be aware that the potential for corrupted and false information exists. The key task for information professionals dealing with mis-, dis-, and malinformation is to “recognize it when we see it.” He also emphasized to me that AI on its own is very different from AI applications for use in research by information professionals. He is very optimistic that we are well-versed in finding appropriate uses for these tools, ending our call with the quip, “Artificial intelligence is better than no intelligence at all!”

That Old Black (box) Magic

The library experts are in wide agreement that AI is a great way to spark the creative process and generate ideas. Hess summarized it as “a framework to start your learning journey.” Robert Berkman, editor of The Information Advisor’s Guide to Internet Research, stated that it “spurs my thinking” and can serve as a double-check in an almost-finished project. He suggests using it to ensure that every avenue of research is examined, and nothing is missed; you can then review any outlying research that it uncovers to see if there is something relevant that can be added. Put another way, he suggests looking at chatbots as unfamiliar sources that are “worth checking out and confirming, but not to be taken on face value.”

Melissa Del Castillo, virtual learning and outreach librarian at Florida International University and convener of ALA’s Generative AI, Reference, and Instruction Discussion Group, describes AI as being akin to Wikipedia, noting that it “can be a great assistant to get you started on a project or improve on what you have already written.” While emphasizing that it can be a helpful tool for “brainstorming, writing refinement, and cross-disciplinary perspectives,” she told me its content “must be evaluated for misinformation and bias” and “should not be your final stop on your research journey.”

Document Analysis

Analysis and data mining of PDF documents are potential AI applications for which our experts see great promise. David Lee King, digital services director at the Topeka & Shawnee County Public Library, used ChatGPT to analyze a strategic plan, asking it to create vision statements based on the inputs provided. He then used the resulting information to brainstorm potential vision statements with his library’s CEO. He also sees the benefit of using it to summarize documents, but he cautions us to “make sure it covers everything and sums up the larger work nicely, especially if that summary is going to go before a library board or the C-suite meeting!”

Bates concurs. She uses ChatGPT and Claude to analyze and summarize articles. Claude can summarize multiple documents at the same time and handles up to 75,000 words. She also uses generative AI (GenAI) to create boilerplate business documents. She emphasizes that while we should “always verify any information from searchbots,” summarization of search results and the use of chatbots to find suggestions of starting points for research are valuable uses for these tools.

Coding

The title of Farhad Manjoo’s June 2, 2023, New York Times article declares, “It’s the End of Computer Programming As We Know It,” but adds a twist on an old REM lyric: “(And I Feel Fine)” (nytimes.com/2023/06/02/opinion/ai-coding.html). While acknowledging that computer engineers with advanced coding skills will always be needed, Manjoo is excited about the possibility that all of us will be able to write basic code with the help of AI tools, although he doesn’t explain why we would want to do that, or how we would use our self-written code. Luckily, our resident experts have more than a few ideas.

At present, one of the biggest concerns surrounding the use of AI tools for research is the fact that these algorithms “hallucinate”—they make things up and get facts wrong. Marydee Ojala, editor of Online Searcher, sees this as “a failure in the semantic and syntactical predictive technology used in GenAI.” King believes that these fears are diminished when AI is used for coding. If you write code to do a specific thing, as he succinctly puts it, “that code will either work or not.”

Additionally, Hess sees opportunity in using traditional and machine learning programming in tandem, with the AI code being fact-checked by traditional algorithms following explicit rules. This would solve not only the hallucination issue, but also contamination from misinformation. Hess shared with me an interesting example that incorporates a check-and-balance system involving math errors: “Currently, GPT can output math errors, but if you had a secondary algorithm based on traditional computer programming to check GPT’s answers, you could have reliable math outputs. Conceptually, then, you might be able to create a fact-checking or scoring mechanism that such future models could use to check their work before presenting it to a human. Such a system will probably require a new standard for ranking the authority or reliability of information. (I’m looking at you blockchain!)”
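Hess’s math example can be made concrete with a small sketch. Here, a deterministic, rule-based checker verifies an arithmetic claim of the kind a chatbot might emit; all names and the claim format are illustrative, not part of any real system:

```python
import ast
import operator

# Only these operators are evaluated, so the checker is fully deterministic.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def _eval(node):
    """Evaluate a tiny arithmetic AST: numbers and + - * / only."""
    if isinstance(node, ast.Expression):
        return _eval(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
    raise ValueError("unsupported expression")

def check_claim(claim: str) -> bool:
    """Verify a model statement like '17 * 24 = 408' with exact arithmetic."""
    expression, answer = claim.split("=")
    actual = _eval(ast.parse(expression.strip(), mode="eval"))
    return abs(actual - float(answer)) < 1e-9

print(check_claim("17 * 24 = 408"))  # a correct claim passes
print(check_claim("17 * 24 = 418"))  # a hallucinated one is flagged
```

Because the verifier follows explicit rules, it cannot hallucinate; any disagreement between the two systems simply flags the model’s answer for human review.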

Familiar Tools with New Value

The incorporation of AI into our already-vetted and relied-upon databases such as Westlaw and Factiva is one of the most exciting developments for the use of AI in the information industry. Since we already have a lot of confidence in these fee-based, proprietary databases, knowing that their content is factual and credible, Berkman told me, “This is where AI can really be put to very positive, powerful uses. Strong analytical and conversational capabilities—but applied to quality, vetted data!”

Aaron Tay, library analytics manager at Singapore Management University, believes that AI may spur increased use of semantic search in these databases, as “this will complete the 2-decade slow drift from strict Boolean searching to loose Boolean to natural language search.” Tay thinks that accuracy in relevance ranking was an issue in the past, but that AI’s “edging toward human language understanding of text” may go a long way in solving these issues. As Tay acknowledges, this is a double-edged sword: “Semantic-based matching systems make search results less predictable, because they do not have the transparent results of keyword matching that traditional systems have.”
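The trade-off Tay describes can be sketched in a few lines. In this toy example, the “embeddings” are hand-assigned three-dimensional vectors rather than the output of a real model, so the numbers are purely illustrative:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Three documents with invented, hand-assigned "embedding" vectors.
docs = {
    "attorney malpractice claims": [0.90, 0.10, 0.0],
    "lawyer negligence lawsuits":  [0.85, 0.15, 0.0],
    "library budget planning":     [0.00, 0.10, 0.9],
}

query_terms = {"attorney", "malpractice"}
query_vec = [0.88, 0.12, 0.0]  # pretend embedding for the same query

# Strict keyword (Boolean AND) matching: transparent, but blind to synonyms.
keyword_hits = [d for d in docs if query_terms <= set(d.split())]

# Semantic matching: the synonym document ranks highly, but the "why"
# of the ranking is hidden inside the vectors.
semantic_ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]),
                         reverse=True)

print(keyword_hits)         # only the exact-term document
print(semantic_ranked[:2])  # the exact-term and the synonym documents
```

The Boolean pass is fully predictable but misses the “lawyer negligence” document; the cosine pass surfaces it, at the cost of ranking logic that is opaque to the searcher.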

In an ideal world, Tay envisions databases keeping both semantic and keyword-based matching systems, but fears that database owners may not keep both if the cost to maintain both is high and if natural language searching is more widely used than keyword searching. Ultimately, he sees the incorporation of AI into traditional, fee-based databases as an opportunity for librarians in that we will be able to adopt new roles as database evaluators. He thinks that librarians should learn about evaluation frameworks and how to run them so that we can provide assessments of the AI functions of databases to our organizations.

Online Searcher’s Ojala sees a future search environment transformed by the addition of GenAI, which she describes as information creation rather than information retrieval. GenAI will change how we search, but not that we search. Concerns about hallucinations will be mitigated when models are trained on the trusted content in subscription databases and users gain the ability to converse with search results. Search will become multimodal as more formats are seen as relevant to research requests.

Bates agrees: “It would be fabulous if we could have conversations with material in a fee-based database. Imagine asking it, ‘What are some of the key trends in the XYZ industry, focusing on the Asia-Pacific market?’ or, ‘What is the established case law on this issue when applied in this situation?’ and getting a useful answer.” She also thinks that concerns surrounding the validity of results will be quickly assuaged, just as they were when fee-based services began to offer relevance ranking of results. While we can conduct traditional Boolean searches to verify results, Bates believes GenAI will parallel the case of relevance ranking, when, soon enough, we “did not even question” its accuracy.

Ultimately, one of the best-case scenarios involving the future use of AI by information professionals would be an environment in which these tools are already completely developed and put through their paces, rather than the current beta-like versions that have been unveiled for the public to test-drive. Bates hypothesizes that after these technologies have been refined and incorporated as back-end implementations to long-trusted databases, “they will disappear behind the scenes.”

In order to foster this best-case scenario, “creating meaningful relationships” becomes even more critical, according to Chad Mairn of St. Petersburg College. He sees a need to partner with “AI and other technology companies, policymakers, researchers, and other organizations” to advance the work of “developing effective strategies for addressing AI-generated issues now and into the future.”

New Role: The AI Whisperer

In March 2023, Bloomberg ran a story about a hot new job being created by the AI boom: the “prompt engineer,” or, as Bloomberg called it, the “AI Whisperer” (itprotoday.com/artificial-intelligence/335000-pay-ai-whisperer-jobs-appears-red-hot-market). These positions, which do not require a computer engineering degree and come with salaries upwards of $300,000, are held by people who spend their days writing prompts (basically, search strategies) to harness the power of the AI algorithm to produce the best content results. For librarians, this is part and parcel of the work that we have done all along. It essentially combines reference interviewing skills with good, old-fashioned database searching by ultimately translating Boolean search logic and advanced search syntax into the types of written questions that the AI tool can best understand.
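That translation step, from Boolean syntax to a written question, can be sketched as a toy function. The mapping rules below are invented for illustration and handle only a flat AND/NOT query:

```python
def boolean_to_prompt(query: str) -> str:
    """Rewrite a flat 'a AND b NOT c' search as a chatbot-style question."""
    required, excluded = [], []
    bucket = required
    for term in query.split():
        if term == "AND":
            continue
        if term == "NOT":
            bucket = excluded  # every term after NOT is treated as excluded
            continue
        bucket.append(term.strip('"'))
    prompt = ("Find recent, well-sourced material about "
              + " and ".join(required) + ".")
    if excluded:
        prompt += (" Exclude anything primarily about "
                   + " or ".join(excluded) + ".")
    return prompt

print(boolean_to_prompt('"copyright" AND "libraries" NOT "patents"'))
```

A real prompt engineer would layer on context, role, and output-format instructions, but the core skill is the same one a searcher uses when recasting a reference question for a new database.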

As I was rereading the Bloomberg article in advance of writing this piece, I received a pop-up job ad for a “Prompt Engineer and Librarian” at AI startup Anthropic (jobs.lever.co/Anthropic/e3cde481-d446-460f-b576-93cab67bd1ed). The ad describes the position and then lists responsibilities that are a natural fit for our abilities and training:

“You will figure out the best methods of prompting our AI to accomplish a wide range of tasks, then document these methods to build up a library of tools and a set of tutorials that allows others to learn prompt engineering, as well as work with high value partners to directly solve their challenges.”

The day-to-day responsibilities included, in part, these tasks:

  • Build up a library of high quality prompts or prompt chains to accomplish a variety of tasks, with an easy guide to help users search for the one that meets their needs.
  • Build a set of tutorials and interactive tools that teach the art of prompt engineering to our customers.
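The first responsibility above, a searchable prompt library, could start as simply as a tagged dictionary with keyword lookup. The entries, tags, and prompt wording here are invented examples:

```python
# A minimal prompt library: each entry pairs a reusable prompt with
# the tags a user might search for. All content is illustrative.
PROMPT_LIBRARY = {
    "summarize-report": {
        "tags": {"summary", "document", "executive"},
        "prompt": ("Summarize the attached report in five bullet points "
                   "for an executive audience, noting any caveats."),
    },
    "literature-scan": {
        "tags": {"research", "sources", "scan"},
        "prompt": ("List the major schools of thought on the topic below "
                   "and one representative source for each."),
    },
}

def find_prompts(query: str):
    """Return the names of entries whose tags overlap the query's words."""
    words = set(query.lower().split())
    return [name for name, entry in PROMPT_LIBRARY.items()
            if words & entry["tags"]]

print(find_prompts("summary for a document"))  # ['summarize-report']
```

Curating the tags so that users’ own vocabulary maps onto the right prompt is, of course, exactly the controlled-vocabulary work librarians already do.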

I was so excited to see that Anthropic was looking to librarians as a natural fit for these positions. Indeed, it is a huge opportunity for us as information professionals. How can we best prepare for these roles?

Del Castillo sees a natural fit, given the similarities between prompt engineering and the controlled vocabulary and search strategy creation inherent in librarianship, stating that “finding the right keywords and commands is the bread and butter of library instruction.”

Guides on the AI Superhighway

Even if we are not aspiring to write code, it is essential that we understand the basics of how large language models (LLMs) work. Phaedra Boinodiris, IBM Consulting’s global leader for trustworthy AI, suggested three straightforward video lectures from professors at the University of Washington: “Large Language Models From Scratch” (youtube.com/watch?v=lnA9DMvHtfI) and “Large Language Models: Part Two” (youtube.com/watch?v=YDiSFS-yHwk&t=123s), from Steven Seitz, and “ChatGP-Why: When, If Ever, Is Synthetic Text Safe, Appropriate, and Desirable?” (youtube.com/watch?v=qpE40jwMilU), from Emily Bender.
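The lectures above explain the real machinery; as a bare-bones intuition for “predict the next token,” here is a bigram model trained on a ten-word corpus. This is an illustrative toy, not how GPT-class models actually work:

```python
# A bigram "language model": predict each word from the one before it,
# using nothing but counts from a tiny training text.
from collections import Counter, defaultdict

corpus = ("the librarian answered the question "
          "the librarian searched the database").split()

# Count, for each word, which words follow it in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Greedy prediction: the most frequent follower seen in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # 'librarian' follows 'the' most often
```

Real LLMs condition on thousands of preceding tokens with learned weights rather than raw counts, but the core task, scoring likely continuations of a context, is the same.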

Berkman also shared with me some interesting suggestions for acquiring more of the needed skills. First, he says that we can look to the AI tool providers themselves for guidance on how to create the most effective prompts for their products. We can then use that information to write prompt engineering documents for our constituents, as well as provide training. This would position librarians and information professionals as the go-to people for the creation and use of effective prompts. Berkman also told me about a free course offered by Coursera on prompt engineering for ChatGPT, being taught by a Vanderbilt professor (coursera.org/learn/prompt-engineering?), which is another way for us to hone our skill set.

Tay reminded me that the role of librarians “is far beyond just search.” Since we will always need to guide our users in the adoption and use of any kind of new tool, “with generative AI added to the mix, users will need a lot of guidance for how to use them.” He offered some ideas that we can implement immediately, such as creating newsletters and workshops for our users, in order to establish our role as the people in our organizations with AI expertise so that we can become consultants on the responsible use of GenAI in research.

At Singapore Management University, Tay held a hackathon on the use of LLMs to create apps that improve research and library experiences, cementing the library as a department that is in on the ground floor of AI. Rich conversations with faculty resulted, and the university’s librarians served as mentors on the use of library apps.

Bates suggests positioning ourselves as AI experts in our organizations by setting up virtual AI lab environments in which our constituents can try out tools and datasets in their actual work. She also recommends creating interest groups in order to build relationships with early adopters.

Del Castillo believes that librarians are “poised to act as AI authorities” by “creating LibGuides to demystify artificial intelligence, providing workshops on prompt engineering, and promoting the appropriate use of AI tools.” She thinks that librarians’ skill sets will expand as they become AI-literate, and being well-versed in how these tools work will “complement their current information and data literacy skills and will be beneficial in all of their future reference interactions.”

Michael Stephens, professor at the School of Information at San José State University, believes that, regardless of type of library, “creating classes on how to discern what is real and what is not in the information landscape will probably be one of the biggest roles of librarians going forward.” His emphasis is on the importance of librarians working to “make sure that people understand that not everything they see or hear is real.” Mairn agrees and believes that “ultimately, the responsibility for accuracy and reliability lies with those who use AI applications.” He sees the librarian’s role as educator expanding to include AI literacy alongside information literacy and critical thinking: “Everyone must learn how to critically evaluate information, regardless if it is generated by AI or humans.” Mairn also sees new roles for us in identifying AI-generated content and in developing transparent AI systems.

While Ojala sees the new AI environment as a combination of “nothing new under the sun” and something that “presages a sea change for libraries,” she believes that we will face some of the same challenges we have faced for quite a while now, challenges that add to the public perception that libraries are no longer relevant and that everything is online. They are, and it isn’t, and this allows us to assume new roles in a familiar scenario. In addition to learning how to search in this new environment, we will find “opportunities to inform our users about the promise and pitfalls of GenAI.”

Del Castillo suggests that we embrace the fear of being replaced by technology, quipping, “If I had a nickel for every time it is suggested that [insert trendy technology here] will replace librarians, I’d have a lot of nickels.” She believes that most librarians “tend to embrace what we see as new and useful,” and that the explosion of AI shouldn’t be any different. “As soon as librarians are able to establish our position in the AI literacy juggernaut, we might get to feel less paranoid about an AI Armageddon.” After all, she reminds us, “Librarians are smart and resourceful, and our skills are transferable.”

High-Tech, High-Touch

In early September, I was listening to CBC Radio’s The Current, and in a segment on ChatGPT, host Matt Galloway conducted an interview with Tasha Ausman, faculty of education at the University of Ottawa. I had been thinking about ChatGPT and librarianship in the context of the age-old question, “Will a robot take your job?” I wondered if there is a way for us to position ourselves as a more-desirable alternative to reference assistance from an AI chatbot. Ausman stopped me cold with an extremely refreshing answer regarding the use of AI tools for cheating by secondary students. I contacted her to learn more, and she further explained her thoughts:

“My experience in the classroom with secondary students is that they are inherently curious and thoughtful. There has been much written about the desire or inclination for students to cheat, and I find that while there are some students who might, generally speaking, students want to learn. This often goes against the media headlines which generate clicks about the negative aspects of AI—which we have seen in the past about graphing calculators, the internet in general (especially Wikipedia), and previous AI tools like Google Translate. My students are interested in being taught skills they can use and remember, foremost. They worry that they will be empty vessels who walk through life Googling everything, whereas previous generations knew things and had to memorize and learn books.”

Further, she believes that students (and, I would hope, anyone who looks to libraries and librarians for knowledge and research) are interested in seeking out expert teachers (and librarians, as we are also teachers).

I asked Ausman if it is possible for librarians to position themselves as a better alternative to AI tools for accurate, gold standard research and knowledge. This is how she replied: “This is a real existential crisis, so when I can provide ways for youth to apprehend and comprehend new information meaningfully, students feel empowered. This is our role as teachers after all. These things can be as simple as mnemonics or grammar tricks, or as deep as ways to learn complex physiology. Part of the crisis of confidence is that students fundamentally know that there is a plethora of junk and fake things on the internet, but they are getting lost in the ocean of information, and the easiest and most reliable person they can ask about what they see is their teacher. At a basic level, this is where the expert in the room comes in. We have a deep responsibility to help students navigate what they read, encourage critical skills, and remain champions of offline learning.”

Librarians certainly have this responsibility as well, and encouraging critical thinking, navigating content, and championing learning is why we are here. It is very heartening to learn that young and impressionable students still envision teachers, and hopefully librarians, in this way.

Bates agrees that AI tools are not a threat to librarians, as “our value is that we talk with our clients—those reference interviews are what set us apart from a chatbot.” Refreshingly, she suggests that we use the chatbots for simple, mundane tasks in order to free up our time to conduct professional, value-added research and analysis. Imagine having more time to hone our true expertise to deliver the best possible work product for our clients. That gift of time to do what we do best is something that is certain to be on the wish list of every librarian and information professional when thinking about best case scenarios for the future of the profession, regardless of the effects of AI.

Using chatbots to free up time for complex reference questions is also a benefit seen by Del Castillo. She uses Springshare to deliver links to content from library FAQs regarding hours and checkout policies. Chatbot delivery of this information allows librarians to devote more of their time to working on complex research projects whose nature mandates the assistance of a live librarian. Even with bot assistance being offered, users at her library request to be connected to a librarian around 30% of the time, which she sees as beneficial: “Librarians are already experts in finding and evaluating information, and they do not need to compete with chatbots.”
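The triage pattern Del Castillo describes, in which a bot fields routine FAQs and refers everything else to a person, reduces to a few lines of keyword matching. The FAQ entries and wording below are invented, not Springshare’s actual behavior:

```python
# Routine questions map to canned answers; anything unmatched is
# referred to a live librarian. All entries are illustrative.
FAQ = {
    ("hours", "open", "close"): "We are open 9am-9pm, Monday through Saturday.",
    ("checkout", "borrow", "loan"): "Most items check out for 3 weeks.",
}

def faq_bot(question: str) -> str:
    """Answer routine questions by keyword; refer the rest to a human."""
    words = set(question.lower().rstrip("?.!").split())
    for keywords, answer in FAQ.items():
        if words & set(keywords):
            return answer
    # Complex research questions fall through to a live librarian.
    return "Let me connect you with a librarian."

print(faq_bot("What are your hours?"))
print(faq_bot("I need help finding case law on riparian rights."))
```

The fallback line is the whole point: the bot’s job is not to replace the librarian but to route the roughly 30% of users who want a person to one, faster.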

To close the discussion (for now), I asked San José’s Stephens about the future of librarianship in this brave new world. He has written widely about the importance of delivering library service in the context of shared humanity, and with heightened emotional awareness and intelligence, skills which will prove to be even more critical in an automated, robotic environment. He told me, “Now more than ever, folks want human contact, and they want to feel like they belong. I don’t see that delivered by artificial intelligence, but I do see it delivered by thoughtful information professionals that create spaces—both in-person, physical space and online—that welcome people in and engage with [them] in a human voice. You can tell when you’re watching a video that was made by AI. It has that weird feeling. You can also tell when you are genuinely and authentically talking with someone and maybe sharing your story with them. That won’t go away.”

These library community experts, despite starting with an image of robots, emphasized the opportunity AI gives us to position ourselves as vital to our communities. We’re proficient with technology, dedicated to teaching others about benefits and risks, willing to experiment with AI tools, and committed to human connections. Whether any of their predictions will become reality is anyone’s guess, but our commitment to exploring new technologies remains constant.

List of Experts

I am eternally grateful, not only for the expert knowledge shared, but also for the enthusiasm, collegiality, and friendship of these colleagues from around the globe. The draft of this article became richer and richer as each expert weighed in:

Tasha Ausman, LTA professor, faculty of education, University of Ottawa

Mary Ellen Bates, principal, Bates Information Services, Inc.

Robert Berkman, editor, The Information Advisor’s Guide to Internet Research

Phaedra Boinodiris, global leader for trustworthy AI, IBM Consulting

Melissa Del Castillo, virtual learning and outreach librarian, Florida International University

Donald T. Hawkins, distinguished member, technical staff, AT&T/Bell Labs (retired)

M Ryan Hess, digital initiatives manager, Palo Alto City Library

David Lee King, digital services director, Topeka & Shawnee County Public Library

Chad Mairn, librarian and assistant professor, St. Petersburg College Innovation Lab

Marydee Ojala, editor, Online Searcher

Michael Stephens, professor, School of Information, San José State University

Aaron Tay, library analytics manager, Singapore Management University

Amy Affelt


Amy Affelt (aaffelt@compasslexecon.com) is director, database research worldwide, Compass Lexecon.

Comments? Email Marydee Ojala (marydee@xmission.com), editor, Online Searcher