Information Today
Vol. 40 No. 9 — Nov/Dec 2023
DATABASE REVIEW
Google’s Answer Machines
by Mick O'Leary

Google’s Generative AI Products

SYNOPSIS

Google has two new generative AI products: Bard, a standalone app, and Search Generative Experience (SGE), which is integrated into the main Google search engine. These have revolutionary search and composition capabilities, putting them in a class with ChatGPT and the new Bing as watershed AI systems that are readily available to the general public.

The Holy Grail of search has always been the answer machine (AM): In response to a simple, natural language query, you quickly get an answer that is accurate, comprehensive, organized, and up-to-date. Previous search tools, from manuscript catalogs to modern online search systems, have always fallen one step short: They retrieve relevant information containers (books, articles, web documents, etc.), but it then still falls to the end user to extract the desired information from those containers—which can be laborious and time-consuming—and then organize it in some practical way.

There have long been proto-AMs—i.e., tools that provide specific answers without being full-fledged AMs. Publication abstracts and literature reviews have long been available, but these are neither on demand nor customized to a user’s specific need. Current search engines offer simplified AMs, but these also are not last-mile tools.

GOOGLE’S PROTO-AMS

Google has been a leader in developing proto-AMs. The first page of a Google search often opens with three of them:

  • The top-of-page hit is an informative source from which a key snippet is taken.
  • Just below the top hit are several “People also ask” snippets. These are in Q&A format, with a commonly asked question about the topic followed by an informative snippet from a relevant page.
  • To the right of the “People also ask” snippets is another snippet from a relevant site, often Wikipedia.

These proto-AMs are useful in their own right. By retrieving a highly relevant snippet from a highly relevant site, they are focused and concise, and they may be quite satisfactory for short-answer queries. Nevertheless, they do not retrieve data from a set of relevant sources and combine them into a coherent exposition.

THE GRAIL IS FOUND

The Holy Grail of search—a complete AM—was delivered to the public in fall 2022 in the form of OpenAI’s ChatGPT. It was an unprecedented advance that landed with stunning force: It quickly became the fastest-growing app in history (until supplanted by Meta’s Threads in July 2023). Over the next 6 months, it was followed by ChatGPT Plus (which uses the immensely powerful GPT-4), Microsoft’s new Bing (also using GPT-4), and Google’s Bard and Search Generative Experience (SGE). Taken together, this is one of the greatest and fastest surges of technological advancement in history.

The astonishing power of these AMs landed like a bomb in the tech world, with tsunami-level ripple effects across businesses, consumers, politics, education, and content creators. Experts forecast either the dawn of a new AI Eden or a descent into an AI apocalypse. The latter view was reinforced by several high-profile mistakes by the new AMs, which undermined their credibility and trustworthiness. Their performance improved in succeeding months, but, in any event, they are here to stay.

ENTER BARD AND SGE

My comments here are based on hundreds of searches with Bard and SGE across a variety of topics. Most of my searches were reference-type queries, rather than creative prompts or shopping/travel questions. As such, they are just a small window into the overall performance of these sprawling, esoteric systems.

Bard and SGE had wide public release in May 2023. Bard uses PaLM 2, Google’s most advanced large language model (LLM). SGE uses PaLM 2 and other Google LLMs. The same searches in Bard and SGE usually agree on the main points of the topic, but they are separate compositions. Their placement, however, is quite different. Bard is a standalone site, separate from the main Google search page, whereas SGE is tightly integrated into it. SGE automatically generates a reply for many search queries, and for others, it offers to generate one. There is no option to require up front that a search include an SGE response.

Bard requires that users sign up for a free account. Access to SGE is via signup in Google’s new Search Labs, where Google experiments are user-tested. SGE is available on the Chrome browser and on Android and iOS phones. According to a Bard chat, responses are kept for 72 hours, but my account has them from several weeks back. SGE does not store responses.

BARD VS. SGE

Let’s see how Bard and SGE compare.

Composition

Bard’s replies are excellent examples of expository writing, with laudable organization, sentence structure, grammar, and spelling. They are typically between 400 and 600 words—concise, but long enough at least to mention the principal points and arguments on the topic. There are usually no links to sources unless, according to Bard, the content is taken substantially from a single source.

Bard can compose in five different templates: Simple, Short, Long, Professional, and Casual. Each template—as its name implies—varies the response according to difficulty, length, or tone. It does all of these reasonably well, demonstrating yet again the eye-opening proficiencies of generative AI. However, I came across these templates only by accident, as I was burrowing through a lengthy Microsoft blog post. Bard itself has no indication on its search page that this useful feature even exists. This is an example of the recurrent sloppiness that I’ve encountered with the new generative AI bots, as if they were rushed onstage before being prime-time ready—thumbs down for fit and finish.

SGE delivers short replies, usually fewer than 200 words. They are also well-written, but their snippet-like length may provide only brief and general information on a topic. The information itself is obtained from a small number of websites, usually five. SGE includes links to these, which typically are credible news, reference, and other content-rich sources. Individual sentences in SGE replies often have a link to the website where the information was found. Neither Bard nor SGE will accept prompts that it judges to be illegal, threatening, offensive, or otherwise unsavory.

In August 2023, Google introduced a refinement—“SGE while browsing”—that applies SGE analysis to an individual web document. It creates a handful of key points, with links to their corresponding locations in the document itself. These are helpful, but for a long or complex document, they may be too short and too general to do justice to the original.

Accuracy

Accuracy continues to be a stumbling block for all AI chatbots, so much so that their use for any serious information-handling purpose is still highly precarious. My test results in both Bard and SGE were generally, but not perfectly, accurate. I found errors, and in a few cases, both completely misread the query.

My unsupported speculation is that a more common topic may elicit more accurate results, perhaps because the AM has a larger corpus of training data and live web results from which to draw. Bard and SGE are usually balanced and nonpartisan, but for some current controversial topics, they editorialize in favor of left-wing, progressive positions.

Timeliness

SGE is more up-to-date on breaking news than Bard is. SGE will have recent news as rapidly as a regular Google search, while Bard updates more slowly.

Search Assistance

Neither Bard nor SGE has readily available search assistance or an FAQ page that would provide guidance on writing prompts, interpreting results, etc. There are no pull-down filters to define or refine the search. Bard and SGE each provide three suggested follow-up searches.

THE NEW SEARCH PARADIGM

Bard, SGE, and their peers have a completely different relationship to their sources than preceding information retrieval tools do. The goal of the predecessor systems, generally, is to find sources, while the goal of Bard and SGE is to exploit them, using their content to generate novel responses. Bard sometimes provides links to its sources and SGE always does, but the big question is whether end users will explore those links or ignore them in favor of the chatbot’s custom response.

The content and search industries are concerned that AI chatbot users will reduce direct interaction with their sites, thereby draining revenue. Information professionals must also gauge how Bard, SGE, and other AI-driven AMs will affect their own work processes, discovery systems, and customer relationships. If books, articles, and websites can be disintermediated, what fate might befall other search intermediaries?
Mick O’Leary has been reviewing databases and websites for Information Today since 1987. Send your comments about this article to itletters@infotoday.com or tweet us (@ITINewsBreaks).