Information Today
Volume 17, Issue 10 • November 2000
IT Report from the Field
Online World 2000—The End of an Era
One of the industry’s premier conferences goes out with a bang
by Donald T. Hawkins

The Final (W)Rap Session
by Péter Jacsó
When Susan M. Klopper, director of the Arthur Andersen Business Research Center, asked me to be one of the four panelists in the Schmoozin’ with Susan: (W)Rap Session at the end of the Online World 2000 conference, I didn’t know that it would be the last wrap session in the 22-year history of this event. As you’re already aware, the Online World conference has been discontinued, and it will be replaced by two more-focused events sponsored by Online, Inc. Evidently, the company felt that online information has become a commodity and that the online world has grown tremendously—so much so that an Online World conference became too generic.

This surprise announcement gave a special spin to this final session, of course. The session’s chair and the panelists—David Curle, a senior analyst at Outsell, Inc.; Susan Fingerman, principal of SMF Information Services; Libby Trudell, vice president of The Dialog Corporation; and your humble writer—tried not to wax nostalgic for too long, but rather to balance the looking-back and looking-ahead portions of the session.

Looking Back
As a longtime searcher, I wanted to corroborate the feelings of my fellow panelists with some telling search results. If you make a cross-database search in the 499 databases on DIALOG for the phrase “online information,” you’ll find a dearth of documents from 1979. There are only 23 databases with at least one record that matches the query. The LISA database has by far the most records with 386 postings (because “online information” is a descriptor and a part of other descriptors). Far behind, with only 10 to 20 articles, are INSPEC, Information Science Abstracts, ERIC, Social SciSearch, and NTIS. Following these, with merely two to seven records, are PASCAL, PAIS, PROMT, the Magazine Database, Compendex, the Trade & Industry Database, LC MARC, and ABI/INFORM.

Doing the same topic search for 1999, you’ll find 200 databases with at least one record that matches the query. On the top is World Reporter with 1,453 hits; then a cluster of four databases, with about 1,000 hits (Trade & Industry, PROMT, Business Dateline, and ABI/INFORM); followed by a group of 600-hit databases.

Of course, comparing searches from 1979 and 1999 isn’t fair. In 1979, there were far fewer databases and there were no full-text databases (on DIALOG), so the second search should have been limited to the title, abstract, and descriptor fields. Even with this restriction there would be more than 100 databases, many with 600 to 900 hits. But it’s an important part of the landscape that now we can search the full text of far more databases than we could 20 years ago. That might explain what Trudell summarized as the renaissance of prepackaged information—a good mix of preselected documents of high nutritional value served ready for digestion. 

At the conference, I noticed the dominant presence of some of the venerable online information services (OCLC, DIALOG, LEXIS-NEXIS, EBSCO, SilverPlatter, and Questel•Orbit) as well as the modest exhibit areas of other industry veterans (Gale and H. W. Wilson). The panel also noted the absence of some power players (Dow Jones and Bell & Howell Information and Learning). The latter two companies are very much present in the information industry, of course, but they just may not have found Online World 2000 compelling enough to come and showcase their products and services. Quite tellingly, those absent from this event are the traditional information services that most fiercely target the end-user market. And we certainly noticed the presence of the relatively new powerhouses (such as Northern Light and netLibrary, Inc.), and the new players, like Questia Media, Inc. and METIS. Questia Media will launch a full-text, searchable, undergraduate textbook collection next year. METIS impressed me with its alpha version of a system that searches select subsets of ProQuest, Gale, and EBSCO databases. 

Looking Ahead
The panelists took an inventory of the most-often-heard buzzwords at the conference; these represent the future, either directly or indirectly. Artificial intelligence, knowledge engineering, and knowledge management were high on the list, rolling off the tongues of representatives from Google and Bright Station in the most well-attended session, which was skillfully moderated by Greg Notess.

Fingerman picked XML, which symbolizes the importance of structuring information in a standardized format, with some customization. Klopper noted that she hadn’t heard “recall” and “precision” mentioned so much since she was in library school, and she used the term “identity awakening” to describe what many librarians/information specialists feel when realizing their potential in providing guidance to users and developers of Web products and services. We all need ongoing education, and I have no doubt that Online, Inc.’s upcoming Web University conference is a good idea for information professionals. I felt especially so after attending the deeply disappointing SCOUG (Southern California Online Users Group) session, which in previous years was excellent. It was meant to demonstrate the inferiority of the Web search engines and products born of the Web. Instead, the session illustrated that information professionals may not always do their homework and are prone to using wrong search strategies, inefficient syntax, or ill-selected sources. They might also get Google’s rank algorithm upside down, or they can’t figure out what Xrefer (a spiffy British site with respected, cross-referenced dictionaries and encyclopedias) means, deriding the name instead of glancing at the tag line or—heaven forbid—reading the help file.

In closing, Curle picked the term that he heard least often. He was surprised that he didn’t hear a word throughout the entire conference about the copyright-infringement suit that was recently brought against some of the companies in attendance. I think this is one of those things that will, indeed, have a big impact on the information industry, and is worth tackling rather than just relegating to the company lawyers.

Although this was the end of the Online World conferences, the online world will keep spinning and bringing new conferences and exhibits with bright new products and services. 

Péter Jacsó is associate professor of library and information science at the University of Hawaii’s Department of Information and Computer Sciences, and a columnist for Information Today. His e-mail address is

The Online World 2000 conference was held September 18–20 in San Diego, and its most significant event was a surprise announcement by its sponsor, Online, Inc., that this year’s conference would be the last of a series that began in Atlanta in 1979. (For more information, see Sheri R. Lanza’s NewsBreak on page 3.)

The announcement was greeted with nostalgia by some longtime attendees of the Online World conferences, which have been an established highlight on the information industry’s crowded calendar of fall events. Regular attendees had come to look forward to Online World’s keynote addresses by leading industry visionaries, sessions conducted by well-qualified experts, practical “how-to” clinics on a wide variety of topics, announcements of new and leading-edge products, the end-of-the-day exhibit hall receptions with their food and drink, and the general high quality of the experience. This year’s event was no different, and certainly lived up to all previous expectations.

Opening Keynote
Online World 2000 opened with a keynote address by Carl Malamud, chairman of Invisible Worlds, Inc. Malamud’s address was somewhat of a departure from previous Online World keynotes, in that he presented a fairly technical look at the Internet, characterizing it as a “space” in which thoughts must be in context to be understood. Today’s search engines are not flexible enough to function effectively in this space: they present their output in rigidly defined formats, they index data however they want, they offer only one way of finding things, and they provide no way to merge the output from several systems and post-process it.

A general-purpose protocol currently under development, BXXP (Blocks eXtensible eXchange Protocol), will allow simple communication between applications using XML tags. Under this protocol, “mixers” will be able to search metadata to find things on the Internet, “space servers” will query databases, and “builders” will interact with the user interface to retrieve data from the XML servers. Malamud has worked with the SEC’s EDGAR database and has used some of these concepts in putting EDGAR on the Internet. His vision is one of mapping the Internet using the space metaphor, and stitching search engines together to adapt to changing user requirements.
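Malamud’s division of labor between “mixers” and “space servers” can be illustrated with a small sketch. The element names and the `build_query`/`parse_query` helpers below are hypothetical, not part of any real BXXP profile; the sketch only shows the underlying idea of applications exchanging XML-tagged messages:

```python
import xml.etree.ElementTree as ET

def build_query(channel, terms):
    """Build a hypothetical XML-tagged query message of the kind a
    'mixer' might send to a 'space server'. Element names are
    illustrative, not drawn from any real BXXP profile."""
    msg = ET.Element("query", attrib={"channel": channel})
    for t in terms:
        ET.SubElement(msg, "term").text = t
    return ET.tostring(msg)

def parse_query(payload):
    """Recover the search terms a 'space server' would run against its database."""
    root = ET.fromstring(payload)
    return [term.text for term in root.findall("term")]

payload = build_query("edgar", ["annual", "report"])
print(parse_query(payload))  # ['annual', 'report']
```

Because both sides agree only on the XML tags rather than on a rigid output format, output from several such servers could be merged and post-processed, which is exactly the flexibility Malamud found missing from today’s search engines.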

Three Tracks
Following the opening keynote, the conference broke up into three separate tracks: Taming Technology, Content Review, and Practical Searching. A series of “e-clinics,” many of them very well-attended, rounded out the program. The second day featured the always popular Industry Announcements session, and the third day opened with a keynote speech by well-known industry observer Steve Arnold, who focused on the future of searching. The conference closed with a CEO panel and a wrap session led by Susan Klopper of the Arthur Andersen Business Research Center. (Because of time constraints, I wasn’t able to attend the two closing sessions, but Péter Jacsó has provided an account of the wrap session in his accompanying sidebar.) In the rest of this article, I’ll summarize some of the talks I attended; the majority of them are from the Taming Technology track. Most of the presentations are available on the conference Web site at presentations/index.lasso. (You’ll need to register to gain access to the presentations.)

The first session in the Taming Technology track dealt with technologies that are used to move data to the Web. Information doesn’t just appear on the Web—much effort is required to prepare it and format it properly. Speakers in this session were Darlene Fichter of Northern Lights Internet Solutions, Ltd.; Gabe Luchetta of CaseShare Systems, LLC; and Karl Best of Enigma, Inc.

Fichter discussed XML and how it differs from HTML. She pointed out that XML is a way to structure and code information, and it can handle complex documents easily. Because it’s open and extensible, subject-specific subsets of XML can be developed. Subsets for a variety of subjects are already available, and one to handle chemistry data is coming soon. Viewing information structured with XML will change the way we think—the importance of “documents” will be diminished as the information they contain becomes data.
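Fichter’s point that XML diminishes the “document” in favor of data can be seen in a minimal sketch. The `<compound>` markup below is an invented, chemistry-flavored illustration (not an actual published XML subset): because the tags carry meaning, a program can consume the record directly as data.

```python
import xml.etree.ElementTree as ET

# Hypothetical subject-specific XML: the tags describe the content,
# so the "document" doubles as structured data.
record = """
<compound>
  <name>caffeine</name>
  <formula>C8H10N4O2</formula>
  <molweight units="g/mol">194.19</molweight>
</compound>
"""

root = ET.fromstring(record)
data = {child.tag: child.text for child in root}
print(data["formula"])                       # C8H10N4O2
print(root.find("molweight").get("units"))   # g/mol
```

In HTML the same facts would be wrapped in purely presentational tags, and a program would have to guess which boldfaced string was the formula; here the structure itself answers the question.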

Luchetta examined Cold Fusion, another technology used in moving data to the Web. Cold Fusion is a script-based language developed by Allaire, Inc. that can tie databases and search engines together using browsers and extensions to HTML. With Cold Fusion, business applications can be created with a browser interface.

In a session on metadata (data about data) and automatic indexing, Bill Carney of OCLC reported on the development of the Dublin Core standard, which consists of 15 elements to describe data. It’s optimized for electronic research, international in scope, and easier to use than MARC data elements.

Dublin Core was created to meet the challenges of searching on the Web and navigating its resources; library skills were used throughout its development. CORC, an OCLC project, is a cooperative effort to introduce a system that will automatically tag data using the Dublin Core standard—a prototype can be found at
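As a rough sketch of what Dublin Core tagging looks like in practice, the helper below builds a record restricted to the standard’s 15 elements. The function name and the validation step are our own illustration, not part of CORC; only the element names and the namespace URI come from the Dublin Core standard itself.

```python
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"  # Dublin Core element set namespace

# The 15 elements of the Dublin Core standard.
DC_ELEMENTS = {"title", "creator", "subject", "description", "publisher",
               "contributor", "date", "type", "format", "identifier",
               "source", "language", "relation", "coverage", "rights"}

def dc_record(**elements):
    """Tag a resource with Dublin Core elements; reject anything
    outside the 15-element set. (Illustrative helper, not CORC.)"""
    root = ET.Element("metadata")
    for name, value in elements.items():
        if name not in DC_ELEMENTS:
            raise ValueError(f"not a Dublin Core element: {name}")
        ET.SubElement(root, f"{{{DC_NS}}}{name}").text = value
    return root

rec = dc_record(title="Online World 2000 Report",
                creator="Hawkins, Donald T.",
                date="2000-11", language="en")
print(len(rec))  # 4 tagged elements
```

The small, fixed vocabulary is what makes Dublin Core easier to apply than the hundreds of MARC fields, while still giving a Web crawler or catalog enough structure to search on.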

In the same session, Teresa MacGregor of LEXIS-NEXIS described her company’s research in smart indexing. It has found that the best results are obtained when using a combination of human and computer indexing (which harnesses both the power of the human brain and the speed of computers). LEXIS-NEXIS has a Web page with information about smart indexing at

A series of e-clinics was held in the exhibit hall. The one I attended, conducted by Gary Price, proved to be very popular, drawing a standing-room-only crowd. Price, a compiler of well-known and widely used lists of useful Web sites, described and demonstrated over 20 tools and utilities to help users manage their interactions with the Web. Some of the tools allow users to save pages to individual folders or to a remote server, download parts of a site to a single file, format a bibliography, search only a designated domain, or find official statistical data for countries. Price’s complete list of tools is available at the conference Web site.

The first day of the technology track concluded with the always popular “Search Engine Showdown” moderated by Greg Notess, an authority on search engines and creator of the Web site of the same name. Speakers representing three (possibly) lesser-known search engines (WebTop,, and Google) described the latest developments on their search engines. WebTop is the Web version of the Muscat system that was acquired by Dialog (and remained with the Bright Station platform when Dialog was sold to Thomson).

WebTop attempts to extract key concepts from the user’s query and use rule bases or thesauri to construct the retrieved set. It segments the Web into “zones” based on linguistic concepts and bypasses the search box (studies show that most people use only one or two words in their searches). The WebTop interface allows users to just drag and drop text to a search tool. The analogy is that of a bigger needle and smaller haystacks to enhance retrieval., formerly The Mining Co., uses a network of pre-selected, topic-oriented sites. With 55,000 topics and 250,000 articles indexed, it’s the seventh-most-popular site on the Web, receiving more traffic than AltaVista or Ask Jeeves. Finally, Google, with 40 million users a day, uses link analysis to identify highly linked sites. It then tries to determine the most authoritative site to answer the query and ranks that first in the output. Google considers every word of the query important because people generally don’t waste words in search boxes. Its search algorithm inserts “AND” operators between words and doesn’t allow Boolean search expressions because many people don’t understand Boolean logic.
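Google’s link analysis can be sketched with a toy PageRank-style iteration: a page ranks highly when highly ranked pages link to it. The function below is a textbook simplification, not Google’s actual algorithm; the damping value and the example link graph are assumptions for illustration.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy link-analysis ranking: `links` maps each page to the pages
    it links to. Each round, every page passes a damped share of its
    rank to the pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            if targets:
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new[t] += share
            else:  # dangling page: spread its rank evenly
                for t in pages:
                    new[t] += damping * rank[page] / len(pages)
        rank = new
    return rank

web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
r = pagerank(web)
print(max(r, key=r.get))  # c -- the page with the most incoming link weight
```

The intuition matches the talk: "c" is linked to by three pages, so it accumulates the most authority and would be ranked first in the output.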

Day Two
The second day of the conference opened with the Industry Announcements. Nine companies gave 3-minute presentations describing new or enhanced products, and the list of them that appears on the conference Web site is reproduced here.

LEXIS-NEXIS introduced, a new Web search interface for its NEXIS service. was created using LEXIS-NEXIS Smart-Indexing Technology and developed with the input of a select group of information professionals. [Editor’s Note: For more on, see Mick O’Leary’s Database Review column on page 32.]

Vigil Technologies announced e-Sense, a content filter that automatically locates and monitors Internet sources, distilling content that’s highly relevant to personalized business landscapes. Delivering only the most pertinent information, e-Sense can transfer this content to information portals, allowing an organization to leverage it securely throughout its portal-user communities.

Questel•Orbit unveiled PlusPat, a new international patent database with worldwide coverage (more than 65 countries) from the 1960s forward, with selected country coverage back to the 1920s. English-language abstracts and European classification coding, as well as U.S. patent and IPC (International Patent Classification) codes, are also included. [For more information, see the news story on page 37.]

MindBranch, the new name of PRG (Publications Research Group), announced Alton L. Adams as its new CEO. Additionally, MindBranch announced that Village Ventures, Inc. has taken a sizable stake in the company and is providing $18.4 million in working capital.

The Dialog Corporation described Dialog One, a suite of search interfaces that offers users one-click access to the Dialog service. Complex Dialog search strategies, such as mapping D&B D-U-N-S (Data Universal Numbering System) numbers, searching for Japanese patents, and mapping drug synonyms, can be accomplished without knowledge of Dialog’s command language. Users will be charged only for their output; no DialUnits will be incurred.

Kiplinger Business Forecasts introduced Kiplinger Finance & Forecasts. This new Web-based service offers all the Kiplinger content in one database, including business forecasts and personal finance information created by Kiplinger’s editorial staff. The company’s newsletters, magazines, books, and business forecasts have been integrated into the new service.

CRIBIS Corp. announced SkyMinder’s Ultimate Report, which offers in one report such information as a company overview, company profiles, financial performance, financial comparatives, news articles, and tables. Using a one-button download function, users can capture reports into Adobe PDF or Microsoft Word files, or they can modify the report to include additional news or tables.

CatchTheWeb CEO Ray Huger announced recent product enhancements, including off-line presentation of archived content, threaded discussions, folder-level security, the ability to capture Adobe PDF files, and the introduction of its Web site.

The technology track continued with Greg Notess’ review of developments in search engines since last year’s conference. Changes occur with astonishing rapidity in search engines, as each company tries to find the ideal mix of features and capabilities. The size wars continue, with three or four engines trading spots in the rankings every few months. Some of the search engines are moving in contradictory directions, adding features that their competitors have dropped. In an area that’s in such flux, Notess advised the audience to not get fixed on a single search engine, but to be adaptable and watch for new search engines and new options. And never trust search engines to be comprehensive, accurate in processing, or completely up-to-date.

In a session entitled “What’s Ahead on the Web: Expanding Our Horizons,” Vic Rosenberg, developer of the ProCite bibliography management software (now owned by ISI ResearchSoft) and a professor at the University of Michigan, gave an excellent presentation on business models for e-commerce and today’s information industry. He described e-business as the transformation of an enterprise from its old ways of doing business to one functioning in a global networked environment.

Referring to an article in the June 25, 1999 issue of The Economist, Rosenberg said that the Internet is changing our ideas of what businesses are. Originally limited to “Brochureware”—simply reproducing sales and product information—they are now moving into an “infomediary” mode of doing business, exemplified by comparison shopping and the growing prominence of online auctions.

The Internet gives all companies a global reach, so business will gravitate to the lowest-price producer. The Net is virtually impossible to regulate—Rosenberg foresees the emergence of “data havens” (like tax shelters) as the pace of globalization increases. Some business models for the publishing industry are advertising and subscriptions; Internet-only, scholarly journals; and online databases (both free and for a fee). Models for information distribution depend on the type of information being sold.

Information can be given away free with costly server software (like Adobe Acrobat and RealPlayer); recent information can be free, with costly archival information (like newspapers); or the recent information can be costly, with free archival information (like investment newsletters and some journals).

As bandwidth increases and transmission times decrease, new models of distribution will emerge. We soon may see a completely interactive, customized system of commerce for each customer. It’s important to recognize that information is a product. Some features of information that can be sold are the data itself, its currency, or a service based on the information.

To sample the Practical Searching track, I attended a session entitled “Search Strategies for the Serious Researcher” that featured three highly respected expert searchers. Reva Basch, Mary Ellen Bates, and Jan Tudor discussed some of their strategies for finding information for their customers.

Basch went back to the basics, examining the questions one must ask in deciding where to find information: Is online the way to go? Is the information the kind that would be found on the Web? What are the user’s needs? Who would know the answer? Would it be found on their Web site? It’s important to think about these issues before plunging into a search engine or an online vendor’s service.

Bates used a case study to illustrate how she searches. She noted that we must break our old and comfortable habits as we move from traditional online to Web searching. She often starts with company or background sites to understand the issues, and then moves on to databases, following the links she uncovers. She finds it useful to sit back and think about the search at several points as it unfolds.

Tudor often uses a collection of reference books in her searches. She finds encyclopedias, almanacs, and yearbooks helpful because they contain articles written by authorities and provide an overall picture of a subject.

Day Three
The final day of the conference began with a keynote address by Steve Arnold, a frequent conference speaker and technology expert. He focused on the search process and the “R” technologies: Revenue, Research, Re-engineering, and Relationships (among people, information, and value). These technologies are important because they help put information in context, promote new ways to solve problems, and add value through technology. Two triggers that Arnold sees are virtual databases such as Napster, Gnutella, and freenets, and smart agent software, which often identifies a key to system use and expected user behavior. He also discussed four keys to survival and success in today’s environment.

Further, Arnold noted that innovation is becoming an umbrella business, affecting all areas of the information industry. Personalization is helping to expose content in more helpful and clear ways, and object models are making online functions easy to use, thus enhancing the user experience.

The final session I attended, in the Content Review track, was a special review of e-books. Two speakers active in the rapidly evolving and changing e-book industry discussed their experiences. The first speaker was Lynn Connaway of netLibrary, Inc., who gave an overview of the e-book industry and some of its recent developments. Then Nancy Gibbs from the North Carolina State University Libraries recounted some of her experiences in introducing e-books into a large academic library environment. Both of these speakers highlighted the many changes that are currently taking place in the e-book area. (For an in-depth review of e-books and the e-book marketplace, see my articles in the July/August and September/October 2000 issues of ONLINE.)

And so the Online World conferences ended, not with a whimper but with a bang. Online World 2000 was another excellent experience for information professionals. It has been a remarkable series for the industry, and personally for me, as I’ve been fortunate enough to attend them all. It started with a feeling of excitement and pioneering at Online ’79 in Atlanta, and continued through the turbulent days of the 1980s and ’90s, as new technologies, new players, and new products had their impact on the industry. David Curle of Outsell, Inc., in the September 22, 2000 issue of his e-briefs newsletter, said it well:

This change in the conference lineup comes at the right time, when industry information companies, information professionals, and end-users are broadening the definitions of content and the way it is used in business today. But Online World will live on in many information professionals’ minds as the embodiment of a golden age of electronic information. For just over 20 years, the conference brought together the sorcerers and gurus of electronic content, from its infancy through the Internet explosion. While a lot of information pros fondly remember the 300-baud modem days, we believe there are better things ahead as content issues move into the mainstream to bring more and better content to the people who need it.
It remains to be seen what role Online, Inc.’s two new successor conferences will play and whether they will continue the heritage established by Online World. One thing is sure, though—they won’t be lackluster experiences.

Donald T. Hawkins is editor in chief of Information Science Abstracts and Fulltext Sources Online. His e-mail address is
