
Magazines > Computers in Libraries > January/February 2010


Vol. 30 No. 1 — Jan/Feb 2010
The Day Search Stood Still
by Will Sexton

That little rectangle with a button next to it? (I mean those things we call search boxes but might just as well call “resource drains.”) Imagine it disappearing from your library’s webpages. The intricate works behind these design elements make up a major portion of what library staff spends time and money developing, populating, supporting, refining, re-imagining, and reliving in work-related nightmares.

I know I’m dancing around the third rail of library IT here (a place where I often find myself in my position as applications analyst at Duke University Libraries), but I can’t help wondering if we might reclaim energy from those monumental efforts. Conceding that there is no immediate threat to its existence, I’d like to take a moment to think outside the library search box.

Libraries Lose Their Advantage

The mindset might be easier to get into by taking a short look at our long history. The library has always enjoyed certain advantages, namely, location, location, location. The basic blueprint of a human city—imitated by campuses of all kinds—places a seat of power in the center, in the form of a fortress, a palace, or a temple. Libraries have always stood near this center. From its privileged position, the library could borrow authority from its neighbors and project it back into the space where intellectual pursuit, discourse, and debate take place. To those who seek knowledge, we say, pass through these portals of discovery. You are at home here.

For a long time, the model sustained an essentially idealist institution through social upheaval, paradigm shifts, and revolutions. But the ground has shifted. We’ve owned that real estate for 4,000 years and gotten great returns on it, but the bubble has burst. Suddenly, it’s easier—and cheaper—for people to participate in knowledge-seeking networks outside of the library than it is to find parking downtown or on campus.

So why on earth have we replicated our traditional service model on the internet, where we enjoy little of that natural advantage? Consider the assumptions that all those little text inputs expose: First, as researchers enter the formative stages of a research inquiry, the library will rise out of the mists to appear as a go-to resource; second, they will take the trouble to find our websites; third, they’re willing to invest time learning to use our localized discovery interfaces.

When they do click our way, we expect researchers to sift among various confusing and competing entry points before they even type in a single query. If they stay, they navigate websites organized according to unhelpful concepts such as departmental structure, rather than according to the themes that they’re researching. It’s alarming—here in the cradle of the long tail and the serendipitous discovery, the service model is cracked.

Fixing What’s Broken

As a humble response, I offer the following thought experiment. Imagine that a business consultant advises your library to revamp its entire online model, to de-emphasize search functionality, and to reclaim staff resources. Your governing body, seized with millennial fervor, sends down an edict. Search boxes must disappear from library webpages! Library IT staff has 6 months to do it, or all our jobs will be outsourced. Yes, it’s a ridiculous scenario, but it’s appropriate to the level of angst that pervades our field in the age of the Google book settlement and Web 2.0.

The Two-Team Approach. If asked for a plan, I might say something like the following: Let’s divide the staff into two teams, each working on one of two complementary approaches. The first one comes up often in discussions about the library and Web 2.0. The second involves a thorough rethinking of the library website.

The first team works toward a comprehensive framework for syndication of library content. Syndication most often finds expression as RSS or Atom feeds. In broader terms, it involves opening up pipes of content, enabling users with a diverse array of client programs to pull materials that matter to them into workspaces that are meaningful to their research objectives. Distribution or diffusion is the basic pattern, but the way to cultivate it is to develop a unified Application Programming Interface (API) for discovering and accessing library content and returning it as RSS and Atom.
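As a rough sketch of what the output side of such a unified API might look like, here is a minimal Atom serializer built on Python's standard library. The record fields, identifiers, and URL are assumptions for illustration, not an actual library system's schema:

```python
# Minimal sketch: turning library records into an Atom feed document.
# The record shape ("title", "id", "updated", "link") is an assumed,
# illustrative schema, not a real catalog API.
from xml.etree import ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

def records_to_atom(feed_title, feed_id, records):
    """Serialize a list of record dicts as an Atom feed document."""
    ET.register_namespace("", ATOM)  # serialize Atom as the default namespace
    feed = ET.Element("{%s}feed" % ATOM)
    ET.SubElement(feed, "{%s}title" % ATOM).text = feed_title
    ET.SubElement(feed, "{%s}id" % ATOM).text = feed_id
    for rec in records:
        entry = ET.SubElement(feed, "{%s}entry" % ATOM)
        ET.SubElement(entry, "{%s}title" % ATOM).text = rec["title"]
        ET.SubElement(entry, "{%s}id" % ATOM).text = rec["id"]
        ET.SubElement(entry, "{%s}updated" % ATOM).text = rec["updated"]
        ET.SubElement(entry, "{%s}link" % ATOM).set("href", rec["link"])
    return ET.tostring(feed, encoding="unicode")

feed_xml = records_to_atom(
    "Mark Twain materials",
    "urn:example:twain",
    [{"title": "Adventures of Huckleberry Finn",
      "id": "urn:example:item:1",
      "updated": "2010-01-01T00:00:00Z",
      "link": "http://library.example.edu/catalog/1"}],
)
print(feed_xml)
```

A client can then pull this feed into any aggregator or mashup without knowing anything about the system that produced it, which is the whole point of the distribution pattern.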

As the Many Minds Principle tells us, “The coolest thing to be done with your data will be thought of by someone else” (see slide #14 in the presentation). The rule of thumb is that syndication projects content into the space where people do interesting things. We need the capacity to deliver any “discoverable” resource on request, over the network, in any context. But we need more.

The library’s two main assets are collections and human expertise. The first team is exposing our collections in a way that they can figure in any context. The second team needs to enable our people to create context. We take it as one of the core tenets of the library’s mission that small acts of applying our own human expertise to our collections will meet countless research needs. We need to bring our collections and our people together and project them into space, like a rocket engine mixing hydrogen and oxygen to generate propulsion.

Hey, maybe this is rocket science!

The second team takes on the problem of mashing up the first team’s content feeds with the knowledge about our collections that flows from librarians at the reference desk, in the archives, and in the subject specialists’ cubicles. This expertise may flow from blog posts, feeds from third-party platforms such as LibGuides, or other content-management systems that come and go as web trends change. So how do we put it all together?
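The mashup step can be sketched as a simple merge of dated entries drawn from several feeds into one chronological stream. The entry shape and the feed names below are invented for illustration; a real system would parse actual RSS or Atom entries first:

```python
# Sketch of the "mashup" step: interleaving entries pulled from several
# sources (catalog, librarian blog, LibGuides export) into one stream.
# All entries and source names here are hypothetical examples.
from datetime import date

catalog_feed = [(date(2010, 1, 5), "catalog", "New Twain letters digitized")]
blog_feed = [(date(2010, 1, 8), "blog", "Curator's notes on Twain's notebooks")]
guides_feed = [(date(2010, 1, 2), "libguide", "American Lit research guide updated")]

def merge_feeds(*feeds):
    """Interleave entries from all feeds, newest first."""
    return sorted((entry for feed in feeds for entry in feed),
                  key=lambda entry: entry[0], reverse=True)

merged = merge_feeds(catalog_feed, blog_feed, guides_feed)
for when, source, title in merged:
    print(when, source, title)
```

The payoff is that a single topic page can show collections activity and human expertise side by side, with no regard for which system each item came from.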

Here’s a radical idea: Let’s design our library website to have webpages about the things that researchers research. I know it sounds bat-guano crazy, but we’re in the middle of a thought experiment here. Indulge.

The notion of the library website, to our detriment, has become somewhat outmoded. We owe this state of affairs to Big System Syndrome, where we spend huge amounts of our staff resources on our ILS, our vendor databases, and the dreadnought concept of an institutional repository. These are the altars at which we have sacrificed good design. For all the effort they demand from us, they do not give back what we need in order to compete in the present environment. A sound approach to web development should do for the library what those systems will never do—namely, enable the mechanics for that combustive mixing of content and expertise.

Learn from the competition. Here we might take some cues from our main competitors in the knowledge-seeking game. To illustrate, I conducted a kind of micro-experiment, which you can replicate, by querying three different search engines on the phrase “Mark Twain.” Google, Bing, and Yahoo! all returned the same top result—the Wikipedia article titled “Mark Twain.”

There’s nothing unusual in seeing a Wikipedia article in that position. If you use the internet for research, you come to learn what a Wikipedia discovery looks like even when staring with tired, glazed eyes. It’s easier to pick it out because, chances are, it sits at the top of a result set. Here’s a question to chew on: What does a library discovery look like in a search engine result set?

Now here we are looking at the space where research gets done these days, and we have no idea how the library fits into it. Why exactly is it that we can’t project our authority into that space in the same way that we used to, back when cities still had centers and people thought of traveling to a library as a convenience? Where a favorable spatial architecture once gave it status, a poor understanding of information architecture has relegated the library to the back pages of a search engine result set.

The issue here is not simply search-engine optimization, but a deeper one of information architecture. To understand this point, it’s worth a closer look at the Wikipedia discovery. For one thing, we can predict its URL; in the English-language version of Wikipedia, a URL always takes the form http://en.wikipedia.org/wiki/<Topic>.

The interface is both simple and utterly transparent. The elegance impresses all the more, considering the great social complexity hidden behind the creation of some articles. Furthermore, all the discussion that goes into a topic is accessible, because Wikipedia’s interface exposes it behind a separate tab.

Contrast the architecture of the typical library website. Our interfaces do little to hide complexity but tend to reflect the organizational complexity and sprawl of Big Systems that lie behind them.

Returning to our hypothetical second team of developers, let’s ask them to cook up a scheme that mimics Wikipedia’s to-the-point information architecture but draws on our strong background in metadata. Websites such as Delicious and content management systems such as WordPress have demonstrated that values for metadata fields can be mapped onto URLs, as in a tag page served at /tag/<value>. The developers on the second team, desperate to save their jobs, roll the dice on a reimagined information architecture for the library website.

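That metadata-to-URL scheme might be sketched like this; the facet names and slug rules are illustrative assumptions, not any real library's routing:

```python
# Sketch: mapping metadata field/value pairs onto predictable,
# Wikipedia-style URL paths. Facet names ("authors", "subjects") and
# the slug rules are hypothetical examples.
import re

def slugify(value):
    """Lowercase, replace runs of punctuation/whitespace with hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", value.lower()).strip("-")

def facet_url(facet, value):
    """Build a /facet/value/ path for a metadata value."""
    return "/%s/%s/" % (facet, slugify(value))

print(facet_url("authors", "Mark Twain"))            # /authors/mark-twain/
print(facet_url("subjects", "American Literature"))  # /subjects/american-literature/
```

Because the mapping is deterministic, anyone (including a search engine) can predict where the library's page on a topic lives, just as with a Wikipedia article.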
The developers can now build these facet pages with features such as feeds from collections of books, manuscripts, images, and videos; photographs of smiling subject librarians along with their contact info; blog posts and blog feeds from librarians, faculty, and students in an American Lit course; feeds of Twitter posts with the tag #MarkTwain; timelines of an author’s works; and so on. No, we’re not going to handcraft a page for every single metadata value; templates will generate most of them.

However, in those cases where one of these “facet pages” represents a particular strength of our library, we might put some extra care into it. These custom pages are what we really hope to project, so that they appear in search results alongside heavy hitters such as Wikipedia, YouTube, and Amazon. If your library has that truly inspired video archivist who blogs frequently and connects with researchers in a special way, then your /videos/ pages may come to life. Whatever your library’s highlights are, let them shine through.

Pages like these might make the top of search engine result sets. They might even become instantly recognizable to the eyes of researchers.


If it seems like a huge effort to engineer something like what I have suggested, count up the hours we spend on our Big Systems, and tally the returns we get. At the very least, we ought to recognize that the concept of “discovery” has outgrown the confines of the functionality known as “search.” The evolution of the internet has deselected rigid structures, and the present environment compels us to develop flexible architectures that human beings can shape and influence. If we do, we’ll find less and less of the library’s resources disappearing into those search boxes.

Will Sexton is an applications analyst at Duke University Libraries, where his main concerns are support for digital collections, archival finding aids, and metadata databases. He led the development of Duke’s system for the discovery and access of digital collections.