Vol. 31 No. 7 — September 2011
THE SYSTEMS LIBRARIAN
A Cloudy Forecast for Libraries
by Marshall Breeding
Director for Innovative Technologies and Research, Vanderbilt University Libraries


Service-oriented architectures and browser-based interfaces deployed through cloud-based infrastructure stand today as the key technologies preferred for new software development efforts
Of all the technology trends that I’ve been following for the last couple of years, cloud computing continues to gain the most momentum and stands positioned to most radically transform the shape of library technology. We’re at one of those major turning points where technology rounds a curve and a new vision of the mainstream comes into view.

It All Began With Mainframes

Massive mainframes dominated the earliest days of computing. Their processing power, modest by today’s standards, resided in room-sized processors accessed through terminals with no computing capabilities of their own. I first became involved with library technology during these mainframe days, dealing mostly with building and maintaining a network of terminals throughout the Vanderbilt libraries that connected patrons and library staff to our implementation of NOTIS. These fragile networks required a lot of care and patience. I still remember the challenges of the daisy-chained coaxial cables that connected the terminals through quirky bisync protocols, delivering data at a breathtaking 9,600 bps. It’s only been in the last couple of years that NOTIS finally became extinct, with one of the last remaining implementations, in Latin America, migrating to a new system.

The next chapter in library automation followed basically the same model of computing but brought in products based on somewhat smaller central processing devices called mini-computers. This class of computer reduced costs relative to full-fledged mainframes but didn’t necessarily offer dramatic differences in capabilities. The real work of computing still took place on the central processor, which was now shrunk down to the size of a large closet, with simple terminals delivering a text-only interface. These mini-computers ran operating systems such as Pick, VAX/VMS, or UNIX.

Client/Server Computing Leverages Plentiful PCs

A major shift in technology began in the mid-1990s with the advent of client/server computing and a new wave of library automation systems sporting graphical user interfaces. This approach aimed to make better use of the personal computers then proliferating in workplaces, shifting onto them some of the burden otherwise carried by the central server of a major application such as a library automation system.

The client side of the computing equation consisted of software installed on the personal computer. It was designed not only to present a user-friendly interface but also to perform some of the processing that would otherwise run on the central server. Over time, client/server took a more nuanced form, often called n-tier, distinguishing separate layers for the interface, business logic, database, and other architectural components.
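
To make the n-tier idea concrete, here is a minimal sketch in Python of that layer separation, with the interface, business logic, and database kept in distinct components. All of the class and record names are invented for illustration; no actual library automation system is structured exactly this way.

    # A minimal n-tier sketch: each layer talks only to the layer below it.
    # All names here are hypothetical, invented for illustration.

    class DatabaseTier:
        """Data layer: owns storage and nothing else."""
        def __init__(self):
            self._records = {"b0001": "The Mythical Man-Month"}

        def fetch(self, record_id):
            return self._records.get(record_id)

    class BusinessTier:
        """Business-logic layer: enforces rules, knows nothing about the UI."""
        def __init__(self, db):
            self._db = db

        def lookup_title(self, record_id):
            title = self._db.fetch(record_id)
            if title is None:
                raise KeyError("no record " + record_id)
            return title

    class InterfaceTier:
        """Presentation layer: formats output, holds no business rules."""
        def __init__(self, logic):
            self._logic = logic

        def show(self, record_id):
            print(record_id + ": " + self._logic.lookup_title(record_id))

    InterfaceTier(BusinessTier(DatabaseTier())).show("b0001")

In the client/server systems of that era, the interface tier ran on the PC while the other tiers ran on the server; the point of the sketch is only the separation of responsibilities.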

The advent of this model of computing was a mixed blessing: Its graphical interfaces proved easier to understand than the complex text-only interfaces but were often slower for expert users who demanded optimal keyboarding efficiency. And they didn’t really save that much work for the central servers, which continued to gain more power at lower cost in the natural course of hardware evolution. These client applications also introduced a lot of procedural overhead because of the need to perform updates across the large numbers of workstations on which they were installed.

Today’s library automation landscape reflects the technologies of this era. Most of the now-familiar, well-established library automation systems came out of the client/server era: SirsiDynix’s Symphony; Ex Libris’ Aleph and Voyager; Polaris Library Systems’ Polaris; Innovative Interfaces, Inc.’s Millennium; The Library Corp.’s Library.Solution; and VTLS, Inc.’s Virtua.

In the world of library automation, product cycles often last longer than those of the broader trends in information technology. Some library automation products have proved able to evolve through these cycles of broad technology architectures. Symphony, Millennium, and Aleph, for example, trace their antecedents to the days of mini-computers and character-based terminals and were later re-engineered into client/server systems.

Cloud Computing on the Rise

In recent years, a new cluster of technologies has begun to displace client/server computing. Service-oriented architectures and browser-based interfaces deployed through cloud-based infrastructure stand as the key technologies preferred for new software development efforts today. The key drivers that led to client/server computing no longer apply. The massive computing power available in globally distributed processing and storage infrastructures dispels the need to offload processing to local computers. Graphical desktop interfaces don’t necessarily offer the most intuitive way to operate software now that everyone is so acclimated to the web. And in these days of near-ubiquitous internet connectivity, the need for portable access trumps any remaining benefits of relying on graphical desktop environments.

Cloud computing reshapes the technology landscape in a fundamental way and represents the same order of change as when client/server computing usurped mainframe computing almost 2 decades ago. This model of computing obviates the need for local servers or complex desktop client software, relying instead on delivering applications through web-based interfaces or lightweight apps. These new technology services make use of processing and storage resources provided through highly virtualized components clustered together, usually in large-scale data centers distributed throughout multiple physical locations.

Under this model of computing, technology products take the form of software as a service, characterized by web-based applications running on externally provided infrastructure. Different business arrangements apply, including free services supported through ads or offered as lead-ins to paid premium versions, as well as subscription fees either set at a fixed monthly rate or billed on a utility model relative to resources consumed. Most software-as-a-service offerings involve many organizations or individuals sharing a single instance of the software. This multitenant approach allows the provider to support all its customers on a single, highly scalable platform, where updates and enhancements can be applied once for everyone. The different individuals or institutions using the service can configure their view of the software as needed, customizing the branding, color schemes, and navigational controls and setting functional preferences and policies according to local needs. Since the service provider takes care of enhancements, upgrades, and patches, users are relieved of the maintenance burden that locally installed software would impose.
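
To make the multitenant model concrete, here is a minimal sketch in Python of one shared application instance serving several libraries, each with its own branding and loan policy, while a single version upgrade reaches every tenant at once. The names and configuration options are invented for illustration; no actual vendor platform exposes exactly this interface.

    # A minimal multitenancy sketch: one shared instance, per-tenant settings.
    # All names and options are hypothetical, invented for illustration.

    class SharedCatalogApp:
        """A single instance of the software serves every subscribing library."""
        VERSION = "1.0"  # upgrading this once upgrades all tenants together

        def __init__(self):
            self._tenants = {}  # tenant id -> that library's local settings

        def register(self, tenant_id, branding, loan_days):
            self._tenants[tenant_id] = {"branding": branding,
                                        "loan_days": loan_days}

        def render_home(self, tenant_id):
            cfg = self._tenants[tenant_id]
            return ("[v" + self.VERSION + "] Welcome to " + cfg["branding"] +
                    " -- loans run for " + str(cfg["loan_days"]) + " days")

    app = SharedCatalogApp()
    app.register("lib-a", branding="Midtown Public Library", loan_days=21)
    app.register("lib-b", branding="Eastside College Library", loan_days=28)
    print(app.render_home("lib-a"))
    print(app.render_home("lib-b"))

Each library sees its own branding and policies, but the provider maintains, patches, and upgrades only one deployment.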

An initial step away from client/server computing for major library automation systems happened fairly quickly after the advent of the web, with browser-based interfaces replacing graphical clients for online catalogs and other services provided to library patrons. With no practical way for patrons accessing library resources remotely to download and install graphical online catalog applications, web-based interfaces quickly became established as the standard approach for all patron-facing library services.

Despite web-based patron interfaces, the essential client/server architecture of library automation systems persisted. Library personnel generally continued to operate their automation systems through graphical clients based on Windows, Mac OS, or platform-independent Java applications. Graphical clients that make use of the sophisticated user interface controls of the Windows or Mac operating environments can offer rich functionality for data entry and for complex workflows that combine multiple discrete tasks. The simple web-based forms characteristic of the earlier years of the web were fine for patron searching but were not necessarily ideal for the more demanding tasks involved in efficiently supporting library personnel.

Library Automation Embraces the Cloud

We’re now entering a new phase in the history of library automation, characterized by new technology underpinnings, including cloud computing, fully web-based systems, and service-oriented architecture, as well as fresh approaches to functionality that recognize current library realities.

The first phase of libraries embracing the general principles of cloud computing can be seen in the large numbers already involved in vendor hosting arrangements. One of the main tenets of cloud computing involves offloading the local management of computing infrastructure to external providers. An increasingly large proportion of libraries are entering into full-service hosting arrangements with their ILS vendors, trading the costs of purchasing and maintaining local equipment for an all-inclusive monthly or annual fee. These hosting arrangements tend to be labeled as software as a service, even though the systems involved mostly take the form of client/server systems with graphical interfaces rather than purely browser-based interfaces to a multitenant shared instance. Nevertheless, they yield many of the same efficiencies for both the vendor and the library and are based on well-established, proven, and familiar applications.

Library automation products created in more recent years more fully embrace current-day architectures and technologies and are designed from their inception for delivery through multitenant software as a service. This new round of systems includes Ex Libris’ Alma, OCLC’s Web-scale Management Services, the Kuali Foundation’s Kuali OLE, Serials Solutions’ recently announced Web-scale Management Solution, and Innovative Interfaces, Inc.’s Sierra. Others will most likely emerge. These products are at different stages of development: More than 32 early-adopter libraries were in production with OCLC’s Web-scale Management Services as of July 2011; the others are expected to see completion and implementation over the next year or two.

This new generation of products—more appropriately called something like library services platforms rather than integrated library systems—addresses the fundamental changes that libraries have experienced over the last decade or so as they move toward deeper engagement with electronic and digital content. In their own distinctive ways, these recently announced or delivered systems aim to break free of the print-centered models of automation deeply embodied in the incumbent line of integrated library systems. To make up for functionality absent from their core integrated library systems, many libraries implemented a cluster of ancillary products, such as link resolvers, electronic resource management systems, digital asset management systems, and other repository platforms, to manage all their different types of materials. The new products aim to simplify library operations through a more inclusive platform designed to handle all these forms of content.

Beyond these core library automation products, discovery services such as Summon, WorldCat Local, EBSCO Discovery Service, and BiblioCommons, as well as reference tools such as Springshare’s LibGuides, also fully embody these principles.

The Future Will Take Shape Gradually

This new wave of library tech products will phase in slowly. We’re just now at the beginning of a product and technology cycle that will play out over the next decade. The uptake of new kinds of products can be a bit slow; libraries tend to hold on to their automation products until the last possible moment of usefulness. I observe only a small minority of libraries able to engage as early adopters of new technologies. The majority follow later in the product development and deployment cycle, once the products have matured and pose a lower threshold of risk.

As this wave of change nears the conclusion of its course, the result will be seen not just in the new products deployed but in the way that libraries consume technology. We can anticipate that software as a service will by then stand as the new norm and that locally installed applications will wane. As libraries reach the point of needing to migrate from aging legacy automation systems and as library-housed servers come due for replacement, many are already choosing to shift to some type of vendor-hosted option rather than commit to managing their next generation of hardware and software in-house. Most libraries have more strategic work for their technology staff than routine tasks such as administering servers and operating systems and keeping automation systems up-to-date.

I don’t expect the change to be absolute. During the transition, and even long into the future, locally deployed systems will continue to find use in some libraries, even as others, which I anticipate will ultimately be the majority, depend on applications delivered through software as a service for their core automation and service delivery needs.

It will be interesting to watch this next phase of library automation take shape. A number of new products seem poised in the starting blocks; some have a head start, while others are still warming up. In a competition that will play out over the next decade, success will depend much more on the strength and appeal of the products and technologies than on the initial burst of energy off the starting line. Despite some distinct and interesting choices among the new slate of automation products and services, it’s clear that the pack will generally run steadily along a course leading libraries to deeper engagement with some form of cloud computing.


Marshall Breeding serves as director for innovative technologies and research at Vanderbilt University Libraries. He is the executive director of the Vanderbilt Television News Archive and the founder of Library Technology Guides. His email address is marshall.breeding@librarytechnology.org.
