Computers in Libraries
Vol. 28 No. 1 — January 2008
FEATURE
Building a Web-Based Laboratory so Users Can Experiment With New Services
by Jason J. Battles and Joseph (Jody) D. Combs

Librarians spend large amounts of time, money, and human resources to develop services and to implement new products that cater to users’ changing needs. Unfortunately, we as librarians aren’t so great at taking the time to find out what our patrons really want. Sure, some of us conduct surveys that broadly query users, or we put them in front of computers after web redesigns to get their impressions. But the considerable effort we’re investing in bringing new products and services into production demands that we involve our users in the development and/or beta-testing process. Otherwise, we end up with virtual paperweights that no one uses, and we’re left scratching our heads as to why.

Of course, we need to make some assumptions about users’ needs in order to present new services or products for them to assess. They’re not aware of all of the applications that are being developed for libraries. It’s our job to know what’s out there, and it’s our job to bring those products to them. However, this doesn’t mean that we should wait until we have purchased, developed, or finished testing products to ask them what they think. During the process of implementing a new web service or application, there are opportunities to reshape the final product. Getting the users’ input at this stage allows us to mold our new offerings into tools they find useful or to completely drop things that don’t work.

In this article, Jody Combs and I will discuss how we recognized the lack of user involvement in our development and marketing of web services. In addition, we’ll show you the web-based laboratory environment that we created so faculty, students, and others at our institutions could experiment with these preproduction services and, most importantly, tell us what they were thinking about them.

The two of us worked together at Vanderbilt University’s Jean and Alexander Heard Library from 2005 to 2007. Vanderbilt is a private research university with an enrollment of more than 10,000 students, about 6,000 of whom are undergraduates. Jody serves as the director of the digital library. During my time at Vanderbilt, I worked as a systems librarian for the library information technology services team that Jody leads. Well before I became the project lead to help turn the “library labs” concept into reality, Jody had recognized problems with the way we developed and introduced new services and the lack of end-user inclusion in the process.

Getting Users Involved

Over the years, I (Jody) had been involved with various strategic planning efforts for our library. These included not only researching what the broader library community was thinking but also spending a good deal of time talking with library staffers and users, faculty, students, and university staff. I recognized several recurring themes.

First, although we had introduced many new services over the years, too often our patrons simply didn’t find out about them. When we asked individuals or groups to suggest new services they thought would be useful, it was very common to have someone propose a “new” service that had been available for years. Whatever else this meant, we were obviously not promoting new services as effectively as we would have liked. We needed to find better ways to inform users about the library of the present and future.

Second, we found that we had often spent significant amounts of time, energy, and money developing services that we thought our users would love, only to realize that—even when they could find them—they didn’t use them very often. We needed to come up with ways to more carefully research our patrons’ actual work practices and needs before getting far down a development path on a new service—perhaps even before starting down that path. We also needed to have relatively continual feedback from our users as a service was in development. With this type of input, we could correct design errors and enhance services in response to end-user suggestions.

However useful this might have been, we faced a dilemma. We wanted input from our patrons, but they increasingly used our services remotely, often when the library wasn’t open. These users were somewhat like “dark matter”: we could infer their existence from usage logs, but we couldn’t see them. As a result, we feared that we knew less about remote users’ actual behavior than we required. We couldn’t directly observe them as they encountered problems. We couldn’t easily ask how to help or what new services might be useful. We needed a way to get feedback from end users whom we rarely or never saw, whose needs and expectations might be different from those who come through our doors. Focus groups, suggestion boxes, usage statistics, weblog analyses, and the invaluable experience of our public services staff all helped, but in the age of Library 2.0, we required more.

We also had only a very narrow window of time each year to introduce new services. We were hesitant to release a service until it was “perfect” or at least—as one seasoned library director aptly put it—“breaks only in predictable, logical ways.” There was even more resistance to introducing a new service midsemester.

So, we needed to find ways to deliver these services faster and to allow them to be released for comment before they were fully developed. That way, we could shape them in accordance with the needs and preferences of our end users. In order to temper patron and staff expectations, we also had to make it clear that such services were not quite production-ready.

Ask the User: A ‘Library Lab’

It was against the backdrop of these recurring themes that I began to think about how other remote service providers deal with similar issues. It occurred to me that Google Labs addressed many of them. The online labs environment allowed Google to test-market new services to remote users. Over time, the company could analyze which services actually had a market and which didn’t. If many users downloaded or accessed a service, there might actually be a market for it. If they didn’t, it could be quietly removed. 

The labs environment, by its nature, let users experiment (or play) with services without the expectation that they were necessarily fully developed or fully functional. This allowed Google to release services at an earlier stage of development, when they could still be shaped by end-user experiences and expectations. It also provided an ingenious marketing platform. Many users came to the lab just to see what was new. Finally, the labs environment allowed Google to introduce new services at any time, since users could try them out whenever they had the opportunity.

In 2005, as I took on the role of director of the digital library, I presented the idea of building a sister site for our library’s website. It would be a “library labs” site. We would try it out for a few years and see if it helped address some of the issues outlined above. We put together a project charge and appointed a project team to develop the site. I asked Jason Battles to lead the effort. The goals for the project were to create a site that would do the following:

• Showcase projects under development as well as under consideration.

• Provide a venue for beta-testing and refining the services that were nearing production.

• Actively solicit end-user feedback on both the usefulness and usability of collections and services.

• Solicit suggestions for new services.

• Provide a way to recruit members for focus groups and usability studies.

• Assist with marketing new services.

We wanted the site to project a lab-like image, allowing users to experiment with services that were clearly in the developmental stage. We also wanted low-barrier feedback mechanisms. Finally, while the site was clearly related to our digital initiatives, it was not our intent to restrict its use only to digital projects or services. We thought it could be extended to highlight any library service. The work for Jason and his project team now lay in making the concept a reality.

Jason Leads the Team That Builds the Library Lab

In addition to me (Jason), three other librarians were assigned to the library labs project team. They came from different areas within Vanderbilt’s library system. Molly Dahl was a cataloger, Jon Erickson worked in public services at the Stevenson Science and Engineering Library, and Jodie Gambill was an archives processor in the special collections library. While all three were very talented, they were not all technologists who were capable of churning out code and whipping up web applications at a moment’s notice. But each was able to bring a unique perspective and different skills to the project. This diversity was important in making the project successful.

Jon had a good sense of usability issues and great ideas about the look and feel of the site. He provided most of the graphics and images. Molly was able to bring new content items to the site and helped stretch the library labs concept to include informational topics on our web tools. Jodie coded the cascading style sheet (CSS) and all the XHTML for the site. In addition to leading the team, I wrote the PHP code that interacted with our site’s MySQL database.

With the goals for the project clearly spelled out, the team began deliberating in March 2006. During the initial meetings, we focused on developing a few prototypes for the site so that we could work out issues with how best to display the items we would feature. The final design consisted of defined sections for each featured application or service. These content pieces contained basic descriptive text, a screen shot, a link to an embedded comment box, and, in some cases, a separate page with additional information.

Since the projects, applications, and services that we hosted would change frequently, it was extremely important to have a site that was easy to update. Jodie was able to develop a layout framework that provided flexibility in terms of display as well as maintenance. Her table-free CSS design stretches to accommodate varying screen resolutions, and the use of server-side includes for each content piece makes updating the site a breeze. Jodie also worked to provide an RSS feed, which allows our feed subscribers to know when we have a new project available or an update to an existing item.

As for the very important user feedback portion of the site, we recognized the need to store user comments. For this purpose, we utilized MySQL. We chose this open source database because it has a strong community, is widely installed, and is easy to query. We used PHP to handle the interaction with MySQL. Again, I liked the strength of PHP’s open source community support and the way it integrated into our XHTML code. Having decided on our storage mechanism, we talked about the information we wanted to gather.

Aside from basic information, we would eventually record the project or application’s version number, the type of user, and the location of the user who was submitting the comment. Since some projects go through different versions prior to their initial release, we wanted to be able to associate the comments with the correct version so that if a future release addressed the comment, we would have that differentiation in the database.

We took a couple of approaches to getting the user type. If the project was large enough and had its own website, we used a comment box that contained an additional slot for user type. In our case, we defaulted to “visitor” but included “staff,” “undergraduate student,” “graduate student,” and “faculty” as options.
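
To make this concrete, here is a rough sketch of the sort of table and insert script involved. The table name, column names, and connection details below are illustrative stand-ins, not our actual Test Pilot code:

<?php
// Illustrative only: the table, column names, and connection details are
// made up for this sketch, not the actual Test Pilot schema.
//
// CREATE TABLE comments (
//   id              INT AUTO_INCREMENT PRIMARY KEY,
//   project         VARCHAR(100),
//   project_version VARCHAR(20),
//   user_type       ENUM('visitor','staff','undergraduate student',
//                        'graduate student','faculty') DEFAULT 'visitor',
//   location        ENUM('library','on campus','off campus'),
//   comment         TEXT,
//   submitted       TIMESTAMP DEFAULT CURRENT_TIMESTAMP
// );

$db = mysqli_connect('localhost', 'labs_user', 'secret', 'library_labs');

// Escape the user-supplied values before building the INSERT statement.
$comment = mysqli_real_escape_string($db, $_POST['comment']);
$type    = mysqli_real_escape_string($db, $_POST['user_type']);
$project = mysqli_real_escape_string($db, $_POST['project']);

// In practice, the location value would come from the IP check described below.
$insert = "INSERT INTO comments (project, project_version, user_type, location, comment)
           VALUES ('$project', '1.0', '$type', 'on campus', '$comment')";
mysqli_query($db, $insert);
?>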

We also mapped users’ IPs to determine their location since the library and campus IP ranges were known. I used some simple PHP scripting to take the requesting IP address and filter it through those ranges to tell us whether the user was in the library, on campus, or off campus. We were able to get additional information about users’ browsers, screen resolutions, etc., through Google Analytics.
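
The location check itself takes only a few lines of PHP. A simplified sketch, with placeholder address ranges standing in for the real library and campus networks, looks like this:

<?php
// Illustrative PHP: the address ranges below are placeholders, not
// Vanderbilt's actual networks.
function user_location($ip) {
    $addr = ip2long($ip);

    // Library workstations (example range only; check the narrower range first).
    if ($addr >= ip2long('10.1.0.0') && $addr <= ip2long('10.1.255.255')) {
        return 'library';
    }
    // The rest of the campus network (example range only).
    if ($addr >= ip2long('10.0.0.0') && $addr <= ip2long('10.255.255.255')) {
        return 'on campus';
    }
    return 'off campus';
}

$location = user_location($_SERVER['REMOTE_ADDR']);
?>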

Storing the comments and some additional information in a MySQL database meant that we could easily create an administrative side to the website so that project leaders, team members, and administrators could search the comments in a variety of ways. Jodie later added a printer-friendly mode for the results from the administrative page.
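
Behind the administrative page is little more than a SELECT statement against the comments table. A simplified example, reusing the hypothetical table and $db connection from the earlier sketch, might look like this:

<?php
// Illustrative query for the administrative side: pull every comment on a
// given project, newest first. Assumes the $db connection from the sketch above.
$project = mysqli_real_escape_string($db, $_GET['project']);

$sql = "SELECT submitted, project_version, user_type, location, comment
        FROM comments
        WHERE project = '$project'
        ORDER BY submitted DESC";

$result = mysqli_query($db, $sql);
while ($row = mysqli_fetch_assoc($result)) {
    echo htmlspecialchars($row['comment']) . ' (' . $row['user_type'] . ', ' .
         $row['location'] . ', ' . $row['submitted'] . ")<br />";
}
?>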

Having a flexible and updatable front-end design, a searchable administrative side, and a database back end to record it all still didn’t resolve the issue of having a low-barrier mechanism for users to give us their comments. What we didn’t want to do was take the user from our site to a separate form to fill out information. User comments are hard enough to get, and we wanted a solution that was integrated into the site. After the team looked at several options, I came across script.aculo.us. This open source JavaScript library provides visual effects such as fade in/out and blind up/down, as well as sortable lists.

I installed the script.aculo.us JavaScript library on our web server and worked on using the scripts to create a comment box within each content piece on our site. The trick is that when you go to the website, you don’t see the comment box until you click the “add comment” link. At that point, the script.aculo.us blind-down effect causes the hidden comment box to appear within our page. The users aren’t taken off to a form, and they can submit their comments from our site. This seamless integration of the feedback system created the low-barrier function we were trying to achieve. The only shortcoming was that when users entered their comments, they were temporarily taken to a comment-submittal confirmation page before being redirected to their original location on our site.
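
Stripped to its essentials, the “add comment” link and hidden box for one content piece look something like the following. The element IDs and file names here are illustrative, not our production code, and the page is assumed to already load prototype.js and the script.aculo.us effects:

<?php
// Illustrative content piece: $id, the element IDs, and submit_comment.php
// are made up for this example.
$id = 'primo';
?>
<a href="#" onclick="Effect.BlindDown('comment-<?php echo $id; ?>'); return false;">
  add comment
</a>

<div id="comment-<?php echo $id; ?>" style="display: none;">
  <form action="submit_comment.php" method="post">
    <input type="hidden" name="project" value="<?php echo $id; ?>" />
    <textarea name="comment" rows="4" cols="40"></textarea>
    <input type="submit" value="Submit comment" />
  </form>
</div>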

The site was nearing completion in late May 2006, and the team began showing it to various groups and committees throughout the library. We then conducted an open session on the new site in late June. Molly handled marketing our site to other Vanderbilt libraries so that we had a prominent presence on websites throughout the system as well as on Heard Library’s central website. It went live in July 2006.

Naming the site was the hardest part, but the team decided on “Test Pilot” since it embodied the nature of the site while also being an obvious play on words when combined with our Wright Brothers banner image.

Jason Moves On; Takes the Lab With Him

A year after leading the team through Vanderbilt’s implementation of Test Pilot, I took a new position at The University of Alabama Libraries as the head of its web services department. The University of Alabama (UA) is a public, student-centered research school with an enrollment of more than 23,000. Both the Heard Library and The University of Alabama Libraries are members of the Association of Research Libraries and the Association of Southeastern Research Libraries. My new job includes creating, experimenting with, and testing web-based applications and services, so it’s important to have a means of showcasing our projects and gathering user feedback. The framework that the Vanderbilt project team developed was a perfect fit for the needs of my new position.

The University of Alabama Libraries’ Web Laboratory (www.lib.ua.edu/weblab), which went live in November 2007, demonstrates the portability of the library labs model. Having a dedicated programmer on staff at Alabama enabled us to enhance the Web Laboratory by incorporating Ajax functionality into the comment-submission mechanism. With Vanderbilt’s Test Pilot, I was not satisfied with the way comment submission took the user temporarily off the site to a confirmation page. Now, thanks to the work of UA’s Will Jones, when a comment is submitted, the user gets a confirmation message and the comment box disappears via the blind-up script.aculo.us effect. The user never leaves the page.
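
In outline, the enhanced submission works something like the sketch below. The element IDs and the handler URL are, again, illustrative placeholders rather than UA’s actual code, and the page is assumed to load the Prototype and script.aculo.us libraries:

<?php
// Illustrative only: IDs and submit_comment.php are made up for this sketch.
?>
<script type="text/javascript">
function submitComment(projectId) {
  // Post the comment form in the background via Prototype's Ajax.Request.
  new Ajax.Request('submit_comment.php', {
    method: 'post',
    parameters: Form.serialize('comment-form-' + projectId),
    onSuccess: function() {
      // Hide the comment box in place and reveal an inline thank-you note,
      // so the user never leaves the page.
      Effect.BlindUp('comment-' + projectId);
      $('thanks-' + projectId).show();
    }
  });
}
</script>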

Because I continue to collaborate with my former colleagues at Vanderbilt, UA can share the Ajax code and any other enhancements to the general framework so that they’re integrated into Test Pilot.

The Findings of the Experiments

Vanderbilt’s Test Pilot project (http://testpilot.library.vanderbilt.edu) has enjoyed modest success during its 16 months of operation. Vanderbilt users are not finding the Test Pilot site at the rate we hoped. But the staff has received a good number of comments from users both on campus and off, and the university has been able to incorporate suggestions into its projects to make them more useful.

Reflecting on the usage of this framework, Jody and I recognize a couple of challenges to its greater success. First, we may not be effective at marketing our services. This is as true of this project as it is of others. Second, and closely related to the first, is that the marketing of Test Pilot is limited to the Vanderbilt libraries’ own websites. Those who use the library’s website are analogous to those who come through the physical doors. They represent a self-selected subset of our end users.

While usage of The University of Alabama’s much newer Web Laboratory cannot be determined as of this writing, it’s become clear that both Vanderbilt and Alabama need to find ways to embed this service where the users are rather than expecting them to come to us. We are currently researching ways to expose this service, and others, in nonlibrary locations, such as social networking sites and other student resource sites. What seems clear from Vanderbilt’s experience is that this service is one part of what must be a multifaceted approach to address the needs and expectations of the current generation of users.

Vanderbilt’s Test Pilot and Alabama’s Web Laboratory have featured the following items:

• Vanderbilt’s beta test of Ex Libris’ Primo search-and-discovery interface

• The Heard Library’s web refresh

• UA’s complete web redesign, beginning at the prototype stage

• Firefox search plug-ins for Vanderbilt’s Acorn catalog and UA’s Libraries’ Catalog

• LibX toolbars customized for each university

• New blogs at Vanderbilt for trial databases and leisure reading

• Informational comparisons between the EndNote Web and Zotero citation tools

• UA’s new staff directory

The items hosted on the sites vary in magnitude and scope, but each of them impacts all or at least a portion of our users. What you choose to display is really determined by what guidance you want from your users or what information you want to convey.


Jason J. Battles was a systems librarian at Vanderbilt University from 2005 to 2007. He is now the head of the web services department at The University of Alabama Libraries in Tuscaloosa. He holds an M.A. in history and an M.L.S. from The University of Alabama. His email address is jjbattles@ua.edu.

Joseph (Jody) D. Combs has worked at Vanderbilt University’s Heard Library for the past 14 years and has served as the director of the digital library for the past 3 years. He holds an M.A. in religious studies from the University of Virginia and an M.A. in religion from Vanderbilt University. His email address is jody.combs@vanderbilt.edu.

