Computers in Libraries, Vol. 24 No. 3 — March 2004
FEATURE
Turning Patrons into Partners When Choosing an Integrated Library System
by Terry Ryan

We knew that it would be challenging to involve library patrons in system selection in any substantive way, but to us, it was crucial.

I work at UCLA, which has one of the top 10 research libraries in the country, with almost 8 million volumes and over 1 million circulation transactions a year. So our size and complexity challenge the capabilities of integrated library systems. We had been using Taos (originally from Data Research Associates) since 1996. Then in 2001, Sirsi Corp. announced that it was freezing development on Taos software. As the associate university librarian for information technology at UCLA, it was my responsibility to lead the way through the process of selecting a new integrated library system.

The Taos implementation had been slow, the system was still not complete, and our patrons needed assurance that the selection we made this time would lead to a smoother transition. The UCLA Library Executive Committee, made up of the university librarian and the associate university librarians, was committed to seeking input from faculty and students in developing the Request For Proposal (RFP) and in evaluating competing vendor systems.

We knew, though, that it would be challenging to involve library patrons in system selection in any substantive way. Including a representative user or two on the evaluation team requires a level of commitment that few faculty members or students can afford. Also, their needs vary so much by discipline, research interest, and level that the views of one or two cannot represent the user community as a whole. But to us, involving faculty and students in the process was crucial. In this article, I want to share the techniques that we found most effective in turning patrons into partners.

Defining the Selection Criteria: What Are the Key Differences?

To select from a small subset of possible ILS packages, all of which perform the same basic functions, we needed something more targeted than a traditional needs analysis. Asking patrons to identify core functions was unnecessary, since all of the vendor systems share the same core functions. Asking patrons to create a "wish list" of new and innovative functions would not help us choose among the vendor systems, because none of them would support those capabilities. Instead, we decided to first identify how the systems differed from each other, and then to get input from our patrons on which differences were most important to them.

We charged an Overview Team of library staff to review potential systems and report on the key differentiators among them. The six team members--three librarians, two senior staff managers, and one programmer/analyst--had overlapping skill sets that included expertise in acquisitions/serials, cataloging, circulation, collection development, document delivery/ILL, financial issues, indexing/searching, non-Roman scripts, OPAC interface issues, remote storage processing and paging, and technical/system issues.

The Overview Team identified the four ILS vendors that had university library customers comparable to UCLA in size and complexity. Each of the four was invited to give presentations to our staff, and Endeavor Information Systems, Inc., Ex Libris (USA), and Sirsi Corp. agreed to do so. Then our team visited customers of those three vendors to find out how the systems compared in actual use. The team's report, appropriately entitled "No Perfect System," highlighted the features that varied among the packages.

Weighing the Criteria: What Differences Matter?

We then needed to decide which of the differences among the systems were important enough to form the basis of our evaluation and selection process. The Overview Team was charged with collecting the library staff's feedback on that issue. It was my job to find an effective way to gather user feedback.

As a first step, the Library Executive Committee recruited an advisory group of faculty and students, known as "Functional Sponsors," as recommended by the UCLA IT planning process for all IT projects with campuswide impact.[1] The Functional Sponsors were invaluable to us, though we recognized early on that they could not be surrogates for all patrons. The faculty members, especially, did not feel qualified to answer questions on behalf of all their colleagues. Instead, they could advise us on how to reach their colleagues, what questions they were likely to answer, and how to get the maximum information from them.

The Functional Sponsors' first task was to help us create a user survey on the key differentiators among the systems. When I showed them the first draft, it was immediately clear that my attempt to eliminate jargon had been unsuccessful. Their reactions also told me that we had to change more than the words--we needed to make the questions match the way that faculty and students think about the catalog.

In my draft, I had divided the searching questions into keyword search and heading/browse search sections. Not surprisingly, the Functional Sponsors suggested amended wording, such as changing "keyword searching" to "Searching by words in titles, authors, subjects, etc." More substantively, though, they also found the two categories confusing. They suggested that the questions be organized by title searches, author searches, and subject searches, with subsets under each for keyword and heading questions, since that reflected the way they approached searching. The group also helped me craft the phrase "search combinations" as an acceptable way to ask about Boolean searching.

Based on this feedback, I also deleted some sections of the survey altogether. Questions about number searches such as ISBN and ISSN were considered "rather obscure," and a question about having labels in Subject Browse lists to indicate the kind of heading (LC, MeSH, or LC Children's) proved impossible to ask in a form that wasn't either confusing, arcane, or both. Some questions really didn't need to be asked, such as whether it was important to have search results returned in some logical order. As one of the Sponsors said in an e-mail to me, "Would anyone really choose to get results out of order?"

The Functional Sponsors also showed me that it was not enough to ask questions only about the subtle areas of difference between the systems. I needed to address the basic functions, if only to show that we had not forgotten them, and I needed to acknowledge the problems with our current catalog to demonstrate that we knew that they must be addressed in the new system.

Finally, the Functional Sponsors noted that with 22 questions, some of which were compound and complex to answer, the full survey would appear quite long to many faculty and students. To improve response rates, they advised me to put the questions that we most wanted answered at the beginning of the survey and to let respondents skip the rest of the questions unless they were interested. They also encouraged me to put the basic issues first and to include an open-ended question early, enabling people to talk immediately about those things that mattered most to them. The final survey (http://www.library.ucla.edu/new-orion/appendix_a.html) began with three demographic questions about the respondents, asked three questions about basic functions, and then offered the option to go directly to question 22 to complete the survey.

Writing the RFP: Articulating the Differences

We received 770 responses to the survey from people reflecting a broad range of disciplines and academic status, for a response rate of 4 percent. Though we realized that the instrument and the sample weren't rigorous enough for drawing firm conclusions, the survey results were revealing,[2] with some findings we expected and some we did not. It was no surprise that 75 percent of respondents rated "Combination searches such as combining author and title" as crucial. Less predictable to us was that 54 percent rated "Ability to re-sort search results in author, title, or date order" as crucial. We were also somewhat surprised to find that navigation (the ability to move easily through a search results list) was crucial to 72 percent of respondents, while tools for refining searches (such as limiting) were crucial to only 62 percent. Clearly, both retrieval and navigation mattered, but we could not assume that patrons would sacrifice ease of use for search power.
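As a rough, back-of-envelope illustration (a hypothetical calculation; we reported only the response count and the rate above, not how many people the survey announcement actually reached), those two figures imply an audience on the order of 19,000 people. A few lines of Python make the arithmetic explicit:

    # Hypothetical sanity check on the survey's reach, assuming the 4 percent
    # rate was computed against everyone who received the survey announcement.
    responses = 770          # responses received (reported above)
    response_rate = 0.04     # 4 percent (reported above)
    implied_recipients = responses / response_rate
    print(round(implied_recipients))   # prints 19250, i.e., roughly 19,000 people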

Once we had the user survey results and the library staff feedback, we were ready to start our formal procurement process. In 2002, the Executive Committee appointed an Evaluation Team to write the RFP and to evaluate vendor responses. The Evaluation Team included five of the six members of the Overview Team, plus two new members: the head of a departmental library, who brought management and public service expertise, and me, as a member of the Executive Committee and chair of the Functional Sponsors.

The Functional Sponsors were again a good sounding board as the Evaluation Team finalized the RFP. The full group reviewed the text and made suggestions for changes. The most substantive change was to bring all of the interoperability requirements into a single section, to emphasize how important it has become for an ILS to integrate into a wider technology landscape. A subset of the Functional Sponsors group, made up of the campus CIO and a faculty member with strong database expertise, served as technology advisers and strengthened the technology section of the RFP by adding detailed questions about database structure and design.

The user input helped us create a substantive and probing RFP. All of the vendors who responded to it commented on the depth and complexity of the information we requested. Our next challenge was to assess that information.

Evaluating the Candidate Systems: What Matters to Patrons?

Three vendors responded to our RFP: Endeavor, Ex Libris, and Sirsi. In addition to the traditional elements of evaluation--review of the vendor responses to the RFP, verification of the responses through visits and calls to existing customers, demos by the vendors, and follow-up questions to the vendors--the Evaluation Team worked with the Functional Sponsors to add patron input to the process. We invited faculty and students to attend vendor demos but found, as expected, that few were able to attend. Clearly, we needed to pursue other methods for gathering faculty and student assessment.

Our patron partners pointed out that some of our survey questions were 'rather obscure.'

The Functional Sponsors suggested a very productive method, something we called "test drives." We created a Web page with links to the online public access catalogs (OPACs) of three library customers for each of the three vendors. After taking a test drive of any of the nine sites, an assessor was directed to an online survey form to give us his or her opinion of the system in action. We received more than 300 responses. The input did not show a clear favorite among the systems--in fact, for every user who raved about an interface, there was another who hated it--but the reasons cited for liking a system were very revealing about which interface and search attributes mattered to our patrons. In addition to helping us assess the vendors, this input will be valuable when we configure our new integrated library system.

Another good source of input was having faculty and graduate students query their colleagues at other universities where the three vendor systems were in use. Talking to other librarians is a time-honored and effective way to gather information about the strengths and weaknesses of a system, since colleagues will often speak candidly about their experiences. The same is true when faculty ask faculty, or grad students ask grad students. As with librarians, faculty and student perceptions of a new system are colored by the system they used before. Allowing for that bias, the perceptions of patrons who use a particular system are invaluable--direct, unfiltered, and based on their daily needs.

Our final method for gathering user input was holding a series of focus groups to discuss trade-offs among the three systems. No vendor system has only strengths, so every selection involves weighing the relative importance of strengths and weaknesses. The Evaluation Team distilled the major trade-offs and reviewed them with the library staff and patrons. For the user review, we hosted a faculty and student focus group where we demonstrated the differences that we believed would have the most impact on them. We invited everyone we had heard from during the overview and evaluation process and, on the advice of the Functional Sponsors, we offered lunch as an incentive. The group turned out to be small, only 10 people, but the format allowed us to review the issues in detail. The feedback we got in this process confirmed earlier indications that many faculty and students value the ability to navigate a search result more highly than the ability to craft a flexible search.

What Would We Do Again?

All of these consultation and outreach methods helped us raise campus awareness of the ILS selection process and gave faculty and students a way to feel engaged in the decision. We would probably repeat all of them for that purpose alone. Some of the methods, though, yielded input that had particular impact on our decisions and approach.

• The Functional Sponsors were excellent partners, and we would recommend such a faculty and student advisory group to any university library that's evaluating and selecting a system. To be effective, the members of such a group should be active library users who care about the outcome, should reflect a mix of disciplines and areas of expertise, and should be sufficiently experienced to bring perspective and credibility to the group.

• Tapping faculty expertise on technology and database design was very helpful, and we would seek such advice again. To be most effective, such an expert should bring a mix of scholarly and real-world perspective. We were fortunate to have a faculty member who had extensive consulting experience with corporate clients as well as impressive scholarly credentials.

• Having faculty and students test drive the systems deployed at other institutions was a wonderful way to involve patrons in concrete evaluations. To be more effective next time, we would structure the feedback survey more tightly and be more aggressive in recruiting test drivers.

• Having people conduct "reference checks" among their colleagues at other universities was very helpful. We asked only the Functional Sponsors to participate in this process. To be more effective, we would want to recruit more faculty and graduate students to get a broader set of comments.

Understanding Patrons' Views Influenced Our Selection

This assessment process reaffirmed for us that involving patrons in system selection is both crucial and challenging. By focusing on the real differences among the systems and asking faculty and students questions that reflected their approach to the catalog, we were able to elicit thoughtful feedback. With a committed, talented user advisory group and a willingness to see the system selection process through their eyes, you, too, can turn your patrons into valuable partners in the system evaluation and selection process.

 

 

References

1. See the UCLA IT Process at http://www.oit.ucla.edu/CommonDocuments/Process/Project_Development_Flow_Diagram.pdf.

2. Full survey results are at http://www.library.ucla.edu/new-orion/survey_report.html.

 


Terry Ryan is the associate university librarian for information technology at the UCLA Library in Los Angeles, where she has served in increasingly responsible positions related to library IT since 1985. During her 34-year career as a librarian, she has participated in selecting and implementing four different integrated library systems. She holds a B.A. in history from Stanford University in Palo Alto, Calif., and an M.L.S. from the University of Washington in Seattle, and has taken courses toward a degree in computer science. Her e-mail address is tryan@library.ucla.edu.