The Usability Toolbox
by Andrew K. Pace
Usability. Next to electronic
content management systems and institutional repositories, this one just might
be the biggest panacea sweeping the library world. If you had asked me my opinion
of usability 18 months ago, I might have said: "If someone doesn't build it,
no one can use it. How's that for usability?" Then I embarked on a writing
project for Library Technology Reports on building better library Web
services ("Optimizing Library Web Services: a Usability Approach," LTR,
v. 38 no. 2, Mar./Apr. 2002). The more research I did, the more I kept coming
back to usability procedures, testing, and best practices. Before I knew it,
I was writing a 28,000-word "article" on usability approaches to optimizing
library Web services.
Soon, my initial glib reaction to usability was supplanted by realizations
about how important the practice is for good library Web services. But I was
equally struck both by how simple some aspects of usability engineering can
be, and by how what we think passes for usability can really be detrimental
to sophisticated service offerings. I'd like to capitalize on this month's
theme to point out some of the usability tools that folks might not know about,
and then take a moment to draw your attention to what often passes (unfortunately)
for usability in a library setting. (You can look to the rest of this issue
for a definition of Web usability, and proof, at least to some extent, of its value.)
Is There a Web Engineer in the House?
I bristle a bit when I hear the phrase "usability test." This is often used
too broadly to describe what is actually a suite of tools more commonly referred
to among experts as usability engineering. If you look in the usability toolbox,
the test itself is just one tool at your organization's disposal. Though the
notion of administering an actual test is less daunting than most libraries
might think, a quick look at the toolbox might expose several usability engineering
techniques that are either already informally administered in your library,
or that might be easier to undertake than formal usability testing.
Tool #1: Participatory Design
Participatory design brings developers, frontline librarians, and end users
together to design a digital service solution. This method is distinctive in
that it directly involves one or more members of the design team itself. In
a library setting, designers are likely to be involved anyway, but they should
use caution when assessing usability themselves. Their feedback can prove useful
in the beginning stages of a project, but the closer the participant is
to the product, the less likely it is that he or she can provide unbiased criticism.
Another important distinction of this tool, which sets it apart from usual
practice in libraries, is the early involvement of users. Many library development
strategies include various stakeholders, but users are rarely included in the
process during the early stages. Libraries are more likely to pursue user input
from focus groups.
Tool #2: The Focus Group
This is the method most often employed in the library community. Libraries
know their users well, which is a distinct advantage over the small Web start-up
trying to build a useful product and gain market share at the same time. Focus
groups are small group discussions, moderated by a trained facilitator. Though
focus groups might not raise awareness of actual user behavior, they can certainly
determine users' attitudes and beliefs about a new product or service; they
can even be used to gauge reaction to a prototypical service. Moderated control
of the focus group meeting is essential to avoid it turning into a 2-hour gripe
session. Libraries should also consider some form of compensation for focus
group participants: a free lunch, a cool gadget, or even a $5 copy card might suffice.
Tool #3: The User Survey
Libraries also excel at implementing surveys. Unfortunately, the energy expended
often seems greater than the benefit reaped from survey data collection. The
results are good for generalizing from small samples to a larger population.
Like the focus group, surveys do not add much to the evaluation of user behavior
but can represent a useful forum for submitting experiences and attitudes.
A well-marketed survey encourages users to think of their responses as a wish list.
Tool #4: The Individual Interview
The individual interview serves as a good follow-up to a survey, although
like the survey, the interview tells you little about actual user behavior.
Interviewers should rely on a script, although the interview should be thought
of as a conversation. The interview can take place face to face, on the telephone,
or even online via chat software.
Tool #5: The Contextual Interview
This type of interview is actually more akin to the usability test than to
the individual interview. Much more natural than a formal usability test, the
contextual interview takes place in a setting with which the user is familiar,
such as an office or computer lab; there, the interviewer observes and listens
to actual user behaviors. The dialogue can be informal, as long as purely qualitative
results can be usefully applied afterward. The contextual interview sheds light
on several aspects that might remain hidden in a formal usability test, like
modem speeds, physical space limitations, browser preferences, and the like.
While maintaining an informal air, interviewers should make careful notes either
during the session or immediately after it.
Reference librarians know all about the contextual interview; it's their
stock in trade. Often dismissed by systems librarians and administrators as "mere
anecdotal evidence," user interaction at a reference desk, formally collected
and presented as a contextual interview, can greatly enhance online services.
Next to the usability test itself, the contextual interview is probably the
best tool in the box.
Tool #6: Prototype and Walk-Through
There's a prominent phrase in the library IT world that easily defines the
prototype: "Wanna see something cool?" Everyone wants to see something cool,
and for those who make a living out of looking for, writing about, and showing
off cool things, the Web offers a permanent escape from boredom. (Unfortunately,
the "quick fix" and instant gratification are byproducts of this Internet era.)
Though there's nothing wrong with "cool," it should not be used synonymously
with "good." The practiced conveyor of cool will seek out like-minded coolness "experts," who
gather and bestow coolness on each other. Often, this grass-roots method passes
for prototyping a new service.
In reality, when you find or create something cool and want to share it,
the closest observer will usually do. To put cool to the test, use simple prototypes
in conjunction with other tools in the usability toolbox, such as interviews
and focus groups. Sometimes, early prototyping can kill a senseless project
before it starts, or redirect it toward something more useful, either putting
cool in context, or evolving something truly cool into something cool and useful.
Tool #7: The Card Sort
The article I wrote for Library Technology Reports goes into a lot
of detail on card sorting for interface design, and given a little more time
to research it and actually conduct one, I might become convinced that it's
useful. But you know what? I really think that this is one of those tools that
sounds good theoretically, but in practice is actually pretty useless. HTML
is an easy enough tool for prototyping, and does not seem so much like one
of those Psych 101 experiments from college. So as far as I'm concerned, card
sorts are sillyskip to the next tool.
Tool #8: The Usability Audit
One way to seek quantifiable data on a Web site's usability is through systematic
evaluation. One type of audit is simply expert evaluation. The problem in libraries
is that not everyone is an interface expert, but librarians often assume everyone's
opinion should be given equal weight. Although this tendency is explained by
human nature and kindness, it nevertheless defies logic. As my favorite reference
librarian once put it, "Not everyone's opinion matters." My rule of thumb:
Any opinion about a product's or service's general inadequacy must be accompanied
by suggestions for making it better in order for the criticism to be taken seriously.
Another type of audit compares general standards. With this method, the design
of the site is compared with a known body of standards either already established
or created by the organization itself. Compliance with the laws of the Americans
with Disabilities Act, for example, might necessitate comparing the HTML content
of a site with the standards that remove access barriers for people with disabilities.
[Editor's Note: See Cheryl Kirkpatrick's article on the relationship
between usability and accessibility on page 26.]
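To give a flavor of how a standards-based audit can be partly automated, here is a minimal sketch (my own illustration, not a method from the article, assuming Python's standard library) that checks one small, mechanical slice of accessibility guidelines: flagging images that lack the alternative text screen readers depend on. A real audit would, of course, check many more rules than this.

```python
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Flags <img> tags with no alt attribute -- one small, automatable
    check drawn from standard Web accessibility guidelines."""

    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag being opened
        if tag == "img" and "alt" not in dict(attrs):
            self.violations.append(self.getpos())  # (line, column) of the offending tag

# Hypothetical sample page: one compliant image, one violation
page = """<html><body>
<img src="logo.gif" alt="Library logo">
<img src="spacer.gif">
</body></html>"""

auditor = AltTextAuditor()
auditor.feed(page)
print(len(auditor.violations))  # prints 1: the second <img> has no alt text
```

Checks like this complement, rather than replace, expert evaluation: a script can confirm that alt text exists, but only a human can judge whether it is meaningful.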
And finally, there are heuristics. Simply put, applying heuristics involves
using general knowledge gained by experience; in layman's terms, a rule
of thumb. A combination of the first two types of audits, heuristics for the
Web involve the systematic inspection of an interface to determine if the site
design complies with recognized usability principles. This explanation does not necessarily
mean that an expert must evaluate your site, but using existing heuristics
developed by professionals might help you. Jakob Nielsen's list of heuristics
is probably the best known (http://www.useit.com/papers/heuristic/heuristic_list.html).
[Editor's Note: For another tool for grading your site's usability,
see the HEP Test in CIL's Nov./Dec. 2002 issue.]
Tool #9: The Field Study
A field study puts the product or service in its natural environment. Think
of a field study as a contextual interview combined with prototyping, usually
in the late stages of development. The Web decreases the need for field studies,
since the Web interface and many environmental variables can be reproduced
elsewhere. More interesting, though, is the field study's evolution into the
beta test. (Interesting that these are not called omega tests, since they are
usually a product's last stage before being foisted on the public.) In my opinion,
a formalized visit, scripted questions, and quantitative results from a field
study are much more helpful than the try-this-and-tell-us-what-you-think approach.
Tool #10: The Usability Test
The most recognized, and most misunderstood, method of conducting usability
engineering, the actual usability test is one of the most precise tools in
the toolbox. This issue of CIL has undoubtedly given broad treatment
to the usability test itself, but here's a quick recap to justify why testing
is worthwhile:

- Testing ultimately saves the user time and saves the organization time and money.
- Testing counters the whims of designers.
- Testing is good public relations for the organization offering the service.
- Testing settles disagreements among design team members, whether stated or unstated.
Combating Faux Usability
I hope that most people reading this will notice that they are already doing some
form of usability engineering. Failure to recognize true usability methods
can lead to two dangerous bastardizations.
Usability Demonstrations: Without a basic knowledge of usability,
less formal methods can easily become part of a library's Web culture. One
such method is what I call the usability demonstration: the designer or design
committee of a service shows you how the new online service functions.
The usability demonstration is marked by phrases
such as the following:
"This link will pop up some kind of useful information that someone
else is writing ..."
"Picture that bottom part of the screen you just saw with the top-left
part of the screen that everyone sees now ..."
And the best: "This is all fully customizable ..."
Vendors are the real masters of usability demonstrations. The good ones can
click a Back button on a Web browser faster than you can say "404." Closer
to home, the in-house demonstration is the sole opportunity not only to demonstrate
the new service, but also to summarize the entire process of the design: justification,
purpose, usability, longevity, the list goes on. Demonstrators might find that
they are showing how a product will be used, defending why it is needed, and
fighting a new focus group all at the same time. At this stage of development,
reliability often suffers as designers enter a second phase of testing: usability
consensus.
Usability Consensus: The usability demonstration gone awry, usability
consensus can also be thought of as reverse-engineering, which is taking the
finished product and working backward through its creative stages to justify
its existence and its application in the library. The service designer usually
blames librarians' negative reaction to new services on fear of change, lack
of understanding, overly empathetic connections to users, or worst of all,
technophobia. The last resort of a designer defending a new product or service
is to say, "It's better than nothing."
On the other hand, even if blame is aptly placed on librarians, and even
though something is better than nothing, the service designer has no
data on which to base the call for consensus. Hours spent coding, high levels
of administrative support, and the admiration of a few key peers might win
the day, but this after-the-fact attempt at service acceptance cannot compete
with qualitative and quantitative data provided by actual users.
If usability engineering is indeed a panacea, it's sort of like a cycle of
antibiotics for an infection that will never go away. The list above is not
a step-by-step approach; nor is it a prescription that will make everything
OK in the end. Making services more usable is an endless circle of trial, error,
success, and feedback. Making usability engineering a transparent part of the
library development culture is a worthy goal.
Andrew K. Pace is head
of the systems department at North Carolina State University Libraries. His e-mail
address is firstname.lastname@example.org.
You can also reach him through his Web site at http://www.lib.ncsu.edu/staff/pace.