Vol. 33 No. 5 — June 2013
FEATURE
Overcoming Our Habits and Learning to Measure Impact
by Moe Hosseini-Ara and Rebecca Jones

Why do we struggle so much to define and capture measures that convey to decision makers the value of our libraries’ services and programs to our communities, campuses, corporations, or organizations? Let’s be honest, there are a number of reasons for our struggle—all of which we can address and, most importantly, solve. First, we have to face the reasons, frame them as problems, and implement the solutions. There are five basic problems, and all we have to do is resolve them:

1. Libraries do not set targets for their measures.

2. There’s not enough understanding of stakeholders’ value measures.

3. Measures are not viewed as an integral element of services or programs.

4. Value measures are not differentiated from operating measures; outcomes are confused with outputs, which confuses everyone.

5. There’s no clear responsibility for managing measures.  

Problem 1: No Targeted Measure

Think about the meaningful measures in your life that tell you what you need to know. They convince you that whatever you are investing has value. You are thinking, “Hmmm … what kind of measures are we talking about?”

Well, your weight for one. You have a target weight, correct? Sure you do. You invest in reaching this target with the food you eat (or don’t eat) and the exercise you do (or don’t do). It may not be an exact number, but you know your targeted range. What determines your target? Usually you want your weight to align with your health, clothes, lifestyle, and perhaps your doctor’s advice. You decide if your calorie intake and exercise program are reaching your target, delivering the impact you desire.

Or, consider a charity, association, or program you support. You may not think of what you want from these organizations as a target, but that’s really what it is. Those in public libraries often invest their time and money in library associations because their target is government or societal support for public libraries. The association provides specific reports, advocacy programs, and marketing materials for member libraries to customize and use. The impact for individual public libraries is that it is easier, and ultimately cheaper, for them to market their services, generate local support, and retain government funding.

In both of these cases, the targeted impact is clear. It’s much easier to hit the target when you know what the target is.

What positive impact do you want your library to have on a community segment, or on a specific group of students or faculty? An impact moves something or someone. Libraries exist to have a positive impact on people’s lives, yet many are not clear on the targets they are aiming for. So, what positive influences do you want your programs or services to have on different groups of people—and on your funders or decision makers?

To answer these questions, you must know what impacts are important for these people. Funders want impacts that make voters happy and show voters that tax dollars are directly benefiting them; if one of the library groups is for parents of preschoolers, funders may want to see an impact that makes these busy constituents’ lives a bit easier and that makes their children happy and successful in school.

With these people in mind, consider a public library preschool program. Your targeted impact for funders is that those preschoolers (children of voters) participating in library programs are school-ready for kindergarten; the targeted impact for the preschoolers’ parents is that their kids enjoy the program, are more confident entering kindergarten, and perform well in class.

Our colleagues in the nonprofit sector have long used the logic model to define, manage, and convey impact measures. The logic model is, in a word, logical. It is based on the premise that if the library provides this program, then the impact will be that those people participating in the program will have specific skills, awareness, or know-how as a result.

Senior management of colleges is concerned that first-year students drop out, or do not perform well academically, because they are overwhelmed in the first semester. The college library designs and implements a service that assigns every first-year student a personal librarian who offers to simplify the library for the student. The library’s targeted impact is that 25% of the first-year students who stay and do well cite the library as a contributor to their first-year success. The target rests on a simple premise: If library staff members reach out to first-year students personally, rather than as a large amorphous group, then these students will see library staff as people who genuinely care about their personal success, and the students will be more likely to consult library staff and use the library’s services and resources for their assignments. The library has a definite target, and that target is aligned with decision makers’ goals.
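To make a target like this concrete, the library also needs a simple way to check it against the data it gathers. The short Python sketch below is illustrative only; the survey counts and the function are hypothetical, and the 25% threshold is the only figure taken from the example above.

```python
# A minimal sketch, assuming the library surveys successful first-year students
# and counts how many cite the library as a contributor to their success.
# The counts below are made up; only the 25% target comes from the example above.

TARGET_SHARE = 0.25  # targeted share of successful first-year students citing the library

def target_met(successful_students, cited_library):
    """Return the observed share and whether it meets the target."""
    if successful_students == 0:
        return 0.0, False
    share = cited_library / successful_students
    return share, share >= TARGET_SHARE

share, met = target_met(successful_students=800, cited_library=230)
print(f"{share:.0%} of successful first-year students cited the library; target met: {met}")
```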

One of the key solutions to easing the struggle of conveying meaningful measures to decision makers is to clarify how they define “meaningful.” Then the library needs to set its sights on those few impacts that are meaningful for the decision makers. And the only way to understand what stakeholders see as meaningful is through conversations—conversations about their goals, their challenges, and their focus.

Problem 2: Not Beginning the Conversation With Stakeholders

Who are your stakeholders, and what do they value? If you’re unsure, then you need to spend time building relationships. Don’t wait until budget time or until the library needs more funding. By then, it is too late. To ensure that you are aligned with your stakeholders, you need to identify them, talk with them, and develop an understanding of who they are and what they value as meaningful. Building any good relationship requires patience and trust; in this case, it requires some research and lots of conversations.

Your stakeholders can be individuals or groups (see the sidebar). They can directly or indirectly be involved in determining your budget and providing support for your existence. Is there a council, a board, or a committee that impacts these decisions? Who sits on these governing bodies? You will also need to find out who influences these people, and who are the key opinion leaders in your larger environment. This may not be as easy, or as obvious, as you think. In order to find out who the influencers are, you need to observe relationships and group dynamics. In addition, you need to determine and understand your stakeholders’ goals and objectives, how they define and measure value, and how they communicate it.

To that point, you must communicate in their language. For example, in a public library setting you may call end users “customers,” but when speaking to municipal officers you may want to call them residents. When speaking to local politicians, you may want to refer to them as constituents, voters, or taxpayers. It’s all about ensuring that they understand your value in their own terms, within their own context, and relate it to what is important to them. The only way to do this is through continuous contact and communication, well in advance of any type of reporting or requests for funding support. This problem is solved through building relationships, which starts with conversations. It results in common understanding, trust, and mutually beneficial, aligned targets.

Problem 3: Not Building the Measure Into the Design

Here’s a common scenario: A library has a great idea for a service or program for a particular user group. Significant staff time, resources, and supplies are invested in developing the idea into the service, marketing it, training staff in its delivery, and counting the people using the services or attending the program. Sound familiar?

The problem is that the targeted outcome and impact have not been defined in the design or early in the development. We spend all our design and development time on the how question rather than the important why and what questions. Why are we investing in this service or program? What do we want to “come out” of this service for users? What impact do we want this program to have on the community of users? What is the difference between this service and other services? Does this service complement or compete with other services? What will success look like for this service? Why will this success measure be valuable for the users, our stakeholders, and us?

By asking these probing questions at the idea stage of a new initiative, libraries set themselves, and the proposed service, up for success. These questions help the library quickly determine the following:

  • If the initiative is duplicating or cannibalizing another service (either a library service or one offered by partner organizations)
  • Whether they know enough about the users for whom the service is intended
  • The desired outcomes and impacts

It’s very difficult for libraries to effectively and efficiently pull together the right inputs until they are clear on what the outcomes are—i.e., what is to come out of using those inputs. This is where the logic model comes in handy, allowing you to make decisions that start with the end in mind.

Problem 4: Confusing Operating Measures With Value Measures

Many libraries are adept at gathering all sorts of statistics—circulation, door counts, website visits, articles downloaded, databases searched, programs offered and attended, and so on. And we’re really good at reporting those numbers to our stakeholders—boards, provosts, and management.

What we are not good at is relating those numbers to value or impact. Why does it matter if the program visit numbers are up and the article downloads are down? Are these numbers showing our value? Do they demonstrate why we need to exist and why we should continue to be supported and funded? Have we determined in advance, with stakeholders, what success is going to look like and how these numbers indicate success?

There’s no question that these statistics are important. But they are only one piece of the value pie. There are three pertinent pieces of that pie: operational statistics, customer/user satisfaction metrics, and value/impact measures. When these three measures are combined, they create the value sweet spot and convey a compelling story of the library’s impact.

Operational statistics are excellent for determining resource allocation, efficiencies, and budget. If you know that your institution is busier during the morning and not as busy in the evening, then you can schedule staffing and hours accordingly.

Satisfaction measures tell us if our users are happy with how we are allocating our resources and how we do what we do. Once we have used the operational data to determine resource allocation, the satisfaction results tell us what improvements are required.

How Should You Measure the Success of Your New Job Skills Program?

Let’s look at how the logic model works using the creation of a job skills program for a public library. This is a program that is offered by many public libraries, but how many libraries actually report or communicate beyond the output stage? Typically, what is reported is that the library offered a series of five job skills workshops, and the total attendance at these workshops was 124 participants. Is this a measure of success? Is this the reason why the program is being offered? Are you simply trying to collect numbers? The numbers may sound impressive, but what difference did the program, and ultimately the library, make in the lives of the attendees? What impact did the library have in the community? The place to start is with the outcomes and impact. What difference do you want to make?

Using this chart, consider the difference in how the library, and the success of the program, are perceived when what is reported to your stakeholders is the outcome and impact rather than the outputs; a short sketch pulling the chart’s four pieces together follows it.

Job Skills Workshop Performance Sheet

Input

  • Library staff
  • Funding
  • Computer lab
  • Supplies

Output

  • Creation of 2-day job skills workshop
  • 5 sessions offered
  • 124 participants attended
  • Creation of a job skills resource guide

Outcome

  • Participants reported increased confidence in writing resumes and attending interviews
  • Participants reported that they were successful in landing interviews
  • Participants reported acquiring jobs as a result of having attended the workshops offered by the library

Impact

  • Reduced unemployment rates
  • Increased quality of life for residents in the community
  • Reduced reliance on social services offered by municipality
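One way to keep all four parts of the chart in view, rather than letting outputs stand in for the whole story, is to record them together as a single program record. The Python sketch below is a minimal illustration; the structure and key names are assumptions, not a standard schema, and the values simply restate the chart.

```python
# A minimal sketch, assuming the library keeps its logic model as a simple record.
# The keys and structure are illustrative; the values restate the job skills chart.

job_skills_program = {
    "inputs":   ["library staff", "funding", "computer lab", "supplies"],
    "outputs":  ["2-day workshop created", "5 sessions offered",
                 "124 participants attended", "job skills resource guide"],
    "outcomes": ["increased confidence writing resumes and interviewing",
                 "participants landed interviews",
                 "participants acquired jobs"],
    "impacts":  ["reduced unemployment rates",
                 "increased quality of life in the community",
                 "reduced reliance on municipal social services"],
}

# Report the pieces stakeholders care about, not just the outputs.
for piece in ("outcomes", "impacts"):
    print(piece.capitalize() + ": " + "; ".join(job_skills_program[piece]))
```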

There are many tools on the market to survey library users and determine satisfaction levels. But it’s not good enough to simply ask if our customers are satisfied; we also need to measure importance. Importance gets at a deeper understanding of survey questions. You could be scoring a solid 7 out of 10 on the satisfaction level for a service offering; however, if the importance for that service is scored at 9.5 out of 10, then you will need to examine the gap between satisfaction and importance. On the flip side, if end users score that same service as a 6 out of 10 with an importance score of 4 out of 10, you will need to decide if it is worth continuing to offer the service.
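In other words, the decision hinges on the gap between the importance score and the satisfaction score for each service. The sketch below shows one way that comparison might be run; the service names, scores, and decision thresholds are made up for illustration, and only the 10-point scale comes from the example above.

```python
# A minimal sketch of a satisfaction-versus-importance gap check.
# Service names, scores, and thresholds are hypothetical.

services = {
    # service: (satisfaction, importance), both on a 10-point scale
    "job skills workshops": (7.0, 9.5),
    "legacy reference desk hours": (6.0, 4.0),
}

for name, (satisfaction, importance) in services.items():
    gap = importance - satisfaction
    if gap > 1.0:
        note = "users value this more than we deliver; examine the gap"
    elif importance < 5.0:
        note = "low importance; decide whether the service is worth continuing"
    else:
        note = "satisfaction roughly in line with importance"
    print(f"{name}: satisfaction {satisfaction}, importance {importance}, gap {gap:+.1f} -> {note}")
```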

Operational and satisfaction measures are essential for decision making. But even satisfaction and importance don’t get at the value or impact that libraries have on users, which is what matters most to stakeholders.

Consider these statements:

1. The library offers four job skills training workshops a month, attended by 20 people.

2. Participants at the library job skills programs report an 8 out of 10 on overall satisfaction with the programs offered.

3. 50% of the residents participating in the library job skills programs report that they used the resumes and job-hunting skills resulting from the program to secure five to 10 interviews; 30% of the participants report securing a job within 4 weeks of participating in the program.

Which statement is more powerful for stakeholders’ decision making? Of course the answer is 3. And the only way to arrive at that measure is to know our stakeholders and know what they view as valuable. They value those things that have a positive impact on specific community, campus, or organizational goals.

Problem 5: No One Manages the Measures

We are all familiar with management consultant Peter Drucker’s pronouncement, “If you can’t measure something, you can’t manage it.” Yet few libraries have a role responsible for managing the library’s measures. The responsibility for setting outcome targets for assessing services tends to be distributed throughout the library, and the process of pulling together statistics and measures to be conveyed to decision makers is often piecemeal and viewed as a painful part of annual reporting or budgetary requests.

This viewpoint has to change. We have been talking about the importance of conveying meaningful measures in the library sector for years. Some libraries have translated the talk into action. More libraries must do the same.

Libraries must have a designated role, or roles, responsible for overseeing operational, satisfaction, and value measures. These roles ensure that success measures are built into services and programs from the beginning. We need to define and implement rigorous assessment processes that go beyond counts and “How did you like it?” surveys. We need to analyze and translate the data and stories gathered into evidence that is comparable to that of other organizations and aligned with stakeholders’ objectives.

Libraries are in the relationship business; it is our relationships with our users and our stakeholders that enable us to fulfill our purpose of positively impacting people’s lives. The role of investigating the quantitative and qualitative data underpinning these relationships is critical. By adopting the logic model, you will ramp up the odds that your programs will be viewed as critical to those who matter most to your survival.

The Logic Model Is, er, Logical

The logic model is based on the if/then principle: If we offer this program/service, then a specific impact/value will be realized. Give your library initiatives more value by considering each of the model’s four aspects:

• Inputs (operational perspectives)—These are the resources that are used to produce or develop a program or service. Inputs include funding, staffing, equipment, supplies, and essentially anything used to create the program or service.

• Outputs (operational perspectives)—Outputs are produced as a result of using the inputs to create or develop the program or service. Outputs include the creation of a training module or program, a report, the number of programs held, number of program attendees, or number of items loaned. Outputs are typically tangibles; they’re quantitative and can be used to determine value from an operational perspective. Although most libraries report outputs to their stakeholders, these outputs show activities, not value or impact.

• Outcomes (user perspectives)—Most explanations of the logic model combine outcomes and impacts into one perspective. While they are similar in nature, they differ in terms of the audience. Outcomes are the change from the perspective of the individuals who have participated in the program/service. Impact is that same change from the perspective of stakeholders. Outcomes, from the participants’ perspective, include new or deeper skills, know-how, a change in behavior or attitude, or a change in status.

• Impacts (stakeholder perspectives)—Impact differs from outcome in that it is the long-term, overall effect of the program/service on the larger community or selected audience. It is typically referred to as a change in the human condition and includes that which has been reported as outcomes from the user perspective. Impacts are reported as the higher-level change. In the case of the first-year students receiving one-on-one training, the outcome is increased knowledge of the library and its services, whereas the impact is higher grades and lower dropout rates.

Stakeholders and Customers—What’s the Difference?

It’s important to recognize the difference between stakeholders and customers. Stakeholders and customers are interested in very different measures.

A stakeholder or decision maker is an individual who, through her choices or influence, can put a stake of support under the library, or a stake through the heart of it. Stakeholders are not faceless groups; they are individuals with their own challenges and responsibilities. They may also be customers of the library, but if they are, they wear both a customer hat and a stakeholder hat.

A customer is an individual who uses the library’s services and programs.


Moe Hosseini-Ara (mhosse@markham.library.on.ca) is the director of service excellence at Markham Public Library (MPL), where he has led the organization through numerous innovative and transformational changes. Moe is a regular presenter at library conferences, speaking on the use of library metrics, RFID technology, improving the customer experience, and MPL’s winning customer-centered classification system.

Rebecca Jones, M.L.S. (rebecca@dysartjones.com), is a partner at Dysart & Jones Associates, where she consults with libraries of all sizes in public, government, academic, and corporate sectors. She speaks regularly on portfolio management, measures, and organizational design and development.