Who’s Comparing? Benchmarking Library Performance


[Featured photo: comparing apples]

Benchmarking is the comparison of performance measures across similar entities or against recognized standards. Library benchmarks are typically comparisons of numerical statistics such as circulation, visits, and revenues.

It is one of several tools that libraries, government agencies, and non-profits can use to measure performance. Alongside other approaches such as customer feedback and outcomes measurement, benchmarking helps organizations assess strengths and identify areas for improvement. Management expert Peter Drucker called benchmarking "critical" to good government and nonprofit management.

Why should a library conduct a peer benchmarking study?

Peer benchmarking provides an opportunity for a library to compare its performance to libraries similar in size, population served, budget, collection or any combination of these or other factors. It can highlight areas of excellence and underperformance that may require further study or attention.

The comparisons also provide concrete and persuasive data for advocacy, reports to elected officials, fundraising and grant applications. For example, benchmarks that show a library is under-staffed compared to its peers can help build a case for additional personnel. Moreover, the majority of high-performing libraries have excellent funding, highly educated and affluent populations, large collections and multiple outlets. (Why would we be surprised?)

What are some of the limitations of peer benchmarking?

  • It's not a stand-alone, complete assessment of library performance.
  • There are very few established quantitative standards defining success for libraries.
  • That said, a few generally accepted benchmarks do exist: library materials expenditures should comprise 12% or more of the budget, and personnel expenditures between 60% and 70% (a quick way to check these shares is sketched after this list).
  • Some numbers, like holdings (number of items in the collection), need complementary qualitative data to be meaningful. The number of holdings doesn't reflect the age, relevance or other attributes that fully describe the quality of the collection.
  • Some statistics have hidden "cause-and-effect" attributes, revealed only after further investigation. For example, libraries with short loan periods and maximum renewals will tend to have larger circulation numbers than peers with longer loan periods and fewer renewals.
  • Library numbers tend to focus on transactions and outputs, whereas patron outcomes, or the actual changes in user behavior that libraries create, are the most convincing measure of library success. Outcome assessments are more difficult to complete and are typically done for specific projects or grants rather than for overall library operations. For example, a library can collect and benchmark the number of children registered for Summer Reading (an output), but the change in reading ability and scores after participation (an outcome) requires additional data from schools or parents.
  • There are many opportunities for data entry errors, both at the library itself and in the databases that provide access to the numbers.
  • The data is never current; it describes past performance, not the present.
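
As a rough illustration of the budget-share rules of thumb mentioned above, the short Python sketch below computes materials and personnel expenditures as shares of a total budget. The figures are hypothetical, and the thresholds are the informal benchmarks cited in the list, not formal standards.

```python
def budget_shares(materials_exp, personnel_exp, total_budget):
    """Return materials and personnel expenditures as shares of the total budget."""
    return materials_exp / total_budget, personnel_exp / total_budget

# Hypothetical figures, for illustration only.
materials_share, personnel_share = budget_shares(
    materials_exp=310_000, personnel_exp=1_450_000, total_budget=2_300_000
)

print(f"Materials: {materials_share:.1%} (rule of thumb: 12% or more)")
print(f"Personnel: {personnel_share:.1%} (rule of thumb: 60-70%)")
```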

How can a library find its peers?

We recommend finding libraries that are within +/- 15% (or closer) of the library of interest on the following:

  1. service area populations
  2. revenues or expenditures
  3. number of outlets (locations/facilities)

These criteria result in a peer group that has similar resources (money and facilities) and population.

Benchmarks can then help answer questions such as "What results come from the resources and community the library has?"

Using three criteria for peers yields a more specific group than two popular library rating tools that use simplistic groupings: the Hennen Public Library Rankings cluster libraries by population served, while the LJ Index groups them by total operating expenditures.

It's easy to find peer libraries using the Institute of Museum and Library Services (IMLS) Public Libraries Survey. A group of 5-10 peers is sufficient for meaningful comparisons.
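
For anyone working from the Public Libraries Survey data directly, a minimal Python/pandas sketch of the +/- 15% peer filter might look like the following. The column names (service_population, total_expenditures, outlets) and the file name are illustrative placeholders, not the survey's actual data element names; check the PLS documentation for those.

```python
import pandas as pd

def find_peers(pls: pd.DataFrame, library: pd.Series, tolerance: float = 0.15) -> pd.DataFrame:
    """Return libraries within +/- tolerance of the target library on all three criteria."""
    criteria = ["service_population", "total_expenditures", "outlets"]
    mask = pd.Series(True, index=pls.index)
    for col in criteria:
        target = library[col]
        mask &= pls[col].between(target * (1 - tolerance), target * (1 + tolerance))
    # Exclude the library itself from its own peer group.
    return pls[mask & (pls.index != library.name)]

# Usage sketch (illustrative file name and index column):
# pls = pd.read_csv("pls_data.csv", index_col="library_id")
# peers = find_peers(pls, pls.loc["MY_LIBRARY_ID"])
```

If the +/- 15% window returns too few libraries, widening the tolerance or relaxing one criterion until the group reaches 5-10 peers keeps the comparison meaningful.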

What if the libraries aren't all in the same state as the library of interest?

States vary widely in public library funding, organization and standards. However, it is still worthwhile to compare financial and service performance against closely similar peer libraries from across the country. There is also the option to self-select other libraries for comparison, or to limit benchmarking to libraries in a particular state or region.

How can a large and complex set of benchmarking numbers be made more meaningful?

Benchmarking results must be viewed in light of a library's unique situation and context. Take into account community demographics, facilities, financial situation, and management. For a comprehensive assessment of performance, use benchmarking in combination with other tools, such as customer feedback and surveys.

It can seem overwhelming to gather and process the data. It's best to "start small" and look at the figures most important to the strategic plan, vision, and projects at hand. When a library performs above or below average, specific factors can merit further study. For example, public libraries in large college towns often have below-average reference numbers because academic libraries and tech-savvy customers are present in their service area. Other libraries can have relatively low program attendance if they serve communities with many competing cultural and recreational programs.
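
One simple way to "start small" is to express each of the library's figures as a percentage difference from the peer-group median and flag the large deviations for follow-up. The metrics, numbers, and 25% threshold below are hypothetical, chosen only to show the idea.

```python
import statistics

# Hypothetical peer-group figures for a couple of per-capita metrics.
peer_values = {
    "circulation_per_capita": [7.2, 9.8, 6.5, 8.1, 10.4],
    "program_attendance_per_capita": [0.31, 0.27, 0.44, 0.38, 0.29],
}
my_library = {"circulation_per_capita": 11.6, "program_attendance_per_capita": 0.21}

for metric, values in peer_values.items():
    median = statistics.median(values)
    variance_pct = (my_library[metric] - median) / median * 100
    flag = "  <-- worth a closer look" if abs(variance_pct) > 25 else ""
    print(f"{metric}: {variance_pct:+.0f}% vs. peer median{flag}")
```

Flagged metrics are candidates for the kind of contextual follow-up described above, not verdicts on performance.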

How should benchmarking findings be presented?

Supported by a spreadsheet detailing the numbers, a benchmarking report should be straightforward and concise. Most readers cannot absorb a slew of statistics, so a report should highlight key findings, especially significant variances from peers. Charts, graphics, and anecdotes can support a narrative format with a limited number of statistics.

Are you interested in benchmarking your library's performance? Contact us to learn more!

Featured photo by Elena Koycheva on Unsplash


Category: Libraries
Tags: libraries, research