Benchmarking is a comparison of performance measures between similar entities and/or against recognized standards. Library benchmarks are typically comparisons of numerical (quantitative) statistics such as circulation, visits, and revenues.
It is one of several tools, including customer feedback and outcomes measurement, that libraries, government agencies, and non-profits can use to measure performance and assess strengths and areas for improvement. Management expert Peter Drucker called benchmarking "critical" to good government and nonprofit management.
Why should a library conduct a peer benchmarking study?
Peer benchmarking provides an opportunity for a library to compare its performance to libraries similar in size, population served, budget, collection or any combination of these or other factors. It can highlight areas of excellence and underperformance that may require further study or attention.
The comparisons also provide concrete and persuasive data for advocacy, reports to elected officials, fundraising, and grant applications. For example, benchmarks showing that a library is under-staffed compared to its peers can help build a case for additional personnel. Benchmarks also confirm that the majority of high-performing libraries have excellent funding, highly educated and affluent populations, large collections, and multiple outlets. (Why would we be surprised?)
What are some of the limitations of peer benchmarking?
- It's not a stand-alone, complete assessment of library performance.
- There are very few established quantitative standards defining success for libraries. Among the few generally accepted benchmarks: library materials expenditures should comprise 12% or more of the budget, and personnel expenditures between 60% and 70%.
- Some numbers, like holdings (number of items in the collection), need complementary qualitative data to be meaningful. The number of holdings doesn't reflect the age, relevance or other attributes that fully describe the quality of the collection.
- Some statistics have hidden "cause-and-effect" attributes, revealed only after further investigation. For example, libraries with short loan periods and maximum renewals will tend to have larger circulation numbers than peers with longer loan periods and fewer renewals.
- Library numbers tend to focus on transactions and outputs, whereas patron outcomes, or the actual changes in user behavior that libraries create, are the most convincing measure of library success. Outcome assessments are more difficult to complete and are typically done for specific projects or grants as opposed to overall library operations. For example, a library can collect and benchmark the number of children registered for Summer Reading (output), but measuring the change in reading ability and scores after participation (outcome) requires additional data from schools or parents.
- There are many opportunities for data entry errors, beginning at the library and continuing through the databases that provide access to the numbers.
- Data is not current; it describes past performance, not the present.
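The budget-share benchmarks mentioned above (materials at 12% or more of the budget, personnel between 60% and 70%) are simple to check. A minimal sketch, using invented figures for illustration:

```python
# Check a budget against the commonly cited benchmark shares:
# materials expenditures >= 12% of budget, personnel between 60% and 70%.
# All dollar figures below are hypothetical.

def budget_shares(total, materials, personnel):
    """Return materials and personnel expenditures as fractions of the total budget."""
    return materials / total, personnel / total

total_budget = 1_000_000   # hypothetical annual operating budget
materials = 110_000        # hypothetical materials expenditures
personnel = 650_000        # hypothetical personnel expenditures

m_share, p_share = budget_shares(total_budget, materials, personnel)
print(f"Materials: {m_share:.1%} (benchmark: 12% or more)")
print(f"Personnel: {p_share:.1%} (benchmark: 60-70%)")
print("Materials meets benchmark:", m_share >= 0.12)
print("Personnel meets benchmark:", 0.60 <= p_share <= 0.70)
```

With these sample figures, materials spending (11.0%) falls just short of the benchmark while personnel spending (65.0%) lands inside the 60-70% range.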
How can a library find its peers?
We recommend finding libraries that are similar (within +/- 15%) on the following:
- service area populations
- revenues or expenditures
- number of outlets (locations/facilities)
These criteria result in a peer group that has similar resources (money and facilities) and population.
Benchmarks can then help answer questions such as "What results come from the resources and community the library has?"
Using three criteria for peers provides a group with more specificity than two popular library rating tools which use simplistic groupings: Hennen Public Library Rankings clusters libraries by population served, while LJ Index groups by total operating expenditure.
It's easy to find peer libraries using the Institute of Museum and Library Services (IMLS) Public Libraries Survey. A group of 5-10 peers is sufficient for meaningful comparisons.
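The peer-selection rule described above can be sketched in a few lines: keep only the libraries whose service-area population, operating expenditures, and number of outlets all fall within +/- 15% of the library of interest. The records below are invented for illustration; real figures would come from the IMLS Public Libraries Survey data files.

```python
# Sketch of the three-criteria peer filter: a candidate qualifies only if
# every criterion is within +/- 15% of the target library's value.
# All library names and figures here are hypothetical.

TOLERANCE = 0.15
CRITERIA = ("population", "expenditures", "outlets")

def is_peer(candidate, target, tolerance=TOLERANCE):
    """True if every criterion is within +/- tolerance of the target's value."""
    return all(
        abs(candidate[c] - target[c]) <= tolerance * target[c]
        for c in CRITERIA
    )

my_library = {"name": "Library A", "population": 50_000,
              "expenditures": 2_400_000, "outlets": 3}

candidates = [
    {"name": "Library B", "population": 48_000, "expenditures": 2_500_000, "outlets": 3},
    {"name": "Library C", "population": 120_000, "expenditures": 6_000_000, "outlets": 8},
    {"name": "Library D", "population": 55_000, "expenditures": 2_200_000, "outlets": 3},
]

peers = [c["name"] for c in candidates if is_peer(c, my_library)]
print(peers)  # Library C is far too large on every criterion and is excluded
```

Requiring all three criteria at once is what gives the peer group its specificity; loosening the rule to any two of the three would recreate the simpler groupings used by the popular rating tools.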
What if the libraries aren't all in the same state as the library of interest?
Acknowledging that states vary widely in public library funding, organization and standards, it is still worthwhile to use significantly similar peer libraries across the country to compare financial and service performance. There is also the option to self-select other libraries for comparison or limit benchmarking to libraries in a particular state or region.
How can a large and complex set of benchmarking numbers be made more meaningful?
Benchmarking results must be viewed with consideration of a library's unique situation and context, including its community demographics, facilities, financial situation, and management. For a comprehensive assessment of performance, benchmarking should be used in combination with other tools, such as customer feedback and surveys.
It can seem overwhelming to gather and process the data. It's best to "start small" and look at figures most important to the strategic plan, vision, concerns, and projects at hand. When a library has above- or below-average performance, specific circumstantial factors can merit further study. For example, public libraries in large college towns often have below-average reference numbers due to the presence of academic libraries and tech-savvy customers in their service area. Other libraries can have relatively low program attendance if they are in communities with many cultural and recreational programs.
How should benchmarking findings be presented?
Supported by a spreadsheet detailing the numbers, a benchmarking report should be straightforward and concise. Most readers cannot absorb a slew of statistics. Instead, a report should highlight key findings, especially significant variances from peers. A narrative format, supported by a limited number of statistics, works well and can be reinforced with charts, similar graphics, and anecdotes.
Are you interested in benchmarking your library's performance? Contact us to learn more!