13 September 2016

Hunt & Gather: The Fundamentals of Collection Assessment

Hunt & gather, verify, analyze, and disseminate are the four steps of collection assessment, which focus on both the collection and the use/user

By Caroline Muglia, Collection Assessment Librarian at University of Southern California Libraries
Collection assessment is “an organized process for systematically analyzing and describing a library's collection.” 
Sounds simple enough. Libraries are incredibly organized places. To ensure users have access to resources at the point of need, we perform library collection analyses all the time. 
But, like most things, it’s a bit messier than that.
The field of collection assessment has evolved in the last decade to account for an interest in data-driven metrics and a changing landscape - both in what information patrons consume and in how they consume it. 
The definition hasn’t changed much: determine how collections support the goals and mission of the organization, use available data to drive decision making, and institute collection development changes based on the analysis (Johnson, 2004). 
What has changed is the amount of information readily available for decision making. That's where collection assessment gets fun and creative - and sometimes hits dead ends that require the librarian to pivot and start all over again.
There are two main approaches to collection assessment: collection-centered and use/user-centered. 
The former analyzes the contents of a collection for quantity and quality, makes comparisons to peer institutions, focuses on the collection’s condition, and measures against core subject titles. 
The latter is focused on how materials are being used and by whom. This approach helps the library gain insight into the perceived needs of a library user, or perceived demand (Agee, 2005). 
Both methods of assessment are integral to shaping a strong and relevant collection, and both raise important guiding questions discussed later in the post. Today, it is more common to take a hybrid approach of collection and use/user-centered assessment. 
My job is the first of its kind at University of Southern California Libraries. Before I started, librarians performed localized assessments. No single person was trained in assessment nor was it considered worthy of a full-time position, which meant the libraries suffered from a lack of large-scale strategic assessment decisions. 
Look deep enough into your institution's history and you'll likely find the same pattern. With the perfect storm of declining budgets, space constraints, consolidation of vendors, smarter ILSs, and the market release of new ways to consume information, libraries had to adapt. 
Some libraries have thriving departments engaged in systematic assessment, such as the University of Washington and University of Maryland. Overall, the trend of metrics-driven decision-making, or what I like to call Collection Development 2.0, has taken hold across academic libraries.
Hunt & gather, verify, analyze, and disseminate are the four steps of collection assessment, which focus on both the collection and the use/user. There's a lot of literature that explains these concepts in more detail, and these principles will help you gain some understanding of the importance and process of collection assessment. 
Let me explain.
Hunt & Gather.
These days, collection assessment starts with a numbers game. How many resources do we have? How often are they being accessed? What do we pay for the resources? Is there an inherent cost? How many resources overlap in content? 
Catch my drift? Especially when your library doesn't have a long-standing tradition of assessing collections, there's a good chance you'll have to identify the data - or the sources to gather it from - before you can start your work. 
There are three general types of data to hunt for: vendor-provided, institutionally generated, and peer comparison.
Vendor-provided data:
- Counting Online Usage of Networked Electronic Resources (COUNTER) data, reported under its "Codes of Practice," enables publishers and vendors to report usage data in a standardized format. It's a great starting point for finding consistently reported data on a wide array of resources including books, databases, journals, accreditation, and a new report on Gold Open Access journals. (See also the SUSHI protocol and ProQuest's 360 Counter.) A minimal parsing sketch follows this list. 
- Altmetric enables authors to see the impact of their publication (paper, book, or dataset) by monitoring references not only from scholarly journals, but also social media outlets, newspapers, and policy documents. Altmetrics data can be easily applied to assessment projects to measure usage and assign value to costly resources. Here’s an example of the use of bibliometrics to assess the value of journals.
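To show what working with vendor-supplied usage data can look like in practice, here is a minimal Python sketch that totals annual usage per journal from a simplified, JR1-style CSV export. The column names ("Title" and month abbreviations) and the file name are assumptions for illustration; actual COUNTER layouts vary by report and release, so adjust the fields to match what your vendor delivers.

```python
# A rough sketch only: total annual usage per journal from a simplified,
# JR1-style CSV export. Column names ("Title", "Jan".."Dec") and the file
# name are assumptions; real COUNTER report layouts vary by release.
import csv

MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

def annual_usage(path):
    """Sum the monthly usage columns for each journal title."""
    totals = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            uses = sum(int(row.get(m) or 0) for m in MONTHS)
            totals[row["Title"]] = totals.get(row["Title"], 0) + uses
    return totals

if __name__ == "__main__":
    # Hypothetical export file; replace with your vendor's report.
    for title, uses in sorted(annual_usage("jr1_2016.csv").items(),
                              key=lambda item: item[1], reverse=True):
        print(f"{title}: {uses}")
```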
Institutionally generated data:
- Analyze the strength of the print collection using circulation rates and a gap analysis, assessing the age or physical condition of the collection, and identifying the subjects most used in print format (a rough sketch of this kind of analysis follows this list).
- Ebook use statistics, similar to data from the print collection, enable librarians to perform several types of analysis. As you may imagine, ebook statistics are in abundance! ProQuest's LibCentral provides metadata on the book as well as download, browse, print, page view, cost, and time statistics. Here's a great presentation about ebook usage.
- Interlibrary loan (ILL) statistics are often de-prioritized. Lending data can provide information on the strength of the collection compared to neighboring institutions, and borrowing information can uncover potential gaps in the collection. 
- Web analytics, whether from a homegrown platform or a paid service, can provide valuable details on what users are accessing, from where, and for how long. Sometimes librarians pair this data with EZproxy or other authentication data for a deeper understanding of the types of users accessing resources. Here's a great presentation on this topic.
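As one concrete (and hypothetical) example of institutionally generated analysis, the sketch below pairs holdings and circulation counts by subject to flag candidates for weeding or for further investment. The export layout, field names, and thresholds are assumptions for illustration, not a standard method.

```python
# A rough sketch of a circulation-based gap analysis. It assumes a
# hypothetical ILS export with one row per print item, containing at
# least "subject" and "circ_count" fields; real exports differ by system.
import csv
from collections import defaultdict

def circulation_by_subject(path):
    """Count items held and total checkouts for each subject."""
    holdings = defaultdict(int)
    checkouts = defaultdict(int)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            subject = row["subject"]
            holdings[subject] += 1
            checkouts[subject] += int(row["circ_count"])
    return holdings, checkouts

def flag_candidates(holdings, checkouts, low=0.5, high=3.0):
    """Flag subjects with unusually low or high checkouts per item.
    The thresholds here are illustrative, not standards."""
    for subject in sorted(holdings):
        rate = checkouts[subject] / holdings[subject]
        if rate < low:
            print(f"{subject}: {rate:.2f} checkouts/item - review for weeding")
        elif rate > high:
            print(f"{subject}: {rate:.2f} checkouts/item - possible gap, consider investing")

if __name__ == "__main__":
    flag_candidates(*circulation_by_subject("print_items_export.csv"))
```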
Peer comparison data:
- ProQuest’s product Intota Assessment Peer Analysis enables libraries to compare unique and overlapping titles from specific peer institutions, parsed by subject, category, and classification.
Of course, collection assessment is not solely quantitative. A collection assessment librarian can incorporate surveys, focus groups, and user studies to round out the use/user-centered assessment, which enables sustainable changes to a collection.
Verify.
Early on, much of my job consisted of identifying data that had been used to guide decisions but which I could not replicate. Those situations required finding the data's origin, learning it wasn't quite as accurate as it needed to be to support library-wide decisions, adjusting it, reapplying it, and recording the results. It's as tedious as it sounds. But it's also as rewarding. 
For example, when the library grants officer asked me to find statistics related to a collection we had received grant funds to develop, he sent along the statistics from the previous year. My numbers didn't come close to those reported - the counts should have increased, yet mine were significantly lower. 
After extensive investigation, I learned that where I had counted unique items, previous counts had registered all items, skewing the statistics considerably. In this case, I documented my methodology for next year, when the grants officer comes knocking. I also wrote a brief statement for the donor agency about the change in the assessment of the funded collection.
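To make that distinction concrete, here is a tiny illustration, with made-up records, of how counting every item row versus counting unique identifiers produces different totals - exactly the kind of methodological difference worth documenting.

```python
# Illustration only, with made-up records: counting every item row versus
# counting unique items. "barcode" stands in for whatever field uniquely
# identifies an item in your system.
records = [
    {"barcode": "A001", "title": "Oral History Interview 1"},
    {"barcode": "A001", "title": "Oral History Interview 1"},  # duplicate row
    {"barcode": "A002", "title": "Oral History Interview 2"},
    {"barcode": "A003", "title": "Field Notebook"},
]

all_item_rows = len(records)                              # 4
unique_items = len({r["barcode"] for r in records})       # 3

print(f"All item rows: {all_item_rows}")
print(f"Unique items:  {unique_items}")
```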
Collection assessment needs to be consistent, especially because the statistics reported today will likely be used tomorrow without consideration of their accuracy or, more to the point, their methodology. 
Data hygiene is the responsibility of the collection assessment librarian, so I always validate the source of data and how the data was used. 
If I cannot replicate the report, I ask questions. If my report, using the same parameters, yields different results, I ask questions. You get the idea: once the data comes across your desk, you are accountable for its use, accuracy, and history (as sordid as it may be).
Analyze. 
Collection assessment analysis has evolved over the last decade. Most library data is historical - it describes usage that has already occurred. Historical data is incredibly valuable whether it's one month, one year, or even one decade old. The data can reveal gaps in the collection, high-usage areas, and changes in cost over time. More recently, collection assessment has moved into forecasting trends, costs, and usage. These predictive analyses use historical data to model what is likely to happen to the collection, its usage, and its costs. This librarian is applying predictive analysis to ILL activity.
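To make the forecasting idea concrete, here is a minimal sketch that fits a straight-line trend to twelve months of invented download counts and projects the next three months. It is not a substitute for a proper forecasting model - just an illustration of using historical data to look forward.

```python
# A minimal sketch of the forecasting idea: fit a straight-line trend to
# monthly usage and project it forward. The usage figures are invented;
# a real forecast would also handle seasonality, outliers, and model choice.
def linear_forecast(history, periods_ahead):
    """Ordinary least-squares line through (month_index, usage) points."""
    n = len(history)
    xs = list(range(n))
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return [intercept + slope * (n + k) for k in range(periods_ahead)]

monthly_downloads = [410, 395, 460, 480, 455, 500, 520, 505, 560, 575, 590, 610]
# Projects the next three months of downloads from the 12-month trend.
print([round(v) for v in linear_forecast(monthly_downloads, 3)])
```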
For many of us, this is the fun part. There are so many entry points into the data that we hunted, gathered, and verified. 
We can start with single, collection-specific questions: What is the cost per use of this journal? Should we change our approval plan to e-preferred for Architecture? (A worked sketch of the cost-per-use calculation follows below.) 
We can answer programmatic questions: Are freshmen accessing the library resources? 
Oftentimes, it's a pressing need that drives the collection assessment librarian's priorities.
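As promised above, here is a worked sketch of the cost-per-use question. The titles and figures are invented; the calculation is simply subscription cost divided by recorded uses over the same period.

```python
# Cost per use = subscription cost / recorded uses over the same period.
# The titles and figures below are invented for illustration.
journals = {
    "Journal of Example Studies": {"annual_cost": 3200.00, "uses": 145},
    "Architecture Quarterly":     {"annual_cost": 850.00,  "uses": 512},
}

for title, data in journals.items():
    cost_per_use = data["annual_cost"] / data["uses"]
    print(f"{title}: ${cost_per_use:.2f} per use")
```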
Part of analysis is storing the data in a highly functional system. I use a lot of Excel spreadsheets, which certainly have their limitations. I store bigger data sets in Access databases. I use visualization tools through Excel and Tableau. Many colleagues use Springshare and LibQUAL+ products to store data and as a platform for analysis, survey development, and sharing. It is important for the collection assessment librarian to have a system for storing, accessing, and naming data to maintain its accuracy, especially if others will use it.
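Whatever tools hold the data, a consistent naming scheme makes reports replicable later. Below is a small, hypothetical Python sketch of one such convention - source, scope, and run date in the filename, plus a sidecar note recording the methodology. The directory layout and field names are illustrative, not a prescription.

```python
# A sketch of one possible convention: bake the source, scope, and run date
# into the filename and keep a sidecar note describing the methodology.
# Directory, field names, and the note's contents are illustrative.
import csv
from datetime import date
from pathlib import Path

def save_dataset(rows, source, scope, method_note, out_dir="assessment_data"):
    """Write rows to a dated CSV plus a README describing how it was built."""
    run_date = date.today().isoformat()
    stem = f"{source}_{scope}_{run_date}"   # e.g. counter_journals_2016-09-13
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with open(out / f"{stem}.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
    (out / f"{stem}_README.txt").write_text(
        f"Source: {source}\nScope: {scope}\nRun date: {run_date}\n"
        f"Method: {method_note}\n", encoding="utf-8")
```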
Disseminate. 
Share stuff! All the time! 
This shows stakeholders, peers, and administrators that collection assessment is a value-added service for the institution. It also illustrates what collection assessment looks like in practice - providing systematic assessment of library resources. 
For purely quantitative results, some libraries offer Facts & Figures pages on the library’s website. This is ours. I also produce a monthly Content Gains & Losses report, a curated document that shows new and withdrawn acquisitions. Usage and cost reports are available through the library’s Intranet to promote transparency and re-use of the information. 
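A gains-and-losses list like that can be approximated by diffing two catalog snapshots. Below is a minimal sketch in Python, under the assumption that each snapshot is a plain-text export with one record identifier per line; this is not the report we actually run, just one way to produce a comparable list.

```python
# A sketch of the gains & losses idea: diff two holdings snapshots, each
# assumed to be a plain-text export with one record identifier per line.
# File names and the export format are assumptions for illustration.
def load_ids(path):
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}

last_month = load_ids("holdings_2016-08.txt")
this_month = load_ids("holdings_2016-09.txt")

gains = sorted(this_month - last_month)    # records added since last snapshot
losses = sorted(last_month - this_month)   # records withdrawn since last snapshot

print(f"Gains: {len(gains)}  Losses: {len(losses)}")
```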
When my schedule permits, I facilitate report-running workshops on the ILS and other systems to enhance my colleagues' skills, empowering them to take on small-scale collection assessment in their subject areas of expertise. 
The goal of the collection assessment librarian is not to be the only one with all the knowledge. Rather, it's to ensure that everyone has a basic knowledge of collection assessment, making the process ongoing, widespread, and sustainable.
Bio:
Caroline Muglia is the Collection Assessment Librarian at University of Southern California Libraries. Before moving to Los Angeles, Caroline lived in Washington, D.C., where she worked first for the Library of Congress and later for an education technology firm. She received her MLIS from the University of North Carolina, Chapel Hill, where she focused not on assessment, but on digital archives. Email her: muglia@usc.edu 
Sources:
Peggy Johnson. "Fundamentals of Collection Development & Management." Chicago: American Library Association, 2004: 269.
Jim Agee. "Collection evaluation: a foundation for collection development." Collection Building 24, no. 3 (2005): 93.

This fall, data-driven libraries (and librarians) take the spotlight in the ProQuest Blog. Look for this helpful series starting in September.
