Competency N

Competency Statement: Evaluate programs and services using measurable criteria.

Introduction

Programs and services offered to our information communities risk being unhelpful, or being avoided entirely, if no mechanisms are in place to determine whether they are serving their purpose. This competency enables information professionals to evaluate programs and services using measurable criteria, both to confirm that they are effective and useful to patrons and to identify what is not working so it can be changed and improved.

Evaluation of programs and services

In addition to collection development, where materials must be selected and evaluated for acceptance, information professionals and librarians provide programs and services that attract patrons to the library and encourage use of the collection. By evaluating these programs and services, libraries can discover trends, identify needs, and determine whether benchmarks are being met. For example, the Virtual Reference Committee (VRC) at Berkeley College Library launched a task force to “discover current instruction trends in chat reference, evaluate information literacy (IL) instruction in the Library’s chat transcripts, and deliver resources applicable to instructing in chat reference” (Hunter et al., 2019, p. 135). Reference services, as a core program at academic libraries, consistently undergo evaluation and have changed in significant ways, such as “combining reference and circulation desks, implementation of on-call library reference,” handling fewer research inquiries but greater question complexity, and offering reference directly through live chat or SMS (Lapidus, 2019).

Libraries with physical facilities will inevitably need to evaluate how they use space. For example, an assessment of the Bruton Memorial Library found that “various collection spaces and service areas did not reflect current priorities, much less future needs” (Haywood, 2007). While large libraries may have the budget to remodel or expand, smaller ones may simply relocate collections or update furnishings to suit their purposes.

Measurable criteria

Measurable criteria are purposeful and make for better-informed evaluation. Data collected against measurable criteria can reveal objective trends rather than subjective impressions, providing a clearer picture of what is actually happening. Measurable criteria can be as simple as what is directly observable, such as gender, the number of patrons, or the number of times a library item was checked out. These counts can inform decisions, such as those about space, to de-prioritize room for collections that are declining in use and prioritize those with increasing use, as the Bruton Memorial Library found with its “adult nonfiction, reference, and periodical collections” compared to its “popular audiovisual collection” (Haywood, 2007).
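As a minimal sketch of this kind of count-based comparison (the collections are from Haywood’s account, but the numbers below are entirely hypothetical, not the Bruton Memorial Library’s actual circulation data), a simple year-over-year check can flag which collections are gaining or losing use:

```python
# Hypothetical checkout counts per collection for two consecutive years.
# Figures are illustrative only, not actual Bruton Memorial Library data.
checkouts_year1 = {"adult nonfiction": 4200, "reference": 900,
                   "periodicals": 1500, "audiovisual": 3100}
checkouts_year2 = {"adult nonfiction": 3600, "reference": 650,
                   "periodicals": 1200, "audiovisual": 4400}

for collection, previous in checkouts_year1.items():
    current = checkouts_year2[collection]
    change = (current - previous) / previous * 100  # percent change in use
    trend = "increasing" if change > 0 else "declining"
    print(f"{collection}: {change:+.1f}% ({trend})")
```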

With complex, unobservable concepts, such as library anxiety or library use, the criteria can be broken into a combination of indicators from which data can be collected (Luo et al., 2017). For instance, the number of library visits within a period can serve as a measurement of library use, or library anxiety can be measured with the Library Anxiety Scale, a “collective measurement of … 43 indicators” (Luo et al., 2017).
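A sketch of how such a composite measurement works is shown below; the indicator statements and scoring are hypothetical stand-ins, not the actual 43-item Library Anxiety Scale:

```python
# Hypothetical Likert-type responses (1 = strongly disagree, 5 = strongly agree)
# to a few anxiety indicators; the real Library Anxiety Scale uses 43 items.
respondent_answers = {
    "I feel uncomfortable asking staff for help": 4,
    "I am unsure how to start a search in the catalog": 3,
    "The library building feels overwhelming": 5,
}

def composite_score(answers: dict[str, int]) -> float:
    """Average the indicator responses into a single composite score."""
    return sum(answers.values()) / len(answers)

print(f"Composite anxiety score: {composite_score(respondent_answers):.2f}")
```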

In some cases, the data is embedded in feedback that is unstructured and not easily measurable, such as reference chat logs or comments collected at the end of a program. The VRC at Berkeley College Library coded the content of its reference chat transcripts into the types of instruction given, as well as cases where instruction was not given, and used those counts to identify missed instructional opportunities and propose recommendations and best practices (Hunter et al., 2019).
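A minimal sketch of this coding-and-counting approach might look like the following; the code labels and transcript list are illustrative placeholders, not the VRC’s actual coding scheme or data:

```python
from collections import Counter

# Each chat transcript has been assigned one code by a reviewer.
# Labels are illustrative, not the Berkeley College VRC's actual scheme.
transcript_codes = [
    "catalog instruction", "database instruction", "no instruction given",
    "citation instruction", "no instruction given", "catalog instruction",
]

counts = Counter(transcript_codes)
missed = counts["no instruction given"] / len(transcript_codes) * 100

for code, n in counts.most_common():
    print(f"{code}: {n}")
print(f"Missed instructional opportunities: {missed:.0f}% of transcripts")
```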

Evidence

The evidence I chose to demonstrate this competency includes a website evaluation report for INFO 202 (Information Retrieval), a reference evaluation report for INFO 210 (Reference Services), and an academic research proposal draft for INFO 285 (Applied Research Methods for Academic Libraries).

Evidence #1 – INFO 202 – Information Retrieval – Humble Bundle Games website evaluation

In this assignment, I evaluated the Humble Bundle games website. I counted various sources of confusion and inconsistency, including categories that returned few or no items under Top Genres or Top Platform, and an inconsistent number of items across the three appearances of the Featured link, making it unclear what conditions the website uses to prioritize certain items and whether they fit the user’s preferences. In the worst case, having three “Featured” pages means each one must be updated separately, which risks showing the user outdated information.

Using the number of links and items returned per category as criteria for consistency and sense-making, I identified categories that were not fulfilling their purpose. Based on these criteria, I recommended eliminating categories that return no links or results, removing low-count categories from the “popular” groupings, and reducing the number of locations where the “Featured” category appears to avoid redundancy and the display of outdated information.
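A rough sketch of the count-based check behind these recommendations is shown below; the category names, item counts, and threshold are hypothetical, not Humble Bundle’s actual data:

```python
# Hypothetical item counts per browsing category; not Humble Bundle's real data.
category_counts = {"Top Genres: Strategy": 42, "Top Genres: Sports": 0,
                   "Top Platforms: Linux": 3, "Featured (homepage)": 18,
                   "Featured (navbar)": 12, "Featured (footer)": 18}

MIN_ITEMS = 5  # threshold below which a "popular" category loses its purpose

for category, count in category_counts.items():
    if count == 0:
        print(f"Remove: '{category}' returns no items")
    elif count < MIN_ITEMS:
        print(f"Reconsider: '{category}' has only {count} items")

# Flag the redundant "Featured" locations when their item counts disagree.
featured = {k: v for k, v in category_counts.items() if k.startswith("Featured")}
if len(set(featured.values())) > 1:
    print(f"Inconsistent 'Featured' listings: {featured}")
```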

This evidence demonstrates my experience evaluating how well a website satisfies its purpose, using categorical item counts as a measurable criterion.

Evidence #2 – INFO 210 – Reference Services – Virtual Reference evaluation discussion

In this assignment, I evaluated a virtual reference interaction with the UC Davis Shields Library, using the Reference and User Services Association (RUSA) behavioral guidelines as the criteria for assessing my reference interview (Reference and User Services Association, 2023). After conducting the interview, my feelings as a patron were mixed: while I received help and some hints on how to continue my search, the interaction felt time-consuming. With the RUSA criteria, I was able to consider which specific areas went well and which did not. Although the guidelines are qualitative in nature, I could quantify the moments during the interview when each criterion was or was not met and produce a clearer evaluation of the reference interview.
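Below is a minimal sketch of how qualitative guidelines can be tallied this way; the criterion labels loosely paraphrase RUSA behavioral areas, and the observations are hypothetical rather than my actual interview notes:

```python
# Moments from a chat noted against paraphrased RUSA-style behaviors.
# True = the behavior was observed, False = it was not; values are illustrative.
observations = {
    "greeted promptly": True,
    "showed interest in my question": True,
    "clarified the question before searching": False,
    "explained the search strategy used": True,
    "offered follow-up at the end": False,
}

met = sum(observations.values())
print(f"Criteria met: {met} of {len(observations)}")
for criterion, was_met in observations.items():
    print(f"{'+' if was_met else '-'} {criterion}")
```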

This evidence demonstrates my experience evaluating reference interviews using the criteria offered by RUSA.

Evidence #3 – INFO 285 – Applied Research Methods – Academic Research Proposal

In this research proposal draft, I presented my research question and then created operational definitions for my variables. I wanted to study the perceived effectiveness of different modes of library instruction, but I was confronted with the issue of how to measure effectiveness without ambiguity or opinion. In the end, I broke the criteria for effectiveness into “awareness,” “interest,” and “impressions” (“impressions” was later changed to “participation” and “outcome” in the final draft). These criteria could be measured by survey questions asking directly whether participants were aware of or interested in the modes of library instruction they received, and whether they had positive or negative feelings about the instruction. Evaluating the combination of these criteria could clarify or reinforce perceived effectiveness; for example, a participant with negative impressions of instruction may not have had interest in the first place.
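As a sketch of how such operationalized survey data could be summarized (the questions, scale, and responses below are hypothetical, not data from the actual proposal), responses can be grouped by operational variable and averaged:

```python
# Hypothetical survey responses on a 1-5 scale, tagged with the operational
# variable each question measures; not data from the actual proposal.
responses = [
    ("awareness", 4), ("awareness", 5),
    ("interest", 3), ("interest", 2),
    ("impressions", 2), ("impressions", 3),
]

totals: dict[str, list[int]] = {}
for variable, score in responses:
    totals.setdefault(variable, []).append(score)

for variable, scores in totals.items():
    print(f"{variable}: mean {sum(scores) / len(scores):.1f} (n={len(scores)})")
```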

This evidence demonstrates my understanding of the significance of creating measurable criteria for future evaluation of survey results.

Conclusion

As an information professional, conducting periodic assessments of the programs and services in my information environment is essential to staying current with changing content, demographics, technology, and information behavior. By using measurable criteria in my evaluations, I can clarify the intent and necessity of my proposals when advocating for change. Other information environments may have already attempted what I am trying to achieve, so before starting any evaluation I will rely on library journals and peer-reviewed papers for insight into effective criteria.

References

Haywood, A. (2007). Plant City’s not so extreme makeover – Library edition. Florida Libraries, 50(2), 21-23.

Hunter, J., Kannegiser, S., Kiebler, J., & Meky, D. (2019). Chat reference: Evaluating customer service and IL instruction. Reference Services Review, 47(2), 134-150.

Lapidus, M. (2019). Not all library analytics are created equal: LibAnswers to the rescue! Medical Reference Services Quarterly, 38(1), 41-55. https://doi.org/10.1080/02763869.2019.1548892

Luo, L., Brancolini, K. R., & Kennedy, M. R. (2017). Enhancing library and information research skills: A guide for academic librarians. ABC-CLIO, LLC. 

Reference and User Services Association. (2023). Guidelines for behavioral performance of reference and information service providers. American Library Association. https://www.ala.org/rusa/resources/guidelines/guidelinesbehavioral
