Competency N


Evaluation


Evaluate programs and services using measurable criteria.


Any library or information organization has a mission statement and goals. Within these goals are measurable objectives that suggest programs that can be achieved within a reasonable amount of time (Rosenblum, 2018, p. 242). An important part of what makes an organization successful is how well it meets its stated objectives. It achieves these goals and objectives by developing collections, establishing programs, and offering services that benefit, and meet the needs of, its communities and patrons. As information professionals, it is our duty to ensure the success of the library and that the needs of the community are properly addressed and met. Throughout this ePortfolio, I have discussed different aspects of library and information science and the various services and programs a library can offer to serve its community. However, how do we assess the effectiveness of the services and programs provided? How do we make sure that programs and services are meeting their stated objectives and the needs of the community? In this competency, I will discuss how these services are measured and evaluated for effectiveness and efficiency at their stated purpose.

Evaluating Efficiency vs. Effectiveness

Evaluation is critical for information professionals. Evaluation determines how productive a service or program is for the community and identifies whether it is achieving its stated objective. The evaluation process is meant to help information professionals determine whether a service or program should be revised or discarded in favor of a better system. When conducting an evaluation, it is important to differentiate between effectiveness and efficiency. Effectiveness is about doing something correctly, producing the intended result. Efficiency is about doing things in the most optimal way, even if they are not the right things to do. For example, when examining how children use a library for reading and education, efficiency would measure the percentage of children who have a library card at the end of an academic year or who participate in a reading program. Effectiveness would measure how often schoolchildren use their library cards or access materials, and whether participating in a reading program improved their reading levels. Output, which is a measure of efficiency, is easier to measure than outcome, which is a measure of effectiveness. However, effectiveness and outcome are ultimately more meaningful and more closely reflect the objectives in a strategic plan (Rosenblum, pp. 243-244). Be wary, however, as a balance must be reached: striving to be too effective can have a negative impact on efficiency, and vice versa.

Measurable Criteria

Measurable criteria in a library come from a variety of sources. As previously identified, such criteria may be created by the library itself from objectives stated in its mission and goals statements. Quantitative data may be collected from statistics on circulation, program attendance, computer use, and journal database access to determine the success rate of the respective services. For example, following the above example of schoolchildren, if the library established an objective of increasing schoolchildren's attendance by 25%, the data could be analyzed and measured against previous and future visitation statistics. Outreach efforts could then be evaluated and revised based on whether the objective was accomplished.
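The arithmetic behind such a quantitative criterion is straightforward to sketch. The figures below are purely hypothetical stand-ins for a library's actual visitation statistics, and the 25% target mirrors the example objective above:

```python
# Hypothetical attendance figures; a real evaluation would draw these
# from the library's gate-count or program-registration statistics.
baseline_visits = 1200   # schoolchildren visits in the baseline year
current_visits = 1530    # visits after the outreach effort
target_increase = 0.25   # the library's stated 25% objective

# Percent change relative to the baseline year.
actual_increase = (current_visits - baseline_visits) / baseline_visits

print(f"Attendance changed by {actual_increase:.1%}")
print("Objective met" if actual_increase >= target_increase else "Objective not met")
```

Running the same calculation against future years' statistics would show whether the gain holds, which is exactly the kind of before-and-after comparison described above.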

Qualitative data is also useful as measurable criteria. In determining the quality of a service or program, the library can gather data from surveys and interviews, which are explored further below, to determine whether services and programs meet the needs of the community. For example, if a library is located in a primarily Spanish-speaking community, it can conduct surveys on the quality of its Spanish literature collection and establish an initial marker of satisfaction. The library can then decide that it would like to increase patron satisfaction with the materials offered and conduct a follow-up survey in one year to determine whether the criterion was met.
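The baseline-and-follow-up comparison described above can be sketched with hypothetical ratings. Assuming a 1-to-5 satisfaction scale (an assumption; the original does not specify a survey instrument), the criterion reduces to comparing the two surveys' average scores:

```python
from statistics import mean

# Hypothetical 1-5 satisfaction ratings for the Spanish literature collection.
baseline_ratings = [2, 3, 3, 2, 4, 3, 2, 3]   # initial survey
followup_ratings = [4, 3, 4, 5, 3, 4, 4, 3]   # survey one year later

baseline_score = mean(baseline_ratings)
followup_score = mean(followup_ratings)

print(f"Baseline satisfaction:  {baseline_score:.2f} / 5")
print(f"Follow-up satisfaction: {followup_score:.2f} / 5")
print("Criterion met" if followup_score > baseline_score else "Criterion not met")
```

A real evaluation would also consider sample size and how the two survey populations compare, but the core measurable criterion is this simple before-and-after marker.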

External criteria can also be utilized. Various library associations publish standards and guidelines detailing how services and programs should be performed in a library. This helps establish uniform standards across many libraries and provides a benchmark for evaluation. Some examples of these standards are the Reference and User Services Association (RUSA) Guidelines for the Behavioral Performance of Reference and Information Service Providers and the California Department of Education's School Library Standards.

Evaluation Methods

There are many methods available for evaluating programs and services. By far the most popular and frequently used evaluation method is the survey. According to Cassell and Hiremath (2018), a survey is "a set of questions asked of a defined community in order to get a quantitative handle on community values, activities, qualities, or perceptions" (p. 400). Surveys are popular "likely because they are relatively easy to implement, and survey responses are relatively easy to analyze" (Luo, 2019, slide 3). One factor contributing to the ease of surveys is the delivery method, as a survey can be conducted in person, by mail, by e-mail, over the telephone, or online. Surveys are constructed using a questionnaire, an "instrument specifically designed to elicit information that will be useful for analysis" (Babbie, 2013, p. 230). Surveys can provide a wealth of information, both qualitative and quantitative, which I covered in detail in Competency L. However, when collecting survey data, it is important to remember that this type of data is not always reliable or accurate. Bias may be introduced by either the researcher or the respondent, and people may not give accurate responses due to poor memory, lack of knowledge, or because they are lying (Luo, 2019, slide 6).

Other useful methods for gathering data are interviews, focus groups, and observations. Individual interviews consist of guided questions that can help information professionals gather qualitative data based on user experiences. Focus groups serve much the same purpose as individual interviews, but the data is gathered in a group setting rather than an individual one. Cassell and Hiremath highlight that "the focus group aims at probing community experience and perception" (p. 407). Thus, focus groups can set individual thoughts within the context of a community perspective on survey results.

Observation simply involves watching what is taking place and recording the information. A live transaction, such as a reference librarian assisting a patron, is observed and evaluated against pre-established guidelines or standards, such as the previously mentioned RUSA guidelines. Cassell and Hiremath highlight three distinct types of observation: direct, hidden, and self-imposed (p. 403). Direct observation is the most intuitive, and the observation must be evaluated within the specific context it is measuring; there must be a set of predetermined questions with an analysis plan in place so that the criteria are fully addressed. Hidden observation is a variation of direct observation conducted without the subject being aware they are being observed, which yields more natural results for analysis. Finally, self-imposed observation applies to journals, activity notebooks, and transaction diaries, to name a few. These materials can be recorded and used in several assessment scenarios, such as measuring reliance on one format over another, evaluating the success of librarian search strategies, and/or transcribing data to support an argument or statement (Cassell & Hiremath, p. 406).

Supporting Evidence

INFO 284: Tools, Services and Methodologies for Digital Curation - Repository Case Study

I performed an evaluation of a repository for INFO 284: Tools, Services and Methodologies for Digital Curation. In this case study, I evaluate the Travelers in the Middle East Archive (TIMEA), assessing whether the repository meets its intended mission statement and fulfills its requirements as a digital repository. I evaluate the repository against a number of guidelines and standards, and its records and metadata are evaluated for adherence to Dublin Core standards, the Text Encoding Initiative, and the Digital Curation Centre's recommended best practices. By evaluating measurable criteria, I can determine whether the repository's objectives are being met, and could potentially provide assessment results if employed by the repository.

INFO 202: Information Retrieval System Design - Website Evaluation and Site Maps

The next piece of evidence is my final group assignment for INFO 202: Information Retrieval System Design. This assignment tasked my group with evaluating a website and proposing a redesign to improve its navigation. We focused on the Hegeler Carus Mansion website, evaluating its organization and web design, mapping the existing site, and creating a proposed redesign of the site map. This practical assignment allowed me to evaluate the primary objective of the organization and its website and propose changes to better support that objective. Providing an objective evaluation of a service offered (the website) gave me an opportunity to identify measurable criteria that can be periodically evaluated for revision.

Conclusion

Information professionals have access to a wide range of tools for evaluating programs and services using measurable criteria. First and foremost, a strategic plan must be in place, and needs assessments, such as an environmental scan and SWOT analysis, must be conducted. These will help determine the strengths and weaknesses of the organization and establish a benchmark against which programs and services can be measured. Many courses at the iSchool have prepared me for such evaluations, even if I did not realize it at the time. Learning about needs assessments in INFO 204 prepared me to establish a benchmark for program evaluation, and exploring qualitative and quantitative research methods in depth in INFO 285 prepared me to conduct objective evaluations and to properly use surveys and interviews to determine the efficiency and effectiveness of services offered. This has undoubtedly prepared me to think critically when determining the value of library services and programs offered to the community.


References

Babbie, E. (2013). The practice of social research (13th ed., international ed.). Wadsworth Cengage Learning.

Cassell, K. A., & Hiremath, U. (2018). Reference and information services: An introduction (4th ed.). ALA Neal-Schuman.

Luo, L. (2019). Unit 1: Introduction to survey research [PowerPoint slides]. INFO 285 Applied Research: Survey Research. San Jose State University.

Rosenblum, L. (2018). Strategic planning. In S. Hirsh (Ed.), Information services today: An introduction (2nd ed., pp. 231-245). Rowman & Littlefield.
