IMLS is committed to helping libraries and museums provide evidence-based knowledge of the value of innovative museum and library services. IMLS is also committed to strengthening standards that can be widely used to support library and museum success at the program, organizational, and community levels. The resources listed below provide program planning tools, evaluation definitions and methodologies, and case examples of sound evaluation practice; some are familiar, while others have not been widely used by museums and libraries in the past. All will help us communicate the public value of IMLS's grants and grantees.
Read more about our Outcome Based Evaluations.
Questions regarding the resources listed below can be directed to Matt Birnbaum at mbirnbaum@imls.gov or 202-653-4760.
General Guides for Program Evaluation and Outcome Monitoring
- Designing Evaluations 2012 Revision, Government Accountability Office (PDF, 721 KB)
- Outcome Monitoring Guidebooks, The Urban Institute
- Templates for Creating a Logic Model, University of Wisconsin Extension
- The Program Manager's Guide to Evaluation, Administration on Children, Youth, and Families, Department of Health and Human Services
Project Planning Tools for Museum and Library Services
- Shaping Outcomes: An On-Line Curriculum for Outcomes-Based Planning and Evaluation Designed for the Museum and Library Field
- Inspiring Learning: An Improvement Framework for Museums, Libraries and Archives
- Framework for Broadening the Impact of Outreach Efforts in Informal Science Initiatives (PDF, 2.5 MB)
- Information Behavior in Everyday Contexts (IBEC), Toolkit Version 2.0. 2004.
Common Evaluation Methods and Terms
(From the Harvard Family Research Project)
- Experimental Design: Experimental designs all share one distinctive element: random assignment to treatment and control groups. Experimental design is the strongest choice when the goal is to establish a cause-and-effect relationship. Experimental designs for evaluation prioritize the impartiality, accuracy, objectivity, and validity of the information generated. These studies aim to make causal, generalizable statements about a population, or about a program or initiative's impact on a population. (A brief sketch of random assignment appears after this list.)
- Non-Experimental Design: Non-experimental studies use purposeful sampling techniques to obtain information-rich cases. Non-experimental evaluation designs include case studies, data collection and reporting for accountability, participatory approaches, theory-based/grounded-theory approaches, ethnographic approaches, and mixed-method studies.
- Quasi-Experimental Design: Most quasi-experimental designs are similar to experimental designs except that the subjects are not randomly assigned to either the experimental or the control group, or the researcher cannot control which group will get the treatment. Like the experimental designs, quasi-experimental designs for evaluation prioritize the impartiality, accuracy, objectivity, and validity of the information generated.
- Document Review: This is a review and analysis of existing program records and other information collected by the program. The information analyzed in a document review was not gathered for the purpose of the evaluation. Sources of information for document review include information on staff, budgets, rules and regulations, activities, schedules, attendance, meetings, recruitment, and annual reports.
- Interviews/Focus Groups: Interviews and focus groups are conducted with evaluation and program/initiative stakeholders. These include, but are not limited to, staff, administrators, participants and their parents or families, funders, and community members. Interviews and focus groups can be conducted in person or over the phone. Questions posed in interviews and focus groups are generally open-ended and responses are documented in full, through detailed note-taking or transcription. The purpose of interviews and focus groups is to gather detailed descriptions, from a purposeful sample of stakeholders, of the program processes and the stakeholders' opinions of those processes.
- Observation: Observation is an unobtrusive method for gathering information about how the program/initiative operates. Observations can be highly structured, with protocols for recording specific behaviors at specific times, or unstructured, taking a more casual, "look-and-see" approach to understanding the day-to-day operation of the program. Data from observations are used to supplement interviews and surveys in order to complete the description of the program/initiative and to verify information gathered through other methods.
- Secondary Source/Data Review: These sources include data collected for other similar studies for comparison, large data sets such as the Longitudinal Study of American Youth, achievement data, court records, standardized test scores, and demographic data and trends. Like the information analyzed in a document review, these data were not gathered with the purposes of the evaluation in mind; they are pre-existing data that inform the evaluation.
- Surveys/Questionnaires: Surveys and questionnaires are also conducted with evaluation and program/initiative stakeholders. These are usually administered on paper, through the mail, in a highly structured interview process in which respondents are asked to choose answers from those predetermined on the survey, or more recently, through email and on the Web. The purpose of surveys/questionnaires is to gather specific information—often regarding opinions or levels of satisfaction, in addition to demographic information—from a large, representative sample.
- Tests/Assessments: These data sources include standardized test scores, psychometric tests, and other assessments of the program and its participants. These data are collected with the purposes of the evaluation in mind; for example, achievement tests may be administered at set intervals to gauge progress toward the expected individual outcomes documented in the evaluation.
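As a concrete illustration of the element that defines experimental design, the sketch below randomly assigns a list of participants to treatment and control groups. It is a minimal, hypothetical example (the participant labels and the helper name randomly_assign are invented for illustration), not a prescribed procedure.

```python
import random

def randomly_assign(participants, seed=None):
    """Randomly split a list of participants into treatment and control groups.

    Random assignment is what distinguishes an experimental design from a
    quasi-experimental one: every participant has the same chance of
    receiving the program (treatment) or not (control).
    """
    rng = random.Random(seed)        # optional seed makes the assignment reproducible
    shuffled = participants[:]       # copy so the original list is left untouched
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]   # (treatment, control)

# Hypothetical example: six program applicants
applicants = ["A", "B", "C", "D", "E", "F"]
treatment, control = randomly_assign(applicants, seed=42)
print("Treatment group:", treatment)
print("Control group:  ", control)
```

In a quasi-experimental design, by contrast, the groups would be formed by something other than chance (for example, by site or by sign-up order), which is why such designs support weaker causal claims.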
Measuring Outcomes in Museums and Libraries: A Partial Bibliography
- Diamond, Judy. 1999. Practical Evaluation Guide: Tools for Museums and Other Informal Educational Settings. Walnut Creek, CA: AltaMira Press.
- Falk, J.H., and L.D. Dierking. 2000. Learning from Museums. Walnut Creek, CA: AltaMira Press.
- Falk, J.H., and L.D. Dierking. 2002. Lessons without Limit: How Free-Choice Learning is Transforming Education. Walnut Creek, CA: AltaMira Press.
- Falk, J.H., and L.D. Dierking (eds.). 1995. Public Institutions for Personal Learning: Establishing a Research Agenda. Washington, DC: American Association of Museums.
- Hernon, Peter, and Robert E. Dugan. 2002. Action Plan for Outcomes Assessment in Your Library. Chicago, IL: American Library Association.
- Korn, Randi, and Laurie Sowd. 1999. Visitor Surveys: A User's Manual. Professional Practice Series, compiled by Susan K. Nichols; series editor, Roxana Adams. Washington, DC: American Association of Museums.
- Korn, Randi, and Minda Borun. 1999. Introduction to Museum Evaluation. Washington, DC: American Association of Museums.
- Matthews, Joseph R. 2004. Measuring for Results: The Dimensions of Public Library Effectiveness. Westport, CT: Libraries Unlimited.
- Rubin, Rhea. 2005. Demonstrating Results: Using Outcome Measurement in Your Library. Chicago, IL: American Library Association.
Networks and Associations that Provide Evaluation Resources
- American Evaluation Association
- American Library Association, Office of Research and Statistics, Research and Statistics
- Association of Research Libraries Statistics and Measurement Program
- National Science Foundation, Center for Advancement of Informal Science Education / InformalScience.org, Evaluation
- The Free Management Library, Evaluation Activities in Organizations
- Visitor Studies Association