Denver Evaluation Network
Building evaluation capacity in museum professionals throughout the Denver metro area

Information & Publications

General Guides
  • Designing Evaluations: 2012 Revision, Government Accountability Office
    http://www.gao.gov/assets/590/588146.pdf
  • A Primer on Program Evaluation and Performance Measurement, Office of Justice Programs
    http://www.ojp.usdoj.gov/BJA/evaluation/index.html
  • Outcome Monitoring Guidebooks, The Urban Institute
    • Finding Out What Happened to Former Clients
    • Developing Community-wide Outcome Indicators for Specific Services
    • Surveying Clients about Outcomes
    • Using Outcome Information
    • Analyzing Outcome Information
  • Templates for Creating a Logic Model, University of Wisconsin Extension
    http://www.uwex.edu/ces/pdande/evaluation/evallogicmodel.html
  • Logic Model Development Guide, Kellogg Foundation
    http://www.princeton.edu/pccm/mrseceducation/resources/LogicModel.pdf
  • The Program Manager's Guide to Evaluation, Administration on Children, Youth, and Families, Department of Health and Human Services.
    http://www.acf.hhs.gov/programs/opre/other_resrch/pm_guide_eval/reports/pmguide/pmguide_toc.html
Measuring Outcomes in Museums and Libraries: A Partial Bibliography
  • California State Library, California Research Bureau, http://www.library.ca.gov/crb/CRBSearch.aspx. Multiple titles. See, for example, Preparing Youth to Participate in State Policy Making (2007) and The Educational Success of Homeless Youth in California: Challenges and Solutions (2007).
  • Diamond, Judy. 1999. Practical Evaluation Guide: Tools for Museums and Other Informal Educational Settings. Walnut Creek, CA: Alta Mira Press.
  • Falk, J.H., and L.D. Dierking. 2000. Learning from Museums. Walnut Creek, California: AltaMira Press.
  • Falk, J.H., and L.D. Dierking. 2002. Lessons without Limit: How Free-Choice Learning is Transforming Education. Walnut Creek, California: AltaMira Press.
  • Falk, J.H., and L.D. Dierking (eds.). 1995. Public Institutions for Personal Learning: Establishing a Research Agenda. Washington, D.C.: American Association of Museums.
  • Hernon, Peter, and Robert E. Dugan. 2002. Action Plan for Outcomes Assessment in Your Library. Chicago, IL: American Library Association.
  • IMLS. 2000. Perspectives on Outcome Based Evaluation for Libraries and Museums. Washington, D.C.: IMLS.
  • InformalScience.org, Evaluation, http://informalscience.org/evaluations/eval_framework.pdf. Multiple titles, including formative and summative studies; see Framework for Evaluating Impacts of Informal Science Education Projects. See also Evaluation Reports, http://informalscience.org/evaluation, for example STEPS Summative Evaluation Report; IDEA Cooperative: Select Findings from the Invention Crew Exit Survey, Science Museum of Minnesota; and Evaluation: Flight Planning Program, Hiller Aviation Museum.
  • Korn, Randi, and Laurie Sowd. 1999. Visitor Surveys: A User's Manual. Professional Practice Series, Susan K. Nichols (compiler) and Roxana Adams (series editor). Washington, DC: American Association of Museums.
  • Korn, Randi, and Minda Borun. 1999. Introduction to Museum Evaluation. Washington, DC: American Association of Museums.
  • Matthews, Joseph R. 2004. Measuring for Results: The Dimensions of Public Library Effectiveness. Westport, CT: Libraries Unlimited.
  • Rubin, Rhea. 2005. Demonstrating Results: Using Outcome Measurement in Your Library. Chicago, IL: ALA.
Project Planning Tools
  • Shaping Outcomes: An On-Line Curriculum for Outcomes-Based Planning and Evaluation Designed for the Museum and Library Field. http://www.shapingoutcomes.org/
  • Inspiring Learning: An Improvement Framework for Museums, Libraries and Archives. http://www.inspiringlearningforall.gov.uk/
  • Framework for Broadening the Impact of Outreach Efforts in Informal Science Initiatives. http://www.nsf.gov/od/broadeningparticipation/framework-evaluating-impacts-broadening-participation-projects_1101.pdf
  • Information Behavior in Everyday Contexts (IBEC), Toolkit Version 2.0. 2004. http://ibec.ischool.washington.edu/toolkit.php
  • Framework for Evaluating Impacts of Informal Science Education Projects. http://informalscience.org/evaluations/eval_framework.pdf
Common Evaluation Methods and Terms
(From the Harvard Family Research Project)

  • Experimental Design: Experimental designs all share one distinctive element: random assignment to treatment and control groups. Experimental design is the strongest choice when the goal is to establish a cause-and-effect relationship. Experimental designs for evaluation prioritize the impartiality, accuracy, objectivity, and validity of the information generated, and these studies aim to make causal, generalizable statements about a population or about a program's or initiative's impact on that population. (Random assignment is illustrated in the short sketch following this list.)
  • Non-Experimental Design: Non-experimental studies use purposeful sampling techniques to identify information-rich cases. Non-experimental evaluation designs include case studies, data collection and reporting for accountability, participatory approaches, theory-based/grounded-theory approaches, ethnographic approaches, and mixed-methods studies.
  • Quasi-Experimental Design: Most quasi-experimental designs are similar to experimental designs except that the subjects are not randomly assigned to either the experimental or the control group, or the researcher cannot control which group will get the treatment. Like the experimental designs, quasi-experimental designs for evaluation prioritize the impartiality, accuracy, objectivity, and validity of the information generated.
  • Document Review: This is a review and analysis of existing program records and other information collected by the program. The information analyzed in a document review was not gathered for the purpose of the evaluation. Sources of information for document review include information on staff, budgets, rules and regulations, activities, schedules, attendance, meetings, recruitment, and annual reports.
  • Interviews/Focus Groups: Interviews and focus groups are conducted with evaluation and program/initiative stakeholders. These include, but are not limited to, staff, administrators, participants and their parents or families, funders, and community members. Interviews and focus groups can be conducted in person or over the phone. Questions posed in interviews and focus groups are generally open-ended and responses are documented in full, through detailed note-taking or transcription. The purpose of interviews and focus groups is to gather detailed descriptions, from a purposeful sample of stakeholders, of the program processes and the stakeholders' opinions of those processes.
  • Observation: Observation is an unobtrusive method for gathering information about how the program/initiative operates. Observations can be highly structured, with protocols for recording specific behaviors at specific times, or unstructured, taking a more casual, "look-and-see" approach to understanding the day-to-day operation of the program. Data from observations are used to supplement interviews and surveys in order to complete the description of the program/initiative and to verify information gathered through other methods.
  • Secondary Source/Data Review: These sources include data collected for other similar studies for comparison, large data sets such as the Longitudinal Study of American Youth, achievement data, court records, standardized test scores, and demographic data and trends. Like the information analyzed in a document review, these data were not gathered with the purposes of the evaluation in mind; they are pre-existing data that inform the evaluation.
  • Surveys/Questionnaires: Surveys and questionnaires are also conducted with evaluation and program/initiative stakeholders. They are usually administered on paper, through the mail, in a highly structured interview process in which respondents choose from answers predetermined on the survey, or, more recently, through email and on the Web. The purpose of surveys/questionnaires is to gather specific information, often regarding opinions or levels of satisfaction as well as demographic information, from a large, representative sample. (Drawing such a sample is also shown in the sketch following this list.)
  • Tests/Assessments: These data sources include standardized test scores, psychometric tests, and other assessments of the program and its participants. These data are collected with the purposes of the evaluation in mind; for example, achievement tests may be administered at set intervals to gauge progress toward the expected individual outcomes documented in the evaluation.
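
Two of the terms above, random assignment (experimental design) and drawing a representative sample (surveys/questionnaires), come down to simple procedures. The short Python sketch below is a minimal illustration of both, using only the standard library; the visitor names, group sizes, and sample size are hypothetical and not drawn from any DEN study.

    import random

    random.seed(42)  # fixed seed so the illustration is reproducible

    # Hypothetical pool of program participants (names are placeholders)
    participants = ["Visitor %03d" % i for i in range(1, 41)]

    # Random assignment (experimental design): shuffle, then split into two groups
    shuffled = participants[:]
    random.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    treatment_group = shuffled[:midpoint]   # e.g., experiences the new gallery program
    control_group = shuffled[midpoint:]     # e.g., experiences the existing program
    print("Treatment:", len(treatment_group), "Control:", len(control_group))

    # Simple random sample (surveys/questionnaires): every visitor equally likely to be chosen
    survey_sample = random.sample(participants, 10)
    print("Survey sample:", survey_sample)

In practice, assignment often happens as people register or attend rather than from a complete roster, and survey samples are frequently stratified (for example, by visit day or audience segment), but the underlying random selection is the same as shown here.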
Evaluation resources retrieved from the Institute of Museum and Library Services.



Links to websites outside the DEN website are offered for your convenience. DEN does not control the websites listed above and takes no responsibility for the views, content, or accuracy of the information you may find there. Providing a link does not in any way constitute an endorsement of the linked site or its content on the part of DEN.
