Evaluation essentials: methods for conducting sound research
Daponte, Beth Osborne
The book introduces students and practitioners to all of the concepts and tools needed to evaluate programs and policies. It focuses on issues that arise when evaluating programs, using programs offered by non-profit and governmental organizations in different sectors (social services, health, education, social work) as case studies. The author gives the reader a solid background in the core concepts, theories, and methods of program evaluation. Readers will learn to form evaluation questions, describe programs using program theory and program logic models, understand causation as it relates to evaluation, and apply quasi-experimental designs, grant writing, outcome measures, survey design, and sampling.
Contents:
1. Introduction. Learning Objectives. The Evaluation Framework. Summary. Key Terms. Discussion Questions.
2. Describing the Program. Learning Objectives. Motivations for Describing the Program: Money and Success. Common Mistakes Evaluators Make When Describing the Program. Conducting Initial Informal Interviews. Pitfalls in Describing Programs. The Program Is Alive, and So Is Its Description. Program Theory. The Program Logic Model. Challenges of Programs with Multiple Sites. Program Implementation Model. Program Theory and Program Logic Model Examples. Summary. Key Terms. Discussion Questions.
3. Laying the Evaluation Groundwork. Learning Objectives. Evaluation Approaches. Framing Evaluation Questions. Insincere Reasons for Evaluation. Who Will Do the Evaluation? External Evaluators. Internal Evaluators. Confidentiality and Ownership of Evaluation. Ethics. Building a Knowledge Base from Evaluations. High Stakes Testing. The Evaluation Report. Summary. Key Terms. Discussion Questions.
4. Causation. Learning Objectives. Necessary and Sufficient. Types of Effects. Lagged Effects. Permanency of Effects. Functional Form of Impact. Summary. Key Terms. Discussion Questions.
5. The Prisms of Validity. Learning Objectives. Statistical Conclusion Validity. Small Sample Sizes. Measurement Error. Unclear Questions. Unreliable Treatment Implementation. Fishing. Internal Validity. Threat of History. Threat of Maturation. Selection. Mortality. Testing. Statistical Regression. Instrumentation. Diffusion of Treatments. Compensatory Equalization of Treatments. Compensatory Rivalry and Resentful Demoralization. Construct Validity. Mono-Method Bias. Mono-Operation Bias. External Validity. Summary. Key Terms. Discussion Questions.
6. Attributing Outcomes to the Program: Quasi-Experimental Design. Learning Objectives. Quasi-Experimental Notation. Frequently Used Designs That Do Not Show Causation. One-Group Posttest-Only. Posttest-Only with Nonequivalent Groups. Participants' Pretest-Posttest. Designs That Generally Permit Causal Inferences. Untreated Control Group with Pretest and Posttest. Delayed Treatment Control Group. Different Samples Design. Nonequivalent Observations Drawn from One Group. Nonequivalent Groups Using Switched Measures. Cohort Designs. Time Series Designs. Archival Data. Summary. Key Terms. Discussion Questions.
7. Collecting Data. Learning Objectives. Informal Interviews. Focus Groups. Survey Design. Sampling. Ways to Collect Survey Data. Anonymity and Confidentiality. Summary. Key Terms. Discussion Questions.
8. Conclusions. Learning Objectives. Using Evaluation Tools to Develop Grant Proposals. Hiring an Evaluation Consultant. Summary. Key Terms. Discussion Questions.
Appendix A. Census 2000, Long Form. Appendix B. Census 2000, Short Form. Glossary. References. Index.
- ISBN: 978-0-7879-8439-7
- Publisher: Jossey Bass
- Binding: Paperback
- Pages: 192
- Publication date: 22/07/2008
- Number of volumes: 1
- Language: English