Experimental Evaluation Design for Program Improvement. Laura R. Peck


Experimental Evaluation Design for Program Improvement

       Evaluation in Practice Series

      Christina A. Christie & Marvin C. Alkin, Series Editors

      1. Mixed Methods Design in Evaluation, by Donna M. Mertens

      2. Facilitating Evaluation: Principles in Practice, by Michael Quinn Patton

      3. Collaborative Approaches to Evaluation: Principles in Use, edited by J. Bradley Cousins

      4. Culturally Responsive Approaches to Evaluation, by Jill Anne Chouinard and Fiona Cram

      5. Experimental Evaluation Design for Program Improvement, by Laura R. Peck

      Sara Miller McCune founded SAGE Publishing in 1965 to support the dissemination of usable knowledge and educate a global community. SAGE publishes more than 1000 journals and over 800 new books each year, spanning a wide range of subject areas. Our growing selection of library products includes archives, data, case studies and video. SAGE remains majority owned by our founder and after her lifetime will become owned by a charitable trust that secures the company’s continued independence.

      Los Angeles | London | New Delhi | Singapore | Washington DC | Melbourne

      Experimental Evaluation Design for Program Improvement

       Laura R. Peck

       Abt Associates Inc., Social & Economic Policy Division

       Los Angeles | London | New Delhi | Singapore | Washington DC | Melbourne

      Copyright © 2020 by SAGE Publications, Inc.

      All rights reserved. Except as permitted by U.S. copyright law, no part of this work may be reproduced or distributed in any form or by any means, or stored in a database or retrieval system, without permission in writing from the publisher.

      All third party trademarks referenced or depicted herein are included solely for the purpose of illustration and are the property of their respective owners. Reference to these trademarks in no way indicates any relationship with, or endorsement by, the trademark owner.

      FOR INFORMATION:

      SAGE Publications, Inc.

      2455 Teller Road

      Thousand Oaks, California 91320

      E-mail: [email protected]

      SAGE Publications Ltd.

      1 Oliver’s Yard

      55 City Road

      London, EC1Y 1SP

      United Kingdom

      SAGE Publications India Pvt. Ltd.

      B 1/I 1 Mohan Cooperative Industrial Area

      Mathura Road, New Delhi 110 044

      India

      SAGE Publications Asia-Pacific Pte. Ltd.

      18 Cross Street #10-10/11/12

      China Square Central

      Singapore 048423

      Print ISBN: 978-1-5063-9005-5

      This book is printed on acid-free paper.

      Printed in the United States of America

      Acquisitions Editor: Helen Salmon

      Editorial Assistant: Megan O’Heffernan

      Production Editor: Astha Jaiswal

      Copy Editor: Diane DiMura

      Typesetter: Hurix Digital

      Proofreader: Ellen Brink

      Indexer: Amy Murphy

      Cover Designer: Candice Harman

      Marketing Manager: Shari Countryman

      Volume Editors’ Introduction

      Impact evaluation is central to the practice and profession of evaluation. Emerging in the Great Society Era, the field of evaluation holds deep roots in the social experiments of large-scale demonstration programs—Campbell’s utopian ideas of an Experimenting Society. Since then, the fervent search for “what works”—for establishing the impact of social programs—has taken on many different forms. From the early emphasis on experimental and quasi-experimental designs, through the later emergence of systematic reviews and meta-analysis, and onward to the more recent and sustained push for evidence-based practice, proponents of experimental designs have succeeded in bringing attention to the central role of examining the effectiveness of social programs (however we choose to define it). There is a long and rich history of measuring impact in evaluation.

      The landscape of impact evaluation designs and methods has grown and continues to grow. Innovative variants of and alternatives to traditional designs and approaches continue to emerge and gain prominence, addressing not only “what works” but also “what works, for whom, and under what circumstances” (Stern et al., 2012). For the novice (and perhaps even the seasoned) evaluator, the broadening array of designs and methods, not to mention the dizzying array of corresponding terminology, may invoke a mixed sense of methodological promise and peril, opportunity and apprehension. How can randomization be applied across multiple treatments, across multiple treatment components, and across stages of a program process? What exactly is the difference between multistage, staggered, and blended impact evaluation designs? And are there practical and methodological considerations that one should pay particular attention to when applying these designs in real-world settings?

      These are but a few of the questions answered in Laura Peck’s Experimental Evaluation Design for Program Improvement. Grounded in decades of scholarship and practical experience with real-world impact evaluation, Peck begins the book with a concise and accessible introduction to the “State of the Field,” carefully guiding the reader through decades of developments in the experimental design tradition, including large-scale experimental designs, nudge experiments, rapid-cycle evaluation, systematic reviews and meta-analysis, and, more recently, design options for understanding impact variation across program components.

      After this introduction, Peck describes a “framework for thinking about the aspects of a program that drive its impacts and how to evaluate the relative contributions of those aspects,” rooted in the idea of using a well-developed program logic model to discern the most salient program comparisons to be examined in the evaluation. As Peck states, “From a clear and explicit program logic model, the evaluation logic model can also be framed to inform program operators’ understanding of the essential ingredients of their programs” (p. 27). The remainder of the book is dedicated to a broad variety of experimental design options for measuring program impact, covering both traditional designs and more recent variants of these (e.g., multistage and blended designs). To bring these designs closer to practice, Peck pairs them with an illustrative application and a set of practical lessons learned. A set of hands-on principles for “good practice” concludes the book.

      The present book is an important contribution to the growing landscape of impact evaluation. With her aim to identify a broader range of designs and methods that directly address causal explanation of “impacts,” Peck opens new frontiers for impact evaluation. Peck directly challenges, and correctly so, the longstanding perception that experimental designs are unable to get inside

