Collaborative Approaches to Evaluation
Principles in Use
Edited By
J. Bradley Cousins
University of Ottawa
Los Angeles
London
New Delhi
Singapore
Washington DC
Melbourne
FOR INFORMATION
SAGE Publications Inc.
2455 Teller Road
Thousand Oaks, California 91320
E-mail: [email protected]
SAGE Publications Ltd.
1 Oliver’s Yard
55 City Road
London EC1Y 1SP
United Kingdom
SAGE Publications India Pvt. Ltd.
B 1/I 1 Mohan Cooperative Industrial Area
Mathura Road
New Delhi 110 044
India
SAGE Publications Asia-Pacific Pte. Ltd.
18 Cross Street #10-10/11/12
China Square Central
Singapore 048423
Copyright © 2020 by SAGE Publications, Inc.
All rights reserved. Except as permitted by U.S. copyright law, no part of this work may be reproduced or distributed in any form or by any means, or stored in a database or retrieval system, without permission in writing from the publisher.
All third party trademarks referenced or depicted herein are included solely for the purpose of illustration and are the property of their respective owners. Reference to these trademarks in no way indicates any relationship with, or endorsement by, the trademark owner.
Printed in the United States of America
Library of Congress Cataloging-in-Publication Data
Names: Cousins, J. Bradley, editor.
Title: Collaborative approaches to evaluation : principles in use / edited by J. Bradley Cousins.
Description: Los Angeles : SAGE, 2020. | Series: Evaluation in practice series ; 3 | Includes bibliographical references and index.
Identifiers: LCCN 2019003137 | ISBN 9781544344645 (pbk. : alk. paper)
Subjects: LCSH: Educational evaluation. | Health services administration—Evaluation. | Community development—Evaluation. | Participatory monitoring and evaluation (Project management)
Classification: LCC LB2822.75 .C63 2020 | DDC 371.14/4—dc23
LC record available at https://lccn.loc.gov/2019003137
This book is printed on acid-free paper.
Acquisitions Editor: Helen Salmon
Editorial Assistant: Megan O’Heffernan
Production Editor: Andrew Olson
Copy Editor: Ashley Horne
Typesetter: C&M Digitals (P) Ltd.
Proofreader: Victoria Reed-Castro
Indexer: Nancy Fulton
Cover Designer: Gin Khan Siam
Marketing Manager: Shari Countryman
Volume Editors’ Introduction
One of the challenges faced by evaluators is that professional evaluation is complex work that looks deceptively simple. The general public has had ample opportunity to engage in informal evaluation practice. These experiences include, for example, using Amazon reviews to decide which new toaster to buy, evaluating classroom teaching by drawing on both public discourse and our own experiences in school, and deciding which policies to support at the polls based on arguments from public intellectuals and analyses of likely outcomes from trusted sources. In short, everyone has had some type of experience with informal evaluation, which is one of the reasons why many in our field claim that evaluation is an ancient practice but a new discipline (Mathison, 2005; Scriven, 1991). An argument for the complexity of professional evaluation therefore requires an understanding of what differentiates the informal evaluation the public engages in on a daily basis from the work that occurs in formal evaluation studies.
In an effort to demarcate the boundaries of professional evaluation and describe that work, academics and practitioner-academics have, for the past 60 years or so, focused on the demands of evaluation practice, arguing that professional evaluation requires a great deal of expert knowledge specific to the work that evaluators do (Schwandt, 2015). We see this, for example, in the evaluation theories and approaches that have proliferated, which are arguments about what the work of professional evaluators ought to look like. We also see it in studies of evaluation education in formal and informal settings, which are arguments about how evaluators are, or ought to be, trained to carry out formal evaluation studies. More recently, we see it in current drafts of evaluator competency or capability documents, all of which include a section specifically devoted to what makes evaluators distinct as practicing professionals (see also American Evaluation Association, 2017; United Nations Evaluation Group, 2016). These efforts have produced a large body of work meant to establish and inform the theoretical, technical, and practical aspects of evaluation practice.
At the same time, as noted in the editors' introductions to other volumes in this series, there is a need for additional resources that fill the gap between academic publications and general textbooks. The Evaluation in Practice series was born out of a recognition that there are core topics in evaluation that are fundamental to the work of evaluators but have not yet received the focus they deserve. Instead, these topics are often treated as a subheading or a subtheme within a larger conversation. Professional practice is at the heart of this series.
In Collaborative Approaches to Evaluation: Principles in Use, the third volume in this series, J. Bradley Cousins and colleagues present a rich, empirically derived description of how the conceptual, technical, and practical tools of evaluation are enacted by experts in the everyday aspects of professional work. We expand on each of these in turn. Conceptual tools are the theories, approaches, and effectiveness or moral principles and guidelines that evaluators use to guide their evaluation practice decisions. The conceptual tool that is the focus of this volume is the set of Collaborative Approaches to Evaluation (CAE) principles. The first chapter of this book presents readers with an overview of the CAE principles: it lays out what the principles are, describes their evolution, and offers examples of potential uses. Importantly, these principles are grounded in research on evaluation (RoE), that is, studies that take evaluation itself as the object of inquiry.
Technical tools are the research designs, measurement techniques, and analysis strategies that evaluators use to guide their evaluation practice decisions. Across the chapters, technical tools appear in two ways: those used to carry out the evaluation itself and those used to engage in RoE. For the chapters that are grounded in a specific evaluation, almost all (Chapter 3, Chapter