Collaborative Approaches to Evaluation. Group of Authors
of one another. In other words, in theory, ratings of a particular CAE project for each respective dimension are free to vary from 1 to 5, regardless of scores on the other dimensions.
Box 2: Dimensions of Form in CAE Practice
Figure 1 shows how the three dimensions can be used as a device to differentiate among CAE family members by plotting rating scores in three-dimensional space. In this hypothetical illustration, the practical and transformative participatory evaluation streams would be located in two different sectors of the device despite being quite similar on two of the three dimensions. The dimension on which they differ is diversity: typically, a wide range of stakeholders is actively involved in transformative participatory evaluation, whereas in practical participatory evaluation engagement with the knowledge production function is most often limited to primary users, those with a vested interest in the program and its evaluation. We can also see that conventional stakeholder-based evaluation is rated as quite distinct from the other two hypothetical examples. In this approach, originally described by Bryk (1983), participating program community members serve in an essentially consultative role: the evaluator tends to control decision-making about the evaluation, and stakeholder participation in the knowledge production function is limited to such activities as helping to set evaluation objectives and perhaps interpreting findings.
Figure 1 ■ Dimensions of form in CAE (adapted from Cousins & Chouinard, 2012)
This device can be used to describe what any given CAE family member looks like in practice at any given point in time. Notably, CAE projects evolve over time and can shift along one or more of these dimensions of form as the project progresses. For example, in a hypothetical empowerment evaluation in which the evaluator starts out in the role of critical friend or facilitator, deferring control of decision-making to program community members, the evaluator may need to take a more directive role if the project bogs down in controversy or acrimony among participating stakeholders. Or, in practical participatory evaluation, stakeholders' initially deep engagement with evaluation implementation may wane in the face of competing job demands; ultimately, responsibility for implementing the evaluation may fall back to the evaluator. In retrospective ratings of CAE projects, however, it seems likely that rating scores would be more holistic, representing an aggregate or average for the project.
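As a rough sketch of how the device might separate family members, each project's three ratings can be treated as a point in three-dimensional space and positions compared directly. The dimension labels and the 1–5 scores below are illustrative assumptions only, not values taken from the text:

```python
import math

# Hypothetical 1-5 ratings on three dimensions of form.
# Labels and scores are assumptions for illustration, not from the book.
projects = {
    "practical participatory":      {"control": 3, "diversity": 2, "depth": 5},
    "transformative participatory": {"control": 3, "diversity": 5, "depth": 5},
    "stakeholder-based":            {"control": 1, "diversity": 2, "depth": 2},
}

def distance(a, b):
    """Euclidean distance between two projects in the 3-D rating space."""
    return math.sqrt(sum((a[d] - b[d]) ** 2 for d in a))

p = projects["practical participatory"]
t = projects["transformative participatory"]
s = projects["stakeholder-based"]

# The two participatory streams differ only on the diversity dimension,
# yet still occupy clearly separated sectors of the space.
print(distance(p, t))  # 3.0
print(distance(p, s))
```

Under these assumed scores, a simple distance captures the chapter's point: approaches alike on two dimensions can still sit apart in the space, and stakeholder-based evaluation sits apart from both participatory streams.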
A while back, we actually challenged the assumption that these three process dimensions were fundamental and toyed with a five-dimensional version of the framework that took into account stakeholders' differential access to power and manageability (Cousins, 2005; Weaver & Cousins, 2004). Later, however, Daigneault and Jacob (2009) published a logical critique of the framework and concluded that, in fact, the three original dimensions should be considered fundamental. Consequently, we have once again embraced the three-dimensional framework in considering what CAE looks like in practice (Cousins & Chouinard, 2012).
Why Do We Need Principles to Guide Practice?
Why and How Are Principles Valuable?
A wide range of human services and indeed scientific pursuits, ranging from accounting to nursing to geology, rely on well-developed sets of principles to guide practice. Evaluation is no exception. For example, the AEA has developed and periodically revised its “Guiding Principles for Evaluators,”6 which may be considered doctrines or assumptions forming normative rules of conduct. Patton (2017) differentiates between moral principles and effectiveness principles: moral principles tell us what is right, whereas effectiveness principles tell us what works. In this book, we are concerned with effectiveness principles to guide CAE practice, but we hasten to add that we do not see effectiveness concerns and moral-political concerns as being mutually exclusive.
Effectiveness principles to guide practice are important and valuable because they help actors not only to understand which practices or behaviors are likely to lead to desirable consequences but also to avoid practices that could be detrimental or counterproductive. It follows that, within a set of principles, it must be possible for actors to subscribe to or follow all of the principles at once without being drawn into contradictory processes or actions that could be counterproductive. Effectiveness principles generally derive from a careful examination of, or reflection on, experience with effective practice. They may be attributed to the wisdom of an expert practitioner or result from serious processes of consultation, dialogue, and deliberation. They may also be grounded in empirical evidence, which is the process that we selected for the development of CAE principles.
But how can we assess the quality of effectiveness principles? Patton (2017), in his book Principles-Focused Evaluation,7 gives this question serious treatment and ultimately arrives at a set of criteria useful for this purpose, captured in the acronym GUIDE and described in Box 3.
7 In Principles-Focused Evaluation, Patton does not describe effectiveness principles for evaluation per se; rather, he elaborates on using principles of practice within the substantive domain of the target intervention to inform the evaluation of that intervention.
Box 3: GUIDE Criteria for Evaluating Effectiveness Principles
Guidance: The principle is prescriptive. It provides advice and guidance on what to do, how to think, what to value, and how to act to be effective. It offers direction. The wording is imperative. The guidance is sufficiently distinct that it can be distinguished from contrary or alternative guidance.
Useful: A high-quality principle is useful in informing choices and decisions. Its utility resides in being actionable, interpretable, feasible, and in pointing the way toward desired results for any relevant situation.
Inspiring: Principles are values-based, incorporating and expressing ethical premises, which is what makes them meaningful. They articulate what matters, both in how to proceed and the desired result. That should be inspirational.
Developmental: The developmental nature of a high-quality principle refers to its adaptability and applicability to diverse contexts and over time. A principle is thus both context sensitive and adaptable to real-world dynamics, providing a way to navigate the turbulence of complexity and uncertainty. In being applicable over time, it is enduring (not time-bound), in support of ongoing development and adaptation in an ever-changing world.
Evaluable: A high-quality principle must be evaluable. This means it is possible to document and judge whether it is actually being followed, and to document and judge what results from following the principle. In essence, it is possible to determine if following the principle takes you where you want to go.
Source: Patton, M. Q. (2017). Principles-focused evaluation: The GUIDE. New York, NY: Guilford.
Warrants for CAE Principles
As mentioned above, five years ago, we published an article titled “Arguments for a Common Set of Principles for Collaborative Inquiry in Evaluation” (Cousins et al., 2013). In that paper, we identified three interrelated warrants for developing a set of principles to guide CAE practice. First, there is a growing corpus of CAE family members suggesting their appeal