Collaborative Approaches to Evaluation. Group of Authors
and formative approaches, accountability and learning functions remain paramount. DE is all about creating innovative interventions through evidence-based learning, sometimes through trial and error, but accountability considerations factor in as well. For example, one of us (Shulha) is currently involved in a multisite DE in the Ontario education sector where accountability is being defined as taking snapshots over time, where each picture describes what the team is doing; why the team is doing it; evidence (stories) that can confirm that the logic is sound and that the appropriate needs are being addressed; and next-step planning.
Most certainly in developmental contexts, actors stand to benefit from the use of evaluation findings, be they instrumental or conceptual. But they also stand to benefit from their proximity to, or even participation in, evaluative activities. Patton (1997) dubbed learning of this sort process use, a phenomenon which has been actively studied and integrated into contemporary thinking about evaluation consequences (Cousins, 2007; Shulha & Cousins, 1997). Process use is a very powerful benefit of CAE and indeed can factor directly into decisions to use such approaches.
When Transformation Is Intentional
Given its evident connection to evaluation-related learning, process use is very much implicated in ECB and therefore highly relevant in evaluations that are intended to be transformational in form and function (Mertens, 2009; Whitmore, 1998b). In transformational approaches, interest is less about generating evaluation findings that will be acted upon to leverage change and more about the experience. Through participation in the cocreation of evaluation knowledge, members of the program community, particularly intended beneficiaries of interventions, stand to profit. Much of this benefit will be cognitive or conceptual, which is to say, members stand to learn not only about the program and its functions but also about the historical, political, social, and educational aspects of the context in which it is situated. But of course, the idea is that when people critically analyze and learn about their situation, they will use this learning to push for change (Freire, 1970). It is through the deepening of understanding by virtue of engaging with evaluation that transformation and/or empowerment is likely to occur (Mertens, 2009).
Previously we discussed tensions between accountability and learning, which are often acknowledged as fundamental functions of evaluation, and we hinted that transformation may provide a third perspective. In a recent chapter, Cousins, Hay, and Chouinard (2015) argued that learning is often juxtaposed to compliance-oriented accountability as opposed to accountability as a democratic process, and that this is the root source of tension between the two. The authors went on to argue that
when rooted in transformative participatory evaluation approaches and motivated by political, social-justice interests, accountability and learning approaches are no longer in opposition … [they are] essential, necessary, and supplementary, to be most appealing and indeed, necessary if evaluation is to be relevant to addressing issues of poverty, inequity and injustice. (p. 107)
Transformational interests provide a natural fit for CAE.
Why CAE?
The Three P’s of CAE Justification
For some time, we have tried to capture justifications for CAE as being a blend of three specific categories: pragmatic, political, and philosophical (Cousins & Chouinard, 2012; Cousins, Donohue, & Bloom, 1996; Cousins & Whitmore, 1998). These categories, to our way of thinking, are not mutually exclusive; the justification for any CAE will draw from two or more of them depending on interests, and perhaps more importantly, whose interests are being served. Pragmatic interests driving CAE are all about leveraging change through the use of evaluative evidence, in other words, using evaluation for practical problem solving. Of primary concern would be instrumental (discrete decision-making about interventions) and conceptual (learning) uses of evaluation findings. Program community members working with evaluators learn about how to change programs to improve them or make them more effective. Historically, we have considered political interests driving CAE to be largely socio-political and focused on empowerment and the amelioration of social inequity. Through participation in the evaluation knowledge production function, intended program beneficiaries (often from marginalized populations) and other program community members learn to see their circumstances differently and to recognize oppressive forces at play. Such engagement may lead to the development of an ethos of self-determination. Finally, philosophic justifications for CAE are grounded in a quest for deeper understanding of the complexities associated with the program and the context within which it is operating. Through evaluators working hand-in-hand with program community members, the joint production of knowledge is grounded in historical, sociopolitical, economic, and educational context. Thanks to the insider insights of participating program community members, deeper meaning of evaluative evidence and knowledge is achieved.
As mentioned, these categories are understood not to be mutually exclusive, and as such, any given CAE will place relative emphasis on one or more depending on information needs, contextual exigencies, and circumstances. Cousins and Whitmore (1998) identified two principal streams of participatory evaluation as being practical and transformative. The former would emphasize the pragmatic justification, whereas the latter privileges the political justification; both streams, however, draw from all three justifications. For example, in practical participatory evaluation, program community members may find the experience to be rewarding in terms of their own professional development even though the primary purpose is to generate knowledge supporting program improvement. Such capacity building is an example of process use even though it is an unintended positive consequence of the evaluation. On the other hand, transformative participatory evaluation where empowerment and capacity building are central may also lead to positive changes to interventions as a result of evaluation findings. We observe that Fetterman and colleagues (Fetterman & Wandersman, 2005; Fetterman et al., 2018) have followed this lead in describing two streams of empowerment evaluation.
Ethical Considerations
In our current work, we are considering a fourth justification for CAE, one that is distinct from but also overlaps with the other three. Cousins and Chouinard (2018, forthcoming) are now seriously exploring an ethical or moral-political justification for CAE, rooted in considerations of responsibility, recognition of difference, representation, and rights. This work arises at least partially from prior conversations with our colleague Miri Levin-Rozalis (2016, personal communication). The overlaps with the other categories are evident. For example, while representation is understood to be obligatory in a democratic sense, it may also be thought of in political terms even though it is not ideological per se (e.g., representative governance). Long ago, Mark and Shotland (1985) made the case for representation as a reason for engaging stakeholders in evaluation. In a different example, we might consider ethical justifications for involving indigenous peoples in evaluations of their own programs from a responsibility and recognition-of-difference perspective. Such considerations are part and parcel of post-colonial discourse in economics and philosophy. But such ethical justification could also overlap with epistemological considerations; for example, CAE could provide a bridge between indigenous and western ways of knowing in the joint production of evaluative knowledge (Chouinard & Cousins, 2007). Justification along these lines would draw from the philosophical category.
What Does CAE Look Like in Practice?
Previously we argued that three specific dimensions of form or process are fundamental to CAE in practice (Cousins, Donohue, & Bloom, 1996; Cousins & Whitmore, 1998). These dimensions are (i) control of technical decision-making about the evaluation; (ii) diversity among the stakeholders selected for participation in the evaluation; and (iii) depth of stakeholder participation along a continuum of methodological stages of the evaluation process. We considered each of these dimensions to operate like a semantic differential. That is to say, any given CAE at any given point in time could be rated on a scale of 1 to 5 depending on how the evaluation was taking shape. We can see each of the three scales in Box 2. We also made the claim that the three dimensions were orthogonal or independent