Collaborative Approaches to Evaluation. Group of authors



evaluation options is substantial. The list appearing in Box 1 is incomplete. CAE is on the rise in a range of evaluation contexts including international development evaluation, cross-cultural evaluation, and DE contexts. In response to mainstream privileging of the statistical counterfactual as the gold standard for impact evaluation, there is growing interest in developing alternative approaches, many of which could be considered CAE (e.g., Rugh, Steinke, Cousins, & Bamberger, 2009). In North America, CAE is the most commonly used approach for the evaluation of interventions with Indigenous peoples (Chouinard & Cousins, 2007; Hoare, Levy, & Robinson, 1993). In the evaluation of social innovation, DE is most commonly used (Milley, Szijarto, Svensson, & Cousins, 2018); as health care innovations such as patient engagement develop, CAE becomes a much better fit than traditional approaches to evaluation in this sector (Gilbert & Cousins, 2017). All of these approaches share a common theme: evaluators work in partnership with members of the program community to produce evaluative knowledge. As such, it is both reasonable and desirable to develop a set of effectiveness principles to guide CAE practice.

      A second warrant relates to a recent development in the field, specifically, that Fetterman and colleagues (2018) have not only framed collaborative, participatory, and empowerment approaches as being comprehensive, but they have taken it upon themselves to nuance the specific dimensions distinguishing these three approaches. They concluded that control of evaluation decision-making (one of the dimensions of process in Figure 1) is the essential dimension along which the three approaches can be differentiated.

      Collaborative evaluators are in charge of the evaluation, but they create an ongoing engagement between evaluators and stakeholders…. Participatory evaluators jointly share control of the evaluation…. Empowerment evaluators view program staff members, program participants, and community members as the ones in control of the evaluation. (Fetterman, Rodriguez-Campos, Zukoski, & Contributors, 2018, p. 2, emphasis in the original)

      The authors cited a long list of colleagues who, they argued, recommend that stakeholder involvement evaluation approaches be differentiated. Yet, we observe that some of these publications provided critiques of only empowerment evaluation and suggested it to be, in practice, indistinguishable from other CAE approaches (e.g., Miller & Campbell, 2006; Patton, 2005); that is to say, they did not explicitly advocate differentiating among collaborative, participatory, and empowerment evaluation. Our main concern with this line of reasoning is that it runs the risk of evaluators self-identifying with particular approaches and seeking to apply them even where they may not be appropriate. In the foregoing excerpt, for example, we see reference to collaborative evaluators, participatory evaluators, and empowerment evaluators. From our perspective, decisions about i) whether CAE is warranted in the first place, ii) what its purposes will be, and iii) what it will look like in practice will depend on the context within which the program is operating and the information needs that present themselves. Perhaps this is why Miller and Campbell (2006) discovered that a wide range of alleged empowerment evaluations in their sample did not align well with theoretical tenets of the approach and instead resembled other CAE family members. While there may be some value in compartmentalizing different members of the CAE family, we remain somewhat opposed to this direction on the grounds that (i) it runs the risk of privileging method/approach over context, and (ii) it excludes a plethora of related collaborative approaches (Cousins et al., 2013; Cousins, Whitmore, & Shulha, 2014).

      The indispensable role of context in shaping evaluation approaches is, in fact, a third warrant for principles to guide CAE practice. In our view, a thorough analysis of the social, historical, economic, and cultural context within which focal programs operate, as well as of the impetus for evaluating the program in the first place, is a critical consideration in deciding i) whether a collaborative approach would be an appropriate alternative, ii) if so, what its purposes will be, and iii) what form it should take (see Figure 2). Recent work by colleagues such as Alkin, Vo, and Hansen (2013) and Harnar (2012) to develop visual representations of theory has great value in our view. By representing theories in this way, readers are provided with an accessible overview of a given theory on which to build their deeper understandings. They may also use such representations to draw comparisons among given evaluation theories. This work has great potential to help bridge the gap between theory and practice in the evaluation community. However, despite this inherent value, we remain somewhat skeptical about developing visual representations in relation to CAE. First, we would argue that the CAE family members are properly thought of as approaches and not necessarily models or theories (see, e.g., Cousins, 2013). Visual representation of practical and transformative participatory approaches runs the risk of unintentionally framing them more as prescriptive models or prototypes than as the fluid, context-sensitive approaches they are intended to be. We hasten to acknowledge Alkin's point (2011, personal communication) that evaluation theories represented visually are ideals, and their application in practice will be very much influenced by context. Alkin also subscribes quite directly to the notion that thorough analysis of the organizational, community, and political context of a program is essential evaluation practice (e.g., Alkin & Vo, 2018).

      A figure expresses the essential features of CAE.Description

      Figure 2 ■ Essential features of CAE (adapted from Cousins et al., 2013)

      The importance of context cannot be overstated, and that is why the systematic analysis of contextual exigencies before deciding the purpose and form of CAE is critical. As we have represented in Figure 2, program context is an ever-present filter through which subsequent activities and decisions flow. Essentially, context defines what we do, why we do what we do, how we do it, and even the methods that we use. Borrowing from Snowden and Boone's (2007) Cynefin framework, we previously argued that contexts can vary from simple to complicated, to complex, and even to chaotic situations (Cousins et al., 2013). Simple contexts are relatively predictable and controlled, and cause-and-effect relationships are well understood. In such cases, identified best practices may be warranted as solutions to important problems. In complicated contexts, perhaps more than one alternative solution would be worthy of consideration, yet in complex situations, where a high degree of uncertainty and unpredictability exists, cause-and-effect may be unknowable in advance. In fact, context-specific approaches that emerge in practice may be the best course of action. Finally, in chaotic situations, uncertainty may be so extreme and turbulent that cause-and-effect relationships are ultimately unknowable. Each of these program contexts is unique in some sense and would require differentiated approaches to program evaluation, particularly CAE. It is imperative, therefore, that contextual exigencies are well understood before deciding what CAE looks like and what it can be expected to accomplish. This being the case, we are heartened by the recent contribution by Vo and Christie (2015), who developed a conceptual framework to support RoE focused on evaluation context.

      Context is at the center of all three of the justifications for developing the principles to guide CAE practice described above. With the emergence of a wide range of family members and increasing enthusiasm for using CAE around the globe, it is essential to understand the implications of cultural and sociogeographic situations. Although there is some merit in compartmentalizing different approaches to CAE, we must guard against evaluators identifying with specific approaches and therefore being consciously or unconsciously drawn toward implementing them in situations that are not ideal. Finally, will the visual representation of theory inadvertently diminish the centrality and importance of contextual analysis? For all of the foregoing warrants, and on the basis of privileging context, we argue that it is now prudent and necessary to develop a set of effectiveness principles to guide CAE practice. In the next section we describe the systematic, empirical approach to the problem that we took and the initial set of principles that we developed and validated.

      Evidence-based Principles to Guide CAE Practice

      Systematic Approach

      It will come as no surprise to those familiar with our work that the approach we took to the development of CAE principles was empirical. We have long supported the concept of RoE, having identified it as an underdeveloped yet increasingly important area of our field (e.g., Cousins & Chouinard, 2012). Through systematic inquiry, we sought to tap into this domain of evaluation practice

