Collaborative Approaches to Evaluation

      In considering the list of family members appearing in Box 1, it is important to keep in mind that the list is incomplete. It is also critical to recognize the fluid nature of participation and collaboration, even within a single project. For example, an evaluation might start out highly collaborative but, in response to resource constraints, competing interests, or other emerging and perhaps wholly unforeseen exigencies, become less so. It may even be the case that the evaluation is ultimately completed only by evaluator members of the team. Yet we would still consider such an example to be an instance of CAE because it involved, at some point, members of the program community in the knowledge production process.

      Another consideration is that some members of the list, depending on precisely how they are implemented, may or may not be collaborative. Consider, for example, contribution analysis and utilization-focused evaluation. While both approaches are framed as relying on stakeholder participation and genuine contribution, it is entirely possible to implement them in ways that render participation merely performative or symbolic.

      And so, in considering whether a specific evaluation is collaborative, it is always important to come back to the essential criterion: Did nonevaluator members of the program community authentically engage with evaluators in the evaluative knowledge production process? This holds regardless of how the approach is labelled.

      When Do We Use CAE?

      Many would agree that there are two fundamental functions for evaluation. On the one hand, there is the accountability function, the main driver of technocratic approaches favored by public sector governance and bi- or multilateral aid agencies (Chouinard, 2013). On the other hand, there is the learning function, which appeals to a much broader range of stakeholders (Dahler-Larsen, 2009; Preskill, 2008; Preskill & Torres, 2000). Arguably, a third is the transformational function of evaluation (Cousins, Hay, & Chouinard, 2015; Mertens, 2009), which seems particularly relevant to CAE, as we elaborate below. We argue that CAE is most suited to evaluation contexts where learning and/or transformational concerns are paramount, although some aspects of accountability are implicated as well.

      When It’s About More Than Impact

      The accountability function is essential to the overt demonstration of fiscal responsibility, that is, showing the wise and justifiable use of public and donor funds. It comes as no surprise that in accountability-driven evaluation, the main interests being served are those of senior decision and policy makers acting on behalf of taxpayers and donors. As such, a premium is placed on impact evaluation, particularly on the impartial demonstration of the capacity of interventions to achieve their stated objectives. Such information needs are generally not well served by CAE, although some approaches are sometimes used to these ends (e.g., contribution analysis, empowerment evaluation, most significant change technique). In fact, contribution analysis seems well suited in this regard (Mayne, 2001, 2012). Rather than fixating on claims of program attribution to outcomes through the use of a statistical counterfactual, contribution analysis focuses on supporting program contribution claims through plausible, evidence-based performance stories. While the accountability agenda is, and is always likely to be, essential, many have observed that reliance on the associated single-minded evaluation approaches serves to diminish, even marginalize, the interests of the much broader array of stakeholders (e.g., Carden, 2010; Chouinard, 2013; Hay, 2010).

      If we take into account, indeed embrace, the legitimate information needs of a very broad array of program and evaluation stakeholders, traditional mainstream evaluation designs are not likely to be particularly effective in meeting those needs. What good, for example, is a black box approach to evaluation (e.g., a randomized controlled trial) to program managers whose main concern is to improve program performance, thereby making it more effective and cost-efficient? How could such an evaluation possibly help program developers to truly appreciate the contextual exigencies and complex circumstances within which the focal program is expected to function, and to design interventions in ways that suit those circumstances? And what about program consumers? It is relatively easy to imagine that their concerns center on their experience with the program and their sense of the extent to which it is making a difference for them. Evaluations that are single-mindedly focused on demonstrating program impact are likely to be of minimal value to such people, if any at all.

      Single-minded impact evaluations are likely best suited to what Mark (2009) has called fork-in-the-road decisions. When decisions to continue to fund or to terminate a program define the information needs giving impetus to the evaluation, the evaluation will be exclusively summative in nature and orientation. But such decisions, as a basis for guiding evaluation, are relatively rare. More often, formative, improvement-oriented evaluation interests are commingled with summative questions about the extent to which programs are meeting their objectives and demonstrating effectiveness (Mark, 2009).

      To the extent that formative interests are prevalent in guiding the impetus for evaluation, the learning function of evaluation carries weight, and CAE would be a viable evaluation option to consider. In formative evaluations, program community members, particularly primary users who are well-positioned to leverage change on the basis of evaluation findings (Alkin, 1991; Patton, 1978), stand to learn a great deal about the focal program or intervention as well as the context within which it is being implemented. Creating the opportunity for such learning, some would argue, is a hallmark of CAE (e.g., Cousins & Chouinard, 2012; Dahler-Larsen, 2009).

      When It’s Developmental

      In addition to, and quite apart from, summative and formative evaluation designs is developmental evaluation (DE) (Patton, 1994, 2011). Unlike contexts in which a specific intervention already exists and is being implemented, in DE evaluators work alongside organizational and program community members to identify and develop innovative interventions through the provision of evidence-based insights. With evaluators at the decision-making table, DE is by definition collaborative and therefore a member of the CAE family.

      Despite the argument that DE is distinct from summative

