Collaborative Approaches to Evaluation
understand what characterizes or describes effective work and differentiates it from practice that is less so. Other approaches to principle development have been heavily grounded in practice and relied on the experience of renowned experts in the domain (e.g., DE principles, Patton, 2011) or based on fairly intensive consultative, deliberative processes (e.g., empowerment evaluation principles, Fetterman & Wandersman, 2005). In both instances, proponents draw heavily from practical wisdom. Our intention was to do the same but to do so through a rather significant data collection exercise.
Our methodology was comparative, but we relied on practicing evaluators to generate the comparisons from their own experience (i.e., within-respondent comparisons). Essentially, we wanted to ask evaluators who practice CAE (in whatever form) about their positive and less-than-positive experiences within the genre. Our sample (from three evaluation professional associations) of over 300 evaluators derived largely, but not exclusively, from North America; a substantial portion corresponded to those working in international development contexts. The approach that we took was to have participants think about a CAE project from their own experience that they believed to be highly successful. They were then asked to describe the project according to a set of questions, and in particular, they were asked to identify the top three reasons why they believed the project to be successful. Having completed this first part, participants were then asked to identify from their experience a project they considered to be far less successful than hoped. They responded to an identical set of questions for this project, but they were asked to identify the top three reasons why the project was not successful.8 We had done some preliminary pilot work, and we were quite pleased with the response we received (N=320). The data from this online survey were predominantly qualitative and provided us with a rich sense of what works in CAE practice.
8 The order of successful and less-than-successful projects and corresponding sets of questions was counterbalanced to protect against response bias.
Themes (reasons) emerged through an analysis of the qualitative responses, and these provided the basis for our development of higher-order themes (contributing factors) and ultimately draft principles. Some themes we considered to be particularly critical because they represented not only a reason why a given project was perceived to have been highly successful but also, in a separate instance, a reason why a project was perceived to have been limited. For example, in a hypothetical CAE with ample resources, this factor may have contributed substantially to success; conversely, in another project, a lack of resources may have been limiting and intrusive. We called these critical factors. Ultimately, we generated a set of eight principles and then asked 280 volunteer members of our sample to look over the 43-page draft as part of a validation exercise. Given the magnitude of this task (realistically requiring at least a half day), we greatly appreciated the generosity of the 50 participants who responded.
Based on the feedback, we made a range of changes to the wording and characteristics of the draft principles and developed the final version of the preliminary set, subsequently published in the American Journal of Evaluation (Shulha, Whitmore, Cousins, Gilbert, & Al Hudib, 2016).
Description of the CAE Principles
Figure 3 provides an overview of the set of eight CAE principles resulting from our validation process. There are at least four important considerations to bear in mind in thinking about this set. First, the set is to be thought of as a whole, not as a pick-and-choose menu. This aligns with the point made above that each and every principle in the set, if followed, is expected to contribute toward the desired outcome, that is, a successful CAE project. It is therefore possible for evaluation practitioners to follow each of the principles without risk of confusing or confounding purposes. The extent to which each principle is followed or weighted will depend on context and the presenting information needs. A second consideration is associated with the individual principles being differentially shaded yet separated by dotted lines in the visual representation. These two features of the diagram imply that while each principle contributes something unique, there is expected to be a degree of overlap among them. That is to say, they are not to be thought of as mutually exclusive. Third, we make the claim that the principles are in no specific order, although it may be argued that there is a loose temporal ordering beginning with clarify motivation for collaboration and ending with follow through to realize use. Important to note is that we intend for the CAE principles to support an iterative process, as opposed to a lockstep sequential one. Many of the principles described below require ongoing monitoring and adjustment of the evaluation and collaboration as time passes. For example, foster meaningful relationships requires continuous attention and may reassert itself as a priority during a clash of values or a change in stakeholder personnel. Finally, it might be noted that some of the principles laid out in Figure 3 might apply to mainstream approaches to evaluation as much as they do to CAE.
This may be true, but it is important to recognize that (i) these principles emerged from detailed data from evaluators practicing CAE, and (ii) each is somehow unique in its application to the collaborative context, as we elaborate below.
Figure 3 ■ Evidence-based CAE principles (adapted from Shulha et al., 2016).
We now turn to a brief description of each of the principles. Readers interested in a more detailed description and commentary may wish to consult Shulha et al. (2016). In the text to follow, supportive factors for each principle, which were derived from themes in our data, are identified in parentheses (following the title) and through the use of italics (in the descriptive text).
Clarify Motivation for Collaboration (evaluation purpose; evaluator and stakeholder expectations; information and process needs): Evaluators should be able to describe and justify why a CAE was selected in the first place. Why use CAE as opposed to a conventional or other alternative approach to evaluation? The principle encourages the development of a thorough understanding of the justification for the collaborative approach based on a systematic examination of the context within which the intervention is operating.
Clarity on these issues will help to ensure CAE is both called for and appropriate as a response to the evaluation challenge. Program improvement, opportunities for individual and organizational learning, and organizational capacity building were among the evaluation purposes suggested to be most conducive to CAE. On the other hand, accountability-oriented and legitimizing purposes could be counterproductive. Clarifying evaluator and stakeholder expectations for collaboration early on can be quite beneficial and can potentially lead to stakeholders leveraging networks and resources to help. CAE processes that are somehow mandated are less likely to be successful. Finally, clarification about information needs and priorities is an important supportive factor; evaluators can work with organizational or program stakeholders to help generate such clarity. Such activity helps to focus the evaluation and ensure that it will generate information that will be valued.
Foster Meaningful Relationships (respect, trust and transparency; structured and sustained interactivity; cultural competence): The principle inspires the conscious development of quality working relationships between evaluators and program stakeholders and among stakeholders, including open and frequent communication. Successful CAE projects benefit from a “highly cooperative and collaborative organizational context, with abundant positive peer/professional relations and a wholesome, trusting, organizational climate” (study participant). Trust and respect are not givens and must be developed through ongoing interaction and transparency. While there is certainly a role for evaluators here, efforts on the part of program and organizational stakeholders are implicated as well. Trust and respect can be leveraged through ongoing sustained interactive communication