Collaborative Approaches to Evaluation. Group of authors

and the development of program skills, including systematic evaluative inquiry. To the extent that stakeholders are directly engaged with knowledge production, the evaluation will have greater success in getting a serious hearing when program decisions are made. Transformative outcomes reflect change in the way organizations and individuals view the construction of knowledge and in the distribution and use of power and control. Enhanced independence and democratic capacities are the sorts of social change that could be labelled transformative. Working collaboratively can deepen the sense of community among stakeholders and enhance their empathy toward intended beneficiaries through the development of their understanding of complex problems. Transformational outcomes are more likely when the facilitating evaluator is skillful in promoting inquiry and has expertise in human and social dynamics. Being prepared to work toward transformational outcomes almost certainly means being prepared to work in contexts where there are differences and even conflict. Given the interplay between practical and transformative outcomes, evaluators working on CAE would be wise to negotiate with stakeholders about i) the range of possible outcomes given the scope of the evaluation, ii) the outcomes most worthy of purposeful attention, and iii) how joint efforts might best facilitate these outcomes.

      The foregoing description of the principles provides a good overview to support the development and implementation of CAE. The principles are grounded in the rich experiences of a significant number of practicing evaluators. Their credibility is enhanced by virtue of the comparative design we used to generate the evidence base as well as the validation exercise described above. In his recent book on principle-based evaluation, Patton (2017) explicitly acknowledged their quality: “For excellence in the systematic and rigorous development of a set of principles, I know of no better example than the principles for use in guiding collaborative approaches to evaluation” (p. 299).

      But in and of themselves, mere descriptions of the principles remain somewhat abstract. To enhance their practical value in guiding CAE decision-making and reflection, we developed, for each principle, summary statements of evaluator actions and principle indicators framed as questions that could be posed while an evaluation project is being planned or implemented. This information is summarized in Table 1 and was included in an indicator document to complement the descriptions of the principles and their supportive factors.

      The actions and indicator questions provided in the Table (and in the indicator document) have not been subjected to formal review or validation. They are the product of our own collective reflections on CAE and are therefore only indirectly grounded in knowledge garnered through working with the base data set. Nevertheless, we offer these actions and indicators as a way for potential users of the CAE principles to apply them in practice. Notably, following or applying the principles would require evaluators to draw on a range of interpersonal and soft skills, including facilitation, negotiation, promotion, and monitoring. Such skills, we would argue, come with considerable practical experience; they are not easily picked up in courses or workshops.

      Having provided a summary overview of the set of eight effectiveness principles for CAE and their associated actions and indicators, we now turn to considerations about how these principles may be applied to the benefit of evaluators, program and organizational stakeholders, and the evaluation community at large.

      Envisioned Uses and Applications of CAE Principles

      In our view, a range of possibilities exists for the application of the CAE principles. Here we comment on six main applications; no doubt others exist. These are prospective planning, framing, and doing; retrospective analysis and critique; designing and delivering education and training; reviewing and developing evaluation policy; translating and applying the principles in cross-cultural contexts; and conceptual framing of RoE.

      Prospective Planning, Framing, and Doing

      Perhaps the most direct and obvious use of the principles would be to guide practice prospectively. We envision evaluators working collaboratively with members of the program community to plan and implement CAE on the basis of guidance from the principles. The indicator document provides some clues about actions to take and questions to ask in order to adhere to the tenets of the principles. We would expect the collective exercise of becoming familiar with and internalizing the principles to be instructive and, at the same time, to inspire consideration of alternative courses of action. As a given CAE project unfolds over time, the principles could be used to stimulate ongoing reflection and dialogue, perhaps leading to alternative actions and/or decisions to reconsider strategies. Following implementation, it would be useful to reflect collectively on the process and debrief about lessons learned, in terms of both what went well and what challenges require attention. This suggestion segues naturally into the next envisioned application.

      Retrospective Analysis and Critique

      To suggest that hindsight is 20/20 is, in our view, a bit optimistic, given that everyone exposed to a particular experience will in fact have experienced and remembered it differently. Another useful application of the principles, we believe, would be to use them as a guide to reflection after CAE projects have been completed. The more systematic such analyses can be, the more fruitful they will be, in our opinion. For example, participants in a CAE project may wish to carefully identify and recruit other participating members for a structured dialogue about the process. Such a conversation would necessarily rest on individual and collective memories of what transpired, though those memories can be aided by artifacts and other cues. Even so, it would be important not to delay such reflection too long after a project has been completed, since memories fade and people move on to other things. The primary benefit of such analyses would be to generate lessons learned that could inform future practice.

      Designing and Delivering Education and Training

      To date, we have delivered a range of full-day, half-day, and two-hour workshops and seminars using the CAE principles as a framework (see Appendix 1). One can easily imagine using the principles to structure more protracted educational experiences, such as a graduate-level evaluation course. Each of the eight principles could, for example, anchor a specific module, augmented with introductory and integration modules. Given the requirement for evaluators to employ a range of interpersonal soft skills, such a course would ideally involve some practical experience in the form of exercises, activities, and/or authentic practice.

      Our discussion to this point has focused on the professional development of evaluators, but of course, we should not overlook that of program community stakeholders. Another option for training could be seminars and workshops for program community members and organization members with an interest in applying evaluation in a self-directed way. Some of our colleagues (e.g., Alkin & Vo, 2018) have successfully provided highly accessible and readable texts on evaluation that have been quite useful in engaging persons who are, for all intents and purposes, uninitiated in evaluation matters. It is not difficult to imagine the development of support materials framed by the CAE principles that can serve the same purpose.

      Reviewing and Developing Evaluation Policy

      Hind Al Hudib, one of our COVE research team members, has over the past few years been conducting in-depth research into the interconnections between evaluation policy and evaluation capacity building (ECB). Her research includes a review and analysis of a large sample of organizational evaluation policies as well as an interview study with several contributors to the research knowledge base (Al Hudib, 2018). Suffice it to say that the linkage between policy and ECB is not a strong one; in fact, organizational evaluation policies are generally seen as symbolic and benign. However, her research offers some evidence that organizations are motivated to revise their evaluation policies to make them more engaging and useful. One of the chief concerns arising

