Collaborative Approaches to Evaluation. Group of authors
avoid “too many unspoken assumptions” (study participant). Close and constant contact can be instrumental to real-time communication, relationship building, and expectation clarification. The constructive exploration of differences and search for solutions that go beyond one’s own limited vision are at the crux of cultural competency. In CAE, building respectful sustainable relationships is essential.
Develop a Shared Understanding of the Program (program logic, organizational context): Is the program commonly understood among program and organizational community members and evaluators? Is everyone in agreement about intended program processes and outcomes? The principle promotes the explication of the program logic situated within context. Involving program stakeholders in the program description process is a useful way to deepen understanding about program logic. “The involvement of stakeholders provides a more accurate definition of the terms, problems, and population needs [and] culture” (study participant). Focusing on a mutual understanding of what is being evaluated can reduce the likelihood of stakeholders moving forward in the evaluation with unrealistic expectations. Organizational context is also a significant consideration in this regard. It is important for stakeholders to feel comfortable and confident in the capacity of the organization to embrace the process. Disruptive forces such as a change in administration can diminish this capacity. Evaluators need to monitor the organizational context as the project unfolds.
Promote Appropriate Participatory Processes (diversity of stakeholders; depth of participation; control of decision-making): What does it mean for stakeholders to participate in a CAE? The principle encourages deliberate reflection on the form that the collaborative process will take in practice with regard to specific roles and responsibilities for the range of stakeholders identified for participation. Collaboration in CAE can be operationalized in a contextually responsive way. It is important for evaluators to consider diversity in stakeholder participation, particularly with members or groups who might not otherwise have been involved. A challenge, however, is not just identifying such diversity but negotiating participation. Involving organization and program stakeholders at relatively deep levels of participation in the evaluation process can pay off significantly, as suggested by this survey respondent:
Participants were close to—and ultimately owned—the data. They helped design the tools, collect the data, analyze the data, interpret the data, and present findings. It wasn’t just buy-in to the processes and outcomes; it was implementing the process themselves (not being led through) and generating (not being given and asked for their thoughts about) and owning the outcomes.
An important consideration is control of decision-making about the evaluation, which may be difficult to manage. Evaluator openness to sharing control of the evaluation—in terms of instrument choice, data collection, and the interpretation of findings—is an important strategy. At the same time, complications can easily arise around the control of decision-making, particularly when power issues among stakeholders are present.
Monitor and Respond to Resource Availability (budget, time, personnel): Issues of time and money are challenges for any evaluation but in CAE, important interconnections are associated with personnel. Participating stakeholders are a significant resource for CAE implementation. In addition to fiscal resources, the principle warrants serious attention to the extent to which stakeholder evaluation team members are unencumbered by competing demands from their regular professional roles. If the collaboration is identified as part of the job for those who will be heavily involved, evaluators should ask what aspects of their normal routine will be removed from their list of responsibilities during the evaluation. This would be one way to set appropriate expectations. Evaluators need to monitor stakeholder engagement and perhaps develop strategies to motivate staff. Such engagement can be eroded by emerging conditions within the evaluation context. Another aspect of interest is the skill set that stakeholder participants bring to the project and the extent to which evaluators can help to match skills and interests to the tasks at hand. Program and organizational stakeholders are also a key resource for program content and contextual knowledge. “The evaluator was not an expert in the program content area and absolutely needed stakeholders to provide clarity about how the data would be used and what the boundary conditions were for asking questions of intended beneficiaries” (study participant).
Monitor Evaluation Progress and Quality (evaluation design, data collection): Just as program and organizational stakeholders can help evaluators to understand local contextual exigencies that bear upon the program being evaluated, there is a significant role for evaluators in contributing to the partnership. The principle underscores the critical importance of data quality assurance and the maintenance of professional standards of evaluation practice. One aspect of the role concerns evaluation designs and ensuring that any adjustments preserve design integrity and data quality. Such adjustments may be necessary in the face of changes in the evaluation context. Acknowledging, and sometimes confronting one another with, a growing lack of fit between the intended evaluation design and the capacity of the collaboration to implement it can be productive and critical to salvaging evaluation efforts. Challenges with data collection are particularly salient and critical to ensuring data quality. It is essential for evaluators not to assume that stakeholders appreciate the implications of data quality for findings and outcomes, as the following excerpt suggests: “Front-line staff, who are responsible for collecting the data, did not understand the importance of getting it collected accurately.” Given the instructional role for evaluators, it is worthwhile to build in funding for such professional development processes. Such attention may reduce the amount of monitoring necessary as the project unfolds and can go a long way toward preserving the integrity of the evaluation.
Promote Evaluative Thinking (inquiry orientation, focus on learning): The principle inspires the active and conscious development of an organizational culture of appreciation for evaluation and its power to leverage social change. Evaluative thinking is an attitude of inquisitiveness and belief in the value of evidence, and CAE provides a good opportunity to develop it. When evaluative thinking is enhanced through collaboration, evaluation processes and findings become more meaningful to stakeholders, more useful to different decision makers, and more organizationally effective. The development of an inquiry orientation is an organizational culture issue and will not happen overnight, but evaluators can certainly embrace a promotional stance as the evaluation unfolds. Significant energy may be well spent helping collaborators to become invested in the learning process and to be prepared for the unexpected. In essence, evaluators would do well to be opportunistic in this respect, as the following excerpts suggest: “Because of the stakeholder commitment, results were used as an opportunity to learn and grow;” “stakeholders were willing to accept negative or contrary results without killing the messenger.” Organizational and program stakeholders who embrace the learning function of evaluation will have greater ownership and will be less likely to view it as something for someone else to do.
Follow Through to Realize Use (practical outcomes, transformative outcomes): To what extent is the evaluation a valuable learning experience for the stakeholder participants? The principle promotes the conscious consideration of the potential for learning, capacity building, and other practical and transformative consequences of the evaluation. Implicated are evaluation processes and findings, as well as the evaluator’s role in facilitating these desirable outcomes. Practical outcomes at the organizational level influence program, policy, and structural decision-making, and they are seen through a change in disposition toward the program or