to contact the principal investigator for more information.
We received a good response to the call, with ten proposals coming from Europe, the Middle East, Latin America, the USA, and Canada. The core editorial team (members of the COVE research program group) reviewed the proposals and met to discuss their relevance, their potential, and the feedback to be provided to the authors. Ultimately, we decided that eight of the proposed studies were potentially publishable in the volume, and the editor (Cousins) subsequently wrote to the authors to provide feedback and guidance. Beyond proposal quality, three considerations were of particular interest in making the selections: (i) an empirical test of the principles, which aligns with our commitment to RoE; (ii) diversity in context and geography, which speaks to our commitment to a global test drive process; and (iii) diversity in the application of the principles, to minimize redundancy and enhance understanding of the scope of CAE principle application.
Review Process
Simultaneously with the call for proposals and our initial review of proposals, we recruited colleagues from the USA, Canada, Europe, and the Middle East to serve as editorial board members for the volume. The list of participating board members appears in the front matter of the book. These individuals all have experience with CAE and in most cases have contributed to the professional literature on the topic. We are indebted to these colleagues for their generous contributions.
The peer review process may be thought of as single-blind review, and this was made known to the authors and the peer reviewers (editorial board members) from the outset. Each proposal underwent a pre-read by the editor, who provided initial feedback to the authors. Authors then tightened their drafts and submitted them for single-blind peer review. Each draft chapter was reviewed by one core editorial board member (a COVE team member) and two additional editorial board members. Reviewers were asked to consider the following questions as they assessed their assigned draft chapters:
1 Is the purpose of the field study clear and well justified?
2 Are the methods used to gather and analyze data clear and suitable? Were steps taken to assure data quality?
3 Are the conclusions drawn supportable from the findings provided? Do they comment on implications for the use, application, and/or revision of the CAE principles?
4 Is the paper well organized and written?
All reviews were sent to the editor, who independently read each draft chapter and then the peer reviews of it. A letter of decision was then sent to the authors, identifying the main points of concern to address in revision. Anonymized versions of the reviewer comments were appended to the letter of decision. No promises of publication were made. Authors then responded to the editorial and reviewer feedback and resubmitted their chapters for further consideration. All chapters were ultimately accepted by the editor, some after continued negotiation and revision.
And so now, we are proud to present a global test drive of the CAE principles. We hope you will agree that the quality, contextual diversity, and range in application provide an informative, interesting, and compelling review of the preliminary set of principles. We invite readers to review Chapter 10 where we present an integration of the field test results and associated implications for the ongoing use, application, and revision of the principles.
Questions for Reflection
1 What are the primary benefits of relying on a set of evidence-based principles to guide CAE practice? How would you know if such benefits accrued?
2 To what extent should CAE principles be used prescriptively? Why? What are some risks of overprescribing intended practice in CAE on the basis of the principles?
3 Listed in this chapter are a range of suggested applications of the CAE principles. Can you think of others? What would they be? Of the range of potential applications of the principles, which are likely to prove most beneficial to the evaluation community, broadly defined? Why?