to voluntarily participate in the evaluation.
A graduate student would do most of the data collection, under the evaluator’s supervision. The student was fluent in Spanish and English, and this project would be the subject of the student’s master’s thesis. The evaluation’s final product would be a presentation of results, in PowerPoint format, with the slides and notes delivered to the program director and funder.
Data Collection. The student administered the staff surveys in person. These surveys asked how long the staff members had worked with the Health Care Collaborative, what they did in the program, how they viewed the participants, and what difference the program made in the neighborhood. Surveys of other providers involved with the Health Care Collaborative were web-based. The questions concerned what kinds of interaction the providers had at the Health Care Collaborative, with whom, and how often; how that relationship affected both organizations; and what services the respondent provided to resident-participants in the Health Care Collaborative.
The Health Care Collaborative staff administered surveys to program participants during ongoing program contact. The student also conducted a small number of interviews with people identified because of their long history of working with this particular neighborhood; these interviews included open-ended questions about the neighborhood's history.
The student observed both staff and participants in small-group health care awareness sessions to enrich the evaluator's and student's understanding of the program, its staff, and the participants. Participants' journals provided inspirational stories of their experiences in navigating the health care maze.
Data Analysis and Interpretation. From the surveys, some data were aggregated and reported descriptively (e.g., comparisons of the racial and ethnic composition of the Health Care Collaborative participants with that of the neighborhood). Scaling and cluster analyses were used to structure and analyze the results of the focus groups, and some journal entries and responses to open-ended interview questions were also analyzed.
Overall, the program served a disproportionate number of Hispanic adults (compared with the neighborhood's composition) and disproportionate numbers of people without health insurance and without other known ways to access health care. Participants and staff were very positive about the program and its value in their neighborhood and lives. Participants overwhelmingly credited the use of racially and ethnically diverse staff drawn from the neighborhood itself as the main reason for the Health Care Collaborative's success.
Younger adults expressed more concern about financial issues related to health care than did older adults. The Hispanic participants in the focus groups were all female, and most were unemployed. Across all three focus groups, whether participants were treated fairly and had access to insurance and to health care mattered more than waiting times or actually getting to appointments.
When the evaluator and student felt comfortable with their work, they shared draft findings informally with the program director, funder, and Board members through in-person and telephone conversations and by email. The feedback they received was considered in revising those findings and in developing the final product.
Dissemination and Utilization of Results. The final evaluation briefing was delivered at a meeting of the Health Care Collaborative Board, to which the funder and some residents were invited. The funder could not attend this meeting, accepted the electronic PowerPoint file, and asked no further questions. Only one resident, a regular attendee of Board meetings, was present for the briefing. Two or three questions were asked, apparently out of curiosity rather than for any particular purpose. No future plans for the findings were discussed at this meeting.
The student completed the thesis based on this project, and it was very well received by the faculty committee. The evaluator adapted the evaluation for use in an advanced evaluation course for graduate students.
The student and evaluator also submitted a poster proposal on the evaluation findings to an annual national professional conference in their discipline. The proposal was accepted, and they developed a large poster covering the basics of the evaluation. Those who stopped to read and talk about the evaluation expressed admiration for its scope and methods.
As the evaluator, what are some things that you would do differently to better ensure that your actions are ethically defensible?
Source: This case is republished with permission of the American Evaluation Association (with minor edits).
The Program Evaluation Standards
In addition to the Evaluators’ Ethical Guiding Principles, the Program Evaluation Standards is another document that provides guidance and direction for those in the evaluation field. It offers much more specificity regarding what to do and what not to do in program evaluation than the Evaluators’ Ethical Guiding Principles. Whereas the Evaluators’ Ethical Guiding Principles are concerned specifically with the ethical conduct of the evaluator, the Program Evaluation Standards pertain to the quality of the evaluation. Initially established in 1981 by the Joint Committee on Standards for Educational Evaluation1 and revised in multiple editions since then, the Program Evaluation Standards provide guidance for improving evaluation quality and accountability. The Program Evaluation Standards contain 30 standards organized around five central attributes of evaluation quality: (a) utility (N = 8 standards), (b) feasibility (N = 4 standards), (c) propriety (N = 7 standards), (d) accuracy (N = 8 standards), and (e) evaluation accountability (N = 3 standards). A full description of the 30 Program Evaluation Standards is provided in Appendix B. An overview of the five central attributes discussed in the Program Evaluation Standards, adapted from Yarbrough et al. (2011), includes the following:
1 The Joint Committee on Standards for Educational Evaluation (JCSEE) is supported by 17 sponsoring organizations and has been a member of the American National Standards Institute (ANSI) since 1989. Throughout its history, the mission of the JCSEE has remained to develop and implement inclusive processes that produce widely used evaluation standards serving educational and social improvement. To learn more about the history and organizational support of the JCSEE, visit www.jcsee.org.
Utility standards are concerned with evaluation use, usefulness, influence, and misuse. Utility is supported by standards that provide guidance to increase the likelihood that the evaluation will have positive consequences and substantial influence, such as contributing to stakeholders’ learning, informing decisions, leading to improvements, or providing information for accountability judgments.
Feasibility standards are intended to increase evaluation effectiveness and efficiency by ensuring that an evaluation is practical, efficient, and contextually viable. These standards highlight the logistical and administrative requirements of evaluations that must be managed, bring the world of possible evaluation procedures into the world of practical procedures for a specific evaluation, and serve as a precondition for other attributes of quality.
Propriety standards support what is proper, fair, legal, right, and just in evaluations. These standards cover three overlapping domains: (a) the evaluators’ and participants’ ethical rights, responsibilities, and duties; (b) systems of laws, regulations, and rules that regulate the conduct of people and organizations, such as federal, state, local, and tribal regulations and requirements, institutional review boards, and local/tribal constituencies that authorize consent to work in and with respective communities; and (c) the roles and duties inherent in evaluation professional practice.
Accuracy standards seek to increase quality in data collection and analyses and to increase the truthfulness and dependability of evaluation representations, propositions, and findings by urging that evaluations strive for as much accuracy (i.e., validity, reliability, reduction in error and bias) as is feasible, proper, and useful to support sound conclusions and decisions in specific situations. Ignoring