Evaluation in Today’s World. Veronica G. Thomas

that she would stress two things:

      We [first] should not be producing so many "johnny-one-notes," … evaluators who show up for work knowing only one methodology (usually survey research or the randomized field experiment)…. [E]valuators need to know how economists, engineers, and political scientists … deal with evaluation questions in their different disciplines….

      [Second] I would stress the study of context…. [S]o long as we try to force-fit evaluative ideas into political or social milieux without understanding those milieux we’ll never get them in…. [W]e need to understand bureaucracy if we want government to listen to us, we need to understand other evaluative professions both to borrow their methods and let them borrow ours…. Evaluation really has to fit in … and we must find partners, even though speaking truth to power is not calculated to bring us advocates. Perhaps the best way to do this is by avoiding zealotry, whether about methods or anything else…. If we can remember that there is no such thing as perfection, I think we’ll survive, find allies, flourish, and do an amazing job in helping to make government more transparent, more effective, and more publicly accountable. (Oral History Project Team, 2009, p. 244)

      Floraline I. Stevens

      Floraline I. Stevens served as the director of the Research and Evaluation branch of the Los Angeles Unified School District (LAUSD) from 1979 to 1994. She was also a senior research fellow for the National Center for Education Statistics (NCES) in Washington, DC (1991–1992), and a program director for the National Science Foundation (NSF) in the Education and Human Resources Directorate (1992–1994). While at NSF on an interagency personnel assignment from the LAUSD, Stevens conceived the idea for the NSF's seminal publication, User-Friendly Handbook for Project Evaluation: Science, Mathematics, Engineering, and Technology Education. The handbook was developed to give principal investigators and project evaluators a basic understanding of selected approaches to evaluation, building on firmly established principles and blending technical knowledge with common sense to meet the special needs of NSF programs and projects. It has been updated several times and remains in use today.

Professional portrait of Floraline I. Stevens.

      Stevens (2000) indicated that she became an evaluator in 1965 when the Elementary and Secondary Education Act (ESEA, now referred to as the Every Student Succeeds Act, or ESSA) Title I legislation was enacted and federally funded education programs were being implemented in the LAUSD:

      [T]here were no evaluation types in the school district. However, in response to the federal guidelines, the school district recruited a cadre of persons who had training in counseling because of their coursework in test and measurement, statistics, and research design…. We knew nothing about evaluation theories and evaluation procedures. (p. 42)

      Stevens described how she and her colleagues overcame the difficulties of being responsive to culture during an evaluation project in this large, diverse metropolitan school district. Their extensive knowledge of classroom culture and of the students' cultural backgrounds helped them collect accurate data in the schools where the ESEA Title I programs were operating. She said that she and her colleagues of color knew how to gain access to people and information in schools, a critical element in evaluation. Equally important, Stevens knew when the information provided did not make sense.

      It was from those early experiences that Stevens went on to evaluate her first science education project, an ESEA Title III, K–12 ecology- and biology-focused project. Subsequently, she evaluated many K–12 science education and science-focused programs. As the director of the Research and Evaluation branch of the LAUSD, Stevens developed ongoing professional development programs to help the evaluation staff become better qualified. She played a significant role in focusing attention on issues of race, culture, and context in program evaluation. In situations where programs involved ethnically diverse participants and stakeholders, Stevens (2000) called for the creation of multiethnic evaluation teams to increase the chances of truly hearing the voices of underrepresented students. Stevens was also a major champion of evaluation capacity building, especially in relation to increasing the number of minorities in the evaluation field. In the 1990s, she argued that the NSF should step forward to train minority evaluators of science and technology projects.

      When Stevens retired from the LAUSD in 1994, she formed Stevens and Associates, an independent evaluation and research consulting firm. Her early work and involvement with various NSF minority capacity-building efforts had a visible impact on the developing scholarship related to culturally responsive evaluation.

      Lois-Ellin Datta

      For decades, Lois-Ellin Datta has been a leading evaluation researcher and international consultant. She also served in many other roles over a distinguished 30-year career in government in Washington, DC. For example, she was director of program evaluation in the human services area at the U.S. General Accounting Office's (GAO's) Program Evaluation and Methodology Division and director for teaching, learning, and assessment at the U.S. Department of Education's National Institute of Education. Over the years, Datta has also worked with the Maori in New Zealand and with Native Hawaiians.

Professional portrait of Lois-Ellin Datta.

      In the 1960s, Datta served as the national director of evaluation for Project Head Start and the U.S. Children's Bureau. Reflecting on that time some years later, Datta (2018a) noted that Head Start's immediate popularity was overwhelming and raised the stakes for evaluation, while also pointing to the program's obvious face validity and demand validity. For her, a major takeaway from those pioneering evaluation days was the importance of mixed methods, multiple approaches, and diverse designs and analyses to address the complexities and multiple dimensions of a major program like Head Start, a position she has urged on the field ever since. Datta (2018b) has stated that

      In the years and evaluations that followed [after Head Start] every study was a new opportunity to think through what approaches seemed good fits with the contexts, complexities, evaluation questions, stakeholders, deadlines, costs, and, yes, new ideas to try out. (p. 20)

      Datta's numerous contributions to the field of evaluation include serving as editor-in-chief of New Directions for Evaluation and serving on the editorial boards of the American Journal of Evaluation, the Encyclopedia of Evaluation, and the International Handbook of Educational Evaluation, among others. Her publications have significantly advanced thinking and practice in evaluation, particularly in the areas of case study methodology, evaluations in nontraditional settings, and mixed-methods evaluation approaches (Oral History Project Team, 2004). The author of numerous books and more than 100 articles about evaluation, she has always been keenly focused on achieving social justice and mindful of the importance of policy.

      Laura Leviton

      Laura Leviton is a coauthor of Foundations of Program Evaluation (Shadish, Cook, & Leviton, 1991), one of the first comprehensive assessments of evaluation theories, offering both an insightful analysis of the state of evaluation theory and suggestions for improving evaluation practice. From 1999 to 2017, Leviton served as special advisor for evaluation at the Robert Wood Johnson Foundation, an organization that seeks to improve the health and health care of all Americans. The position was created for her so that she could advise and consult on evaluations across the foundation's many initiatives and national programs; she described her role as working to ensure the quality and consistency of the foundation's research and evaluation and its impact on health and health care nationwide. During her time at the foundation, Leviton oversaw more than 80 national and local evaluations. She is interested in all aspects of evaluation methodology and practice and has been recognized as a leader in the field of evaluating community health promotions (Francisco, Butterfoss, & Capwell, 2001). She collaborated on the first

