the reader to the range of evaluation frameworks, models, and theories that make up evaluation, while Chapter 5, “Social Justice and Evaluation: Theories, Challenges, Frameworks, and Paradigms,” provides an overview of social justice issues and theories. Chapter 5 also builds on the content of Chapter 4 to show how social justice frameworks and paradigms modify and advance more traditional models and theories. Chapter 6, “Evaluation Types With a Cultural and Racial Equity Lens,” examines the major categories and types of evaluation, when they typically occur, their purposes or major strengths, and their primary audiences.
Chapter 7, “Social Programming, Social Justice, and Evaluation,” moves from the theory, models, and history of evaluation to what will be evaluated. Along with describing social programming and graphically illustrating its components through various types of logic models, the chapter explores the issues, challenges, and complexities of implementing and evaluating social programs in a diverse society.
Chapters 8 through 14 cover the “how to” or practical aspects of doing an evaluation. Chapter 8, “Responsive Stakeholder Engagement and Democratization of the Evaluation Process,” discusses the importance of stakeholder engagement, provides a variety of ways to improve its quality and quantity, and shows how greater engagement can positively influence the evaluation process. Chapter 9, “Planning the Evaluation,” and Chapter 10, “Evaluation Questions That Matter,” focus on the information that needs to be collected or developed before the evaluation can be designed and implemented. Chapter 9 covers the information and knowledge needed to plan a responsive evaluation and introduces tools that are used in project planning and can be customized for evaluation planning. The chapter lays out the steps needed to identify project goals and define success, including ways of identifying and involving stakeholders. From project goals and definitions of success, the chapter goes on to show readers how to define goals for the evaluation and identify different types of indicators, which are “variables that provide evidence that a certain condition exists or certain results have, or have not, been achieved” (Campbell, Thomas, & Stoll, 2009, p. 54). Chapter 10 moves the reader from the goals for an evaluation to the development of the questions the evaluation will answer. Covered in this chapter are ways to develop evaluation questions that matter, including the characteristics and sources of good evaluation questions, and ways of prioritizing those questions for diverse audiences.
Chapters 11 through 14 target the technical aspects of evaluation, including design, data collection, analysis, and reporting. Chapter 11, “Selecting Appropriate Evaluation Designs,” describes a variety of experimental, quasi-experimental, and descriptive designs; their strengths and weaknesses; and their appropriateness for different evaluation questions. It also covers issues of rigor, comparison and control groups, and longitudinal data, including the ethical issues tied to their use with different populations. Chapter 12, “Defining, Collecting, and Managing Data,” looks at the strengths and weaknesses of both qualitative and quantitative data and at ways of ensuring data quality, such as addressing validity and reliability. Also covered are sources of data for evaluations, ways of collecting those data, measures, and ways of managing the data once collected. Chapter 13, “The Best Analysis for the Data,” begins with a discussion of the types of reasoning that underlie analytic decisions and provides an introduction to different types of data analysis. The final chapter in this section, Chapter 14, “Reporting, Disseminating, and Utilizing Evaluation Results,” focuses on how to present information visually and textually to different groups in valid and culturally appropriate ways. It also covers different modes for communicating and disseminating results, including ways to make evaluation results accessible to people with disabilities and ways to make them more usable.
Chapter 15, “Evaluation as a Business,” goes in a very different direction, providing readers who plan to do evaluations as consultants or as a part- or full-time business with an overview of the business aspects of evaluation and the knowledge and skills needed to run evaluation as a business. Areas covered in this chapter include evaluation proposal writing, budgeting, interacting with clients, marketing, contracts, and business plans. In the final chapter, Chapter 16, “Interconnections and Practical Implications,” we return to bias and cultural competence and take another look at how bias and a lack of cultural competence can affect evaluation decision making. Also covered are ways that readers can reduce their own biases and increase their cultural competence, and how doing so can make them more culturally responsive evaluators. Reflecting on what we covered in earlier chapters, we explore some of the impacts of cultural responsiveness on decision making.
An Overview of Evaluation
Oxford University Press defines evaluation as “the making of a judgment about the amount, number, or value of something” (Lexico.com, 2020, para. 1). As the definition implies, evaluation is an everyday activity. All of us, either consciously or unconsciously, at some point consider the value of a thing; take account of the actions we, or others, have taken; and examine the progress (or lack thereof) we have made on the path we are traveling. Individuals evaluate products and prices at a store to determine whether they will buy a product or even continue to patronize that business. People evaluate their relationships, finances, goals, and health to determine where they stand and how they can improve in these areas. By engaging in some form of evaluation, individuals try to assess what is good or bad, which option is better or worse, and what conditions are best to nurture and produce the desired outcomes.
Although people make evaluation decisions, this doesn’t necessarily make them evaluators. Evaluators are professionals who ask and answer questions regarding projects, policies, and programs through the collection and analysis of data. Evaluators seek to provide information that improves decision making at a variety of levels—funders, policymakers, staff, and actual as well as potential participants. Table 1.1 provides a broad overview of the evaluation process from planning to implementation to reporting and use of results.
Table 1.1 Overview of the Evaluation Process: Planning, Implementation, Reporting, and Use of Results
Definitions of Evaluation
While for the general public there is a fairly consistent definition of evaluation, that is not the case for evaluators. As Mark, Greene, and Shaw (2006, p. 6) point out, “If you ask 10 evaluators to define evaluation, you’ll probably end up with 23 different definitions. Given that evaluation is diverse, with multiple countenances, it should not be surprising that varying definitions exist.”
Definitions of evaluation offered by leaders in the field during the 1980s and 1990s focused on evaluation as a way of determining value. For example, Michael Scriven,