Formative Assessment & Standards-Based Grading - Robert J. Marzano

      Table 1.1 lists the major studies that have been conducted since 1976. The last three columns are related. Critical to understanding exactly how they are related are the concepts of meta-analysis and effect size (ES). Appendix B (page 153) explains the concepts of meta-analysis and effect size in some depth. Briefly, though, meta-analysis is a research technique for quantitatively synthesizing a series of studies on the same topic. For example, as table 1.1 indicates, Kluger and DeNisi (1996) synthesized findings from 607 studies on the effects of feedback interventions. Typically, meta-analytic studies report their findings in terms of average ESs (see the ES column in table 1.1). In the Kluger and DeNisi meta-analysis, the average ES is 0.41. An effect size tells you how many standard deviations larger (or smaller) the average score of a group of students exposed to a given strategy (in this case, feedback) is than the average score of a group of students not exposed to that strategy (in this case, no feedback). In short, an ES tells you how powerful a strategy is; the larger the ES, the more the strategy increases student learning.

      a Reported in Fraser, Walberg, Welch, & Hattie, 1987.

      b Reported in Hattie & Timperley, 2007.

      c Feedback was embedded in general metacognitive strategies.

      d The dependent variable was engagement.

      e Reported in Hattie, 2009.
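      To make that definition concrete, here is a minimal sketch (not from Marzano's text) of how an effect size is computed, assuming the common formulation in which the difference between the two group means is divided by a standard deviation; the function name and the example numbers are illustrative only.

def effect_size(mean_treatment, mean_control, sd):
    # How many standard deviations the treated group's average score
    # sits above (or below) the untreated group's average score.
    return (mean_treatment - mean_control) / sd

# Hypothetical example: a feedback group averages 79, a no-feedback group
# averages 75, with a standard deviation of 10 -> ES of 0.40, close to
# the 0.41 average Kluger and DeNisi (1996) report for feedback.
print(effect_size(79, 75, 10))  # 0.4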

      ESs are typically small numbers. However, small ESs can translate into big percentile gains. For example, the average ES of 0.41 calculated by Kluger and DeNisi (1996) translates into a 16 percentile point gain (see appendix B, page 153, for a detailed description of ESs and a chart that translates ES numbers into percentile gains). Another way of saying this is that a student at the 50th percentile in a class where feedback was not provided (an average student in that class) would be predicted to rise to the 66th percentile if he or she were provided with feedback.
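      The translation from an ES to a percentile gain follows from the standard normal curve: the ES is treated as a z score, and the normal cumulative distribution function gives the predicted percentile of the average treated student. Here is a minimal sketch of that conversion (the function names are illustrative, not from the book):

from math import erf, sqrt

def normal_cdf(z):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def percentile_gain(es):
    # Predicted percentile-point gain for a student starting at the
    # 50th percentile, treating the effect size as a z score.
    return (normal_cdf(es) - 0.5) * 100.0

print(round(percentile_gain(0.41)))   # 16, the Kluger and DeNisi figure
print(round(percentile_gain(0.79)))   # 29, the Hattie and Timperley figure
print(round(percentile_gain(-0.14)))  # -6, the negative-feedback figure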

      Hattie and Timperley (2007) synthesized the most current and comprehensive research on feedback, summarizing findings from twelve previous meta-analyses that incorporated 196 studies and 6,972 ESs. They calculated an overall average ES of 0.79 for feedback (translating to a 29 percentile point gain); as Hattie (2009) showed, this is twice the average ES of typical educational innovations. One study by Stuart Yeh (2008) revealed that students who received feedback completed more work with greater accuracy than students who did not. Furthermore, when feedback was withdrawn from students who had been receiving it, rates of accuracy and completion dropped.

      Interestingly, though the evidence for the effectiveness of feedback has been quite strong, it has also been highly variable. For example, in their analysis of more than six hundred experimental/control studies, Kluger and DeNisi (1996) found that in 38 percent of the studies they examined, feedback had a negative effect on student achievement. This, of course, raises two critically important questions: What characteristics of feedback produce positive effects on student achievement, and what characteristics produce negative effects? In partial answer, Kluger and DeNisi found that negative feedback has an ES of -0.14, which translates into a predicted decrease in student achievement of 6 percentile points. In general, negative feedback is feedback that does not let students know how they can get better.

      Hattie and Timperley (2007) calculated small ESs for feedback containing little task-focused information (punishment = 0.20, praise = 0.14) but large ESs for feedback that focused on information (cues = 1.10, reinforcement = 0.94). They argued that feedback regarding the task, the process, and self-regulation is often effective, whereas feedback regarding the self (often delivered as praise) typically does not enhance learning and achievement. Operationally, this means that feedback to students regarding how well a task is going (task), the process they are using to complete the task (process), or how well they are managing their own behavior (self-regulation) is often effective, but feedback that simply involves statements like “You’re doing a good job” has little influence on student achievement. Hattie and Timperley’s ultimate conclusion was:

      Learning can be enhanced to the degree that students share the challenging goals of learning, adopt self-assessment and evaluation strategies, and develop error detection procedures and heightened self-efficacy to tackle more challenging tasks leading to mastery and understanding of lessons. (p. 103)

      In K–12 classrooms, the most common form of feedback is an assessment. While the research and theory on feedback and assessment overlap to a great extent, in this section we consider the research and theory that is specific to assessment.

       Research on Assessment

      The research on the effects of assessments on student learning paints a positive picture. To illustrate, table 1.2 (page 6) provides a synthesis of a number of meta-analytic studies on the effects of assessment as reported by Hattie (2009).

[Table 1.2: Meta-analytic studies on the effects of assessment, as reported by Hattie (2009)]

      a Two effect sizes are listed because of the differences in variables as reported by Hattie (2009). Readers should consult that study for more details.

      Notice that table 1.2 is subdivided into three categories: frequency of assessment, general effects of assessment, and providing assessment feedback to teachers. The first category speaks to how frequently assessments are given. In general, student achievement benefits when assessments are given relatively frequently as opposed to infrequently. The study by Robert Bangert-Drowns, James Kulik, and Chen-Lin Kulik (1991), depicted in table 1.3, adds some interesting details to this generalization.

      Note that in table 1.3, the effect of even one assessment in a fifteen-week period is substantial (0.34). Also note that there is a gradual increase in the size of the effect as the number of assessments increases. This trend should not be misconstrued as indicating that the more tests a teacher gives, the more students will achieve. As we shall see in subsequent chapters, a test is only one of many ways to obtain assessment data.

Number of Assessments   Effect Size   Percentile Point Gain
0                       0.00          0
1                       0.34          13.5
5                       0.53          20
10                      0.60          22.5
15                      0.66          24.5
20                      0.71          26
25                      0.78          28.5
30                      0.82          29

      Note: Effect sizes computed using data reported by Bangert-Drowns, Kulik, and Kulik (1991).
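      Using the same normal-curve conversion sketched earlier, the percentile point gain column of table 1.3 can be reproduced, to within rounding, from the effect sizes alone:

from math import erf, sqrt

def percentile_gain(es):
    # Same conversion as the earlier sketch: ES treated as a z score.
    return (0.5 * (1.0 + erf(es / sqrt(2.0))) - 0.5) * 100.0

# Effect sizes from table 1.3 (Bangert-Drowns, Kulik, & Kulik, 1991).
for n, es in [(1, 0.34), (5, 0.53), (10, 0.60), (15, 0.66),
              (20, 0.71), (25, 0.78), (30, 0.82)]:
    print(f"{n:2d} assessments: ES {es:.2f} -> gain {percentile_gain(es):.1f}")
# Prints gains of about 13.3, 20.2, 22.6, 24.5, 26.1, 28.2, and 29.4,
# matching the table's reported values to within rounding.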

      The second category in table 1.2, general effects of assessment, is the broadest and incorporates a variety of perspectives on assessment. Again, many of the specific findings in these studies are reflected in the recommendations made in subsequent chapters. Here it suffices to note that, in the aggregate, these studies attest that properly executed assessments can be an effective tool for enhancing student learning.

