improve the quality of their work, and help students see and feel in control of their journey to success…. This is not about accountability—those are assessments of learning. This is about getting better. (p. 31)
Susan Brookhart and Anthony Nitko (2007) explained that “formative assessment is a loop: Students and teachers focus on a learning target, evaluate current student work against the target, act to move the work closer to the target, and repeat” (p. 116).
Along with these general descriptions, specifics regarding the practice of formative assessment have been offered. Unfortunately, there is no clear pattern of agreement regarding the specifics. For example, some advocates stress that formative assessments should not be recorded, whereas others believe they should. Some assert that formative assessments should not be considered when assigning grades, whereas others see a place for them in determining a student’s true final status (see O’Connor, 2002; Welsh & D’Agostino, 2009; Marzano, 2006). To a great extent, the purpose of this book is to articulate a well-crafted set of specifics regarding the practice of formative assessment.
Learning Progressions and Clear Goals
The development of learning progressions has become a prominent focus in the field of formative assessment. Margaret Heritage (2008) explained the link between learning progressions and formative assessment as follows:
The purpose of formative assessment is to provide feedback to teachers and students during the course of learning about the gap between students’ current and desired performance so that action can be taken to close the gap. To do this effectively, teachers need to have in mind a continuum of how learning develops in any particular knowledge domain so that they are able to locate students’ current learning status and decide on pedagogical action to move students’ learning forward. Learning progressions that clearly articulate a progression of learning in a domain can provide the big picture of what is to be learned, support instructional planning, and act as a touchstone for formative assessment. (p. 2)
One might think that learning progressions have already been articulated within the many state and national standards documents. This is not the case. Again, Heritage noted:
Yet despite a plethora of standards and curricula, many teachers are unclear about how learning progresses in specific domains. This is an undesirable situation for teaching and learning, and one that particularly affects teachers’ ability to engage in formative assessment. (p. 2)
The reason state and national standards are not good proxies for learning progressions is that they were not designed with learning progressions in mind. To illustrate, consider the following standard for grade 3 mathematics from the state of Washington (Washington Office of Superintendent of Public Instruction, 2008):
Students will be able to round whole numbers through 10,000 to the nearest ten, hundred, and thousand. (p. 33)
This sample provides a fairly clear target of what students should know by grade 3, but it does not provide any guidance regarding the building blocks necessary to attain that goal. In contrast, Joan Herman and Kilchan Choi (2008, p. 7) provided a detailed picture of the nature of a learning progression relative to the concept of buoyancy. They identified the following levels (from highest to lowest) of understanding regarding the concept:
• Student knows that floating depends on having less density than the medium.
• Student knows that floating depends on having a small density.
• Student knows that floating depends on having a small mass and a large volume.
• Student knows that floating depends on having a small mass, or that floating depends on having a large volume.
• Student thinks that floating depends on having a small size, heft, or amount, or that it depends on being made out of a particular material.
• Student thinks that floating depends on being flat, hollow, filled with air, or having holes.
Obviously, with a well-articulated sequence of knowledge and skills like this, it is much easier to provide students with feedback as to their current status regarding a specific learning goal and what they must do to progress.
While one might characterize the work on learning progressions as relatively new and therefore relatively untested, it is related to a well-established and heavily researched area of curriculum design—learning goals. One might think of learning progressions as a series of related learning goals that culminate in the attainment of a more complex learning goal. Learning progressions can also be used to track student progress. The research on learning goals is quite extensive. Some of the more prominent studies are reported in table 1.5 (page 12).
Table 1.5: Research Results for Establishing Learning Goals

a. Two effect sizes are listed because of the manner in which effect sizes were reported. Readers should consult the study for more details.
b. As reported in Hattie (2009).
c. Both Tubbs (1986) and Locke and Latham (1990) report results from organizational as well as educational settings.
d. As reported in Locke and Latham (2002).
e. The review includes a wide variety of ways and contexts in which goals might be used.
A scrutiny of the studies reported in table 1.5 provides a number of useful generalizations about learning goals and, by extrapolation, about learning progressions. First, setting goals appears to have a notable effect on student achievement in its own right, as evidenced by the substantial effect sizes (ESs) reported in table 1.5 for the general effects of goal setting. For example, Kevin Wise and James Okey (1983) reported an ES of 1.37, Mark Lipsey and David Wilson (1993) reported an ES of 0.55, and Herbert Walberg (1999) reported an ES of 0.40. Second, specific goals have more of an impact than general goals. Witness Mark Tubbs’s (1986) ES of 0.50 associated with setting specific goals as opposed to general goals. Edwin Locke and Gary Latham (1990) reported ESs ranging from 0.42 to 0.82 for specific versus general goals, and Steve Graham and Dolores Perin (2007) reported an ES of 0.70 (for translations of ESs into percentile gains, see appendix B). Third, goals must be at the right level of difficulty for maximum effect on student achievement, as evidenced in the findings reported by Tubbs (1986), Anthony Mento, Robert Steel, and Ronald Karren (1987), Locke and Latham (1990), Avraham Kluger and Angelo DeNisi (1996), and Matthew Burns (2004). Specifically, goals must be challenging enough to interest students but not so difficult as to frustrate them (for a detailed discussion of learning goals, see Marzano, 2009).
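The translation from ESs to percentile gains referenced above rests on the normal curve: an effect size, expressed in standard deviation units, is passed through the cumulative normal distribution to estimate how far an average student would move. The short sketch below is a hypothetical illustration of that standard conversion, not material from the book; the study names and ES values are simply those cited in the preceding paragraph.

```python
from math import erf, sqrt

def percentile_gain(effect_size: float) -> float:
    """Translate an effect size (in standard deviation units) into an
    expected percentile-point gain via the cumulative normal distribution.
    An ES of 0.0 yields no gain; an ES of 1.0 moves an average student
    from the 50th to roughly the 84th percentile."""
    # Phi(es): proportion of the comparison distribution falling below
    # the treated group's mean; subtract the 50th-percentile baseline.
    phi = 0.5 * (1.0 + erf(effect_size / sqrt(2.0)))
    return (phi - 0.5) * 100.0

# Effect sizes cited in the text for the general effects of goal setting
for study, es in [("Wise & Okey (1983)", 1.37),
                  ("Lipsey & Wilson (1993)", 0.55),
                  ("Walberg (1999)", 0.40)]:
    print(f"{study}: ES = {es:.2f} -> +{percentile_gain(es):.0f} percentile points")
```

Under this conversion, Lipsey and Wilson's ES of 0.55 corresponds to moving an average student from the 50th to roughly the 71st percentile, which is the kind of translation appendix B provides.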
The Imprecision of Assessments
One fact that must be kept in mind in any discussion of assessment—formative or otherwise—is that all assessments are imprecise to one degree or another. This is explicit in a fundamental equation of classical test theory that can be represented as follows:
Observed score = true score + error score
Marzano (2006) explained:
This equation indicates that a student’s observed score on an assessment (the final score assigned by the teacher) consists of two components—the student’s true score and the student’s error score. The student’s true score is that which represents the student’s true level of understanding or skill regarding the topic being measured. The error score is the part of an observed score that is due to factors other than the student’s level of understanding or skill. (pp. 36–37)
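The equation can be made concrete with a small simulation. In the minimal sketch below, a hypothetical illustration rather than anything from the book, the true score and the spread of the error component are assumed values; the point is only that a fixed true score still produces varying observed scores once error enters in.

```python
import random

TRUE_SCORE = 85.0   # the student's actual level of understanding (assumed value)
ERROR_SD = 5.0      # spread of the error component, in score points (assumed value)

random.seed(1)  # fixed seed so the illustration is reproducible

# Five administrations of an equivalent assessment: the true score never
# changes, yet each observed score differs because of the error component.
for attempt in range(1, 6):
    error = random.gauss(0.0, ERROR_SD)
    observed = TRUE_SCORE + error
    print(f"Attempt {attempt}: observed = {observed:.1f} "
          f"(true = {TRUE_SCORE}, error = {error:+.1f})")
```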
In technical terms, every score assigned to a student on every assessment probably contains some part that is error. To illustrate the consequences