Get rid of time requirements.
Adjust reporting systems accordingly.
The critical commitments for each level are described in depth in the following chapters. We believe they are essential to achieving high reliability status.
Monitoring Performance for Continuous Improvement
Once a school has met the criterion scores for a level’s lagging indicators, it is considered to have achieved high reliability status for that level. However, being a high reliability school at a given level involves more than meeting criterion scores for lagging indicators. Recall from the previous discussion of high reliability organizations that implementing processes and procedures to prevent problems is only part of what they do. High reliability organizations also constantly monitor critical factors, looking for changes in data that indicate the presence of problems.
Similarly, high reliability schools monitor critical factors and act immediately to contain and resolve the negative effects of any problems. Even after a school has achieved high reliability status for a specific level, its leaders continue to collect and analyze data related to leading and lagging indicators to ensure that the expectations of that level are continuously met over time. If data for a specific indicator cease to meet expectations, school leaders intervene to identify the problem, minimize its negative effects, and either strengthen existing processes and procedures or implement new ones to fix the current problem and prevent future ones.
Constantly monitoring critical factors for problems requires continual data collection and observation. Consider an organization with very little tolerance for errors: the U.S. Navy. In particular, consider an aircraft carrier, a ship from which fighter jets, helicopters, and other aircraft take off and land. The number of potential errors in such an environment is mind-boggling. For example, even small debris—like a pebble or scrap of cloth—on the flight deck can cause catastrophic problems for the finely tuned engines and other sensitive systems of naval aircraft. Therefore, the U.S. Navy conducts systematic FOD walks. FOD stands for “foreign object debris,” and during a FOD walk, personnel on the aircraft carrier walk along the deck shoulder to shoulder, picking up anything they find. Such walks occur multiple times each day. Figure I.1 shows a FOD walk being conducted on board an aircraft carrier.
Source: U.S. Navy, 2005. In public domain.
Figure I.1: FOD walk being conducted on board an aircraft carrier.
As seen here, FOD walks require all members of a ship’s crew to work together to identify and resolve potential problems.
Consider another example of the power of continual data collection and monitoring. Studies show that daily weigh-ins help individuals lose weight and keep it off (for example, see Linde, Jeffery, French, Pronk, & Boyle, 2005, and Wing, Tate, Gorin, Raynor, & Fava, 2006). Each time someone steps on the scale, that person collects a data point that shows whether he or she is moving toward or away from the target. If the data show movement away from the goal, he or she can take steps to minimize the impact of errors (such as eating less at meals or snacking less frequently).
Just as aircraft carrier crews walk the flight deck and dieters step on the scale every day, teachers and administrators must monitor the reliability of their school even after it has achieved high reliability status at a specific level. Such work can be accomplished through quick data, problem prevention and celebration of success, and level-appropriate data collection.
Quick Data
Monitoring can be done quite efficiently through the use of quick data—information that can be collected easily within a short span of time. In the following chapters, we describe how schools can collect quick data about indicators for each level. Once a school has achieved high reliability status for a given level, its leaders can generate quick data on any topic, even one that initial survey results identified as an area of strength for the school. Quick data are meant to monitor the pulse of a school at a particular level of performance. Therefore, a school should focus its quick data collection on the indicators that will best help it monitor fluctuations in performance at that level. There are three types of quick data: (1) quick conversations, (2) quick observations, and (3) easy-to-collect quantitative data.
Quick Conversations
As the name implies, quick conversations are brief discussions between teachers charged with collecting quick data and various members of a school community. For example, a question such as “How safe has our school been lately?” might be designed around leading indicators 1.1 and 1.2, which deal with safety (see chapter 1). Similarly, a question for leading indicator 1.3, which deals with teachers’ having a voice in school decisions (see chapter 1), might be “Recently, to what extent have teachers had roles in making important decisions regarding the school?” One or more of these questions would be asked of teachers, students, and parents over a short interval of time (for example, during a specific week).
Members of collaborative teams within a PLC are perfect candidates for quick conversations. For example, consider a school that, each month, designs questions or selects them from the lists in chapters 1 through 5 for each high reliability level it has already achieved. One or more members of a collaborative team are then invited to ask these questions of teachers, students, or community members (whichever groups are appropriate), engaging in five to ten quick conversations. These conversations last only a few minutes and occur with school community members who are readily available. Immediately after each interaction, the teacher asking the questions codes each answer using a scale like the following:
Excellent—The answer indicates that the respondent believes the school is performing above what would normally be expected for this issue.
Adequate—The answer indicates that the respondent believes there are no major problems relative to this issue.
Unsatisfactory—The answer indicates that the respondent believes there are major problems that should be addressed relative to this issue.
The teacher asking the questions records the responses on a form such