An Educator's Guide to Schoolwide Positive Behavioral Interventions and Supports. Jason E. Harlacher



the goal and did not measure fidelity, we can’t be sure what led to the weight loss. Was the person lucky, or did the plan actually work?

      When fidelity isn’t met and the goal isn’t met, we must improve fidelity and then try again. If fidelity is met but the goal is not, we can conclude the plan didn’t work. Figure 1.3 illustrates the logical conclusions when examining a goal and fidelity. By ensuring that practices are implemented with fidelity, decision makers can determine the extent to which practices are effective in achieving goals.

[Figure 1.3: Logical conclusions when examining a goal and fidelity.]
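The goal-by-fidelity logic described above can be sketched as a short decision function. This is purely illustrative; the function name and return messages are invented here and are not part of the SWPBIS materials:

```python
def evaluate_plan(goal_met: bool, fidelity_met: bool) -> str:
    """Illustrative sketch of the goal-by-fidelity conclusions (hypothetical)."""
    if goal_met and fidelity_met:
        # Goal reached and the plan was implemented as intended.
        return "plan worked"
    if goal_met and not fidelity_met:
        # Goal reached without fidelity: luck or the plan? We cannot tell.
        return "outcome unclear; cannot credit the plan"
    if not goal_met and not fidelity_met:
        # Fix implementation first, then try the same plan again.
        return "improve fidelity and try again"
    # Fidelity met but goal missed: the plan itself did not work.
    return "plan did not work; revise the plan"
```

A team that met its fidelity benchmark but missed its goal would land in the last branch and revise the plan itself rather than its implementation.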

      Screening data identify those students who are at risk (Hosp, 2008). Office discipline referrals (ODRs) are commonly used within SWPBIS (Irvin, Tobin, Sprague, Sugai, & Vincent, 2004), but schools may also screen using social and behavioral assessments (Anderson & Borgmeier, 2010; Hawken et al., 2009). School teams also use screening data to understand the extent to which the overall system is healthy: at least 80 percent of the student population is responding to Tier One universal supports and is at low risk for chronic problem behaviors, 10 to 15 percent seem to have some risk and are responding to Tier Two interventions, and about 5 percent need additional individualized supports. If the SWPBIS system is not healthy, the screeners can help teams identify where to target additional Tier One supports for all students. If the system appears healthy, the screeners can help determine which students may need additional support (Hawken et al., 2009).
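The healthy-system benchmarks in this paragraph (roughly 80 percent, 10 to 15 percent, and 5 percent) amount to a simple screening check. Below is a hypothetical sketch, assuming a team has already computed the percentage of students at each tier; the function name and the single-rule framing are this sketch's assumptions, since real teams weigh multiple data sources:

```python
def tiers_look_healthy(tier1_pct: float, tier2_pct: float, tier3_pct: float) -> bool:
    """Check screening percentages against the rough SWPBIS benchmarks.

    Hypothetical helper: actual teams consider many data sources, not one rule.
    """
    return (
        tier1_pct >= 80       # at least ~80% respond to Tier One universal supports
        and tier2_pct <= 15   # ~10-15% show some risk (Tier Two)
        and tier3_pct <= 5    # ~5% need individualized supports (Tier Three)
    )
```

A school with only 70 percent of students at low risk would fail this check, signaling that Tier One supports need strengthening before layering on individual interventions.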

      Once a team identifies a problem (for example, too many referrals on the playground) or when students are determined to need additional support, it uses diagnostic data to determine why the problem is occurring. Whereas screeners are brief measures of general outcomes, diagnostic tools take longer to administer; they dig into the context of the problem and provide extensive data on why it is occurring. For example, XYZ Elementary examined more detailed office discipline referral data on specific behavior types, when the problems occurred, who got the referrals, and why numerous referrals were occurring on the playground. From this additional information, the team could identify a reasonable solution. For individual students, teachers gather information on the purposes, or functions, of a behavior so that school staff can determine the reason behind the behavior. Schools commonly use ODRs to provide more detailed information on a student’s behavior, but the school staff may also use request-assistance forms or brief interviews with staff or students to help identify the functions of behavior (Hawken et al., 2009). For some students, the staff may conduct a functional behavior assessment, an extensive assessment process designed to ascertain why a problem behavior is occurring and determine the environmental triggers and responses to that behavior (Crone & Horner, 2003).

      What Is the Impact of These Practices?

      Following the use of screening and diagnostic tools, teachers monitor the impact of solutions to ensure they are meeting the desired outcome. In progress monitoring, staff collect data while a support is in place to determine whether it is effective and to make formative decisions (Hosp, 2008). Schools use an array of methods and sources to monitor solutions, such as permanent products, daily behavior tracking cards, attendance, and ODRs (Rodriguez, Loman, & Borgmeier, 2016). The previous example (in which XYZ set a goal to reduce playground lining-up referrals by 50 percent) demonstrates progress monitoring. The team set a goal to reach within two months, but it reviewed data every two weeks to determine the impact of its solution (tickets for lining up and a free-recess intervention) and to modify it if needed. For progress monitoring the impact of supports for individual students, teachers can often use a screening tool as a progress-monitoring tool; for example, teachers use ODRs both to screen students and to examine progress. However, the nature of the behavior may determine the exact method used to monitor it. For example, a student with aggressive behavior may be monitored using methods that are more explicit and detailed than ODRs. Additionally, the intensity of monitoring for individual students depends on which level of support they are receiving. All students are essentially monitored using screening tools throughout the year, but students in Tiers Two and Three receive more intensive monitoring (Harlacher et al., 2014; Horner, Sugai, et al., 2005).

      Table 1.4 summarizes how the four elements look at each tier.

      One of the features of SWPBIS that separates it from other schoolwide models or other approaches to discipline is its reliance on data to make decisions (Horner, Sugai, et al., 2005; Sugai & Horner, 2006). To ensure that data are used accurately and efficiently, school teams use the Problem-Solving Model (PSM). The PSM is a four-step model used to define problems in clear and concise terms and then identify a targeted solution to solve that problem (Good, Gruba, & Kaminski, 2002; Reschly, 2008; Shinn, 2008a; Tilly, 2008). The four stages of the PSM are: (1) Problem Identification, (2) Problem Analysis, (3) Plan Identification and Implementation, and (4) Plan Evaluation (see figure 1.4, page 22). Whereas we can view the four key elements (outcomes, practices, systems, and data) as an organizing framework for schools to achieve sustainability and effectiveness with SWPBIS, we can view the PSM as the engine that drives the elements. As school teams use the PSM to identify and solve problems, they will consider each of the four key elements at various steps of the PSM.

[Table 1.4: How the four elements look at each tier.]

      *Teachers administer screening to all students but use the results to identify students who may need Tier Two or Tier Three support.

[Figure 1.4: The four stages of the Problem-Solving Model.]

       Problem Identification

      The first step of the PSM is Problem Identification. During this step, educators answer the question, What is the problem? Educators define the problem in observable and measurable terms that indicate the gap between the observed results and the expected results. In doing so, educators clarify the magnitude of the problem and then determine whether there actually is a problem. For example, a team may identify that only 65 percent of students are receiving zero to one office referrals when at least 80 percent of students should have zero to one referrals. In this situation, there is a problem; the 15 percent gap is a large enough magnitude. Conversely, if 75 percent of students are receiving zero to one referrals, the school may decide that 5 percent is not a large enough gap to indicate a problem. Once educators identify an initial problem, they can proceed to step 2.
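The gap reasoning in this step can be expressed as a small check. The 10-point cutoff below is a hypothetical value chosen for illustration; in practice, each team decides what magnitude counts as a problem:

```python
def identify_problem(observed_pct: float, expected_pct: float,
                     cutoff: float = 10.0) -> tuple[bool, float]:
    """Return (is_problem, gap): whether observed results fall short of
    expected results by at least `cutoff` percentage points (hypothetical cutoff)."""
    gap = expected_pct - observed_pct
    return gap >= cutoff, gap
```

With this cutoff, identify_problem(65, 80) flags the 15-point gap as a problem, while identify_problem(75, 80) treats the 5-point gap as acceptable.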

       Problem Analysis

      If a problem is deemed worth solving, then school teams spend time analyzing the problem and answering the question, Why is the problem occurring? Whereas step 1 points out that a problem exists, step 2 entails gathering more information on the context of the problem. During step 2, Problem Analysis, educators gather or examine any additional data needed to answer all five Ws and one H.

      1. What is the problem?

      2. When is it occurring?

      3. Where is it occurring?

      4. Who is engaged in the behavior?

      5. Why is it occurring?

      6. How often is the behavior occurring?

      Note that why refers to the functions of behavior, such as obtaining or escaping attention, tangible objects or events, or sensory stimulation. It is also helpful to answer the why question last because the context of a behavior influences its function. All of the gathered information paints a detailed picture

