Demand Driven Material Requirements Planning (DDMRP), Version 2. Carol Ptak

three months ended March 31, 2012, compared with $60 million for the comparable 2011 period. The increase in CIO average VaR was due to changes in the synthetic credit portfolio held by CIO as part of its management of structural and other risks arising from the Firm’s on-going business activities.” Keep that last sentence in mind, because as it turns out it is nothing but a euphemism for, drumroll, an epic, amateur Excel error!

      How do we know this? We know it courtesy of JPMorgan itself, which in the very last page of its JPM task force report had this to say on the topic of JPM’s VaR:

      “. . . a decision was made to stop using the Basel II.5 model and not to rely on it for purposes of reporting CIO VaR in the Firm’s first-quarter Form 10-Q. Following that decision, further errors were discovered in the Basel II.5 model, including, most significantly, an operational error in the calculation of the relative changes in hazard rates and correlation estimates. Specifically, after subtracting the old rate from the new rate, the spreadsheet divided by their sum instead of their average, as the modeler had intended. This error likely had the effect of muting volatility by a factor of two and of lowering the VaR.... it also remains unclear when this error was introduced in the calculation.”

      In other words, the doubling in JPM’s VaR was due to nothing but the discovery that for years, someone had been using a grossly incorrect formula in their Excel, and as a result misreporting the entire firm VaR by a factor of nearly 50%! So much for the official JPM explanation in its 10-Q filing that somewhat conveniently missed to mention that, oops, we made a rookie, first year analyst error. (Tyler Durden, February 2, 2013)

      Perhaps a more interesting question is why personnel are allowed to use these ad hoc approaches at all. From a data integrity and security perspective, this is a nightmare. It also means that the fate of the company’s purchasing and planning effectiveness is in the hands of a few essentially irreplaceable personnel. These people can’t be promoted, get sick, or leave without dire consequences to the company. It also means that, given the error-prone nature of spreadsheets, an enormous number of incorrect signals are being generated across supply chains around the world every day. Wouldn’t it be so much easier to just work in the system? The answer seems obvious. The fact that reality is just the opposite shows how big the problem with conventional systems really is.

      To be fair, many executives are simply not aware of just how much work is occurring outside the system. Once they become aware, they face an instant dilemma: let it continue, thus endorsing it by default, or force compliance with a system that their subject-matter experts say is at best suspect? The choice is only easy the first time an executive encounters it. The authors of this book have seen countless examples of executives attempting to end the ad hoc systems, only to quickly retreat when inventories balloon and service levels fall dramatically. They may not understand what’s behind the need for the work-arounds, but they now know enough to simply look the other way. So they make the appropriate noises about how the entire company is on the new ERP system and downplay just how much ad hoc work is really occurring.

      Another piece of evidence to suggest the shortcomings of conventional MRP systems has to do with the inventory performance of the companies that use these systems. To understand this particular challenge, consider the simple graphical depiction in Figure 1-2. In this figure you see a solid horizontal line running in both directions. This line represents the quantity of inventory. As you move from left to right, the quantity of inventory increases; right to left the quantity decreases.

      A curved dotted line bisects the inventory quantity line at two points:

      

Point A, the point where a company has too little inventory. This point would be a quantity of zero, or “stocked out.” Shortages, expedites, and missed sales are experienced at this point. Point A is the point at which the part position and supply chain have become too brittle and are unable to supply required inventory. Planners or buyers who have part numbers past this point to the left typically have sales and operations screaming at them for additional supply.

      

Point B, the point where a company has too much inventory. There is excessive cash, capacity, and space tied up in working capital. Point B is the point at which inventory is deemed waste. Planners or buyers who have part numbers past this point to the right typically have finance screaming at them for misuse of financial resources.

      If we know that these two points exist, then we can also conclude that for each part number, as well as the aggregate inventory level, there is an optimal range somewhere between those two points. This optimal zone is labeled in the middle and colored green. When inventory moves out of the optimal zone in either direction, it is deemed increasingly problematic.
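The spectrum running from Point A through the optimal zone to Point B can be sketched as a simple classification. This is illustrative only; the threshold quantities are invented for the example, since in practice the optimal range differs for every part number.

```python
# Illustrative only: map an on-hand quantity onto the spectrum described
# above. The thresholds (100 and 400 units) are hypothetical; a real
# optimal range is specific to each part.

def classify_inventory(qty, optimal_low=100, optimal_high=400):
    """Classify a quantity relative to Point A, the optimal zone, and Point B."""
    if qty <= 0:
        return "Point A: stocked out"          # shortages, expedites, missed sales
    if qty < optimal_low:
        return "too little: brittle supply"    # drifting toward Point A
    if qty <= optimal_high:
        return "optimal zone"
    return "too much: excess working capital"  # drifting toward Point B

print(classify_inventory(0))    # → Point A: stocked out
print(classify_inventory(250))  # → optimal zone
print(classify_inventory(900))  # → too much: excess working capital
```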

      This depiction is consistent with the graphical depiction of a loss function developed by the Japanese business statistician Genichi Taguchi to describe a phenomenon affecting the value of products produced by a company. This made clear the concept that quality does not suddenly plummet when, for instance, a machinist slightly exceeds a rigid blueprint tolerance. Instead “loss” in value progressively increases as variation increases from the intended nominal target.

      The same is true for inventory. Chapter 2 will discuss how the value of inventory should be related to the ability of inventory to help promote or protect flow. As the inventory quantity expands out of the optimal zone and moves toward point B, the return on working capital captured in the inventory becomes less and less as the flow of working capital slows down. The converse is also true: as inventory shrinks out of the optimal zone and approaches zero or less, then flow is impeded due to shortages.
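Taguchi’s loss function is conventionally written as the quadratic L(x) = k(x − m)², where m is the nominal target and k scales the cost of deviation. A minimal sketch of the inventory analogy follows; the target of 250 units and the constant k are invented for illustration.

```python
# Taguchi's quadratic loss function, L(x) = k * (x - m)**2: loss is zero at
# the nominal target m and grows continuously as x deviates in either
# direction. The target (250 units) and scaling constant k are hypothetical.

def taguchi_loss(x, m=250.0, k=0.01):
    """Quadratic loss for a quantity x against nominal target m."""
    return k * (x - m) ** 2

for qty in (250, 100, 0, 600):
    print(f"qty={qty:3d}  loss={taguchi_loss(qty):.1f}")
# prints losses of 0.0, 225.0, 625.0, and 1225.0 respectively
```

Note that loss is symmetric and continuous: there is no cliff at a fixed reorder point, only a steadily worsening position on either side of the target, which is exactly the shape of the too-little/too-much spectrum above.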

      When the aggregate inventory position is considered in an environment using traditional MRP, there is frequently a bimodal distribution noted. With regard to inventory, a bimodal distribution can occur on two distinct levels:

      1. A bimodal distribution can occur at the single-part level over a period of time, as a part will oscillate back and forth between excess and shortage positions. In each position, flow is threatened or directly inhibited. The bimodal position can be weighted toward one side or the other, but what makes it bimodal is a clear separation between the two groups—the lack of any significant number of occurrences in the “optimal range.”

      2. The bimodal distribution also occurs across a group of parts at any point in time. At any one point, many parts will be in excess while other parts are in a shortage position. Shortages of any parts are particularly devastating in environments with assemblies and shared components because the lack of one part can block the delivery of many.

      Figure 1-3 is a conceptual depiction of a bimodal distribution across a group of parts. The bimodal distribution depicts a large number of parts that are in the too-little range while still another large number of parts are in the too-much range. The Y axis represents the number of parts at any particular point on the loss function spectrum.
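The pattern in Figure 1-3 can be imitated with a small simulation. Everything here is invented for illustration: the cluster centers, the spreads, the group sizes, and the assumed optimal range of 150–350 units.

```python
# Simulate the bimodal distribution of Figure 1-3: many parts clustered in
# the too-little and too-much ranges, with only a thin population in the
# optimal range. All distribution parameters are made up for illustration.
import random

random.seed(42)
too_little = [random.gauss(20, 10) for _ in range(450)]   # shortage cluster
too_much = [random.gauss(600, 80) for _ in range(450)]    # excess cluster
middle = [random.gauss(250, 30) for _ in range(100)]      # sparse optimal group
parts = too_little + too_much + middle

in_optimal = sum(1 for q in parts if 150 <= q <= 350)
print(f"{in_optimal} of {len(parts)} parts fall in the assumed optimal range")
```

With these assumed numbers, the great majority of simulated parts sit outside the optimal range, mirroring the thin middle of the bimodal depiction.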

      Not only is the smallest population in the optimal zone, but the time any individual part spends in the optimal zone tends to be short-lived. In fact, most parts tend to oscillate between the two extremes. The oscillation is depicted with the solid curved line connecting the two disparate distributions. That oscillation will occur every time MRP is run. At any one time, any planner or buyer can have