The Failure of Risk Management. Douglas W. Hubbard

A risk manager should always assume that the list of considered risks, no matter how extensive, is incomplete. All we can do is increase completeness by continually assessing risks from several angles and comparing them with a common set of metrics. In part 3, we will discuss some angles to consider when developing a taxonomy, in the hope that it might help the reader think of previously excluded risks.

      Answering the Right Question

      The first and simplest test of a risk management method is whether it answers the relevant question: "Where and how much do we reduce risk, and at what cost?" A method that answers this explicitly and specifically passes the test. A method that leaves the question open fails it, and many do.

      Relevant risk management should be based on risk assessment that follows through to explicit recommendations on decisions. Should an organization spend $2 million to cut its second-largest risk in half, or spend the same amount to eliminate three risks that are not among the five biggest? Ideally, risk mitigation can be evaluated as a kind of "return on mitigation" so that mitigation strategies with different costs can be prioritized explicitly. Merely knowing that some risks are high and others are low is not as useful as knowing that one mitigation has a 230 percent return on investment (ROI) and another only a 5 percent ROI, or knowing whether the total risks are within our risk tolerance.
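The "return on mitigation" idea can be sketched in a few lines. The costs and loss reductions below are hypothetical, chosen only to reproduce the 230 percent and 5 percent figures used as examples:

```python
def return_on_mitigation(cost, loss_reduction):
    # Net benefit per dollar spent, analogous to ROI.
    return (loss_reduction - cost) / cost

# Hypothetical $2M mitigation options (illustrative numbers, not from the book).
options = {
    "halve the second-largest risk": return_on_mitigation(2_000_000, 6_600_000),
    "eliminate three smaller risks": return_on_mitigation(2_000_000, 2_100_000),
}

# Fund mitigations in order of return, highest first.
ranked = sorted(options, key=options.get, reverse=True)
```

Sorting by return makes the funding decision explicit: the 230 percent option is funded before the 5 percent option, regardless of which risks "feel" bigger.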

      We will spend some time on several of the previously mentioned methods of assessing performance, but we will be spending a greater share of our time on component testing. This is due, in part, to the fact that there is so much research on the performance of various components, such as methods of improving subjective estimates, the performance of quantitative methods, using simulations, aggregating expert opinion, and more.
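One such component is the mathematical aggregation of expert opinion. A minimal sketch, assuming simple averaging of independent probability estimates (the expert values below are invented for illustration):

```python
def aggregate_probabilities(estimates):
    # Simple mathematical aggregation: the mean of the experts'
    # probability estimates for the same event.
    return sum(estimates) / len(estimates)

# Four hypothetical experts estimate the chance of a major outage this year.
expert_estimates = [0.05, 0.10, 0.08, 0.05]
combined = aggregate_probabilities(expert_estimates)  # 0.07
```

Research comparing this kind of mechanical aggregation with unstructured group discussion is exactly the sort of component-level evidence referred to above.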

      Still, even if risk managers use only component testing in their risk management process, many are likely to find serious shortcomings in their current approach. Many of the components of popular risk management methods have no evidence of whether they work, and some components have shown clear evidence of adding error. Still other components, though not widely used, can be shown to produce convincing improvements compared to the alternatives.

      RISK MANAGEMENT SUCCESS-FAILURE SPECTRUM

      1 Best. The firm builds quantitative models to run simulations; all inputs are validated with proven statistical methods, additional empirical measurements are used when optimal, and portfolio analysis of risk and return is used. Always skeptical of any model, the modelers check against reality and continue to improve the risk models with objective measures of risks. Efforts are made to systematically identify all risks in the firm.

      2 Better. Quantitative models are built using at least some proven components; the scope of risk management expands to include more of the risks.

      3 Baseline. Intuition of management drives the assessment and mitigation strategies. No formal risk management is attempted.

      4 Worse (the merely useless). Detailed soft or scoring methods are used, or quantitative methods are misapplied, but at least management does not rely on them. This may be no worse than the baseline, except that time and money were wasted on it.

      5 Worst (the worse than useless). Ineffective methods are used with great confidence even though they add error to the evaluation. Perhaps much effort is spent on seemingly sophisticated methods, but there is still no objective, measurable evidence they improve on intuition. These “sophisticated” methods are far worse than doing nothing or simply wasting money on ineffectual methods. They cause erroneous decisions to be made that would not otherwise have been made.

      Note that in this spectrum doing nothing about risk management is not actually the worst case. It is in the middle of the list. Those firms invoking the infamous “at least I am doing something” defense of their risk management process are likely to fare worse. Doing nothing is not as bad as things can get for risk management. The worst thing to do is to adopt an unproven method—whether or not it seems sophisticated—and act on it with high confidence.
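The "best" end of the spectrum relies on quantitative simulation. As a minimal sketch of such a model, assuming each risk has an annual probability of occurrence and a uniform loss range (all inputs here are hypothetical):

```python
import random

def simulate_annual_loss(risks, rng):
    # One Monte Carlo trial: each risk either occurs or not,
    # and an occurring risk draws a loss from its range.
    total = 0.0
    for prob, low, high in risks:
        if rng.random() < prob:
            total += rng.uniform(low, high)
    return total

# (annual probability, minimum loss, maximum loss) -- illustrative only.
risks = [
    (0.10, 1_000_000, 5_000_000),
    (0.05, 2_000_000, 10_000_000),
    (0.25, 100_000, 1_000_000),
]

rng = random.Random(42)
trials = [simulate_annual_loss(risks, rng) for _ in range(100_000)]
# Estimated probability that total annual losses exceed a $5M risk tolerance.
p_exceed = sum(t > 5_000_000 for t in trials) / len(trials)
```

Comparing a figure like `p_exceed` with the firm's stated tolerance turns the simulation into a direct answer to the question posed earlier: where, how much, and at what cost to reduce risk.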


