probability distribution for this epistemic uncertainty, a distribution which will typically be “narrower” than ℙX. In any case, the epistemic uncertainty on the failure limit stress of a specific aircraft spar is modeled here throughout within a probabilistic framework, which seems well suited to this problem.
On the other hand, there are other situations where retaining this probabilistic framework to represent epistemic uncertainties may be more problematic. These are typically cases where the amount of available data is limited. This happens quite frequently, since obtaining large sample sizes (whether experimental or by simulation) is often expensive or simply not feasible in practice. What can be said, for example, about the inherent variability of systems that have been manufactured only a handful of times, such as the space shuttle, of which only five were built?
In such cases, the key questions to ask before adopting a probabilistic modeling for these epistemic uncertainties are as follows:
– do we have sufficient information to consider a particular type of probability distribution (normal, uniform, etc.) for this uncertainty?
– if a certain type of probability distribution is assumed in the absence of sufficient information to justify it, will the analysis still lead to reasonable results?
In the following, we present three examples of situations where the probabilistic modeling of epistemic uncertainties can be problematic or lead to counterintuitive results.
Problems can arise when epistemic uncertainties are represented by uniform probability distributions, which are intended to account for a lack of knowledge about what happens between the lower and upper bounds. If an expert can only provide two bounds to characterize the uncertainty on a quantity of interest, the principle of indifference could lead us to believe that modeling this uncertainty by a uniform distribution between these two bounds is a reasonable choice.
However, this can be problematic, as a uniform distribution carries relatively strong implications in probability theory. In particular, assuming a uniform distribution between two bounds is a much stronger assumption than merely stating that the quantity can lie indifferently (that is, without preferential concentration) anywhere between the two bounds. Indeed, a uniform distribution imposes a particular uncertainty structure between the two bounds, which is not without consequence. To illustrate this, consider the following example.
Let us consider a bar with a square cross-section subjected to a given tensile force. In order to calculate the stress in the bar, we need its cross-sectional area. However, the dimensions of the bar are not precisely known; they are thus affected by epistemic uncertainty. Assume that the only thing known is that the side length of the square lies between a and b. How should we model this uncertainty? The principle of indifference could lead us to model the side X of the square by a uniform distribution between a and b. However, the same principle of indifference could equally lead us to model this epistemic uncertainty by considering that it is the cross-sectional area X² that is uniformly distributed between a² and b², since it is ultimately the area that is needed for the stress calculation. Which of these two models should we choose?
The two choices might seem interchangeable, but unfortunately they are not. Indeed, if the uncertainty is modeled by a uniform distribution on X, propagating it to X² does not yield a uniform distribution on X². This is illustrated in Figure 1.3 for X uniformly distributed between 0 and 1: the distribution of X² is then far from uniform, with low values of X² much more likely than high values. This dilemma illustrates one of the difficulties that can arise when trying to model certain knowledge gaps by imposing a priori choices of probability distributions.
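The concentration toward low values can be made explicit with a one-line derivation; the following assumes, as in Figure 1.3, that X is uniform on [0,1]:

\[
F_{X^2}(y) = \mathbb{P}(X^2 \le y) = \mathbb{P}(X \le \sqrt{y}) = \sqrt{y},
\qquad
f_{X^2}(y) = \frac{1}{2\sqrt{y}}, \qquad y \in (0,1].
\]

The density 1/(2√y) grows without bound as y tends to 0, so the probability mass of X² concentrates near small values; for instance, ℙ(X² ≤ 0.25) = 0.5, whereas a uniform model of X² on [0,1] would assign only 0.25 to the same event.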
Figure 1.3. Distribution function of a uniform distribution of X on [0,1] (magenta line) and the corresponding distribution function of X² (blue line). For a color version of this figure, see www.iste.co.uk/gogu/uncertainties.zip
A second example illustrating counterintuitive results that could emerge from inadequate modeling of epistemic uncertainties is based on Nikolaidis et al. (2004). Let us consider a structure subjected to vibrational stresses. Due to the resonance phenomenon, some excitation frequencies must be avoided in order to prevent failure of the structure. If we impose that the structure’s response (for example, its maximum deflection) must be below a threshold, this delimits a zone of failure, illustrated in Figure 1.4 for a generic frequency response function.
Let us now consider that there is a large epistemic uncertainty on the values of the frequencies at which the structure in service will be excited and that only the bounds a and b on these frequencies are known. In the absence of additional information, the principle of indifference is applied and a uniform distribution in [a,b] is initially considered for the excitation frequencies (see Figure 1.4).
This defines the probability of failure as the area under the distribution that lies within the failure zone. Now suppose that the engineer wants to be more conservative and considers the uncertainty on the excitation frequencies to be higher. They therefore consider a wider uniform distribution in [ã, b̃] (see Figure 1.4) to account for the higher epistemic uncertainty. This reduces the area under the distribution that lies within the failure zone and thus decreases the probability of failure. This example therefore illustrates a situation where increasing the epistemic uncertainty (that is, increasing the lack of knowledge) decreases the probability of failure. This is somewhat counterintuitive, as one would normally expect high epistemic uncertainty (that is, a significant lack of knowledge) to lead to unreliable products (namely, a high probability of failure) and expect that products can be made more reliable by increasing knowledge (namely, by decreasing the epistemic uncertainty).

Figure 1.4. Illustration, for a system presenting a resonance, of a case where modeling epistemic uncertainties using a uniform probability distribution leads to counterintuitive results. For a color version of this figure, see www.iste.co.uk/gogu/uncertainties.zip
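A minimal numerical sketch of this effect is given below; the interval bounds and failure zone are illustrative values chosen for the example, not values taken from Figure 1.4. Under a uniform model of the excitation frequency, the failure probability is simply the fraction of the interval that overlaps the failure zone.

```python
# Failure probability when the excitation frequency is modeled as uniform
# on an interval: the fraction of that interval lying in the failure zone.
def failure_probability(interval, failure_zone):
    lo, hi = interval
    f_lo, f_hi = failure_zone
    overlap = max(0.0, min(hi, f_hi) - max(lo, f_lo))
    return overlap / (hi - lo)

failure_zone = (45.0, 55.0)   # resonance band to be avoided (illustrative, in Hz)
nominal = (40.0, 80.0)        # assumed bounds [a, b] on the excitation frequency
widened = (20.0, 100.0)       # "more conservative" bounds [a~, b~]

print(failure_probability(nominal, failure_zone))   # 0.25
print(failure_probability(widened, failure_zone))   # 0.125: lower, despite more uncertainty
```

Widening the interval dilutes the uniform density over a larger range, so the computed probability of failure drops from 0.25 to 0.125 even though the stated knowledge about the excitation frequency has decreased.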
We can also deduce from the previous case study a third example of a counterintuitive situation. Let us consider that the true variability of the excitation frequencies corresponds to a uniform distribution in [a,b]. This uncertainty is not exactly known and, in an attempt to be conservative, we assume that it is quite large, modeling it by a uniform distribution in [ã, b̃]. In the design phase, the engineer will typically look for ways to modify the structure in order to reduce the probability of failure. Suppose the engineer makes a modification to this end (for example, a change in dimensions or materials) that results in moving the failure zone somewhere within the interval [ã, a]. If we consider the real