Electronics in Advanced Research Industries. Alessandro Massaro
a DL neural network. ANNs are based on the concept of back‐propagation of the error during neural network training: the weights are tuned according to the error rate obtained in the previous iterations, where each iteration is named an epoch. Proper refinement of the weight tuning ensures lower error rates, optimizing the model for the specific case study. Figure 1.18a sketches the principle of the back‐propagation feedback system, which enables self‐adjusting weights. Figure 1.18b shows a basic neural network implementing the mathematical function defining the node output (unit step functions, named activation functions).
Figure 1.17 (a) Simple ANN. (b) DL neural network.
Figure 1.18 (a) Feedback system minimizing calculation error in the training model. (b) Neural network model implementing unit step function.
The pseudocode of the ANN training process is as follows:
1. Train_ANN (fi, wi, oj)
2. Randomly initialize wi = {w1, w2, …, wn};
3. For epochs = 1 to N Do
4.   While (j ≤ m) Do
5.     Input oj = {o1, o2, …, om} into the input layer and forward propagate (fi · wi) through the layers until the predicted result y is obtained;
6.     Compute the error e = y − ŷ, where ŷ is the expected (target) output;
7.     Back propagate e from the right to the left of the ANN through the layers;
8.     Update wi;
9.   End While
10. End For
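As an illustration of the pseudocode, the training loop can be sketched in Python for the simplest possible case: a single sigmoid neuron trained by gradient descent. The toy OR dataset, learning rate, and epoch count below are illustrative choices, not taken from the text.

```python
import math
import random

def sigmoid(x):
    # Logistic activation: squashes any input into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def train(samples, epochs=2000, lr=0.5, seed=0):
    """Train a single sigmoid neuron: forward pass, error, weight update."""
    random.seed(seed)
    n = len(samples[0][0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]  # randomly initialized weights wi
    b = random.uniform(-0.5, 0.5)                      # bias term
    for _ in range(epochs):                            # each full pass = one epoch
        for o, target in samples:
            # Forward propagation: y = f(sum_i wi * oi + b)
            z = sum(wi * oi for wi, oi in zip(w, o)) + b
            y = sigmoid(z)
            # Error between expected output and prediction
            e = target - y
            # Back propagation: scale error by the sigmoid derivative y * (1 - y)
            delta = e * y * (1.0 - y)
            w = [wi + lr * delta * oi for wi, oi in zip(w, o)]
            b += lr * delta
    return w, b

# Toy dataset: logical OR, inputs with target outputs
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train(data)
preds = [round(sigmoid(sum(wi * oi for wi, oi in zip(w, x)) + b)) for x, _ in data]
print(preds)  # → [0, 1, 1, 1]
```

A real multilayer network repeats the same forward/backward pattern per layer, propagating the error from the output layer back to the input layer as step 7 of the pseudocode indicates.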
The pseudocode highlights the two mechanisms at work in the ANN: the forward propagation that estimates the predicted output y, and the back propagation of the error function, as sketched in Figure 1.18b. The output is estimated by considering the summation of the input contributions and is defined as:
$y = f\left(\sum_{i=1}^{n} w_i \, o_i\right)$  (1.10)
where f is the activation function. Some examples of activation functions are plotted in Figure 1.19, where the analytical forms are:
(1.11)
(1.12)
(1.13)
(1.14)
(1.15)
Figure 1.19 Basic mathematical functions defining activation functions.
Other mathematical activation functions are the following [68]:
(1.16)
(1.17)
(1.18)
(1.19)
(1.20)
(1.21)
(1.22)
(1.23)
(1.24)
(1.25)
(1.26)
(1.27)
(1.28)
(1.29)
(1.30)
(1.31)
The activation function represents a basic research element of considerable importance: the correct choice of activation function determines how well the logic defining the outputs is implemented. The analytical model must therefore be appropriately weighted by the various variables and must be "calibrated" for the specific case study. Another important aspect is the ability of the activation function to self‐adapt [69] to the specific case study, providing a certain flexibility [70]. Of particular interest is the possibility of combining activation functions (activation ensemble [71]). The approach to follow is therefore to define a flexible and modular activation function, as is the case for the adaptive spline activation function [72].
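The specific analytical forms of Eqs. (1.11)–(1.31) are not reproduced here; as a sketch, a few activation functions commonly used in ANNs can be written directly in Python:

```python
import math

# Illustrative activation functions commonly used in ANNs; the exact
# forms catalogued in the text (Eqs. 1.11-1.31) may differ.
def step(x):
    # Unit step (Heaviside): the basic threshold of Figure 1.18b.
    return 1.0 if x >= 0 else 0.0

def sigmoid(x):
    # Logistic function: smooth, output in (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def tanh_act(x):
    # Hyperbolic tangent: smooth, output in (-1, 1).
    return math.tanh(x)

def relu(x):
    # Rectified linear unit: max(0, x).
    return max(0.0, x)

def leaky_relu(x, a=0.01):
    # Like ReLU but with a small slope a for negative inputs.
    return x if x >= 0 else a * x

def softplus(x):
    # Smooth approximation of ReLU: ln(1 + e^x).
    return math.log1p(math.exp(x))

for f in (step, sigmoid, tanh_act, relu, softplus):
    print(f.__name__, round(f(1.0), 4))
```

Swapping one function for another changes only the node's nonlinearity, which is what makes modular or ensemble activation schemes such as [71, 72] straightforward to implement.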
Concerning the training models, the full dataset of the neural network is divided into a training set, a validation set, and a test set (Figure 1.20). In particular, the training set is used to fit the model; the validation set is a small partition of the full dataset used to estimate the prediction error of the selected model before final evaluation; finally, the test set is used for testing the final model. A correct choice of the three partitions depends on the SNR of the full dataset.
Figure 1.20 Supervised artificial network model: partitioning of the available dataset into training set, validation set, and test set.
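The three-way partitioning of Figure 1.20 can be sketched as follows; the 70/15/15 split fractions are illustrative choices, and as the text notes, the appropriate proportions depend on the SNR of the full dataset:

```python
import random

def split_dataset(data, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle a dataset and partition it into training, validation, and
    test sets. The fractions here are illustrative, not prescriptive."""
    data = list(data)
    random.seed(seed)       # fixed seed makes the split reproducible
    random.shuffle(data)    # shuffle so each partition is representative
    n = len(data)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = data[:n_train]                # fits the model
    val = data[n_train:n_train + n_val]   # early estimate of prediction error
    test = data[n_train + n_val:]         # final model evaluation only
    return train, val, test

train, val, test = split_dataset(range(100))
print(len(train), len(val), len(test))  # → 70 15 15
```

The test set must be held out until the very end: reusing it during model selection would bias the reported prediction error.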
The intelligent algorithms, which constitute the core of the Industry 5.0 system, are classified in Figure