Artificial Intelligence and Quantum Computing for Advanced Wireless Networks. Savo G. Glisic

the classification decision. A simple yes-or-no answer is often of limited value in applications where questions such as where something occurs or how it is structured matter more than a binary or real-valued one-dimensional assessment of the mere presence or absence of a certain structure. In this section, we explain in more detail the relation between classification and interpretability for the multilayered neural networks discussed in the previous chapter.

      4.2.1 Pixel‐wise Decomposition

      We start with the concept of pixel-wise image decomposition, which is designed to understand the contribution of a single pixel of an image x to the prediction f(x) made by a classifier f in an image classification task. We would like to find out, separately for each image x, which pixels contribute to what extent to a positive or negative classification result. In addition, we want to express this extent quantitatively by a measure. We assume that the classifier has real-valued outputs with a mapping $f: \mathbb{R}^V \to \mathbb{R}$ such that f(x) > 0 denotes the presence of the learned structure. We are interested in finding out the contribution of each input pixel $x_d$ of an input image x to a particular prediction f(x). The important constraint specific to classification is that the contribution is measured relative to the state of maximal uncertainty with respect to the classification, which is represented by the set of root points $f(x_0) = 0$. One possible way is to decompose the prediction f(x) as a sum of terms over the separate input dimensions $x_d$:

      (4.1) $f(x) \approx \sum_{d=1}^{V} R_d$

      Here, the qualitative interpretation is that $R_d < 0$ contributes evidence against the presence of the structure to be classified, whereas $R_d > 0$ contributes evidence for its presence. More generally, positive values denote positive contributions and negative values negative contributions.
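      As a brief numerical illustration (the values are hypothetical): suppose V = 3 and the decomposition yields $R_1 = 0.7$, $R_2 = -0.2$, $R_3 = 0.5$. Then $f(x) \approx 0.7 - 0.2 + 0.5 = 1.0 > 0$, so the classifier detects the structure, with pixels 1 and 3 supplying supporting evidence and pixel 2 weak counter-evidence. The relevance map thus answers where the decision comes from, not only whether it is positive.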

      LRP: Returning to multilayer ANNs, we introduce LRP as a concept defined by a set of constraints. In its general form, the concept assumes that the classifier can be decomposed into several layers of computation, which is the structure used in deep NNs. The first layer holds the inputs, the pixels of the image; the last layer is the real-valued prediction output of the classifier f. The l-th layer is modeled as a vector $z = (z_d^{(l)})_{d=1}^{V(l)}$ with dimensionality V(l). LRP assumes that we have a relevance score $R_d^{(l+1)}$ for each dimension $z_d^{(l+1)}$ of the vector z at layer l + 1. The idea is to find a relevance score $R_d^{(l)}$ for each dimension $z_d^{(l)}$ of the vector z at the next layer l, which is closer to the input layer, such that the following conservation equation holds:

      (4.2) $f(x) = \cdots = \sum_{d \in l+1} R_d^{(l+1)} = \sum_{d \in l} R_d^{(l)} = \cdots = \sum_{d} R_d^{(1)}$

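      The layer-by-layer conservation constraint can be sketched in code. The following minimal NumPy example uses hypothetical weights and the proportional "z-rule" redistribution, a common choice from the LRP literature rather than a rule derived in the text; it propagates relevance through a tiny bias-free two-layer ReLU network, where the absence of biases lets each layer's relevances sum back to f(x):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny bias-free two-layer ReLU network (hypothetical weights); without
# biases, the proportional redistribution below conserves relevance exactly.
W1 = rng.standard_normal((4, 3))   # input dim V = 4 -> hidden dim 3
W2 = rng.standard_normal((3, 1))   # hidden dim 3 -> scalar output f(x)

def lrp_linear(a_in, W, R_out, eps=1e-9):
    """Redistribute relevance R_out of a linear layer's outputs onto its
    inputs in proportion to the contributions z_ij = a_i * W_ij."""
    z = a_in[:, None] * W                                  # shape (in, out)
    denom = z.sum(axis=0)
    denom = denom + eps * np.where(denom >= 0, 1.0, -1.0)  # avoid /0
    return (z / denom) @ R_out                             # shape (in,)

x = rng.standard_normal(4)
a1 = np.maximum(W1.T @ x, 0.0)     # hidden ReLU activations
f = float(W2.T @ a1)               # prediction f(x)

R2 = np.array([f])                 # top layer: relevance = prediction f(x)
R1 = lrp_linear(a1, W2, R2)        # hidden-layer relevances R^(2)
R0 = lrp_linear(x, W1, R1)         # pixel-wise relevances R^(1)
```

      Inactive hidden units automatically receive zero relevance (their contributions z are zero), and both the hidden-layer and pixel-level relevances sum to f(x) up to the numerical stabilizer eps, as Eq. (4.2) demands.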
      As an example, suppose we have one layer. The inputs are $x \in \mathbb{R}^V$. We use a linear classifier with some arbitrary, dimension-specific feature-space mapping $\varphi_d$ and a bias b:

      (4.3) $f(x) = b + \sum_{d} \alpha_d \varphi_d(x_d)$

      Let us define the relevance for the second layer trivially as $R_1^{(2)} = f(x)$. Then one possible LRP formula defines the relevance $R^{(1)}$ for the inputs x as

      (4.4) $R_d^{(1)} = \begin{cases} f(x)\,\dfrac{|\alpha_d \varphi_d(x_d)|}{\sum_{d'} |\alpha_{d'} \varphi_{d'}(x_{d'})|} & \text{if } \sum_{d'} |\alpha_{d'} \varphi_{d'}(x_{d'})| \neq 0 \\[6pt] \dfrac{b}{V} & \text{if } \sum_{d'} |\alpha_{d'} \varphi_{d'}(x_{d'})| = 0 \end{cases}$
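      This redistribution rule can be exercised directly. The sketch below (the helper name and the numerical values of α, φ, and b are ours, chosen for illustration) computes the pixel-wise relevances for the linear classifier of Eq. (4.3), including the degenerate branch that splits the bias b evenly over the V dimensions:

```python
import numpy as np

def relevance_first_layer(alpha, phi_x, b):
    """Pixel-wise relevances R_d^(1) per Eq. (4.4) for the linear
    classifier f(x) = b + sum_d alpha_d * phi_d(x_d).
    `phi_x` holds the already-mapped feature values phi_d(x_d)."""
    terms = alpha * phi_x                 # alpha_d * phi_d(x_d) per dimension
    f_x = b + terms.sum()
    norm = np.abs(terms).sum()
    if norm != 0:
        # each dimension gets a share of f(x) proportional to |its term|
        return f_x * np.abs(terms) / norm
    # degenerate case: all terms vanish, spread the bias evenly as b/V
    return np.full(len(alpha), b / len(alpha))

alpha = np.array([0.5, -1.0, 2.0])        # hypothetical classifier weights
phi_x = np.array([1.0, 0.2, -0.3])        # hypothetical feature values
R = relevance_first_layer(alpha, phi_x, b=0.1)
```

      Here the terms are (0.5, -0.2, -0.6), so f(x) = 0.1 - 0.3 = -0.2, and the relevances sum back to f(x), which is exactly the decomposition constraint of Eq. (4.1).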