Modern Characterization of Electromagnetic Systems and its Associated Metrology. Magdalena Salazar-Palma
Figure 1.3 Rank‐2 approximation of the image X.
Figure 1.4 Rank‐3 approximation of the image X.
Figure 1.5 Rank‐4 approximation of the image X.
Figure 1.6 Rank‐5 approximation of the image X.
Figure 1.7 Rank‐6 approximation of the image X.
Figure 1.8 Rank‐7 approximation of the image X.
Figure 1.9 Rank‐8 approximation of the image X.
In Figure 1.10, the mean squared error between the actual picture and the approximate ones is presented. As expected, the error decreases as the rank of the approximation increases, and for a rank-k approximation it is determined by the discarded singular values,
||X − X_k||_F = (σ_{k+1}^2 + σ_{k+2}^2 + ⋯ + σ_r^2)^{1/2}  (1.24)
This is a very desirable property of the SVD. Next, the principle of total least squares is presented.
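The truncation property described above can be checked numerically. The following sketch (with an assumed random matrix, not the image data of the figures) truncates an SVD to rank k and verifies that the approximation error is governed exactly by the discarded singular values (the Eckart–Young theorem):

```python
import numpy as np

# Assumed test matrix, standing in for the image X of the figures.
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 6))

U, s, Vt = np.linalg.svd(X, full_matrices=False)

k = 2
Xk = (U[:, :k] * s[:k]) @ Vt[:k, :]   # rank-k approximation of X

# Spectral-norm error equals the first discarded singular value;
# Frobenius-norm error is the root-sum-square of all discarded ones.
err_2 = np.linalg.norm(X - Xk, 2)
err_F = np.linalg.norm(X - Xk, 'fro')
assert np.isclose(err_2, s[k])
assert np.isclose(err_F, np.sqrt(np.sum(s[k:] ** 2)))
```

Because the error after truncation is known in closed form from the singular values alone, one can choose the rank that meets a prescribed error budget before forming the approximation.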
1.3.2 The Theory of Total Least Squares
The method of total least squares (TLS) is a linear parameter estimation technique used in a wide variety of disciplines, such as signal processing, general engineering, statistics, and physics. We start with a set of m measured data points {(x1,y1),…,(xm,ym)} and a set of n linear coefficients (a1,…,an) that describe a model, which in matrix form reads
X a = y  (1.25)
Figure 1.10 Mean squared error of the approximation.
Since m > n, there are more equations than unknowns, and therefore (1.25) is an overdetermined set of equations. Typically, an overdetermined system of equations is best solved by ordinary least squares, where the unknown is given by
a = (X*X)^{−1} X* y  (1.26)
where X* represents the complex conjugate transpose of the matrix X. Ordinary least squares can account for uncertainty, such as noise, in y, since it provides a least squares fit to it. However, if there is uncertainty in the elements of the matrix X as well, ordinary least squares cannot address it. This is where total least squares comes in. In total least squares the matrix equation (1.25) is cast into a different form in which uncertainty in the elements of both the matrix X and the vector y can be taken into account.
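As a numerical illustration of the ordinary least squares solution (with assumed synthetic data, not from the book), the sketch below solves an overdetermined system both through the normal equations of (1.26) and through NumPy's SVD-based solver, which computes the same solution more stably:

```python
import numpy as np

# Assumed overdetermined system X a = y with noise in y only.
rng = np.random.default_rng(1)
m, n = 20, 3
X = rng.standard_normal((m, n))
a_true = np.array([1.0, -2.0, 0.5])
y = X @ a_true + 0.01 * rng.standard_normal(m)

# Normal-equations form a = (X*X)^{-1} X* y of (1.26).
a_normal = np.linalg.solve(X.conj().T @ X, X.conj().T @ y)

# SVD-based solver; same minimizer, better conditioned in practice.
a_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(a_normal, a_lstsq)
```

Both routes minimize ||Xa − y||, but neither models errors in X itself, which is the limitation total least squares removes.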
[X  y] [a; −1] = 0  (1.27)
In this form one is solving for the solution to the composite matrix by searching for the eigenvector/singular vector corresponding to the zero eigen/singular value. If the matrix X is rectangular then the eigenvalue concept does not apply and one needs to deal with the singular vectors and the singular values.
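The search for the singular vector associated with the smallest singular value of the composite matrix can be sketched directly (with assumed synthetic data in which both X and y are perturbed, the situation TLS is designed for):

```python
import numpy as np

# Assumed data: both X and y carry noise.
rng = np.random.default_rng(2)
m, n = 30, 2
X0 = rng.standard_normal((m, n))
a_true = np.array([2.0, -1.0])
X = X0 + 0.01 * rng.standard_normal((m, n))
y = X0 @ a_true + 0.01 * rng.standard_normal(m)

# Composite rectangular matrix [X | y]; use singular vectors, since
# the eigenvalue concept does not apply to a rectangular matrix.
C = np.column_stack([X, y])
_, _, Vt = np.linalg.svd(C)
v = Vt[-1]                 # right singular vector of the smallest sigma

# Scale so the last entry is -1, matching the form [a; -1] of (1.27).
a_tls = -v[:n] / v[n]
```

With noiseless data the smallest singular value of C would be exactly zero and the recovery exact; with noise it is merely small, and the corresponding singular vector gives the TLS estimate.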
The best approximation according to total least squares is the one that minimizes the norm of the difference between the approximated data and the model,
where
(1.29)
where
(1.30)
and where σi is the i‐th singular value of matrix A.
We