Power Magnetic Devices. Scott D. Sudhoff
on the case where f(x) ∈ ℝ and x ∈ ℝⁿ. Algorithms to solve this problem include gradient methods, Newton’s method, conjugate direction methods, quasi‐Newton methods, and the Nelder–Mead simplex method, to name a few. Let us focus on Newton’s method as being somewhat representative.
In order to set the stage for Newton’s method, let us first define some operators. The first derivative or gradient of our objective function is denoted ∇f(x) and is defined as
∇f(x) = [∂f(x)/∂x₁  ∂f(x)/∂x₂  ⋯  ∂f(x)/∂xₙ]ᵀ    (1.3-1)
The second derivative or Hessian of f(x), denoted F(x), is defined as

F(x) = ∇²f(x), where the ij-th element is Fᵢⱼ(x) = ∂²f(x)/∂xᵢ∂xⱼ    (1.3-2)
If x* is a local minimizer of f, and if x* is in the interior of Ω, it can be shown that

∇f(x*) = 0    (1.3-3)

and that

F(x*) ≥ 0    (1.3-4)
Note that the statement F(x*) ≥ 0 means that F(x*) is positive semi‐definite, which is to say that yᵀF(x*)y ≥ 0 ∀ y ∈ ℝⁿ. This second condition helps verify that a minimum rather than a maximum has been found. It is important to understand that the conditions (1.3-3) and (1.3-4) are necessary but not sufficient conditions for x* to be a minimizer, unless f is a convex function. If f is convex, (1.3-3) and (1.3-4) are necessary and sufficient conditions for x* to be a global minimizer.
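The positive semi-definiteness test in (1.3-4) is convenient to carry out numerically, since a symmetric matrix is positive semi-definite exactly when all of its eigenvalues are nonnegative. A minimal sketch, using a hypothetical 2 × 2 Hessian (not one from the text):

```python
import numpy as np

# Hypothetical Hessian at a candidate point x* (for illustration only).
F_star = np.array([[2.0, 1.0],
                   [1.0, 2.0]])

# A symmetric matrix is positive semi-definite iff all eigenvalues are >= 0.
# eigvalsh is the eigenvalue routine specialized for symmetric matrices.
eigenvalues = np.linalg.eigvalsh(F_star)
is_psd = bool(np.all(eigenvalues >= 0))
print(eigenvalues, is_psd)  # eigenvalues [1. 3.], so the SONC (1.3-4) holds
```

In practice a small tolerance (e.g. eigenvalues ≥ −1e-10) is used to absorb floating-point round-off.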
At this point, we are poised to set forth Newton’s method of finding function minimizers. This method is iterative and is based on a kth estimate of the solution denoted x[k]. An update formula is then applied to generate a (hopefully) improved estimate x[k + 1] of the minimizer. The update formula is derived by first approximating f as a Taylor series about the current estimated solution x[k]. In particular,

f(x) = f(x[k]) + ∇f(x[k])ᵀ(x − x[k]) + ½(x − x[k])ᵀF(x[k])(x − x[k]) + H    (1.3-5)

where H denotes higher‐order terms. Neglecting these higher‐order terms and taking the gradient of f(x) based on (1.3-5), we obtain

∇f(x) = ∇f(x[k]) + F(x[k])(x − x[k])    (1.3-6)
From the necessary condition (1.3-3), we next equate the right‐hand side of (1.3-6) with zero; then we replace x, which will be our improved estimate, with x[k + 1]. Manipulating the resulting expression yields

x[k + 1] = x[k] − F⁻¹(x[k])∇f(x[k])    (1.3-7)

which is Newton’s method.
Clearly, Newton’s method requires f ∈ C², which is to say that f is in the set of twice continuously differentiable functions. Note that the selection of the initial solution, x[1], can have a significant impact on which (if any) local solution is found. Also note that the method is equally likely to yield a local maximizer as a local minimizer, since the update (1.3-7) only seeks points where the gradient vanishes.
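The update (1.3-7) translates directly into code. The sketch below is a minimal implementation; the function and matrix names are our own, and, as is standard practice, it solves the linear system F(x[k])d = ∇f(x[k]) rather than forming the inverse explicitly:

```python
import numpy as np

def newton_minimize(grad, hess, x1, max_iter=20, tol=1e-10):
    """Newton's method (1.3-7): x[k+1] = x[k] - F(x[k])^(-1) grad f(x[k]).

    grad and hess are callables returning the gradient vector and Hessian
    matrix of f at a point; x1 is the initial estimate x[1].
    """
    x = np.asarray(x1, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # stationarity test, condition (1.3-3)
            break
        # Solve F(x) d = grad f(x) instead of inverting F(x).
        x = x - np.linalg.solve(hess(x), g)
    return x

# Illustration on a simple quadratic f(x) = x1^2 + 2*x2^2 (our example,
# not one from the text); Newton's method is exact in one step here.
grad = lambda x: np.array([2*x[0], 4*x[1]])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 4.0]])
print(newton_minimize(grad, hess, [3.0, -1.0]))  # -> [0. 0.]
```

For a quadratic f, the Taylor series (1.3-5) has no higher-order terms, so the single Newton step lands exactly on the minimizer.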
Example 1.3A
Let us apply Newton’s method to find the minimizer of the function
This example is an arbitrary mathematical function; we will consider how to construct f(x) so as to serve a design purpose in Section 1.9. Inspection of (1.3A-1) reveals that the global minimum is at x1 = 2 and x2 = ln (2). However, let us apply Newton’s method to find the minimum. Our first step is to obtain the gradient and the Hessian. From (1.3A-1), we have
(1.3A-2)
and
(1.3A-3)
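Hand-derived expressions such as (1.3A-2) and (1.3A-3) are easy to get wrong, and a central-difference check catches most mistakes. The sketch below uses a hypothetical stand-in objective whose global minimizer is also x₁ = 2, x₂ = ln(2); it is not the text’s (1.3A-1), which is not reproduced here:

```python
import numpy as np

# Hypothetical stand-in objective (NOT the text's (1.3A-1)) with the same
# global minimizer x1 = 2, x2 = ln(2):
f = lambda x: (x[0] - 2)**2 + (np.exp(x[1]) - 2)**2
# Hand-derived gradient of the stand-in objective.
grad = lambda x: np.array([2*(x[0] - 2),
                           2*(np.exp(x[1]) - 2)*np.exp(x[1])])

def fd_grad(f, x, h=1e-6):
    """Central-difference approximation of the gradient."""
    g = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        e = np.zeros_like(g)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2*h)
    return g

x = np.array([1.0, 0.5])
print(np.max(np.abs(grad(x) - fd_grad(f, x))))  # should be tiny (~1e-9)
```

The same idea applied twice (differencing the gradient) checks a hand-derived Hessian such as (1.3A-3).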
Table 1.1 Newton’s Method Results
| k | x[k] | f(x[k]) | ∇f(x[k]) | F(x[k]) |
|---|------|---------|----------|---------|
| 1 |      | 43      |          |         |
| 2 |      | 14.2    |          |         |
| 3 |      | 9.25    |          |         |
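An iteration trace in the spirit of Table 1.1 is straightforward to generate. The sketch below again uses the hypothetical stand-in objective with minimizer x₁ = 2, x₂ = ln(2) rather than the text’s (1.3A-1), so its f(x[k]) values differ from those tabulated; the initial estimate x[1] is likewise our own choice:

```python
import numpy as np

# Hypothetical stand-in objective (NOT the text's (1.3A-1)).
f = lambda x: (x[0] - 2)**2 + (np.exp(x[1]) - 2)**2
grad = lambda x: np.array([2*(x[0] - 2),
                           2*(np.exp(x[1]) - 2)*np.exp(x[1])])
hess = lambda x: np.array([[2.0, 0.0],
                           [0.0, 4*np.exp(2*x[1]) - 4*np.exp(x[1])]])

x = np.array([0.0, 1.0])  # initial estimate x[1] (our choice)
for k in range(1, 6):
    print(k, x, f(x))                          # trace: k, x[k], f(x[k])
    x = x - np.linalg.solve(hess(x), grad(x))  # Newton update (1.3-7)
```

With this starting point the objective decreases monotonically and the iterates converge rapidly to (2, ln 2), illustrating the quadratic local convergence of Newton’s method.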