On Modified DFP Update for Unconstrained Optimization

In this paper, we propose a modification of the DFP update with a new extended quasi-Newton condition for the unconstrained optimization problem. The update is based on the Zhang–Xu condition, and we show, both theoretically and numerically, that the modified update preserves the determinant of the Hessian approximation: the determinant of the updated matrix equals the determinant of the current matrix. Global convergence of the modified method is established, and local linear and superlinear convergence are obtained. Numerical results are given to compare the performance of the modified method with the standard DFP method on a set of selected test functions.


Introduction
Quasi-Newton methods are very useful and efficient for solving the unconstrained minimization problem
$$\min_{x \in \mathbb{R}^n} f(x), \qquad (1)$$
where $f: \mathbb{R}^n \to \mathbb{R}$ is twice continuously differentiable. Starting from an initial point $x_0$ and a symmetric positive definite matrix $B_0$, a quasi-Newton method generates sequences $\{x_k\}$ and $\{B_k\}$ by an iteration of the form
$$x_{k+1} = x_k + \alpha_k d_k, \qquad (2)$$
where the updated matrix $B_{k+1} \in \mathbb{R}^{n \times n}$ satisfies the famous quasi-Newton (or secant) equation
$$B_{k+1} s_k = y_k, \qquad (3)$$
with
$$s_k = x_{k+1} - x_k, \quad y_k = g_{k+1} - g_k, \qquad (4)$$
where $\alpha_k$ is the step length and $d_k$ is the search direction, obtained by solving the equation
$$B_k d_k = -g_k, \qquad (5)$$
in which $g_k = \nabla f(x_k)$ is the gradient of $f$ at $x_k$ and $B_k$ is an approximation to the Hessian matrix $G_k = \nabla^2 f(x_k)$. The updating matrix $B_{k+1}$ is required to satisfy the quasi-Newton equation (3) with (4), so that $B_{k+1}$ is a reasonable approximation to $G_{k+1}$.
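As a concrete illustration of the iteration (2) with the secant equation (3)–(4), the classical (unmodified) DFP scheme can be sketched in Python. This is a minimal sketch only: the helper names, the backtracking line search, and the tolerances are our own choices, not taken from the paper.

```python
import numpy as np

def dfp_update(B, s, y):
    """Classical DFP update of the Hessian approximation B.

    B_{k+1} = B - (B s s^T B)/(s^T B s) + (y y^T)/(y^T s),
    which satisfies the secant equation B_{k+1} s = y.
    """
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

def dfp_minimize(f, grad, x0, tol=1e-8, max_iter=200):
    """Quasi-Newton iteration x_{k+1} = x_k + alpha_k d_k with B_k d_k = -g_k."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))                      # B_0: symmetric positive definite
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(B, -g)          # search direction, as in eq. (5)
        alpha, fx = 1.0, f(x)
        while f(x + alpha * d) > fx + 1e-4 * alpha * (g @ d):
            alpha *= 0.5                    # backtracking (Armijo) line search
        s = alpha * d                       # s_k = x_{k+1} - x_k
        y = grad(x + s) - g                 # y_k = g_{k+1} - g_k
        if y @ s > 1e-12:                   # skip update to keep B positive definite
            B = dfp_update(B, s, y)
        x = x + s
    return x
```

After each update, the secant equation $B_{k+1} s_k = y_k$ can be verified directly, since the two rank-one corrections cancel $B s_k$ and reproduce $y_k$.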
The modified DFP method consists of the iteration (2), where $d_k$ is the search direction of the form (5). The update formula is a modification of the DFP update that satisfies equation (3); it is derived in the next section.
In the following discussion, we shall use $\|\cdot\|$ and $\|\cdot\|_F$ to denote the $\ell_2$-norm and the Frobenius norm, respectively. For a symmetric positive definite matrix $W \in \mathbb{R}^{n \times n}$, we shall also use the weighted norm
$$\|A\|_W = \|WAW\|_F, \quad \forall A \in \mathbb{R}^{n \times n}. \qquad (8)$$
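The weighted norm in (8) is straightforward to compute in practice; a small sketch (our own helper, not from the paper):

```python
import numpy as np

def weighted_norm(A, W):
    """Weighted Frobenius norm ||A||_W = ||W A W||_F for an SPD weight W (eq. (8))."""
    return np.linalg.norm(W @ A @ W, ord='fro')
```

With $W = I$ this reduces to the plain Frobenius norm.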
Then equation (14) becomes the stated update. Multiplying both sides by the appropriate factor, we obtain the formula, where $y_k^*$ is defined by (9). It is clear that the resulting formula is symmetric and satisfies the quasi-Newton equation.

Convergence Analysis
We now study the global convergence of the modified DFP update. First, we need the following assumptions.

Assumption (3.1).
(A) $f: \mathbb{R}^n \to \mathbb{R}$ is twice continuously differentiable on a convex set $D \subset \mathbb{R}^n$.
(B) $f$ is uniformly convex; i.e., there exist positive constants $m$ and $M$ such that for all $x$ in the level set $L = \{x \mid f(x) \le f(x_0)\}$, which is convex, where $x_0$ is the starting point, we have
$$m\|z\|^2 \le z^T \nabla^2 f(x) z \le M\|z\|^2, \quad \forall z \in \mathbb{R}^n.$$
Assumption (B) implies that $\nabla^2 f(x)$ is positive definite on $L$ and that $f$ has a unique minimizer $x^*$ in $L$.
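A standard consequence of Assumption (B), used repeatedly in analyses of this kind, is a two-sided bound on the curvature along each step. Stated here as a sketch in our notation, with $\bar G_k$ the average Hessian along the step:

```latex
% Since y_k = \bar G_k s_k with
% \bar G_k = \int_0^1 \nabla^2 f(x_k + \tau s_k)\, d\tau,
% Assumption (B) yields, for every step s_k,
m \|s_k\|^2 \;\le\; s_k^T \bar G_k s_k \;=\; y_k^T s_k \;\le\; M \|s_k\|^2 .
```

This guarantees in particular that $y_k^T s_k > 0$ whenever $s_k \neq 0$, so the DFP-type update is well defined at every iteration.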
By the definition of the weighted norm (8) and equation (9), the stated identity follows, where $\bar G_k$ is the average Hessian, defined as
$$\bar G_k = \int_0^1 \nabla^2 f(x_k + \tau s_k)\, d\tau.$$
From (20), $z^T \nabla^2 f(x^*) z \le M\|z\|^2$, and we know $\|z\|^2 > 0$, so we may divide both sides by it to obtain the next inequality. Then from equation (28) we get the following bound. In addition, from equations (21) and (9) we get a further relation, which gives the stated estimate. Since $f$ is a convex function, we have the standard convexity inequality; in particular, setting $u = 1$ gives the next bound, and multiplying both sides by $(-1)$ and applying the Cauchy–Schwarz inequality yields the estimate that follows. From (24) and (9), we have the identity below. By computing the trace of (39), we obtain the stated expression, whose middle two terms can be rewritten as shown. From equations (4) and (5), together with the exact line search property of the DFP method [5], $g_{k+1}^T d_k = d_k^T g_{k+1} = 0$, we obtain the simplification; using (4) and (5) again gives the next expression. From the positive definiteness of $B_k$, (38) becomes the stated inequality, which gives the bound below. Taking reciprocals and applying the relation recursively, we obtain (45). In the remainder of the proof, we show that if the theorem does not hold, then the sum of the last two terms in (45) diverges. Note the auxiliary estimates (47)–(49). By the positive definiteness of $B_k$ and the exact line search, using (49), (48) and (47), we obtain the stated bound. Now suppose that the theorem is not true; that is, there exists $\varepsilon > 0$ such that $\|g_k\| \ge \varepsilon$ for all sufficiently large $k$. Also, by Lemma (3.3), there exists a constant $\beta > 0$ such that the stated inequality holds, which gives $\|y_k^*\| \to 0$ and further $\|s_k\| \to 0$. Then, by (51) and (52), the above inequality implies that the sum of the last two terms in (45) is negative.
By (53), the stated inequality follows. Note that, for a symmetric positive definite matrix, the reciprocal of the trace is a lower bound for the smallest eigenvalue of the inverse of the matrix. It then follows from (54) that the stated bound holds, where $\lambda$ is the lower bound of the smallest eigenvalue of $B_k$. However, from the property of the Rayleigh quotient [9], we have the opposite bound, which contradicts (55). This contradiction proves that $\{x_k\}$ converges to $x^*$ and that our theorem holds.

Local Linear Convergence of the Modified DFP Method
We now prove the local linear convergence of the modified DFP method in the equivalent formulation (18), for $\alpha > 0$, under exact line search.
The modified DFP iteration we consider is (57), in which $\nabla f$ and $\nabla^2 f$ are replaced by $g_k$ and $B_k$, respectively.
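Since the analysis assumes exact line search, it may help to recall that on a quadratic model the exact step length has a closed form. A small sketch (our own notation and helper, not from the paper), for $f(x) = \tfrac{1}{2} x^T A x + b^T x$:

```python
import numpy as np

def exact_step_quadratic(A, g, d):
    """Exact line search step for a quadratic f(x) = 0.5 x^T A x + b^T x:
    alpha = -(g^T d) / (d^T A d) minimizes f(x + alpha d) along d."""
    return -(g @ d) / (d @ A @ d)
```

A characteristic property of the exact step, used throughout the convergence analysis, is that the new gradient is orthogonal to the search direction: $g_{k+1}^T d_k = 0$.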
For the discussion in this subsection, we need the following assumption.

Assumption. There is a constant $c$ such that the stated Lipschitz-type bound holds, or there are constants $c_1$ and $c_2$ such that the stated bounds hold. Then there exist constants $\varepsilon$ and $\delta$ such that, for all $\|x_0 - x^*\| < \varepsilon$ and $\|B_0 - \nabla^2 f(x^*)\| < \delta$, the iterations (57) and (65) are well defined and $\{x_k\}$ converges to $x^*$ linearly.
To study the local convergence of the modified DFP method, it is required to estimate $\|B_{k+1} - \nabla^2 f(x^*)\|$, where the norm is defined by (69). The first term on the right-hand side of (75) can be estimated as shown; moreover, for the remaining two terms on the right-hand side of (75), by (70) we have the stated bounds.

Superlinear Convergence of the Modified DFP Method
We now prove the superlinear convergence of the modified DFP method. The convergence analysis in this section mainly follows Dennis and Moré [2]. The superlinear convergence of the sequence $\{x_k\}$ generated by the iteration (57) is generally characterized by the following theorem.
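The characterization in question is the well-known Dennis–Moré condition [2]: under standard assumptions ($x_k \to x^*$ with $\nabla^2 f(x^*)$ positive definite), a quasi-Newton iteration converges superlinearly if and only if the Hessian approximations become accurate along the step directions. Stated here as a sketch in our notation:

```latex
\lim_{k \to \infty} \frac{\bigl\| \bigl(B_k - \nabla^2 f(x^*)\bigr) s_k \bigr\|}{\|s_k\|} = 0
\quad \Longleftrightarrow \quad
\lim_{k \to \infty} \frac{\|x_{k+1} - x^*\|}{\|x_k - x^*\|} = 0 .
```

Note that this does not require $B_k \to \nabla^2 f(x^*)$ in norm; accuracy along the directions $s_k$ suffices.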
Theorem (…). Hence, $\{x_k\}$ converges superlinearly, which completes the proof.

Numerical Results
This section is devoted to numerical experiments. Our purpose is to check whether the modified DFP algorithm provides improvements over the corresponding standard DFP algorithm. The programs were written in MATLAB. The test problems were selected because they are standard in most of the literature; these functions arise from applications in technology and industry.
The test functions are chosen as follows:
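The paper's own list of test functions is not reproduced here. As an illustration of the kind of standard benchmark typically used in such comparisons (not necessarily one of the paper's chosen set), the classic Rosenbrock function and its gradient can be coded as:

```python
import numpy as np

def rosenbrock(x):
    """Classic two-dimensional Rosenbrock function, a standard benchmark
    for quasi-Newton methods; minimum 0 at x = (1, 1)."""
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def rosenbrock_grad(x):
    """Analytic gradient of the Rosenbrock function."""
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])
```

A comparison of the kind described in the text would run both the standard and the modified update on such functions from the same starting points and report iteration and function-evaluation counts.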

Conclusion
In this paper, we introduced a modification of the DFP update and showed that, under certain circumstances, this update preserves the value of the determinant of the Hessian approximation, based on the Zhang–Xu condition rather than the usual quasi-Newton condition alone.
Global convergence of the proposed method is established under exact line search. The proposed method possesses local linear convergence and superlinear convergence for the unconstrained optimization problem.
Numerical results comparing the modified method with the standard DFP method on the selected test functions show that the proposed method is efficient for unconstrained optimization, which suggests that a good improvement has been achieved.