A Parametric Kernel Function Yielding the Best Known Iteration Bound of Interior-Point Methods for Semidefinite Optimization

Abstract: In this paper, a class of large-update primal-dual interior-point methods for semidefinite optimization based on a parametric kernel function is presented. The proposed kernel function is used not only for determining the search directions but also for measuring the distance between the given iterate and the µ-center of the algorithms. By means of the Nesterov-Todd scaling scheme, the currently best known iteration bound for large-update methods is established.


Introduction
In this paper, we focus on the primal problem of semidefinite optimization (SDO) in the standard form

(P)   min Tr(CX)   s.t.   Tr(A_i X) = b_i,  i = 1, ..., m,   X ⪰ 0,

together with its dual problem

(D)   max b^T y   s.t.   Σ_{i=1}^m y_i A_i + S = C,   S ⪰ 0.

Throughout the paper, we assume that the matrices A_i are linearly independent.
Kernel functions play an important role in the design and analysis of primal-dual interior-point methods (IPMs) for optimization and complementarity problems. They are used not only for determining the search directions but also for measuring the distance between the given iterate and the µ-center of the algorithms. Currently, kernel-function-based IPMs are among the most effective methods for linear optimization (LO) and (SDO), and they form a very active research area in mathematical programming. In particular, Bai et al. [14] introduced a variety of non-self-regular kernel functions, the so-called eligible kernel functions, which are defined by some simple conditions on the kernel functions and their derivatives. They provided a simple and unified computational scheme for the complexity analysis of primal-dual kernel-function-based IPMs for (LO). Consequently, a series of eligible kernel functions has been considered for various optimization and complementarity problems; see, e.g., [15,16,17,18]. For a survey, we refer to the monograph [19] on the subject and the references therein.
In this paper, we consider the parametric kernel function (1) introduced in [8], which is a generalization of the finite kernel function considered in [15] for (LO). The purpose of the paper is to extend the primal-dual large-update IPMs for (LO) based on this parametric kernel function to (SDO) by using the NT-scaling scheme [11,21]. Furthermore, the resulting complexity matches the currently best known iteration bound for large-update methods, namely O(√n log n log(n/ε)). The outline of the rest of the paper is as follows. In Section 2, we recall some basic concepts and results from matrix theory and the properties of the parametric kernel (and barrier) function. Primal-dual kernel-function-based IPMs for (SDO) are presented in Section 3. In Section 4, we give the complexity analysis of these methods. Finally, some concluding remarks are made in Section 5.
Some of the notations used throughout the paper are as follows. R^n, R^n_+ and R^n_{++} denote the set of vectors with n components, the set of nonnegative vectors and the set of positive vectors, respectively. R^{n×n} denotes the set of n × n real matrices.
Recall that a matrix A(t) is said to be a matrix of functions if each entry of A(t) is a function of t, i.e., A(t) = [a_{ij}(t)]. For any function ϕ(t), we denote by ∆ϕ the divided difference of ϕ(t), defined as

∆ϕ(λ_1, λ_2) = (ϕ(λ_1) − ϕ(λ_2)) / (λ_1 − λ_2)  if λ_1 ≠ λ_2,   ∆ϕ(λ_1, λ_1) = ϕ′(λ_1).

The following theorem provides a way to measure the first-order directional derivative of the general function Tr(ϕ(A(t))) and to bound its second-order derivative with respect to t.

Theorem 2.1 (Lemma 16 in [21]) Suppose that A(t) is a symmetric matrix of functions, differentiable with respect to t, and that ϕ is twice continuously differentiable on an open positive domain that contains all the eigenvalues of A(t). Then

d/dt Tr(ϕ(A(t))) = Tr(ϕ′(A(t)) A′(t)),

and

d²/dt² Tr(ϕ(A(t))) ≤ K Tr(A′(t) A′(t)),   where   K = max{ |∆ϕ′(λ_i(t), λ_j(t))| : i, j ∈ {1, ..., n} }

is a number depending on A(t) and ϕ(t).
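As a numerical sanity check of the first-order identity in Theorem 2.1, the sketch below compares a central finite difference of Tr(ϕ(A(t))) against Tr(ϕ′(A(t)) A′(t)). The particular matrix A(t) and the choice ϕ(t) = t log t are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def mat_fun(A, f):
    """Apply a scalar function f to a symmetric matrix via eigendecomposition."""
    w, Q = np.linalg.eigh(A)
    return Q @ np.diag(f(w)) @ Q.T

def A_of_t(t):
    # a small symmetric matrix of functions (hypothetical example)
    return np.array([[2.0 + t, 0.5 * t],
                     [0.5 * t, 1.0 + t**2]])

phi  = lambda x: x * np.log(x)       # any smooth scalar function works here
dphi = lambda x: np.log(x) + 1.0

t, h = 0.3, 1e-6
# central finite difference of Tr(phi(A(t)))
lhs = (np.trace(mat_fun(A_of_t(t + h), phi)) -
       np.trace(mat_fun(A_of_t(t - h), phi))) / (2 * h)
# Theorem 2.1: d/dt Tr(phi(A(t))) = Tr(phi'(A(t)) A'(t))
A_prime = np.array([[1.0, 0.5],
                    [0.5, 2 * t]])   # entrywise derivative of A_of_t at t
rhs = np.trace(mat_fun(A_of_t(t), dphi) @ A_prime)
print(abs(lhs - rhs))                # agreement up to discretization error
```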

The Parametric Kernel (Barrier) Function
The first three derivatives of ϕ(t) defined by (1) with respect to t can be computed directly. In what follows, we recall some useful results from [15,16] without proofs.
The following property, exponential convexity, plays an important role in the analysis of kernel-function-based IPMs [15,21].

Lemma 2.3 (Lemma 2.1 in [15]) Let t_1 > 0 and t_2 > 0. Then ϕ(√(t_1 t_2)) ≤ (ϕ(t_1) + ϕ(t_2)) / 2.

From (16), one can easily verify that the derivative of the barrier function is exactly ϕ′(V), which is defined by (5).
Furthermore, we know that Ψ(V) is strictly convex with respect to V ≻ 0 and vanishes at its global minimizer V = I. By Lemma 2.3, we have the following theorem.
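Since the explicit form of the parametric kernel (1) is not reproduced in this excerpt, the sketch below illustrates the two properties just stated — exponential convexity of the kernel, and a barrier Ψ(V) (the kernel summed over the eigenvalues of V) that vanishes at V = I — using the classical logarithmic kernel ψ(t) = (t² − 1)/2 − ln t as a hypothetical stand-in.

```python
import numpy as np

# Stand-in kernel (NOT the paper's parametric kernel (1)): the classical
# logarithmic kernel, which is also strictly convex and exponentially convex.
psi = lambda t: (t**2 - 1) / 2 - np.log(t)

def Psi(V):
    """Barrier value: the kernel summed over the eigenvalues of V."""
    return float(np.sum(psi(np.linalg.eigvalsh(V))))

# Psi vanishes at its global minimizer V = I ...
assert Psi(np.eye(4)) == 0.0

# ... and the kernel is exponentially convex:
# psi(sqrt(t1 * t2)) <= (psi(t1) + psi(t2)) / 2 for all t1, t2 > 0.
rng = np.random.default_rng(0)
for t1, t2 in rng.uniform(0.1, 5.0, size=(1000, 2)):
    assert psi(np.sqrt(t1 * t2)) <= (psi(t1) + psi(t2)) / 2 + 1e-12
print("kernel property checks passed")
```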
The following theorem, a reformulation of Theorem 3.2 in [15], provides an estimate for the effect of a µ-update on the value of Ψ(V).
The lower bound on δ(V) in terms of Ψ(V) can be obtained from the following theorem, which is a reformulation of Theorem 4.8 in [15].
This completes the proof.

Primal-Dual Kernel Function-Based (IPMs) for (SDO)
Without loss of generality, we assume that both the primal problem (P) and its dual problem (D) of (SDO) satisfy the interior-point condition (IPC), i.e., there exists a feasible triple (X^0, y^0, S^0) with X^0 ≻ 0 and S^0 ≻ 0. The Karush-Kuhn-Tucker conditions for (P) and (D) are equivalent to the system (24). The standard approach is to replace the third equation in (24), i.e., the so-called complementarity condition for (P) and (D), by the relaxed equation XS = µI with µ > 0, which yields the system (25). Under the assumption that (P) and (D) satisfy the (IPC), the system (25) has a unique solution, denoted by (X(µ), y(µ), S(µ)). We call X(µ) the µ-center of (P) and (y(µ), S(µ)) the µ-center of (D). The set of µ-centers (with µ running through all positive real numbers) forms a homotopy path, which is called the central path of (P) and (D). If µ → 0, then the limit of the central path exists, and since the limit points satisfy the complementarity condition XS = 0, the limit naturally yields an optimal solution for (P) and (D); see, e.g., [2].
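The relaxed complementarity condition XS = µI ties the duality gap directly to the barrier parameter: Tr(XS) = nµ, so driving µ to zero drives the gap to zero along the central path. A small numerical illustration (the matrices below are arbitrary, constructed only so that XS = µI holds):

```python
import numpy as np

rng = np.random.default_rng(3)
n, mu = 4, 0.1
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]   # random orthogonal basis
lam = rng.uniform(0.5, 2.0, n)                     # positive eigenvalues

X = Q @ np.diag(lam) @ Q.T            # any X > 0
S = mu * Q @ np.diag(1 / lam) @ Q.T   # the matching S > 0 with X S = mu * I

assert np.allclose(X @ S, mu * np.eye(n))          # relaxed complementarity
assert np.isclose(np.trace(X @ S), n * mu)         # duality gap = n * mu
print("duality gap:", np.trace(X @ S))
```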
In order to ensure that the scaled Newton system has a unique symmetric solution, Zhang [22] introduced the symmetrization operator

H_P(M) := (P M P^{-1} + (P M P^{-1})^T) / 2,

where P is a given nonsingular matrix. The search direction obtained through the system (29) is called the Monteiro-Zhang unified direction. Different choices of the matrix P result in different search directions (see, e.g., [2,22]).
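Zhang's operator always returns a symmetric matrix, and the choice P = I recovers the plain symmetric part of M. A minimal sketch:

```python
import numpy as np

def H(P, M):
    """Zhang's symmetrization: H_P(M) = (P M P^{-1} + (P M P^{-1})^T) / 2."""
    PMPinv = P @ M @ np.linalg.inv(P)
    return (PMPinv + PMPinv.T) / 2

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # any nonsingular P

sym = H(P, M)
assert np.allclose(sym, sym.T)                     # output is always symmetric
# P = I gives the plain symmetric part of M
assert np.allclose(H(np.eye(3), M), (M + M.T) / 2)
print("symmetrization checks passed")
```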
In this paper, we consider the so-called NT-symmetrization scheme [11,21], which yields the NT search direction. Let

W := X^{1/2} (X^{1/2} S X^{1/2})^{-1/2} X^{1/2}   and   D := W^{1/2}.

The matrix D can be used to rescale X and S to the same matrix V, defined by

V := (1/√µ) D^{-1} X D^{-1} = (1/√µ) D S D.

One can easily verify that V = I if and only if XS = µI. Hence, the value of Ψ(V) can be considered as a measure of the distance between the given iterate (X, y, S) and the µ-center (X(µ), y(µ), S(µ)). The generic form of primal-dual kernel-function-based IPMs for (SDO) is shown in Algorithm 1.
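The NT scaling point W satisfies WSW = X, so with D = W^{1/2} the two scalings D^{-1} X D^{-1} and D S D coincide, giving a single symmetric matrix V. A numerical sketch of this construction (the random X, S and the value of µ are chosen only for illustration):

```python
import numpy as np

def sqrtm_psd(A):
    """Symmetric square root of a symmetric positive definite matrix."""
    w, Q = np.linalg.eigh(A)
    return Q @ np.diag(np.sqrt(w)) @ Q.T

def nt_scaling(X, S, mu):
    """NT scaling: W = X^{1/2}(X^{1/2} S X^{1/2})^{-1/2} X^{1/2}, D = W^{1/2}."""
    Xh = sqrtm_psd(X)
    W = Xh @ np.linalg.inv(sqrtm_psd(Xh @ S @ Xh)) @ Xh
    D = sqrtm_psd(W)
    Dinv = np.linalg.inv(D)
    V = Dinv @ X @ Dinv / np.sqrt(mu)
    return D, V

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3)); X = A @ A.T + np.eye(3)   # X > 0
B = rng.standard_normal((3, 3)); S = B @ B.T + np.eye(3)   # S > 0
mu = 0.5

D, V = nt_scaling(X, S, mu)
# the same V arises from scaling S the other way: V = D S D / sqrt(mu)
assert np.allclose(V, D @ S @ D / np.sqrt(mu), atol=1e-7)
assert np.allclose(V, V.T, atol=1e-7)
print("NT scaling checks passed")
```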

Algorithm 1 Primal-Dual Interior-Point Algorithm for (SDO)
Input: a threshold parameter τ > 0; an accuracy parameter ε > 0; a fixed barrier update parameter θ, 0 < θ < 1; a strictly feasible initial point (X^0, y^0, S^0) and µ^0 > 0 such that Ψ(V^0) ≤ τ.

This implies that the eigenvalues of V_+ are precisely the same as those of the scaled matrix obtained after the step. Now we consider the decrease in Ψ(V) as a function of the step size α and define

f(α) := Ψ(V_+) − Ψ(V).

Using the third equation of the system (37), one obtains an upper bound on f(α). In order to facilitate the discussion, we denote δ := δ(V), and we have the following result [8].
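The control flow of Algorithm 1 — an outer loop of µ-updates, each followed by inner iterations that restore the proximity condition Ψ(V) ≤ τ — can be sketched with a deliberately simplified toy model. The toy tracks only the eigenvalues of V and replaces the NT Newton step by a damped scalar Newton step on each eigenvalue; the kernel is again the logarithmic stand-in, since the parametric kernel (1) is not reproduced in this excerpt.

```python
import numpy as np

# Stand-in kernel and its first two derivatives (hypothetical; see the text).
psi   = lambda t: (t**2 - 1) / 2 - np.log(t)
dpsi  = lambda t: t - 1 / t
ddpsi = lambda t: 1 + 1 / t**2

def Psi(lam):
    """Barrier value as a function of the eigenvalues of V."""
    return float(np.sum(psi(lam)))

theta, tau, eps = 0.5, 1.0, 1e-6      # large-update regime: theta = O(1)
n, mu = 5, 1.0
lam = np.full(n, 1.0)                 # start at the mu-center, where V = I
iters = 0

while n * mu >= eps:                  # outer loop: barrier parameter updates
    mu *= (1 - theta)
    lam = lam / np.sqrt(1 - theta)    # a mu-update rescales V by 1/sqrt(1-theta)
    while Psi(lam) > tau:             # inner loop: restore proximity to the center
        alpha = 0.5                   # fixed damping, a toy stand-in for the
        lam = lam - alpha * dpsi(lam) / ddpsi(lam)  # default step size rule
        iters += 1

print("total inner iterations:", iters)
```

The toy illustrates the accounting used in the complexity analysis: the µ-update pushes Ψ above the threshold, and the bound on the per-step decrease f(α) controls how many inner iterations are needed to return below τ.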
According to the decrease of Ψ(V) in each inner iteration, the following lemma provides an estimate for the number of inner iterations between two successive barrier parameter updates. Lemma