A Continuous-Time Multi-Agent Systems Based Algorithm for Constrained Distributed Optimization
Ping Liu^{1}, Huaqing Li^{1, 2, *}, Liping Feng^{3}
^{1}College of Electronic and Information Engineering, Southwest University, Chongqing, PR China
^{2}State Key Laboratory of Power Transmission Equipment & System Security and New Technology, Chongqing University, Chongqing, PR China
^{3}Department of Computer Science and Technology of Xinzhou Normal University, Xinzhou, Shanxi, PR China
To cite this article:
Ping Liu, Huaqing Li, Liping Feng. A Continuous-Time Multi-Agent Systems Based Algorithm for Constrained Distributed Optimization. Applied and Computational Mathematics. Vol. 5, No. 3, 2016, pp. 114-120. doi: 10.11648/j.acm.20160503.14
Received: May 25, 2016; Accepted: June 7, 2016; Published: June 18, 2016
Abstract: This paper considers a second-order multi-agent system for solving the non-smooth convex optimization problem, where the global objective function is a sum of local convex objective functions within different bound constraints over undirected graphs. A novel distributed continuous-time optimization algorithm is designed, where each agent only has access to its own objective function and bound constraint. All the agents cooperatively minimize the global objective function under some mild conditions. By virtue of the KKT condition and the Lagrange multiplier method, the convergence of the resultant dynamical system is ensured by invoking the Lyapunov stability theory and the hybrid LaSalle invariance principle of differential inclusion. A numerical example is conducted to verify the theoretical results.
Keywords: Distributed Optimization, Multi-Agent Network, Lyapunov Method, Bound Constraint, Continuous-Time
1. Introduction
The distributed optimization of a sum of local convex functions has been widely investigated in a variety of scenarios in recent years. Examples include multi-agent systems, resource allocation in communication networks and localization in sensor networks [1-18], to name just a few. Numerous distributed optimization algorithms are designed in a discrete-time fashion to search for the optimal solutions of the optimization problem [3,5,8], while continuous-time strategies, owing to their relatively complete theoretical framework, have been widely applied to distributed optimization problems [10-12].
Distributed algorithms are characterized by high reliability, scalability and reduced communication requirements, which has attracted many researchers to study distributed optimization algorithms intensively (see e.g. [13-24]). Nedić and Ozdaglar [25] were the first to systematically put forward the distributed optimization problem. A projection-based distributed algorithm was developed in [7], and further investigations with respect to set-constrained optimization were shown in [26-27]. It is worth mentioning that Bianchi and Jakubowicz [26] presented a distributed constrained non-convex optimization algorithm which consists of two steps: a local stochastic gradient descent at each agent and a gossip step that drives the network of agents to a consensus. Different from the above, a distributed optimization problem subject to (in-)equality constraints or set constraints was investigated in [28]. The authors proposed two distributed subgradient algorithms for multi-agent optimization problems, where the goal of the agents is to minimize a sum of local objective functions. Motivated by [28], primal-dual subgradient algorithms were studied by Yuan et al. [29] and Zhu et al. [24] for multi-agent optimization problems with set constraints. Furthermore, in order to solve an unconstrained optimization problem, where the objective function is formed by a sum of convex functions each available to an individual agent, a second-order distributed dynamic was given in [10], while a similar second-order continuous-time distributed algorithm was proposed to solve the convex optimization problem in [11].
Inspired by the works of [7, 10, 11, 19], a novel distributed second-order continuous-time multi-agent system is proposed to solve a distributed convex optimization problem, where the objective function is a sum of local objective functions and each agent only knows its own local information. That is to say, all the agents cooperatively reach the optimal solution of the optimization problem. To tackle the optimization with box constraints, a logarithmic barrier penalty function is used, which differs from previous studies that are mainly based on projection algorithms. In comparison with the existing distributed optimization methods, the proposed method has the following three advantages. Firstly, this paper designs a novel distributed continuous-time algorithm to solve more general distributed convex optimization problems. Secondly, the proposed algorithm can solve the convex optimization problem whose objective is a sum of convex objective functions with local bound constraints; meanwhile, it does not require the objective function to be smooth, which is required in most existing recurrent neural network algorithms. Thirdly, the box constraints are treated with a logarithmic barrier penalty function, which gives the proposed algorithm a faster convergence speed in obtaining approximate solutions. These approximate solutions have sufficient accuracy to satisfy most practical demands compared with projection algorithms.
The remainder of this paper is outlined as follows. Some preliminaries about graph theory, non-smooth analysis, and the stability of differential inclusions are presented in Section 2. Section 3 formulates a convex optimization problem and proposes a distributed continuous-time algorithm. In Section 4, a complete convergence proof is conducted to show that the dynamic system is convergent and stable, and that all agents' estimates converge to the same optimal solution. A numerical example for illustration is given in Section 5. Conclusions are finally drawn in Section 6.
2. Mathematical Preliminaries
In this section, some preliminaries about the graph theory, the non-smooth analysis, and the stability of differential inclusion are introduced.
A. Algebraic Graph Theory
A weighted undirected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E}, A)$ consists of a vertex set $\mathcal{V} = \{1, 2, \ldots, n\}$, an undirected edge set $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ and a weighted adjacency matrix $A = [a_{ij}] \in \mathbb{R}^{n \times n}$. An undirected edge between nodes $i$ and $j$ is denoted by $(i, j) \in \mathcal{E}$, which means that $i$ and $j$ can exchange information with each other [30]. We assume that the communications between agents are bidirectional and the weights are positive, which indicates that the connection weights between nodes $i$ and $j$ in graph $\mathcal{G}$ satisfy $a_{ij} = a_{ji} > 0$ if and only if there exists an edge $(i, j) \in \mathcal{E}$; otherwise, $a_{ij} = a_{ji} = 0$. The degree matrix is defined as $D = \mathrm{diag}\{d_1, \ldots, d_n\}$ with $d_i = \sum_{j=1}^{n} a_{ij}$, so the Laplacian matrix of the undirected graph is defined as $L = D - A$. An undirected graph is connected if there exists a path between any pair of distinct nodes. In particular, for an undirected graph, $L$ is symmetric positive semi-definite. However, for a directed graph, $L$ does not have this property.
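These definitions can be checked numerically. The following sketch assumes a 4-node cycle graph with unit weights (an illustrative topology, not one used in this paper), builds the Laplacian $L = D - A$, and verifies the properties stated above:

```python
import numpy as np

# Adjacency matrix of an undirected 4-node cycle with unit weights
# (nodes 1-2-3-4-1); a_ij = a_ji > 0 iff (i, j) is an edge.
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])

D = np.diag(A.sum(axis=1))   # degree matrix D = diag(d_1, ..., d_n)
L = D - A                    # graph Laplacian L = D - A

eigvals = np.linalg.eigvalsh(L)
assert np.allclose(L, L.T)             # symmetric for an undirected graph
assert np.all(eigvals >= -1e-12)       # positive semi-definite
assert np.allclose(L @ np.ones(4), 0)  # the all-ones vector is in the null space
```

Since the graph is connected, the smallest eigenvalue 0 is simple, which is the key fact used later in the consensus analysis.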
B. Non-smooth Analysis
Let $X$ be a Banach space, $\|\cdot\|$ be the norm on $X$, $X^{*}$ be the conjugate (dual) space of $X$, and $\Omega$ be a subset of $X$.
Definition 2.1: A function $f: \Omega \to \mathbb{R}$ is called Lipschitz near $x \in \Omega$ if there exist $\varepsilon > 0$ and $K > 0$ such that $f$ satisfies the following:
$$|f(y) - f(z)| \le K \|y - z\|, \quad \forall\, y, z \in B(x; \varepsilon) \cap \Omega,$$
where $K$ represents the Lipschitz constant and $B(x; \varepsilon)$ is the open ball of radius $\varepsilon$ centered at $x$. If $f$ is Lipschitz near any point $x \in \Omega$, then $f$ is also said to be locally Lipschitz on $\Omega$.
Definition 2.2: Assume that $f$ is Lipschitz near $x$. The generalized directional derivative of $f$ at $x$ in the direction $v$ is given by:
$$f^{\circ}(x; v) = \limsup_{y \to x,\; t \downarrow 0} \frac{f(y + tv) - f(y)}{t}.$$
Definition 2.3: The generalized gradient of $f$ at $x$ is defined as:
$$\partial f(x) = \left\{\xi \in X^{*} : f^{\circ}(x; v) \ge \langle \xi, v \rangle \ \text{for all}\ v \in X\right\}.$$
Lemma 2.1: If $f$ is Lipschitz near any point $x \in \Omega$, then
$$\partial f(x) = \mathrm{co}\left\{\lim_{k \to \infty} \nabla f(x_k) : x_k \to x,\ x_k \notin \Omega_f \cup S\right\},$$
where $\mathrm{co}\{\cdot\}$ is the closed convex hull, $S$ is any set of measure zero, and $\Omega_f$ is the null-measure set composed of the points at which the gradient of $f$ is undefined.
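As a concrete illustration of Definitions 2.2-2.3 and Lemma 2.1 (a standard textbook example, not taken from this paper), consider the non-smooth convex function $f(x) = |x|$ on $\mathbb{R}$ at the non-differentiable point $x = 0$:

```latex
f^{\circ}(0; v) = \limsup_{y \to 0,\; t \downarrow 0} \frac{|y + tv| - |y|}{t} = |v|,
\qquad
\partial f(0) = \{\xi \in \mathbb{R} : |v| \ge \xi v,\ \forall v \in \mathbb{R}\} = [-1, 1].
```

Lemma 2.1 recovers the same set: near $0$ the gradient of $|x|$ takes only the values $\pm 1$, and the closed convex hull of $\{-1, 1\}$ is $[-1, 1]$.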
C. Stability of Differential Inclusion
Consider an autonomous differential inclusion system:
$\dot{x}(t) \in F(x(t)), \quad x(0) = x_0, \quad t \ge 0$ (1)
where $F$ is an upper semi-continuous set-valued mapping with compact convex values and $0$ is an equilibrium point of (1). That is to say, $0 \in F(0)$.
Definition 2.4: Let $x(\cdot)$ be a solution of (1). If there is a sequence $\{t_k\}$ with $t_k \to \infty$ as $k \to \infty$ satisfying:
$$\lim_{k \to \infty} x(t_k) = p,$$
then $p$ is a limit point of the solution of (1). All the limit points make up the limit set, which is denoted as $\Omega(x_0)$.
Definition 2.5: A set $\mathcal{M}$ is called a weakly invariant set of system (1) if for any point $x_0 \in \mathcal{M}$ there exists a maximal solution of the system starting from $x_0$ that remains in $\mathcal{M}$.
Theorem 2.2: (LaSalle invariance principle of differential inclusion) Assume that $V(x)$ is a positive definite and locally Lipschitz regular function such that, for almost all $x$, it satisfies $\dot{V}(x) \le 0$ along the solutions of (1). If there exists a constant $c > 0$ such that the level set $S_c = \{x : V(x) \le c\}$ is bounded, then for any solution passing through $S_c$, we have that the solution converges to $\mathcal{M}$, where $\mathcal{M}$ is the largest weakly invariant subset of $\overline{\mathcal{E}}$ with $\mathcal{E} = \{x \in S_c : \dot{V}(x) = 0\}$. Here, $\overline{\mathcal{E}}$ is the closure of $\mathcal{E}$.
Remark 2.1: If all the conditions of Theorem 2.2 hold and the largest weakly invariant set $\mathcal{M}$ contains only the origin, i.e., $\mathcal{M} = \{0\}$, then the trivial solution of the autonomous differential inclusion system (1) is asymptotically stable.
3. Problem Formulation and Optimization Algorithm
A. Problem Formulation
Consider a network of $n$ agents that interact with each other over a connected graph $\mathcal{G}$. Each agent $i$ has a local objective function $f_i$ and a local bound constraint $\Omega_i$ for all $i = 1, \ldots, n$. The multi-agent group cooperatively solves the following distributed optimization problem:
$\min_{x \in \mathbb{R}^m} \sum_{i=1}^{n} f_i(x), \quad \text{s.t.}\ x \in \Omega = \bigcap_{i=1}^{n} \Omega_i$ (2)
where $\Omega_i \subseteq \mathbb{R}^m$ is a closed convex set and has nonempty interior points. Each local objective function $f_i: \mathbb{R}^m \to \mathbb{R}$ is convex and not necessarily smooth. Here, $x \in \mathbb{R}^m$ is a column vector.
We now give some standing assumptions, which will be used throughout this paper:
Assumption 3.1. The optimization problem (2) has at least one finite optimal solution $x^{*}$.
Assumption 3.2. At least one of the local objective functions has a positive definite Hessian matrix.
Assumption 3.3. The weighted graph $\mathcal{G}$ is undirected and connected.
Under Assumption 3.3, we have that 0 is a simple eigenvalue of the Laplacian $L$ and $\mathbf{1}_n$ is an eigenvector of $L$ corresponding to the simple zero eigenvalue. Moreover, $L \mathbf{1}_n = 0$ and $\mathbf{1}_n^{\top} L = 0$.
B. Optimization algorithm
Denote $x_i \in \mathbb{R}^m$ as the estimate of agent $i$, $X = [x_1, x_2, \ldots, x_n] \in \mathbb{R}^{m \times n}$ as the matrix with column vectors $x_i$, and $\mathbf{x} = [x_1^{\top}, x_2^{\top}, \ldots, x_n^{\top}]^{\top} \in \mathbb{R}^{mn}$. The objective of optimization problem (2) is to achieve the global minimizer $x^{*}$.
Next, we will provide an equivalent optimization problem of (2).
Lemma 3.1: Let $\bar{L} = L \otimes I_m$, where $L$ is the symmetric and positive semi-definite Laplacian matrix of the connected graph $\mathcal{G}$, $\otimes$ is the Kronecker product and $I_m$ denotes the identity matrix in $\mathbb{R}^{m \times m}$. Then, an equivalent problem of (2) is described as:
$\min_{\mathbf{x}} \sum_{i=1}^{n} f_i(x_i), \quad \text{s.t.}\ \bar{L}\mathbf{x} = 0,\ \mathbf{x} \in \Omega_1 \times \Omega_2 \times \cdots \times \Omega_n$ (3)
where $\times$ denotes the Cartesian product.
Proof: According to Assumption 3.3, letting $\mathbf{x} = \mathbf{1}_n \otimes \theta$ for some $\theta \in \mathbb{R}^m$, and using the property of the Kronecker product $(A \otimes B)(C \otimes D) = (AC) \otimes (BD)$ for matrices with appropriate dimensions, it is clear that:
$\bar{L}\mathbf{x} = (L \otimes I_m)(\mathbf{1}_n \otimes \theta) = (L \mathbf{1}_n) \otimes \theta = 0.$
Conversely, if $\bar{L}\mathbf{x} = 0$, then $\mathbf{x}^{\top} \bar{L} \mathbf{x} = 0$. According to Assumption 3.3, the null space of $L$ is $\{\alpha \mathbf{1}_n\}$, where $\alpha$ is any real number and $\mathbf{1}_n$ is the all-ones column vector. Then each column vector of $X$ satisfies $x_i = \theta$ for a common $\theta \in \mathbb{R}^m$. Based on the above analysis, we have that $\bar{L}\mathbf{x} = 0$ if and only if $\mathbf{x} = \mathbf{1}_n \otimes \theta$ for some $\theta$. Since $\mathbf{x} \in \Omega_1 \times \cdots \times \Omega_n$, we have $\theta \in \Omega_i$ for all $i$, which implies $\theta \in \Omega$. Therefore, the problem (2) is equivalent to the problem (3).
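The consensus characterization in this proof ($\bar{L}\mathbf{x} = 0$ if and only if all agents hold the same estimate) can be checked numerically. The following sketch assumes a connected 3-node path graph and $m = 2$, neither of which comes from this paper:

```python
import numpy as np

# Laplacian of a connected 3-node path graph (1-2-3), unit weights
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
m = 2
Lbar = np.kron(L, np.eye(m))              # \bar{L} = L (Kronecker) I_m

theta = np.array([0.7, -1.3])             # a common value theta in R^m
x_consensus = np.kron(np.ones(3), theta)  # x = 1_n (Kronecker) theta
x_disagree = np.arange(6.0)               # agents hold different estimates

assert np.allclose(Lbar @ x_consensus, 0)       # consensus => Lbar x = 0
assert np.linalg.norm(Lbar @ x_disagree) > 1.0  # disagreement => Lbar x != 0
```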
Remark 3.1: In this paper, we mainly consider the case that $\Omega_i$ is the box constraint $\Omega_i = \{x \in \mathbb{R}^m : \underline{\omega}_i \le x \le \bar{\omega}_i\}$, with $\underline{\omega}_i, \bar{\omega}_i \in \mathbb{R}^m$ and $\underline{\omega}_i < \bar{\omega}_i$ componentwise for $i = 1, \ldots, n$. Denote $X = [x_1, \ldots, x_n]$. Then the problem (3) can be rewritten as:
$\min_{\mathbf{x}} \sum_{i=1}^{n} f_i(x_i), \quad \text{s.t.}\ \bar{L}\mathbf{x} = 0,\ \underline{\omega}_{ki} \le X_{ki} \le \bar{\omega}_{ki},\ k = 1, \ldots, m,\ i = 1, \ldots, n$ (4)
where $X_{ki}$ stands for the matrix entry in the $k$-th row and $i$-th column of $X$.
Then, combining with the optimization problem (3), a new optimization problem based on the method of the augmented Lagrangian is proposed as follows:
$\min_{\mathbf{x}} \sum_{i=1}^{n} f_i(x_i) + \frac{1}{2}\mathbf{x}^{\top}\bar{L}\mathbf{x}, \quad \text{s.t.}\ \bar{L}\mathbf{x} = 0\ \text{and the box constraints in (4)}$ (5)
where $\bar{L} = L \otimes I_m$, $L$ is the Laplacian of graph $\mathcal{G}$, and the quadratic penalty term $\frac{1}{2}\mathbf{x}^{\top}\bar{L}\mathbf{x}$ in the objective function plays a damping role.
Building on Theorems 3.25 and 3.27 in [25], the following results are obtained.
Lemma 3.2: Assume that Assumptions 3.1 and 3.2 hold. Denote $\lambda$ as the multiplier for the equality constraint, and $\eta, \zeta$ as the Lagrangian multipliers for the inequality constraints. Then, $\mathbf{x}^{*}$ is the optimal solution of optimization problem (4) if and only if the following KKT condition is established:
(6)
Lemma 3.3: Assume that Assumptions 3.1-3.3 hold. If optimization problem (4) has the optimal solution $\mathbf{x}^{*}$, then it is also the optimal solution of (5).
Let $\lambda$, $\eta$ and $\zeta$ be defined as in Lemma 3.2. Similar to Lemma 3.2, $\mathbf{x}^{*}$ is the optimal solution of optimization problem (5) if and only if the following KKT condition is established:
(7)
Remark 3.2: Supposing that Assumptions 3.1 and 3.3 hold, the Slater condition is established for problem (5), implying that there exist Lagrangian multipliers satisfying the KKT condition (7).
To deal with the inequality constraints of problem (5), by means of the logarithmic-barrier method, it follows that:
(8)
where $\mu$ is a sufficiently small positive real number.
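To see how a logarithmic barrier enforces a box constraint, the following sketch uses a hypothetical scalar objective $f(x) = (x - 2)^2$ on the box $[0, 1]$ (both illustrative choices, not from this paper) and shows that the minimizer of the penalized function approaches the constrained minimizer $x^{*} = 1$ as the barrier parameter $\mu$ shrinks:

```python
import numpy as np

def f(x):
    """Hypothetical scalar objective (not from the paper)."""
    return (x - 2.0) ** 2

def barrier(x, a, b, mu):
    """Logarithmic barrier for the box constraint a <= x <= b."""
    return -mu * (np.log(x - a) + np.log(b - x))

a, b = 0.0, 1.0  # box; the unconstrained minimum x = 2 lies outside it
xs = np.linspace(a + 1e-6, b - 1e-6, 20001)

minimizers = []
for mu in [1.0, 0.1, 0.01, 0.001]:
    vals = f(xs) + barrier(xs, a, b, mu)
    minimizers.append(xs[np.argmin(vals)])

# As mu decreases, the penalized minimizer moves toward the
# constrained minimizer x* = 1 (the nearest box boundary).
assert minimizers[0] < minimizers[-1]
assert abs(minimizers[-1] - 1.0) < 0.05
```

This is why the resulting solutions are approximate: for any fixed $\mu > 0$ the minimizer stays strictly inside the box, with the gap vanishing as $\mu \to 0$.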
Let $\lambda$ be the Lagrangian multiplier of the equality constraint in (8); then the Lagrange function of (8) is:
(9)
The corresponding Lagrange dual function is:
(10)
Remark 3.3: Since the Slater condition deduced from Assumption 3.1 holds and there exists an interior point of the constraint set satisfying the inequality constraints strictly, the optimal solution of the Lagrange dual problem (10) coincides with that of the optimization problem (8).
To solve the original optimization problem (2), the dynamics of the multi-agent network is designed as:
(11)
Stacking the states of all agents into a single vector, (11) can be written in a compact form:
(12)
4. The Convergence Analysis
In this section, a complete convergence proof of the dynamic system (11) [or (12)] is provided in the following results.
Lemma 4.1: Under Assumptions 3.1-3.3, the trajectories of the dynamic system (11) [or (12)] with any finite initial points are bounded.
Proof: In order to prove the stability of the dynamic system (11) [or (12)], we construct a Lyapunov function as follows:
(13)
Obviously, $V \ge 0$. In view of the chain rule, the time derivative of $V$ along the trajectories of the dynamic system (11) [or (12)] is:
Since the Lagrange function (9) is convex in the primal variable and concave in the multiplier, combining this with the properties of convex functions, we have:
Based on Remark 3.3 and the saddle point theorem, it follows that
Hence, the Lyapunov function (13) is monotonically non-increasing and has a lower bound, i.e., the trajectories are bounded. Since $V$ along any trajectory never exceeds its initial value, there exists a positive invariant compact set such that the solutions of (11) [or (12)] satisfy:
(14)
Lemma 4.2: The trivial solution of the dynamic system (11) [or (12)] is asymptotically stable.
Proof: Define a function:
Then the time derivative of this function is:
(15)
where .
Under Assumption 3.2, at least one of the local objective functions has a positive definite Hessian matrix. Then the corresponding term is negative definite, which implies that the derivative is non-positive. Since the Laplacian-related matrix is positive semi-definite, its contribution is also non-positive. In addition, due to the convexity of the remaining terms, the last step of (15) holds.
Next, we shall prove convergence to the equilibrium. By the definition of the Lyapunov function (13), $V$ is monotonically non-increasing along the trajectories and bounded below, so $\lim_{t \to \infty} V(t)$ exists. According to the continuity of $V$, every limit point of a bounded solution attains this limit value, so $V$ is constant on the limit set. Thus, it gives
that the derivative computed in (15) vanishes for almost all $t$ along any solution contained in the limit set. It can be concluded that the corresponding states satisfy the KKT condition (7). Noting that the limit set is a weakly invariant set, the trajectories converge to the largest weakly invariant subset of this set by Theorem 2.2. From the above, according to Remark 2.1, we have that the trivial solution of the system (11) [or (12)] is asymptotically stable and it is also the optimal solution of (2).
5. Simulation
In this section, a simulation example is presented to verify the theoretical analysis of the proposed second-order algorithm (11) [or (12)].
Example: Consider optimization problem (2) with twelve agents, where the local objective functions $f_i$ are non-smooth.
For the above optimization problem, we first assume that the network topology is a cyclic connected network, as shown in Fig. 1(a). The connection weight between agents $i$ and $j$ is set to 1 if there exists an edge between them, and to 0 otherwise. The trajectories of the twelve agents are shown in Fig. 1(b). It can be seen that all the agents converge to the same optimal solution (approximate solution). Next, suppose that the network topology is fully connected, as shown in Fig. 2(a); the simulation results are shown in Fig. 2(b). It is clear that the tighter the network connectivity, the faster the convergence rate.
Fig. 1. (a) The cyclic connected network topology; (b) trajectories of the twelve agents.
Fig. 2. (a) The fully connected network topology; (b) trajectories of the twelve agents.
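Since the explicit form of dynamics (11) is not reproduced here, the following is only a simplified first-order sketch in the spirit of the continuous-time saddle-point flows of [10, 11]: twelve agents on a cycle graph, hypothetical non-smooth objectives $f_i(x) = |x - i|$, a common box $[0, 10]$ handled by a logarithmic barrier as in (8), and a forward-Euler discretization. All of these modeling choices are assumptions for illustration, not the paper's actual second-order algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12
c = np.arange(1, n + 1, dtype=float)  # hypothetical targets: f_i(x) = |x - i|
a, b, mu = 0.0, 10.0, 1e-2            # common box [0, 10] and barrier parameter

# Laplacian of the 12-node cycle graph with unit weights
Adj = np.zeros((n, n))
for i in range(n):
    Adj[i, (i + 1) % n] = Adj[(i + 1) % n, i] = 1.0
L = np.diag(Adj.sum(axis=1)) - Adj

x = rng.uniform(2.0, 8.0, n)  # agents' initial estimates
v = np.zeros(n)               # auxiliary (integral) states enforcing agreement
h = 0.01                      # forward-Euler step size
for _ in range(20000):
    grad = np.sign(x - c)                 # subgradient of |x - c_i|
    grad += -mu / (x - a) + mu / (b - x)  # gradient of the log barrier
    x = x + h * (-grad - L @ x - L @ v)   # primal flow with consensus coupling
    v = v + h * (L @ x)                   # dual flow integrating disagreement

# All agents should (approximately) agree on a minimizer of sum_i |x - i|,
# which lies in the median interval [6, 7].
assert np.ptp(x) < 0.5
assert 5.0 < x.mean() < 7.0
```

Even in this simplified form, the agents reach approximate consensus near a minimizer of $\sum_i |x - i|$, and a denser topology strengthens the consensus coupling, consistent with the faster convergence observed when moving from Fig. 1 to Fig. 2.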
6. Conclusions
In this paper, a novel distributed continuous-time algorithm based on the KKT condition and the Lagrange multiplier method has been proposed for a distributed convex optimization problem. It aims to minimize a sum of non-smooth local objective functions with local bound constraints over an undirected graph. Furthermore, the convergence analysis of the dynamical system is accomplished by using the Lyapunov stability theory and the hybrid LaSalle invariance principle of differential inclusion. The numerical simulation shows the performance of the proposed algorithm. In the future, our work may turn to optimization problems over directed topologies and with equality constraints, as well as the analysis of the convergence speed.
Acknowledgements
This work described in this paper was supported in part by the Visiting Scholarship of State Key Laboratory of Power Transmission Equipment & System Security and New Technology (Chongqing University) under Grant 2007DA10512716421, in part by the Fundamental Research Funds for the Central Universities under Grant XDJK2016B016, in part by the Natural Science Foundation Project of Chongqing CSTC under Grant cstc2014jcyjA40016, in part by the China Postdoctoral Science Foundation under Grant 2016M590852, and in part by the Natural Science Foundation of China under Grant 61403314.
References