Applied and Computational Mathematics
Volume 5, Issue 3, June 2016, Pages: 114-120

A Continuous-Time Multi-Agent Systems Based Algorithm for Constrained Distributed Optimization

Ping Liu1, Huaqing Li1, 2, *, Liping Feng3

1College of Electronic and Information Engineering, Southwest University, Chongqing, PR China

2State Key Laboratory of Power Transmission Equipment & System Security and New Technology, Chongqing University, Chongqing, PR China

3Department of Computer Science and Technology of Xinzhou Normal University, Xinzhou, Shanxi, PR China

Email address:

(Huaqing Li)

*Corresponding author

To cite this article:

Ping Liu, Huaqing Li, Liping Feng. A Continuous-Time Multi-Agent Systems Based Algorithm for Constrained Distributed Optimization. Applied and Computational Mathematics. Vol. 5, No. 3, 2016, pp. 114-120. doi: 10.11648/j.acm.20160503.14

Received: May 25, 2016; Accepted: June 7, 2016; Published: June 18, 2016


Abstract: This paper considers a second-order multi-agent system for solving a non-smooth convex optimization problem, where the global objective function is a sum of local convex objective functions subject to different bound constraints over undirected graphs. A novel distributed continuous-time optimization algorithm is designed, in which each agent only has access to its own objective function and bound constraint. All the agents cooperatively minimize the global objective function under some mild conditions. By virtue of the KKT condition and the Lagrange multiplier method, the convergence of the resultant dynamical system is ensured by the Lyapunov stability theory and the hybrid LaSalle invariance principle of differential inclusions. A numerical example is conducted to verify the theoretical results.

Keywords: Distributed Optimization, Multi-Agent Network, Lyapunov Method, Bound Constraint, Continuous-Time


1. Introduction

The distributed optimization of a sum of local convex functions has been widely investigated in a variety of scenarios in recent years. Examples include multi-agent systems, resource allocation in communication networks and localization in sensor networks [1-18], to name just a few. Numerous distributed optimization algorithms are designed in a discrete-time fashion to search for the optimal solutions of the optimization problem [3,5,8], while continuous-time strategies, owing to their relatively complete theoretical framework, have been widely applied to distributed optimization problems [10-12].

Distributed algorithms are characterized by high reliability, scalability and low communication requirements, which have attracted many researchers to intensively study distributed optimization algorithms (see, e.g., [13-24]). Nedić and Ozdaglar [25] were the first to systematically put forward the distributed optimization problem. A projection-based distributed algorithm was developed in [7], and further investigations with respect to set-constrained optimization were shown in [26-27]. It is worth mentioning that Bianchi and Jakubowicz [26] presented a distributed constrained non-convex optimization algorithm which consists of two steps: a local stochastic gradient descent at each agent and a gossip step that drives the network of agents to a consensus. Different from the above, a distributed optimization problem subject to (in)equality constraints or set constraints was investigated in [28]. The authors proposed two distributed subgradient algorithms for multi-agent optimization problems, where the goal of the agents is to minimize a sum of local objective functions. Motivated by [28], the primal-dual subgradient algorithm was studied in Yuan et al. [29] and Zhu et al. [24] for multi-agent optimization problems with set constraints. Furthermore, in order to solve an unconstrained optimization problem, where the objective function is formed by a sum of convex functions available to individual agents, a second-order distributed dynamics was given in [10], while a similar second-order continuous-time distributed algorithm was proposed to solve the convex optimization problem in [11].

Inspired by the works of [7, 10, 11, 19], a novel distributed second-order continuous-time multi-agent system is proposed to solve a distributed convex optimization problem, where the objective function is a sum of local objective functions and each agent only knows its own local information. That is to say, all the agents cooperatively reach the optimal solution of the optimization problem. To tackle the optimization with box constraints, a logarithmic barrier penalty function is used, which differs from previous studies that are mainly based on projection algorithms. In comparison with the existing distributed optimization methods, the method proposed in this paper has the following three advantages. Firstly, this paper designs a novel distributed continuous-time algorithm to solve more general distributed convex optimization problems. Secondly, the proposed algorithm can solve a convex optimization problem whose objective is a sum of convex functions with local bound constraints. Meanwhile, it does not require the objective function to be smooth, which is required in most existing recurrent neural network algorithms. Thirdly, the box constraints are treated with a logarithmic barrier penalty function, which gives the proposed algorithm a faster convergence speed in obtaining approximate solutions. Compared with projection algorithms, these solutions have sufficient accuracy to satisfy most practical demands.

The remainder of this paper is outlined as follows. Some preliminaries about graph theory, non-smooth analysis, and the stability of differential inclusions are presented in Section 2. Section 3 formulates a convex optimization problem and proposes a distributed continuous-time algorithm. In Section 4, a complete convergence proof is conducted to show that the dynamic system is convergent and stable, and that the agents' estimates converge to the same optimal solution. A numerical example for illustration is given in Section 5. Conclusions are finally drawn in Section 6.

2. Mathematical Preliminaries

In this section, some preliminaries about graph theory, non-smooth analysis, and the stability of differential inclusions are introduced.

A. Algebraic Graph Theory

A weighted undirected graph $\mathcal{G}=(\mathcal{V},\mathcal{E},A)$ consists of a vertex set $\mathcal{V}=\{1,2,\dots,n\}$, an undirected edge set $\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}$ and a weighted adjacency matrix $A=[a_{ij}]\in\mathbb{R}^{n\times n}$. Here, $a_{ij}>0$ if and only if $(i,j)\in\mathcal{E}$; $a_{ij}=0$ otherwise. An undirected edge between agents $i$ and $j$ is denoted by $(i,j)$, which means that $i$ and $j$ can exchange information with each other [30]. We assume that the communications between agents are bidirectional and the weights are positive, so the connection weights in graph $\mathcal{G}$ satisfy $a_{ij}=a_{ji}>0$ if and only if there exists an edge $(i,j)\in\mathcal{E}$; otherwise, $a_{ij}=a_{ji}=0$. The degree matrix is defined as $D=\mathrm{diag}\{d_1,\dots,d_n\}$ with $d_i=\sum_{j=1}^{n}a_{ij}$, so the Laplacian matrix of the undirected graph $\mathcal{G}$ is defined as $L=D-A$. An undirected graph is connected if there exists a path between any pair of distinct nodes. In particular, the Laplacian $L$ of an undirected graph is symmetric positive semi-definite; for a directed graph, $L$ does not have this property.
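As a quick illustration of these definitions, the short Python sketch below builds the adjacency, degree and Laplacian matrices of a small undirected graph (the edge list is arbitrary, chosen only for the example) and numerically confirms the properties used later: $L$ is symmetric, positive semi-definite, and satisfies $L\mathbf{1}_n=\mathbf{0}$.

    import numpy as np

    # An arbitrary undirected graph on 4 nodes (edge list chosen only for illustration)
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    n = 4
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0                 # bidirectional links with positive weights

    D = np.diag(A.sum(axis=1))                  # degree matrix
    L = D - A                                   # graph Laplacian L = D - A

    eigvals = np.linalg.eigvalsh(L)             # valid because L is symmetric
    print(np.allclose(L, L.T))                  # True: L is symmetric
    print(np.all(eigvals >= -1e-12))            # True: L is positive semi-definite
    print(np.allclose(L @ np.ones(n), 0.0))     # True: the all-ones vector is in ker(L)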

B. Non-smooth Analysis

Let $X$ be a Banach space, $\|\cdot\|$ be the Euclidean norm on $X$, $X^{*}$ be the conjugate (dual) space of $X$, and $\Omega$ be a subset of $X$.

Definition 2.1: A function $f:X\to\mathbb{R}$ is called Lipschitz near $x\in\Omega$ if there exist $\varepsilon>0$ and $K>0$ such that $f$ satisfies

$|f(y)-f(z)|\le K\|y-z\|$ for all $y,z\in B(x,\varepsilon)\cap\Omega$,

where $K$ represents the Lipschitz constant. If $f$ is Lipschitz near any point $x\in\Omega$, then $f$ is also said to be locally Lipschitz on $\Omega$.

Definition 2.2: Assume that $f$ is Lipschitz near $x$. The generalized directional derivative of $f$ at $x$ in the direction $v\in X$ is given by:

$f^{\circ}(x;v)=\limsup_{y\to x,\ t\downarrow 0}\dfrac{f(y+tv)-f(y)}{t}.$

Definition 2.3: The generalized gradient of $f$ at $x$ is defined as:

$\partial f(x)=\{\xi\in X^{*}: f^{\circ}(x;v)\ge\langle\xi,v\rangle \ \text{for all}\ v\in X\}.$

Lemma 2.1: If $f$ is Lipschitz near any point $x\in\Omega$, then

$\partial f(x)=\mathrm{co}\Bigl\{\lim_{i\to\infty}\nabla f(x_i): x_i\to x,\ x_i\notin N\Bigr\},$

where $\mathrm{co}$ is the closed convex hull, and $N$ is a null-measure set composed of the points at which the gradient of $f$ is not defined.
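For intuition, the standard one-dimensional example $f(x)=|x|$ (a textbook illustration, not taken from this paper) shows how Lemma 2.1 recovers the generalized gradient at the non-smooth point $x=0$:

    % Clarke generalized gradient of f(x) = |x| (standard worked example)
    \[
      \partial f(x) =
      \begin{cases}
        \{1\},    & x > 0,\\[2pt]
        [-1,\,1], & x = 0,\\[2pt]
        \{-1\},   & x < 0,
      \end{cases}
      \qquad
      \partial f(0)
        = \mathrm{co}\Bigl\{\lim_{x_i \to 0}\nabla f(x_i)\Bigr\}
        = \mathrm{co}\{-1,\,1\}
        = [-1,\,1].
    \]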

C. Stability of Differential Inclusion

For an autonomous differential inclusion system:

$\dot{x}(t)\in F(x(t))$    (1)

where $F$ is an upper semi-continuous set-valued mapping with compact convex values and $0$ is an equilibrium point of (1). That is to say, $0\in F(0)$.

Definition 2.4: Let $x(\cdot)$ be a solution of (1). If there is a sequence $\{t_k\}$ with $t_k\to\infty$ satisfying $\lim_{k\to\infty}x(t_k)=p$, then $p$ is a limit point of the solution $x(\cdot)$ of (1). All the limit points make up the limit set, which is denoted as $\Omega(x)$.

Definition 2.5: A set $M$ is called a weakly invariant set of system (1) if, for any point $x_0$ in $M$, there exists a maximal solution of the system starting from $x_0$ that remains in $M$.

Theorem 2.2 (LaSalle invariance principle for differential inclusions): Assume that $V$ is a positive definite, locally Lipschitz and regular function satisfying $\max_{v\in F(x)}\langle\xi,v\rangle\le 0$ for all $\xi\in\partial V(x)$ and almost all $x$.

If there exists a constant $c>0$ such that the level set $\{x:V(x)\le c\}$ is bounded, then for any solution $x(t)$ passing through this set, we have that $x(t)$ converges to $M$ as $t\to\infty$, where $M$ is the largest weakly invariant subset of $\bar{S}$. Here, $\bar{S}$ is the closure of the set $S=\{x:0\in\dot{V}(x)\}$.

Remark 2.1: If all the conditions of Theorem 2.2 hold and $M=\{0\}$, then the trivial solution of the autonomous differential inclusion system (1) is asymptotically stable.

3. Problem Formulation and Optimization Algorithm

A. Problem Formulation

Consider a network of $n$ agents that interact with each other over a connected graph $\mathcal{G}$. Each agent $i$ has a local objective function $f_i(x)$ and a local bound constraint $\Omega_i$ for all $i=1,\dots,n$. The multi-agent group cooperatively solves the following distributed optimization problem:

$\min_{x\in\mathbb{R}^m}\ \sum_{i=1}^{n}f_i(x),\qquad \text{s.t.}\ x\in\Omega=\bigcap_{i=1}^{n}\Omega_i$    (2)

where $\Omega$ is a closed convex set and has nonempty interior points. The local objective function $f_i:\mathbb{R}^m\to\mathbb{R}$ is convex and not necessarily smooth. Here, $x\in\mathbb{R}^m$ is a column vector.

We now give some assumptions, which will be used throughout this paper:

Assumption 3.1. The optimization problem (2) has at least one finite optimal solution $x^{*}$.

Assumption 3.2. At least one of the local objective functions $f_i$ has a positive definite Hessian matrix.

Assumption 3.3. The weighted graph $\mathcal{G}$ is undirected and connected.

Under Assumption 3.3, 0 is a simple eigenvalue of the Laplacian $L$ and $\mathbf{1}_n$ is an eigenvector of $L$ corresponding to this simple zero eigenvalue. Moreover, $L\mathbf{1}_n=\mathbf{0}$ and $\mathbf{1}_n^{\mathrm T}L=\mathbf{0}^{\mathrm T}$.
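A quick numerical illustration of these spectral facts (the two small graphs below are arbitrary examples, not from the paper): for a connected undirected graph, zero is a simple eigenvalue of $L$ with eigenvector $\mathbf{1}_n$, whereas for a disconnected graph the zero eigenvalue is no longer simple.

    import numpy as np

    def laplacian(A):
        return np.diag(A.sum(axis=1)) - A

    A_path = np.diag(np.ones(3), 1)                 # path graph on 4 nodes (connected)
    A_path = A_path + A_path.T
    A_split = np.zeros((4, 4))                      # two disjoint edges (disconnected)
    A_split[0, 1] = A_split[1, 0] = A_split[2, 3] = A_split[3, 2] = 1.0

    for name, A in (("connected", A_path), ("disconnected", A_split)):
        L = laplacian(A)
        ev = np.linalg.eigvalsh(L)
        print(name, "zero-eigenvalue multiplicity:", int(np.sum(np.isclose(ev, 0.0))),
              "| L @ 1 = 0:", np.allclose(L @ np.ones(4), 0.0))
    # connected -> multiplicity 1; disconnected -> multiplicity 2; L @ 1 = 0 in both cases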

B. Optimization Algorithm

Denote $x_i\in\mathbb{R}^m$ as the estimate of agent $i$, $X=[x_1,\dots,x_n]\in\mathbb{R}^{m\times n}$ as the matrix with column vectors $x_i$, and $\mathbf{x}=[x_1^{\mathrm T},\dots,x_n^{\mathrm T}]^{\mathrm T}\in\mathbb{R}^{nm}$. The objective of optimization problem (2) is for all agents to reach the global minimizer $x^{*}$.

Next, we will provide an equivalent optimization problem of (2).

Lemma 3.1: Let $\mathbf{L}=L\otimes I_m$, where $L$ is the symmetric and positive semi-definite Laplacian matrix of the connected graph $\mathcal{G}$, $\otimes$ is the Kronecker product and $I_m$ denotes the identity matrix in $\mathbb{R}^{m\times m}$. Then, the equivalent problem of (2) is described as:

$\min_{\mathbf{x}}\ \sum_{i=1}^{n}f_i(x_i),\qquad \text{s.t.}\ \mathbf{L}\mathbf{x}=\mathbf{0},\ \mathbf{x}\in\bar{\Omega}=\Omega_1\times\cdots\times\Omega_n$    (3)

where $\times$ is the Cartesian product.

Proof: According to Assumption 3.3, $L$ is symmetric positive semi-definite with $L\mathbf{1}_n=\mathbf{0}$.

In virtue of the property of the Kronecker product $(A\otimes B)(C\otimes D)=(AC)\otimes(BD)$ for matrices with appropriate dimensions, it is clear that $\mathbf{x}^{\mathrm T}\mathbf{L}\mathbf{x}=\mathbf{x}^{\mathrm T}(L\otimes I_m)\mathbf{x}\ge 0$.

If $\mathbf{L}\mathbf{x}=\mathbf{0}$, then $\mathbf{x}^{\mathrm T}\mathbf{L}\mathbf{x}=0$. According to Assumption 3.3, the null space of $L$ is spanned by $\mathbf{1}_n$; that is, $Ly=\mathbf{0}$ implies $y=\alpha\mathbf{1}_n$, where $\alpha$ is any real number and $\mathbf{1}_n$ is the column vector of all ones. Then each $m$-dimensional block of $\mathbf{x}$ must be identical, i.e., $x_1=x_2=\cdots=x_n$.

Based on the above analysis, we have that $\mathbf{L}\mathbf{x}=\mathbf{0}$ if and only if $\mathbf{x}=\mathbf{1}_n\otimes x$ for some $x\in\mathbb{R}^m$. Since $\mathbf{x}\in\bar{\Omega}$, we have $x\in\Omega_i$ for all $i$, which implies $x\in\Omega$ and $\sum_{i=1}^{n}f_i(x_i)=\sum_{i=1}^{n}f_i(x)$. Therefore, problem (2) is equivalent to problem (3).
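A quick numerical check (illustrative only; the graph and dimensions are arbitrary choices) of the consensus-subspace fact used in this proof: for a connected graph, $(L\otimes I_m)\mathbf{x}=\mathbf{0}$ exactly when all the $m$-dimensional blocks $x_1,\dots,x_n$ of $\mathbf{x}$ coincide.

    import numpy as np

    n, m = 4, 2
    A = np.ones((n, n)) - np.eye(n)                 # complete graph (connected)
    L = np.diag(A.sum(axis=1)) - A
    Lbar = np.kron(L, np.eye(m))                    # L kron I_m

    x_consensus = np.kron(np.ones(n), np.array([1.5, -0.7]))   # identical blocks x_i
    x_random = np.random.randn(n * m)                           # generically distinct blocks
    print(np.allclose(Lbar @ x_consensus, 0.0))     # True
    print(np.allclose(Lbar @ x_random, 0.0))        # False (almost surely)

    # The null space of L kron I_m has dimension m, matching the simple zero eigenvalue of L
    print(n * m - np.linalg.matrix_rank(Lbar) == m) # True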

Remark 3.1: In this paper, we mainly consider the case that  is the box constraint . Let , , and  for . Denote  and . Then the problem (3) can be rewritten as:

(4)

where  stands for the matrix entry in the k-th row and i-th column of .

Then, combining with the optimization problem (3), a new optimization problem based on the augmented Lagrangian method is proposed as follows:

(5)

where ,  is the Laplacian of graph , and the quadratic penalty term in the objective function plays a damping role.

Building on Theorems 3.25 and 3.27 in [25], the following results are obtained.

Lemma 3.2: Assume that Assumptions 3.1 and 3.2 hold. Denote $\lambda^{*}$ as the multiplier for the equality constraint, and $\mu^{*}$, $\nu^{*}$ as the Lagrangian multipliers for the inequality constraints. Then, $\mathbf{x}^{*}$ is the optimal solution of optimization problem (4) if and only if the following KKT condition is established:

(6)

Lemma 3.3: Assume that Assumptions 3.1-3.3 hold. If optimization problem (4) has the optimal solution $\mathbf{x}^{*}$, then it is also the optimal solution of (5).

Let $\lambda^{*}$, $\mu^{*}$ and $\nu^{*}$ be defined as in Lemma 3.2. Similar to Lemma 3.2, $\mathbf{x}^{*}$ is the optimal solution of optimization problem (5) if and only if the following KKT condition is established:

(7)

Remark 3.2: Supposing that Assumptions 3.1 and 3.3 hold, the Slater condition is satisfied for problem (5), implying that there exist Lagrangian multipliers satisfying the KKT condition (7).

To deal with the inequality constraints of problem (5), the logarithmic-barrier method is applied, which yields:

(8)

where $\varepsilon>0$ is a sufficiently small positive real number.
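As a small illustration of the logarithmic-barrier idea (the scalar objective $f(x)=|x-2|$ and the box $[0,1]$ below are hypothetical choices, not taken from the paper), the sketch shows that the minimizer of the barrier-penalized problem approaches the constrained minimizer as the barrier parameter decreases:

    import numpy as np
    from scipy.optimize import minimize_scalar

    l, u = 0.0, 1.0                           # box constraint l <= x <= u
    f = lambda x: abs(x - 2.0)                # constrained minimizer is x = u = 1

    for eps in (1e-1, 1e-2, 1e-3):
        # barrier-penalized objective: f(x) - eps*(log(x - l) + log(u - x))
        penalized = lambda x, e=eps: f(x) - e * (np.log(x - l) + np.log(u - x))
        res = minimize_scalar(penalized, bounds=(l + 1e-9, u - 1e-9), method="bounded")
        print(f"eps = {eps:.0e}  ->  minimizer ~ {res.x:.4f}")   # tends to 1 from inside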

Let $\lambda$ be the Lagrange multiplier of the equality constraint in (8); then the Lagrange function of (8) is:

(9)

The corresponding Lagrange dual function is:

(10)

Remark 3.3: Since the Slater condition deduced from Assumption 3.1 holds and there exists an interior point of $\bar{\Omega}$ satisfying the equality constraint, solving the Lagrange dual problem (10) is equivalent to solving the optimization problem (8).

To solve the original optimization problem (2), the dynamics of the multi-agent network is designed as:

(11)

Let . Then (11) can be written in a compact form:

(12)
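Since the closed-form expressions of (11) and (12) are not reproduced above, the following Python sketch should be read only as an illustration of the general idea rather than the authors' exact dynamics: a Wang-Elia-style primal-dual consensus flow [10] augmented with the gradient of the logarithmic barrier for the box constraints, discretized by forward Euler. The objective functions $f_i(x)=|x-a_i|$, the common box $[0.5,\,2]$, the step size and the barrier weight are all hypothetical choices.

    import numpy as np

    n = 12                                   # number of agents (as in the later example)
    A = np.zeros((n, n))                     # cycle-graph adjacency
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    L = np.diag(A.sum(axis=1)) - A           # graph Laplacian

    a = np.linspace(-2.0, 3.0, n)            # f_i(x) = |x - a_i|, non-smooth and convex
    lo, hi = 0.5, 2.0                        # hypothetical common box constraint
    eps, h, T = 1e-2, 1e-3, 50000            # barrier weight, Euler step, iterations

    x = np.full(n, 1.2)                      # strictly interior initial estimates
    z = np.zeros(n)                          # auxiliary (dual-like) states

    for _ in range(T):
        g = np.sign(x - a)                           # a subgradient of |x - a_i|
        db = -eps / (x - lo) + eps / (hi - x)        # gradient of the log-barrier term
        x_dot = -g - db - L @ x - L @ z              # primal flow: descent plus consensus
        z_dot = L @ x                                # dual flow: integrates disagreement
        x, z = x + h * x_dot, z + h * z_dot
        x = np.clip(x, lo + 1e-3, hi - 1e-3)         # numerical safeguard only

    print(np.round(x, 3))                    # estimates should be close to one another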

4. The Convergence Analysis

In this section, a complete convergence proof of dynamic system (11) [or (12)] is provided in the following theorems.

Lemma 4.1: Under Assumptions 3.1-3.3, the trajectories of dynamic system (11) [or (12)] with any finite initial points are bounded.

Proof: In order to prove the stability of dynamic system (11) [or (12)], we construct a Lyapunov function as follows:

(13)

Obviously, $V\ge 0$. In view of the chain rule, the time derivative of $V$ along the trajectories of dynamic system (11) [or (12)] is:

Since the Lagrange function is convex in its primal variable and concave in its dual variable, combining this with the properties of convex functions, we have:

Based on Remark 3.3 and the saddle point theorem, we then have:

Hence, the Lyapunov function (13) is monotonically non-increasing and has a lower bound, i.e., the trajectories are bounded. Let $x(0)$ represent the initial point of the trajectory; then we have $V(x(t))\le V(x(0))$, which indicates that there exists a positively invariant compact set such that the solutions of (11) [or (12)] satisfy:

(14)

Lemma 4.2: The trivial solution of the dynamic system (11) [or (12)] is asymptotically stable.

Proof: Define a function:

Then the time derivative of this function is:

(15)

where .

Under Assumption 3.2, at least one of the local objective functions has a positive definite Hessian matrix. Then  is negative definite, which implies . Since  is a positive semi-definite matrix, we have . In addition, due to , the last step of (14) holds.

Next, we shall prove . According to the definition of , we have . Noting that  is monotonically non-increasing and has a lower bound, we obtain . According to the continuity of , we have . Denote  as a solution of , which satisfies ; then  for any . Thus, it gives

For almost all , . It can be concluded that there exists  satisfying . By the continuity of , it yields . Hence, we have . Noting that  is a weakly invariant set, then . From the above, according to Remark 2.1, the trivial solution of the system (11) [or (12)] is asymptotically stable, and it is also the optimal solution of (2).

5. Simulation

In this section, a simulation example is presented to verify the theoretical analysis of the proposed second-order algorithm (11) [or (12)].

Example: Consider optimization problem (2) with ,  and , where  are non-smooth objective functions.

For the above optimization problem, we first assume that the network topology $\mathcal{G}$ is a cyclic connected network, as shown in Fig. 1(a). The connection weight is set to 1 if there exists an edge between agents $i$ and $j$, and to 0 otherwise. The trajectories of the twelve agents are shown in Fig. 1(b). It can be seen that all the agents converge to the same (approximate) optimal solution. Next, suppose that the network topology $\mathcal{G}$ is fully connected, as shown in Fig. 2(a); the simulation results are shown in Fig. 2(b). It is clear that the tighter the network connection, the faster the convergence rate.

Fig. 1. (a) A cyclic connected network. (b) The trajectories of the state vector over a cyclic connected network.

Fig. 2. (a) A fully connected network. (b) The trajectories of the state vector over a fully connected network.
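The observed difference in convergence speed is consistent with the algebraic connectivity (the second-smallest Laplacian eigenvalue $\lambda_2$) of the two topologies; the short sketch below (illustrative only, with unit weights assumed) computes $\lambda_2$ for a 12-node cycle and a 12-node fully connected graph.

    import numpy as np

    def laplacian(A):
        return np.diag(A.sum(axis=1)) - A

    n = 12
    A_cycle = np.zeros((n, n))
    for i in range(n):
        A_cycle[i, (i + 1) % n] = A_cycle[(i + 1) % n, i] = 1.0
    A_full = np.ones((n, n)) - np.eye(n)

    for name, A in (("cyclic network", A_cycle), ("fully connected", A_full)):
        lam2 = np.sort(np.linalg.eigvalsh(laplacian(A)))[1]   # algebraic connectivity
        print(f"{name:16s} lambda_2 = {lam2:.3f}")
    # cycle: lambda_2 = 2*(1 - cos(2*pi/12)) ~ 0.268; fully connected: lambda_2 = n = 12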

6. Conclusions

In this paper, a novel distributed continuous-time algorithm based on the KKT condition and the Lagrange multiplier method has been proposed for a distributed convex optimization problem. It aims to minimize the sum of non-smooth local objective functions with local bound constraints over an undirected graph. Furthermore, the convergence analysis of the dynamical system is accomplished by using the Lyapunov stability theory and the hybrid LaSalle invariance principle of differential inclusions. The numerical simulation shows the performance of the proposed algorithm. In the future, our work may turn to optimization problems over directed topologies with equality constraints, together with an analysis of the convergence speed.

Acknowledgements

The work described in this paper was supported in part by the Visiting Scholarship of State Key Laboratory of Power Transmission Equipment & System Security and New Technology (Chongqing University) under Grant 2007DA10512716421, in part by the Fundamental Research Funds for the Central Universities under Grant XDJK2016B016, in part by the Natural Science Foundation Project of Chongqing CSTC under Grant cstc2014jcyjA40016, in part by the China Postdoctoral Science Foundation under Grant 2016M590852, and in part by the Natural Science Foundation of China under Grant 61403314.


References

  1. Han D, Mo Y, Wu J, et al. Stochastic event-triggered sensor schedule for remote state estimation. IEEE Transactions on Automatic Control, 2015, 60(10): 2661-2675.
  2. Li H, Liao X, Huang T, et al. Event-triggering sampling based leader-following consensus in second-order multi-agent systems. IEEE Transactions on Automatic Control, 2015, 60(7): 1998-2003.
  3. Olfati-Saber R, Fax A, Murray R M. Consensus and cooperation in networked multi-agent systems. Proceedings of the IEEE, 2007, 95(1): 215-233.
  4. Xia Z, Wang X, Sun X, et al. Steganalysis of least significant bit matching using multi-order differences. Security and Communication Networks, 2014, 7(8): 1283-1291.
  5. Lobel I, Ozdaglar A. Distributed subgradient methods for convex optimization over random networks. IEEE Transactions on Automatic Control, 2011, 56(6): 1291-1306.
  6. Guo P, Wang J, Geng X H, et al. A variable threshold-value authentication architecture for wireless mesh networks. Journal of Internet Technology, 2014, 15(6): 929-935.
  7. Nedić A, Ozdaglar A, Parrilo P A. Constrained consensus and optimization in multi-agent networks. IEEE Transactions on Automatic Control, 2010, 55(4): 922-938.
  8. Ren W, Beard R W. Distributed consensus in multi-vehicle cooperative control. Springer-Verlag, London, 2008.
  9. Li H, Chen G, Liao X, et al. Quantized data-based leader-following consensus of general discrete-time multi-agent systems, IEEE Transactions on Circuits and Systems II: Express Briefs, 2016, 63(4): 401-405.
  10. Wang J, Elia N. Control approach to distributed optimization. Communication, Control, and Computing (Allerton), 2010 48th Annual Allerton Conference on. IEEE, 2010: 557-561.
  11. Gharesifard B, Cortés J. Distributed continuous-time convex optimization on weight-balanced digraphs. IEEE Transactions on Automatic Control, 2014, 59(3): 781-786.
  12. Liu S, Qiu Z, Xie L. Continuous-time distributed convex optimization with set constraints. Proceedings of the 19th IFAC World Congress, Cape Town, South Africa. 2014: 9762-9767.
  13. Li H, Liao X, Huang T, et al. Second-order global consensus in multi-agent networks with random directional link failure. IEEE Transactions on Neural Networks and Learning Systems, 2015, 26(3): 565-575.
  14. Gungor V C, Hancke G P. Industrial wireless sensor networks: Challenges, design principles, and technical approaches. IEEE Transactions on Industrial Electronics, 2009, 56(10): 4258-4265.
  15. Shen J, Tan H W, Wang J, et al. A novel routing protocol providing good transmission reliability in underwater sensor networks. Journal of Internet Technology, 2015, 16(1): 171-178.
  16. Li H, Chen G, Huang T, et al. High-performance consensus control in networked systems with limited bandwidth communication and time-varying directed topologies. IEEE Transactions on Neural Networks and Learning Systems, 2016, DOI: 10.1109/TNNLS.2016.2519894.
  17. Blatt D, Hero III A O. Energy-based sensor network source localization via projection onto convex sets. IEEE Transactions on Signal Processing, 2006, 54(9): 3614-3619.
  18. Wen X, Shao L, Xue Y, et al. A rapid learning algorithm for vehicle classification. Information Sciences, 2015, 295: 395-406.
  19. Yi P, Hong Y, Liu F. Initialization-free distributed algorithms for optimal resource allocation with feasibility constraints and its application to economic dispatch of power systems. arXiv preprint arXiv:1510.08579, 2015.
  20. Li H, Liao X, Chen G, et al. Event-triggered asynchronous intermittent communication strategy for synchronization in complex dynamical networks. Neural Networks, 2015, 66: 1-10.
  21. Gu B, Sheng V S, Tay K Y, et al. Incremental support vector learning for ordinal regression. IEEE Transactions on Neural networks and learning systems, 2015, 26(7): 1403-1416.
  22. Wen G, Duan Z, Chen G, et al. Consensus tracking of multi-agent systems with Lipschitz-type node dynamics and switching topologies. IEEE Transactions on Circuits and Systems I: Regular Papers, 2014, 61(2): 499-511.
  23. Xie S, Wang Y. Construction of tree network with limited delivery latency in homogeneous wireless sensor networks. Wireless personal communications, 2014, 78(1): 231-246.
  24. Zhu M, Martínez S. On distributed convex optimization under inequality and equality constraints. IEEE Transactions on Automatic Control, 2012, 57(1): 151-164.
  25. Nedić A, Ozdaglar A. Distributed subgradient methods for multi-agent optimization. IEEE Transactions on Automatic Control, 2009, 54(1): 48-61.
  26. Bianchi P, Jakubowicz J. Convergence of a multi-agent projected stochastic gradient algorithm for non-convex optimization. IEEE Transactions on Automatic Control, 2013, 58(2): 391-405.
  27. Lou Y, Shi G, Johansson K H, et al. Approximate projected consensus for convex intersection computation: Convergence analysis and critical error angle. IEEE Transactions on Automatic Control, 2014, 59(7): 1722-1736.
  28. Zhu M, Martinez S. On distributed convex optimization under inequality and equality constraints via primal-dual subgradient methods. arXiv preprint arXiv:1001.2612, 2010.
  29. Yuan D, Xu S, Zhao H. Distributed primal–dual subgradient method for multiagent optimization via consensus algorithms. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2011, 41(6): 1715-1724.
  30. Li H, Chen G, Huang T, et al. Event-triggered distributed average consensus over directed digital networks with limited communication bandwidth. IEEE Transactions on Cybernetics, 2016, DOI: 10.1109/TCYB.2015.2496977.
  31. Ruszczyński A P. Nonlinear Optimization. Princeton university press, 2006.
