A Continuous-Time Multi-Agent Systems Based Algorithm for Constrained Distributed Optimization

Abstract: This paper considers a second-order multi-agent system for solving a non-smooth convex optimization problem, in which the global objective function is a sum of local convex objective functions subject to different bound constraints over an undirected graph. A novel distributed continuous-time optimization algorithm is designed, in which each agent has access only to its own objective function and bound constraint. All agents cooperatively minimize the global objective function under some mild conditions. By virtue of the KKT condition and the Lagrange multiplier method, the convergence of the resulting dynamical system is established using Lyapunov stability theory and the hybrid LaSalle invariance principle for differential inclusions. A numerical example is conducted to verify the theoretical results.


Introduction
The distributed optimization of a sum of local convex functions has been widely investigated in a variety of scenarios in recent years. Examples include multi-agent systems, resource allocation in communication networks, and localization in sensor networks [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18], to name just a few. Numerous distributed optimization algorithms are designed in a discrete-time fashion to search for the optimal solutions of the optimization problem [3,5,8], while continuous-time strategies, owing to their relatively complete theoretical framework, have been widely applied to distributed optimization problems [10][11][12].
Distributed algorithms are characterized by high reliability, scalability, and reduced communication requirements, which has attracted many researchers to intensively study distributed optimization algorithms (see e.g. [13][14][15][16][17][18][19][20][21][22][23][24]). Nedić and Ozdaglar [25] were the first to systematically formulate distributed optimization problems. A projection-based distributed algorithm was developed in [7], and further investigations into set-constrained optimization were shown in [26][27]. It is worth mentioning that Bianchi and Jakubowicz [26] presented a distributed constrained non-convex optimization algorithm which consists of two steps: a local stochastic gradient descent at each agent and a gossip step that drives the network of agents to a consensus. Differently from the above, a distributed optimization problem subject to (in-)equality constraints or set constraints was investigated in [28]. The authors proposed two distributed subgradient algorithms for multi-agent optimization problems, where the goal of the agents is to minimize a sum of local objective functions. Motivated by [28], primal-dual subgradient algorithms were studied by Yuan et al. [29] and Zhu et al. [24] for multi-agent optimization problems with set constraints. Furthermore, in order to solve an unconstrained optimization problem, where the objective function is a sum of convex functions each available to an individual agent, a second-order distributed dynamics was given in [10], while a similar second-order continuous-time distributed algorithm was proposed to solve the convex optimization problem in [11].
Inspired by the works of [7,10,11,19], a novel distributed second-order continuous-time multi-agent system is proposed to solve a distributed convex optimization problem, where the objective function is a sum of local objective functions and each agent knows only its own local information. That is to say, all the agents cooperatively reach the optimal solution of the optimization problem. To tackle the optimization with box constraints, a logarithmic barrier penalty function is used, which differs from previous studies that are mainly based on projection algorithms. In comparison with existing distributed optimization methods, the proposed method has the following three advantages. Firstly, this paper designs a novel distributed continuous-time algorithm that solves more general distributed convex optimization problems. Secondly, the proposed algorithm can solve convex optimization problems whose objective is a sum of convex functions with local bound constraints; moreover, it does not require the objective function to be smooth, which is required in most existing recurrent neural network algorithms. Thirdly, the box constraints are treated with a logarithmic barrier penalty function, which gives the proposed algorithm a faster convergence speed in obtaining approximate solutions whose accuracy is sufficient for most practical demands, compared with projection algorithms.
The remainder of this paper is outlined as follows. Some preliminaries about graph theory, non-smooth analysis, and the stability of differential inclusions are presented in Section 2. Section 3 formulates a convex optimization problem and proposes a distributed continuous-time algorithm. In Section 4, a complete convergence proof shows that the dynamic system is convergent and stable, and that the agents' estimates converge to the same optimal solution. A numerical example for illustration is given in Section 5. Conclusions are finally drawn in Section 6.

Mathematical Preliminaries
In this section, some preliminaries about graph theory, non-smooth analysis, and the stability of differential inclusions are introduced.

A. Algebraic Graph Theory
Consider an undirected graph G = (V, E, A) with node set V = {ν_1, …, ν_n}, edge set E ⊆ V × V, and weighted adjacency matrix A = [a_ij]. An edge (ν_i, ν_j) ∈ E means that ν_i and ν_j can exchange information with each other [30]. We assume that the communications between agents are bidirectional and the weights are positive, i.e., a_ij = a_ji > 0 if (ν_i, ν_j) ∈ E and a_ij = 0 otherwise. The degree matrix is defined as D = diag(d_1, …, d_n) with d_i = Σ_{j=1}^n a_ij, and the Laplacian matrix of G is L = D − A.

B. Non-smooth Analysis
Definition 2.1: A function f: D → ℝ, D ⊆ ℝⁿ, is said to be Lipschitz near x ∈ D if there exist a constant L > 0 and a neighborhood U of x such that |f(y) − f(z)| ≤ L‖y − z‖ for all y, z ∈ U, where L represents the Lipschitz constant. If f is Lipschitz near any point x ∈ D, then f is said to be locally Lipschitz in D.

Definition 2.2: Assume that f: ℝⁿ → ℝ is locally Lipschitz. The generalized gradient of f at x is defined as ∂f(x) = co{ lim_{i→∞} ∇f(x_i) : x_i → x, x_i ∉ Ω_f ∪ N }, where co denotes the convex closed hull, N is an arbitrary set of measure zero, and Ω_f is the null-measure set composed of the points at which the gradient of f is undefined.
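As a concrete illustration of Definition 2.2 (a standard textbook example, not taken from this paper), consider the absolute-value function f(x) = |x|, which is locally Lipschitz but non-smooth at the origin:

```latex
% Clarke generalized gradient of f(x) = |x|
\partial f(x) =
\begin{cases}
\{-1\}, & x < 0, \\
[-1,\, 1], & x = 0, \\
\{1\}, & x > 0.
\end{cases}
```

At the non-differentiable point x = 0, the generalized gradient is the convex hull of the limiting gradients −1 and 1, which is exactly the interval [−1, 1].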

C. Stability of Differential Inclusion
For an autonomous differential inclusion system
ẋ(t) ∈ F(x(t)),  (1)
where F: ℝⁿ → 2^{ℝⁿ} is an upper semi-continuous set-valued mapping with compact convex values, 0 is an equilibrium point of (1), that is, 0 ∈ F(0).
A point p is called an ω-limit point of a solution x(t) of (1) if there exists a sequence t_k → ∞ such that x(t_k) → p. All the ω-limit points make up the ω-limit set, which is denoted as Ω(x(·)).

Definition 2.5: For any point x_0 in Ω, if there exists a maximal solution of system (1) starting from x_0 that remains in Ω, then Ω is called a weakly invariant set of system (1).

Theorem 2.2 (LaSalle invariance principle for differential inclusions): Assume that V: ℝⁿ → ℝ is a positive definite and locally Lipschitz regular function such that, for almost all t, its set-valued Lie derivative along (1) satisfies max L_F V(x(t)) ≤ 0. Then every bounded solution of (1) converges to the largest weakly invariant subset of the closure of {x ∈ ℝⁿ : 0 ∈ L_F V(x)}.

A. Problem Formulation
Consider a network of n agents that interact with each other over a connected graph G. Each agent i has a local objective function f_i: ℝᵐ → ℝ, and the network cooperatively solves the optimization problem
min f(x) = Σ_{i=1}^n f_i(x),  s.t. x_{i,min} ≤ x ≤ x_{i,max}, i = 1, …, n,  (2)
where x_{i,min} and x_{i,max} are the local bound constraints of agent i. We give some meaningful results, which will be used in this paper.
Assumption 3.1: The optimization problem (2) has at least one finite optimal solution x*.
The objective of optimization problem (2) is to achieve the global minimizer x* = arg min f(x). Next, we provide an equivalent optimization problem of (2). Let each agent i hold a local estimate x_i ∈ ℝᵐ of the decision variable and stack the estimates as x = (x_1, …, x_n). Then the equivalent problem of (2) is described as
min Σ_{i=1}^n f_i(x_i),  s.t. Lx = 0,  x_{i,min} ≤ x_i ≤ x_{i,max}, i = 1, …, n,  (3)
where L is the Laplacian matrix of G and x_i^k stands for the entry in the k-th row and i-th column of x.
Then, combining with optimization problem (3), a new optimization problem (5) is proposed based on the augmented Lagrangian method. A point is an optimal solution of optimization problem (5) if and only if the corresponding KKT condition (7) is established.
Remark 3.2: Supposing that Assumptions 1 and 3 hold, the Slater condition is established for problem (5), implying that there exist Lagrangian multipliers satisfying the KKT condition (7).
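Although the precise form of condition (7) is not reproduced here, for a problem of the form (3) the KKT conditions typically read as follows (a standard sketch; the paper's condition (7) may differ in notation, with μ_i and ν_i the hypothetical multipliers of the upper and lower bounds):

```latex
% KKT conditions for  min \sum_i f_i(x_i)  s.t.  Lx = 0,\; x_{i,\min} \le x_i \le x_{i,\max}
0 \in \partial f_i(x_i^*) + (L\lambda^*)_i + \mu_i^* - \nu_i^*, \quad i = 1,\dots,n, \\
Lx^* = 0, \qquad x_{i,\min} \le x_i^* \le x_{i,\max}, \\
\mu_i^* \ge 0, \quad \nu_i^* \ge 0, \\
\mu_i^{*\top}\!\left(x_i^* - x_{i,\max}\right) = 0, \qquad \nu_i^{*\top}\!\left(x_{i,\min} - x_i^*\right) = 0.
\end{aligned}
```

Here the first line is stationarity of the Lagrangian, the second is primal feasibility, and the last two are dual feasibility and complementary slackness.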
To deal with the inequality constraints of problem (5), a logarithmic barrier penalty function is introduced, which yields
min F(x) = Σ_{i=1}^n f_i(x_i) − θ Σ_{i=1}^n Σ_{k=1}^m [ln(x_i^k − x_{i,min}^k) + ln(x_{i,max}^k − x_i^k)],  s.t. Lx = 0,  (8)
where θ is a small enough positive real number.
Let λ ∈ ℝ^{mn} be the Lagrangian multiplier of the equality constraint Lx = 0 in (8). Then the Lagrange function of (8) is
L(x, λ) = F(x) + λᵀLx.  (9)
The corresponding Lagrange dual function is
g(λ) = inf_{x_{i,min} < x_i < x_{i,max}} L(x, λ).  (10)
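The barrier-penalized objective F(x) in (8) can be sketched numerically as follows. This is a minimal illustration, assuming for simplicity a single stacked variable with common scalar bounds (`x_min`, `x_max`) and a caller-supplied objective `f_sum`; these names are illustrative, not from the paper.

```python
import numpy as np

def barrier_objective(f_sum, x, x_min, x_max, theta=1e-3):
    """Evaluate F(x) = f_sum(x) - theta * sum[ln(x - x_min) + ln(x_max - x)],
    the logarithmic-barrier objective of problem (8).
    Only valid for x strictly inside the box (x_min, x_max)."""
    x = np.asarray(x, dtype=float)
    assert np.all(x > x_min) and np.all(x < x_max), "x must be interior"
    # The barrier term blows up to +infinity as x approaches either bound,
    # so minimizers of F stay strictly feasible for any theta > 0.
    penalty = np.sum(np.log(x - x_min) + np.log(x_max - x))
    return f_sum(x) - theta * penalty
```

As θ → 0 the minimizer of the barrier problem approaches the constrained optimum, which is why the paper takes θ small; the approximation error is the price paid for avoiding projections.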

Remark 3.3:
Since the Slater condition deduced from Assumption 3.1 holds and there exists an interior point of Ω satisfying x_{i,min} < x < x_{i,max}, the optimal solution of the Lagrange dual problem (10) coincides with that of the optimization problem (8).
To solve the original optimization problem (2), the dynamics of the multi-agent network is designed as
ẋ(t) ∈ −∂F(x(t)) − Lx(t) − Lλ(t),
λ̇(t) = Lx(t),  (11)
where ∂F denotes the generalized gradient of F. Letting ξ = (xᵀ, λᵀ)ᵀ, (11) can be written in the compact form
ξ̇(t) ∈ Φ(ξ(t)).  (12)
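A generic primal-dual flow of this type can be simulated by forward-Euler discretization. The sketch below is illustrative only: it uses hypothetical smooth quadratic local objectives f_i(x) = (x − b_i)² (so the subgradient reduces to an ordinary gradient) on a 4-node cycle graph, not the paper's exact non-smooth system (11).

```python
import numpy as np

# Hypothetical smooth local objectives f_i(x) = (x - b_i)^2; with non-smooth
# f_i, a subgradient selection would replace grad_F below.
b = np.array([1.0, 2.0, 3.0, 4.0])
n = len(b)

# Laplacian of a cycle graph on n nodes with unit weights.
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

def grad_F(x):
    return 2.0 * (x - b)   # per-agent gradient of sum_i (x_i - b_i)^2

x = np.zeros(n)            # primal states: one estimate per agent
lam = np.zeros(n)          # dual states (Lagrange multipliers)
dt = 0.01
for _ in range(20000):     # forward-Euler discretization of the flow
    dx = -grad_F(x) - L @ x - L @ lam
    dlam = L @ x
    x, lam = x + dt * dx, lam + dt * dlam
```

At equilibrium, λ̇ = Lx = 0 forces consensus x = c·1, and summing the primal equation gives Σ 2(c − b_i) = 0, so all agents converge to c = mean(b) = 2.5, the minimizer of the hypothetical global objective.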

The Convergence Analysis
In this section, a complete convergence proof of the dynamic system (11) [or (12)] is provided in the following results.
Lemma 4.1: The trajectories of the dynamic system (11) [or (12)] are bounded.
Proof: In order to prove the stability of the dynamic system (11) [or (12)], we construct a Lyapunov function W(x, λ) (13), which obviously satisfies W(x, λ) ≥ 0. In view of the chain rule, the time derivative of W(x, λ) along the trajectories of the dynamic system (11) [or (12)] satisfies Ẇ(x, λ) ≤ 0. Hence, the Lyapunov function (13) is monotonically non-increasing and has a lower bound, i.e., the trajectories are bounded. Let (x(0), λ(0)) denote the initial point of (x, λ); then there exists a positive invariant compact set in which the solutions of (11) [or (12)] remain. ∎
Lemma 4.2: The trivial solution of the dynamic system (11) [or (12)] is asymptotically stable.
Proof: Define a function V(x, λ) and compute its time derivative V̇(x, λ) along the trajectories of (11) [or (12)]. By Assumption 3.2, at least one of the local objective functions f_i(x), i = 1, …, n, of f(x) has a positive definite Hessian matrix. Then the equilibrium of (11) [or (12)] is asymptotically stable, and it is also the optimal solution of (2). ∎

Simulation
In this section, a simulation example is presented to verify the theoretical analysis of the proposed second-order algorithm (11) [or (12)].
Example: Consider optimization problem (2) with twelve agents. For this optimization problem, we first assume that the network topology G is a cyclic connected network, as shown in Fig. 1(a). The connection weight is set to 1 if there exists an edge between agents i and j, and 0 otherwise. The trajectories of the twelve agents are shown in Fig. 1(b). It can be seen that all the agents converge to the same optimal solution x* = (−10, 7.5, 13)ᵀ (approximate solution). Next, suppose that the network topology G is fully connected, as shown in Fig. 2(a); the simulation results are shown in Fig. 2(b). It is clear that the tighter the network connection, the faster the convergence rate.
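The topology-dependence observed in Figs. 1-2 can be reproduced qualitatively with a toy experiment. The sketch below compares the same primal-dual flow on a 12-node cycle graph versus a fully connected graph, using hypothetical smooth local objectives f_i(x) = (x − b_i)² in place of the paper's (unspecified) objectives; all names and values here are illustrative.

```python
import numpy as np

def simulate(L, b, dt=0.005, steps=4000):
    """Run a primal-dual consensus flow on Laplacian L; return the per-step
    distance of the agents' states from the consensus optimum mean(b)."""
    n = len(b)
    x, lam = np.zeros(n), np.zeros(n)
    err = []
    for _ in range(steps):
        dx = -2.0 * (x - b) - L @ x - L @ lam   # primal update
        dlam = L @ x                            # dual update
        x, lam = x + dt * dx, lam + dt * dlam
        err.append(np.linalg.norm(x - b.mean()))
    return np.array(err)

n = 12
b = np.linspace(1.0, 12.0, n)          # hypothetical local minimizers

# Cycle-graph Laplacian vs complete-graph Laplacian, unit weights.
A_cyc = np.zeros((n, n))
for i in range(n):
    A_cyc[i, (i + 1) % n] = A_cyc[(i + 1) % n, i] = 1.0
L_cyc = np.diag(A_cyc.sum(1)) - A_cyc
A_full = np.ones((n, n)) - np.eye(n)
L_full = np.diag(A_full.sum(1)) - A_full

e_cyc, e_full = simulate(L_cyc, b), simulate(L_full, b)
```

Plotting `e_cyc` against `e_full` shows the fully connected topology driving the consensus error down much faster, matching the qualitative conclusion drawn from Fig. 2: a larger algebraic connectivity of the graph yields a faster convergence rate.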

Conclusions
In this paper, a novel distributed continuous-time algorithm based on the KKT condition and the Lagrange multiplier method has been proposed for a distributed convex optimization problem. It aims to minimize a sum of non-smooth local objective functions with local bound constraints over an undirected graph. Furthermore, the convergence analysis of the dynamical system has been accomplished by using Lyapunov stability theory and the hybrid LaSalle invariance principle for differential inclusions. The numerical simulation shows the performance of the proposed algorithm. In the future, our work may turn to optimization problems with directed topologies and equality constraints, as well as the analysis of convergence speed.