Distributed Subgradient Algorithm for Multi-agent Convex Optimization with Local Constraint Sets
Qingguo Lü^{1}, Huaqing Li^{1, 2, *}, Li Xiao^{3}
^{1}College of Electronic and Information Engineering, Southwest University, Chongqing, PR China
^{2}State Key Laboratory of Power Transmission Equipment & System Security and New Technology, Chongqing University, Chongqing, PR China
^{3}Department of Mathematics and Information Engineering, Chongqing University of Education, Chongqing, PR China
To cite this article:
Qingguo Lü, Huaqing Li, Li Xiao. Distributed Subgradient Algorithm for Multi-agent Convex Optimization with Local Constraint Sets. Applied and Computational Mathematics. Vol. 5, No. 3, 2016, pp. 150-159. doi: 10.11648/j.acm.20160503.19
Received: June 12, 2016; Accepted: June 20, 2016; Published: July 13, 2016
Abstract: This paper considers a distributed constrained optimization problem in which the objective function is the sum of the local objective functions of distributed nodes in a network, and the estimate of each agent is restricted to a different convex set. To solve this optimization problem, which is not necessarily smooth, we study a novel distributed projected subgradient algorithm for multi-agent optimization with nonidentical constraint sets and switching topologies. Under the algorithm, each agent minimizes its own objective function while exchanging information locally with other agents over a network whose time-varying topologies satisfy a standard connectivity property. Under the assumption that the network topology is weight-balanced, the novel distributed subgradient algorithm we propose is proven to be convergent. In particular, we allow the step-sizes to differ across agents, in contrast with previous work on multi-agent optimization that assumes a common constant step-size.
Keywords: Distributed Optimization, Subgradient Algorithm, Multi-agent Network, Weight-Balanced
1. Introduction
In recent years, multi-agent systems and distributed algorithms have received considerable research attention due to their wide applications in engineering systems and large-scale networks [1-28], including resource allocation in computer networks [16-18], distributed estimation in sensor networks [19], the distributed finite-time optimal rendezvous problem [21], and distributed demand response control in smart grids [22]. In many networked systems (see, e.g., [22-28]), multiple agents must cooperatively solve a distributed convex optimization problem in which the global objective function is the sum of local objective functions, none of which can be known or shared by the other agents.
Distributed optimization of a sum of convex functions has received a surge of interest in recent years. Nedić and Ozdaglar [1] presented an analysis of the consensus-based subgradient method for solving the distributed convex optimization problem. A projection-based distributed algorithm was developed by Nedić et al. [3], where each agent was constrained to an individual closed convex set; the corresponding convergence analysis was given for identical closed convex sets and for uniform weights with nonidentical sets. Distributed algorithms for set-constrained optimization were further investigated by Bianchi and Jakubowicz [7] and Lou et al. [10]. To handle distributed optimization problems with asynchronous step-sizes or inequality-equality constraints, distributed Lagrangian primal-dual subgradient and penalty primal-dual subgradient algorithms were presented by Zhu et al. [6] and Towfic and Sayed [12], both designed for function-constrained problems. Meanwhile, dual decomposition was applied to separable problems with affine constraints in [4,9]. Recent works [15-18] have likewise focused on inequality-equality constraints. Zhu et al. [22] proposed a distributed Lagrangian primal-dual subgradient method based on characterizing the primal-dual optimal solutions as the saddle points of the Lagrangian function associated with the problem. Yuan et al. [25] studied a variant of the distributed primal-dual subgradient method in which a multi-step consensus algorithm was employed to simplify the implementation and convergence analysis of the method. To solve the multi-agent optimization problem with more general inequality constraints that couple all the agents' optimization variables, Chang et al. [27] proposed a novel distributed primal-dual perturbed subgradient method and established its convergence.
The implementation of the aforementioned methods generally involves projection steps onto the primal and dual constraint sets, respectively.
In Nedić and Ozdaglar [1], a multi-agent unconstrained convex optimization problem was solved through a novel combination of average consensus algorithms with subgradient methods. Nedić et al. [3] then assumed that each agent is constrained to remain in a closed convex set and gave the corresponding convergence analysis for identical closed convex sets and for uniform weights with nonidentical closed convex sets. Furthermore, [22] solved a multi-agent convex optimization problem in which the agents were subject to a global inequality constraint, a global equality constraint, and a global constraint set. To handle these constraints, Zhu et al. [22] presented two distributed projection algorithms under the assumptions that the network topologies are strongly connected over each time interval of a certain bounded length and that the adjacency matrices are doubly stochastic and non-degenerate.
Contributions: Inspired by the above studies, this paper proposes a novel distributed subgradient algorithm for multi-agent convex optimization with local constraint sets. Previous work on distributed algorithms over multi-agent networks typically requires the edge weight matrices of the graphs to be doubly stochastic (i.e., $\sum_{i=1}^{N} a_{ij}(k) = 1$ for all $j$ and $k$, and $\sum_{j=1}^{N} a_{ij}(k) = 1$ for all $i$ and $k$). In contrast, our method does not assume that the adjacency matrices are doubly stochastic; we only require the network to be weight-balanced, which makes the algorithm more practical. More precisely, the contribution of this paper is twofold. First, under the conditions that each agent is restricted to a different convex set and that the digraph is weight-balanced, we introduce a novel distributed projected subgradient algorithm with varying step-sizes. Second, we prove the convergence of the algorithm and show that it reaches an optimal point of the sum of the agents' local objective functions while satisfying the local constraint sets.
The remainder of the paper is organized as follows. Some basic preliminaries and concepts are given in Section 2. In Section 3, we present the problem formulation and some preliminaries on distributed subgradient algorithms. In Section 4, we introduce the distributed projected subgradient algorithm together with some supporting lemmas and give a convergence analysis of the algorithm. Furthermore, the properties of the algorithm are explored through a numerical example in Section 5. Finally, we conclude the paper with a discussion and future work in Section 6.
2. Preliminaries and Concepts
In this section, we review some useful concepts from algebraic graph theory and convex analysis, recall properties of the projection operation onto a closed convex set, and introduce some useful lemmas (see [28,29,31]).
2.1. Algebraic Graph Theory
We use a graph to describe the information exchange between the agents. The interaction topology for information exchange between agents is depicted by a weighted directed graph $\mathcal{G} = (\mathcal{V}, \mathcal{E}, A)$, where $\mathcal{V} = \{1, \dots, N\}$ is the set of vertices representing the agents and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the set of edges of the graph. It is assumed that the graph is simple, i.e., there are no repeated edges or self-loops. The weighted adjacency matrix of $\mathcal{G}$ is denoted by $A = [a_{ij}] \in \mathbb{R}^{N \times N}$, with $a_{ij} > 0$ if $(j, i) \in \mathcal{E}$ and $a_{ij} = 0$ otherwise. Note that the diagonal elements satisfy $a_{ii} = 0$ and that $A$ is in general an asymmetric matrix. A directed edge $(j, i) \in \mathcal{E}$ implies that node $j$ can reach node $i$, i.e., node $i$ can receive information from node $j$. If an edge $(j, i) \in \mathcal{E}$, then node $j$ is called a neighbor of node $i$. The neighbor set of node $i$ is denoted by $\mathcal{N}_i = \{j : (j, i) \in \mathcal{E}\}$, while $|\mathcal{N}_i|$ indicates the number of neighbors of node $i$. The Laplacian matrix $L = [l_{ij}]$ associated with the adjacency matrix $A$ is defined by $l_{ii} = \sum_{j \ne i} a_{ij}$ and $l_{ij} = -a_{ij}$ for $j \ne i$, which ensures that $\sum_{j=1}^{N} l_{ij} = 0$ for every $i$. The in-degree and out-degree of node $i$ can be, respectively, defined through the adjacency matrix as $d_{\mathrm{in}}(i) = \sum_{j=1}^{N} a_{ij}$ and $d_{\mathrm{out}}(i) = \sum_{j=1}^{N} a_{ji}$. A directed path from node $i_1$ to node $i_p$ is a sequence of edges $(i_1, i_2), (i_2, i_3), \dots, (i_{p-1}, i_p)$ in the directed graph with distinct nodes $i_1, \dots, i_p$. A directed graph is strongly connected if, for any two distinct nodes $i$ and $j$ in $\mathcal{V}$, there always exists a directed path from node $i$ to node $j$. A graph is called in-degree (or out-degree) balanced if the in-degrees (or out-degrees) of all nodes in the directed graph are equal. A directed graph with $N$ nodes is called a directed tree if it contains $N - 1$ edges and there exists a root node with directed paths to every other node. A directed spanning tree of a directed graph is a directed tree that contains all nodes of the network.
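As a concrete illustration of these graph-theoretic notions, the following sketch builds the Laplacian of a hypothetical weighted digraph on four nodes (the matrix entries are illustrative, not taken from the paper) and checks that its rows sum to zero and that the graph is weight-balanced; numpy is assumed.

```python
import numpy as np

# Hypothetical weighted digraph on 4 nodes; entry a_ij > 0 iff (j, i) is an edge
A = np.array([[0., 2., 0., 1.],
              [1., 0., 2., 0.],
              [2., 0., 0., 1.],
              [0., 1., 1., 0.]])

# Laplacian L with l_ii = sum_{j != i} a_ij and l_ij = -a_ij, so each row sums to 0
L = np.diag(A.sum(axis=1)) - A
assert np.allclose(L @ np.ones(4), 0.0)

# Weight-balanced: in-degree equals out-degree at every node,
# i.e. the row sums of A equal the corresponding column sums
assert np.allclose(A.sum(axis=1), A.sum(axis=0))
```

For a weight-balanced graph the column sums of the Laplacian also vanish, which is exactly the property the convergence analysis later exploits.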
2.2. Basic Notations and Concepts
In this paper, we do not assume that the functions are differentiable at every point, and the subgradient plays the role of the gradient.
Definition 1: For a given convex function $f: \mathbb{R}^n \to \mathbb{R}$ and a point $x \in \mathbb{R}^n$, a subgradient of the function $f$ at $x$ is a vector $g \in \mathbb{R}^n$ such that the following subgradient inequality holds for any $y \in \mathbb{R}^n$:
$$f(y) \ge f(x) + g^{\mathrm{T}} (y - x).$$
Definition 2: The set of all subgradients of a convex function $f$ at $x$ is called the subdifferential of $f$ at $x$, and is denoted by $\partial f(x)$:
$$\partial f(x) = \big\{ g \in \mathbb{R}^n : f(y) \ge f(x) + g^{\mathrm{T}} (y - x) \ \text{for all } y \in \mathbb{R}^n \big\}.$$
We use $P_X[\cdot]$ to denote the projection of a vector onto a closed convex set $X$, i.e.,
$$P_X[x] = \arg\min_{y \in X} \|x - y\|.$$
In the subsequent development, the properties of the projection operation onto a closed convex set play an important role. In particular, we use the projection inequality, i.e., for any vector $x \in \mathbb{R}^n$,
$$\big(P_X[x] - x\big)^{\mathrm{T}} \big(y - P_X[x]\big) \ge 0 \quad \text{for all } y \in X. \qquad (1)$$
We also use the standard non-expansiveness property, i.e., for any $x \in \mathbb{R}^n$ and $y \in \mathbb{R}^n$,
$$\big\|P_X[x] - P_X[y]\big\| \le \|x - y\|. \qquad (2)$$
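The projection inequality (1) and the non-expansiveness property (2) can be checked numerically. The following sketch uses projection onto the Euclidean unit ball, a closed convex set whose projection has a simple closed form; the test points are arbitrary, and numpy is assumed.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto the closed ball {y : ||y|| <= radius}."""
    n = np.linalg.norm(x)
    return x if n <= radius else (radius / n) * x

x = np.array([3.0, -4.0, 1.0])    # a point outside the unit ball
y = np.array([0.1, -0.2, 0.05])   # a point inside the unit ball, so P[y] = y
px, py = project_ball(x), project_ball(y)

# Projection inequality (1): (P[x] - x)^T (y - P[x]) >= 0 for every y in X
assert (px - x) @ (y - px) >= -1e-12

# Non-expansiveness (2): ||P[x] - P[y]|| <= ||x - y||
assert np.linalg.norm(px - py) <= np.linalg.norm(x - y) + 1e-12
```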
In addition, we use the properties given in the following lemma.
Lemma 2.1: Let $X$ be a nonempty closed convex set in $\mathbb{R}^n$. Then, for any $x \in \mathbb{R}^n$, we have
(a) $\big(P_X[x] - x\big)^{\mathrm{T}} (x - y) \le -\big\|P_X[x] - x\big\|^2$, for all $y \in X$.
(b) $\big\|P_X[x] - y\big\|^2 \le \|x - y\|^2 - \big\|P_X[x] - x\big\|^2$, for all $y \in X$.
Assumption 2.1 (Interior Point): Let $X^i$, $i = 1, \dots, N$, be nonempty closed convex sets and let $X = \bigcap_{i=1}^{N} X^i$ be their intersection. There is a relative interior point of $X$, i.e., there exist a point $\bar{x} \in X$ and a scalar $\delta > 0$ such that $\{ z \in \mathrm{aff}(X) : \|z - \bar{x}\| \le \delta \} \subseteq X$.
Throughout this paper, the following notations are used. $\mathbb{R}^n$ denotes the $n$-dimensional real Euclidean space. Given a set $S$, we denote by $\mathrm{co}(S)$ its convex hull. We write $x^{\mathrm{T}}$ or $A^{\mathrm{T}}$ to denote the transpose of a vector $x$ or a matrix $A$. Denote $\mathbf{1} = (1, \dots, 1)^{\mathrm{T}}$ and $\mathbf{0} = (0, \dots, 0)^{\mathrm{T}}$. For a vector $x$, $\|x\|$ is the standard Euclidean norm. In this paper, the quantities (e.g., functions, scalars, and sets) associated with agent $i$ will be indexed by the superscript $i$.
3. Problem Formulation
Distributed Constrained Optimization Problem
In this paper, we are interested in solving a distributed constrained convex optimization problem over a multi-agent network. Specifically, we consider a network of $N$ agents, labeled by $i = 1, \dots, N$, each endowed with a local convex objective function and a local constraint set. The network objective function is given by
$$\min_{x \in X} f(x) = \sum_{i=1}^{N} f^i(x), \qquad X = \bigcap_{i=1}^{N} X^i, \qquad (3)$$
where $f^i: \mathbb{R}^n \to \mathbb{R}$ is the convex objective function of agent $i$, $X$ is the intersection of the local constraint sets, $X^i \subseteq \mathbb{R}^n$ is the compact and convex constraint set of agent $i$, and $x \in \mathbb{R}^n$ is a global decision vector. We assume that $f^i$ and $X^i$ are known only by agent $i$ and may differ across agents. Let $f^*$ denote the optimal value of (3) and let $x^*$ denote an optimizer of (3). We assume that the optimal value $f^*$ is finite. We also denote the optimal solution set by $X^*$, i.e., $X^* = \{x \in X : f(x) = f^*\}$. We will assume that $f^i$ is in general non-differentiable and that $X$ has at least one interior point, so that (3) has a finite optimal solution. The following assumptions and lemmas are needed in the analysis of the distributed optimization algorithm throughout this paper.
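To make the setup concrete, the following minimal sketch encodes a hypothetical instance of problem (3): a scalar decision variable, quadratic local objectives $f^i$, and box constraint sets $X^i$ (all data invented for illustration, not taken from the paper); numpy is assumed.

```python
import numpy as np

# Hypothetical problem data: agent i privately holds f^i(x) = (x - t_i)^2
# and a private box X^i = [lo_i, hi_i]; the network minimizes sum_i f^i over
# the intersection X of the boxes.
targets = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
boxes   = [(-1.0, 3.0), (0.0, 4.0), (-2.0, 2.5), (0.5, 5.0), (-3.0, 3.0)]

def f_i(i, x):                       # local objective of agent i
    return (x - targets[i]) ** 2

def f(x):                            # global objective: sum of local ones
    return sum(f_i(i, x) for i in range(len(targets)))

# The intersection X of the boxes is itself a box [lo, hi]
lo = max(b[0] for b in boxes)
hi = min(b[1] for b in boxes)

# The unconstrained minimizer of f is the mean of the targets; clip it into X
x_star = min(max(np.mean(targets), lo), hi)
```

For this instance the intersection is $[0.5, 2.5]$ and the optimizer is the mean of the private targets, $x^* = 2.0$, which happens to lie inside $X$.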
Assumption 3.1 (Weight-balanced): The digraph $\mathcal{G}(k)$ is weight-balanced if $\sum_{j=1}^{N} a_{ij}(k) = \sum_{j=1}^{N} a_{ji}(k)$, for all $i \in \mathcal{V}$ and all $k \ge 0$.
Assumption 3.2 (Periodical Strong Connectivity): There is a positive integer $B$ such that, for all $k_0 \ge 0$, the directed graph $\big(\mathcal{V}, \bigcup_{k=k_0}^{k_0+B-1} \mathcal{E}(k)\big)$ is strongly connected.
Assumption 3.3 (Subgradient Boundedness): There exists a closed and convex set $\mathcal{X}$ such that $X^i \subseteq \mathcal{X}$ for all $i$. The subgradient set of each $f^i$ over $\mathcal{X}$ is bounded, i.e., there exists a constant $G > 0$ such that $\|g\| \le G$ for all $g \in \partial f^i(x)$, all $x \in \mathcal{X}$, and all $i$.
Lemma 3.1 (Dynamic Average Consensus Algorithm) [30]: The following is a vector version of the first-order dynamic average consensus algorithm with input $u^i(k)$:
$$x^i(k+1) = \sum_{j=1}^{N} a_{ij}(k) \, x^j(k) + u^i(k), \qquad i = 1, \dots, N.$$
Denote $\bar{x}(k) = \frac{1}{N} \sum_{i=1}^{N} x^i(k)$. Suppose the weight sequences $\{a_{ij}(k)\}$ are row-stochastic, i.e., $\sum_{j=1}^{N} a_{ij}(k) = 1$, are non-degenerate, i.e., there exists a constant $\eta \in (0, 1)$ such that $a_{ii}(k) \ge \eta$ for all $i$ and $a_{ij}(k) \ge \eta$ whenever $a_{ij}(k) > 0$, and satisfy the periodical strong connectivity property of Assumption 3.2. Assume that $\lim_{k \to \infty} u^i(k) = 0$ for all $i$ and all initial states. Then $\lim_{k \to \infty} \|x^i(k) - \bar{x}(k)\| = 0$ for all $i$.
Proof: The proof can be completed by following arguments similar to those of Theorem 3.1 in [30].
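A recursion of the first-order dynamic average consensus type can be illustrated numerically. The sketch below makes simplifying assumptions not tied to the paper: a fixed complete graph with uniform doubly stochastic weights, and hypothetical reference signals $r^i(k)$ whose increments serve as the local inputs $u^i(k)$, so that the inputs vanish and the estimates track the average of the signal limits; numpy is assumed.

```python
import numpy as np

N = 4
W = np.full((N, N), 1.0 / N)          # uniform doubly stochastic weights (complete graph)
c = np.array([1.0, 2.0, 3.0, 4.0])    # limits of the hypothetical local reference signals

def r(k):
    """Local reference signals r_i(k) = c_i + 1/(k+1), so r(k) -> c."""
    return c + 1.0 / (k + 1)

x = r(0).copy()                        # initialize each estimate at the local signal
for k in range(500):
    # x_i(k+1) = sum_j w_ij x_j(k) + u_i(k), with u_i(k) = r_i(k+1) - r_i(k) -> 0
    x = W @ x + (r(k + 1) - r(k))

# Every estimate tracks the network-wide average of the limits, mean(c) = 2.5
```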
Consider the following distributed projected subgradient algorithm proposed in [3]: suppose $X$ is a closed and convex set, and let
$$x^i(k+1) = P_X\Big[\sum_{j=1}^{N} a_{ij}(k) \, x^j(k) - \alpha(k) \, d^i(k)\Big],$$
where $d^i(k) \in \partial f^i(x^i(k))$. Denote the average estimate $\bar{x}(k) = \frac{1}{N} \sum_{i=1}^{N} x^i(k)$. The following is a slight modification of Lemma 8 and its proof in [3].
Lemma 3.2: Let the weight-balanced Assumption 3.1 and the periodic strong connectivity Assumption 3.2 hold. Suppose $\{d^i(k)\}$ is uniformly bounded for each $i$, and $\lim_{k \to \infty} \alpha(k) = 0$. Then there exist $C > 0$ and $\beta \in (0, 1)$ such that
$$\|x^i(k) - \bar{x}(k)\| \le C \beta^k + C \sum_{l=0}^{k-1} \beta^{k-l} \alpha(l) \max_{j} \|d^j(l)\|.$$
Thus we have $\lim_{k \to \infty} \|x^i(k) - \bar{x}(k)\| = 0$.
4. Distributed Projected Subgradient Algorithm
In this section, we present a novel distributed projected subgradient algorithm to solve the optimization problem (3), followed by its convergence properties.
4.1. Distributed Projected Subgradient Algorithm
Consider a set of $N$ agents. Each agent $i$ chooses an arbitrary initial state $x^i(0) \in X^i$. Formally, each agent $i$ at any time $k \ge 0$ updates according to the following rule:
$$v^i(k) = x^i(k) + \varepsilon \sum_{j=1}^{N} a_{ij}(k) \big(x^j(k) - x^i(k)\big),$$
and generates $x^i(k+1)$ based on the following iterative procedure:
$$x^i(k+1) = P_{X^i}\big[v^i(k) - \alpha^i(k) \, d^i(k)\big], \qquad (4)$$
where the positive scalars $\alpha^i(k)$ are step-sizes, the scalars $a_{ij}(k)$ are non-negative weights, $P_{X^i}$ is the projector onto the set $X^i$, and $\varepsilon > 0$ is the networked control gain. The vector $d^i(k)$ is a subgradient of the agent objective function $f^i$ at $v^i(k)$. Note that we will use the natural shorthand $d^i(k)$ for an element of $\partial f^i(v^i(k))$.
Remark 4.1: Since $\sum_{j=1}^{N} a_{ij}(k) \big(x^j(k) - x^i(k)\big) = -[L(k) x(k)]_i$, it follows that
$$v^i(k) = x^i(k) - \varepsilon \, [L(k) x(k)]_i,$$
where $L(k)$ is the Laplacian matrix, which satisfies $L(k) \mathbf{1} = \mathbf{0}$.
Proof: Rewriting the second term on the right-hand side of the above formula in stacked form, we then have $v(k) = (I - \varepsilon L(k)) x(k)$. Since the graph is weight-balanced, $\mathbf{1}^{\mathrm{T}} L(k) = \mathbf{0}^{\mathrm{T}}$ holds in addition to $L(k) \mathbf{1} = \mathbf{0}$, so that $\mathbf{1}^{\mathrm{T}} v(k) = \mathbf{1}^{\mathrm{T}} x(k)$. Similarly, we obtain $\frac{1}{N} \sum_{i=1}^{N} v^i(k) = \frac{1}{N} \sum_{i=1}^{N} x^i(k)$, i.e., the consensus step preserves the network average.
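A consensus-plus-projected-subgradient update of this form can be sketched numerically. The instance below is entirely hypothetical (scalar states, quadratic local objectives, nonidentical box sets whose intersection is $[1.0, 2.5]$, a symmetric and hence weight-balanced ring graph, an assumed gain $\varepsilon = 0.2$, and step-sizes $\alpha(k) = 1/(k+1)$, which satisfy a diminishing step-size condition); it is an illustrative sketch, not the simulation of Section 5. numpy is assumed.

```python
import numpy as np

N = 5
targets = np.arange(N, dtype=float)        # agent i's private data: f^i(x) = (x - i)^2
lo = np.array([0.0, 0.5, -1.0, 1.0, 0.0])  # per-agent box sets X^i = [lo_i, hi_i]
hi = np.array([3.0, 2.5, 4.0, 3.5, 2.6])   # their intersection is [1.0, 2.5]
eps = 0.2                                  # networked control gain (assumed value)

# Weight-balanced (here symmetric) ring graph: each agent has two neighbours
A = np.zeros((N, N))
for i in range(N):
    A[i, (i - 1) % N] = A[i, (i + 1) % N] = 1.0

x = np.zeros(N)
for k in range(20000):
    alpha = 1.0 / (k + 1)                      # diminishing step-sizes
    v = x + eps * (A @ x - A.sum(axis=1) * x)  # consensus: x_i + eps*sum_j a_ij (x_j - x_i)
    g = 2.0 * (v - targets)                    # subgradient of f^i at the mixed state
    x = np.clip(v - alpha * g, lo, hi)         # projected subgradient step onto X^i

# The minimizer of sum_i (x - i)^2 over the intersection [1.0, 2.5] is x* = 2.0,
# and all agents' estimates settle near it while respecting their own boxes.
```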
Assumption 4.1 (Step-size assumption): The step-size sequences $\{\alpha^i(k)\}$, $i = 1, \dots, N$, satisfy $\sum_{k=0}^{\infty} \alpha^i(k) = \infty$ and $\sum_{k=0}^{\infty} \big(\alpha^i(k)\big)^2 < \infty$.
In the following, we study the convergence behavior of the subgradient algorithm, where the optimal solution and the optimal value are asymptotically agreed upon.
Theorem 4.1 (Convergence properties of the distributed projected subgradient algorithm): Consider problem (3). Let the weight-balanced Assumption 3.1 and the periodic strong connectivity Assumption 3.2 hold. Consider the sequences $\{x^i(k)\}$ and $\{v^i(k)\}$ of the distributed projected subgradient algorithm, where the step-size sequences satisfy Assumption 4.1. Then there exists an optimal solution $x^* \in X^*$ such that $\lim_{k \to \infty} x^i(k) = x^*$ for all $i$. Furthermore, we have $\lim_{k \to \infty} f(x^i(k)) = f^*$ for all $i$.
Remark 4.2: Our distributed subgradient algorithm extends the distributed projected subgradient algorithm in [3] to multi-agent convex optimization problems with local constraint sets in a more general way. We do not need to split the convergence analysis into the case of identical closed convex sets and the case of uniform weights with nonidentical closed convex sets. Furthermore, unlike other subgradient algorithms, e.g., [33-36], our distributed projected subgradient algorithm allows varying step-sizes, which makes its range of application considerably wider.
4.2. Convergence Analysis
In the following, we prove the convergence of the distributed projected subgradient algorithm. First, we rewrite the algorithm in the following form:
$$x^i(k+1) = x^i(k) + \varepsilon \sum_{j=1}^{N} a_{ij}(k) \big(x^j(k) - x^i(k)\big) + u^i(k),$$
where the projection error is described by $e^i(k) = P_{X^i}\big[v^i(k) - \alpha^i(k) d^i(k)\big] - \big(v^i(k) - \alpha^i(k) d^i(k)\big)$, and $u^i(k) = e^i(k) - \alpha^i(k) d^i(k)$ is the local input, which allows agent $i$ to track the variation of the local objective function $f^i$. In this way, the update rule of each estimate is decomposed into two parts: a convex combination that fuses the information of each agent with that of its neighbors, plus a local error or input term. With this decomposition, all the update laws take the same form as the dynamic average consensus algorithm, e.g., [30]. This observation allows us to divide the analysis of the distributed projected subgradient algorithm into two steps. First, we show that all the estimates asymptotically reach consensus, exploiting the fact that the local errors and inputs are diminishing. Second, we further show that the consensus vector coincides with an optimal solution and that the function values converge to the optimal value.
Lemma 4.1 (Convergence properties of weighted sequences): Let $0 < \beta < 1$. Consider the sequences $\{z(k)\}$ defined by $z(k) = \sum_{l=0}^{k} \beta^{k-l} \gamma(l)$, where $\gamma(l) \ge 0$ for all $l \ge 0$.
(a) If $\lim_{l \to \infty} \gamma(l) = 0$, then $\lim_{k \to \infty} z(k) = 0$.
(b) If $\sum_{l=0}^{\infty} \gamma(l) < \infty$, then $\sum_{k=0}^{\infty} z(k) < \infty$.
The proof of Lemma 4.1 follows from Lemma 5.1 in [22].
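Weighted-sequence properties of this kind can be checked numerically. The sketch below assumes the standard convolution form $z(k) = \sum_{l \le k} \beta^{k-l} \gamma(l)$ with $\beta \in (0,1)$ and a summable, vanishing sequence $\gamma$ (both choices are illustrative); numpy is assumed.

```python
import numpy as np

beta = 0.5
K = 200
gamma = 1.0 / (np.arange(K) + 1.0) ** 2   # summable, so also gamma(k) -> 0

# z(k) = sum_{l=0}^{k} beta^(k-l) * gamma(l), computed by the recursion
# z(k) = beta * z(k-1) + gamma(k)
z = np.zeros(K)
acc = 0.0
for k in range(K):
    acc = beta * acc + gamma[k]
    z[k] = acc

# (a) gamma(k) -> 0 forces z(k) -> 0 (the tail of z tracks the tail of gamma)
assert z[-1] < 1e-4
# (b) summable gamma gives summable z; indeed sum z <= (sum gamma) / (1 - beta),
# by exchanging the order of summation over k and l
assert z.sum() <= gamma.sum() / (1 - beta) + 1e-9
```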
The following lemma provides a basic iteration relation used in the convergence proof of the distributed projected subgradient algorithm.
Lemma 4.2 (Basic iteration relation): Let the weight-balanced Assumption 3.1 and the periodic strong connectivity Assumption 3.2 hold. For any $y \in X$ and all $k \ge 0$, the following estimate holds (with $G$ the subgradient bound of Assumption 3.3):
$$\sum_{i=1}^{N} \|x^i(k+1) - y\|^2 \le \sum_{i=1}^{N} \|x^i(k) - y\|^2 - 2 \sum_{i=1}^{N} \alpha^i(k) \big(f^i(v^i(k)) - f^i(y)\big) + G^2 \sum_{i=1}^{N} \big(\alpha^i(k)\big)^2 - \sum_{i=1}^{N} \|e^i(k)\|^2. \qquad (5)$$
Proof: By Lemma 2.1(b), applied with the point $v^i(k) - \alpha^i(k) d^i(k)$ and the set $X^i$, we can deduce that, for any $y \in X$ and all $k \ge 0$, the following estimate holds:
$$\|x^i(k+1) - y\|^2 \le \|v^i(k) - y\|^2 - 2 \alpha^i(k) \, d^i(k)^{\mathrm{T}} \big(v^i(k) - y\big) + \big(\alpha^i(k)\big)^2 \|d^i(k)\|^2 - \|e^i(k)\|^2.$$
Summing the preceding relation over $i = 1, \dots, N$, and using the fact that the weight-balanced consensus step satisfies $\sum_{i=1}^{N} \|v^i(k) - y\|^2 \le \sum_{i=1}^{N} \|x^i(k) - y\|^2$, we can derive
$$\sum_{i=1}^{N} \|x^i(k+1) - y\|^2 \le \sum_{i=1}^{N} \|x^i(k) - y\|^2 - 2 \sum_{i=1}^{N} \alpha^i(k) \, d^i(k)^{\mathrm{T}} \big(v^i(k) - y\big) + G^2 \sum_{i=1}^{N} \big(\alpha^i(k)\big)^2 - \sum_{i=1}^{N} \|e^i(k)\|^2.$$
One can show (5) by substituting the following subgradient inequality into the above formula:
$$d^i(k)^{\mathrm{T}} \big(v^i(k) - y\big) \ge f^i(v^i(k)) - f^i(y).$$
The following lemma shows that the consensus is asymptotically reached.
Lemma 4.3 (Achieving consensus): Let the weight-balanced Assumption 3.1 and the periodic strong connectivity Assumption 3.2 hold. Consider the sequences $\{x^i(k)\}$ and $\{v^i(k)\}$ of the distributed projected subgradient algorithm, where the step-size sequences satisfy Assumption 4.1. Then there exists $\tilde{x} \in X$ such that $\lim_{k \to \infty} x^i(k) = \tilde{x}$ for all $i$, and $\lim_{k \to \infty} v^i(k) = \tilde{x}$ for all $i$.
Proof: From Lemma 4.2, it follows that, for any $y \in X$,
$$\sum_{i=1}^{N} \|x^i(k+1) - y\|^2 \le \sum_{i=1}^{N} \|x^i(k) - y\|^2 - 2 \sum_{i=1}^{N} \alpha^i(k) \big(f^i(v^i(k)) - f^i(y)\big) + G^2 \sum_{i=1}^{N} \big(\alpha^i(k)\big)^2 - \sum_{i=1}^{N} \|e^i(k)\|^2.$$
Due to the compactness of the sets $X^i$, we can show that $\|v^i(k) - y\|$ is uniformly bounded. Then it follows from the subgradient boundedness Assumption 3.3 that $|f^i(v^i(k)) - f^i(y)| \le G \|v^i(k) - y\|$. We have
$$\sum_{i=1}^{N} \|x^i(k+1) - y\|^2 \le \sum_{i=1}^{N} \|x^i(k) - y\|^2 + 2 G \sum_{i=1}^{N} \alpha^i(k) \|v^i(k) - y\| + G^2 \sum_{i=1}^{N} \big(\alpha^i(k)\big)^2 - \sum_{i=1}^{N} \|e^i(k)\|^2. \qquad (6)$$
Notice that $x^i(k)$ and $v^i(k)$ are bounded. Since $\sum_{k=0}^{\infty} \alpha^i(k) \|v^i(k) - y\| $ and $\sum_{k=0}^{\infty} (\alpha^i(k))^2$ are finite under Assumption 4.1, taking limits on both sides of (6), one can see that for any $y \in X$,
$$\sum_{k=0}^{\infty} \sum_{i=1}^{N} \|e^i(k)\|^2 < \infty, \quad \text{so that} \quad \lim_{k \to \infty} e^i(k) = 0,$$
and thus $\lim_{k \to \infty} \sum_{i=1}^{N} \|x^i(k) - y\|^2$ exists for any $y \in X$.
On the other hand, taking the limits of both sides of (4) as $k \to \infty$, we have $\lim_{k \to \infty} \alpha^i(k) d^i(k) = 0$, and therefore we deduce that the local inputs satisfy $\lim_{k \to \infty} u^i(k) = 0$ for all $i$. It follows from Lemma 3.1 that $\lim_{k \to \infty} \|x^i(k) - \bar{x}(k)\| = 0$ for all $i$. Combining this with the property that $\lim_{k \to \infty} \sum_{i=1}^{N} \|x^i(k) - y\|^2$ exists for any $y \in X$, we deduce that there exists $\tilde{x}$ such that $\lim_{k \to \infty} x^i(k) = \tilde{x}$ for all $i$. Since $x^i(k) \in X^i$ and each $X^i$ is closed, it follows that $\tilde{x} \in X^i$ for all $i$, and thus $\tilde{x} \in X$. Since $\lim_{k \to \infty} \|v^i(k) - x^i(k)\| = 0$, we then have $\lim_{k \to \infty} v^i(k) = \tilde{x}$ for all $i$.
From Lemma 4.3, we can conclude that the sequences of the distributed projected algorithm asymptotically agree on some point $\tilde{x} \in X$. We further denote by $\bar{x}(k) = \frac{1}{N} \sum_{i=1}^{N} x^i(k)$ the average of the agents' state estimates. The following lemma shows that $\tilde{x}$ is an optimal solution.
Lemma 4.4 (Optimal Solution): The consensus point $\tilde{x}$ is an optimal solution of the objective function $f$ over $X$, i.e., $\tilde{x} \in X^*$.
Proof: Denote the maximum deviation $\Delta(k) = \max_i \|v^i(k) - \tilde{x}\|$; by Lemma 4.3, $\lim_{k \to \infty} \Delta(k) = 0$. We will prove this lemma by contradiction. Suppose that $\tilde{x}$ is not an optimal solution of $f$ over $X$. Then the following inequality holds:
$$f(\tilde{x}) - f^* > 0. \qquad (7)$$
Then, there exist $y^* \in X^*$ and $\delta > 0$ such that $f(\tilde{x}) - f(y^*) \ge \delta$. Consider the sequences $\{v^i(k)\}$, which converge to $\tilde{x}$. Then, it follows from Lemma 4.2 with $y = y^*$ that
$$\sum_{i=1}^{N} \|x^i(k+1) - y^*\|^2 \le \sum_{i=1}^{N} \|x^i(k) - y^*\|^2 - 2 \sum_{i=1}^{N} \alpha^i(k) \big(f^i(v^i(k)) - f^i(y^*)\big) + G^2 \sum_{i=1}^{N} \big(\alpha^i(k)\big)^2,$$
where
$$\sum_{i=1}^{N} \big(f^i(v^i(k)) - f^i(y^*)\big) = \big(f(\tilde{x}) - f(y^*)\big) + \sum_{i=1}^{N} \big(f^i(v^i(k)) - f^i(\tilde{x})\big).$$
Further, from the subgradient boundedness Assumption 3.3,
$$\Big| \sum_{i=1}^{N} \big(f^i(v^i(k)) - f^i(\tilde{x})\big) \Big| \le N G \, \Delta(k).$$
Since $\lim_{k \to \infty} \Delta(k) = 0$ and $\lim_{k \to \infty} \alpha^i(k) = 0$, the perturbation terms converge to zero as $k \to \infty$. Then there exists $K \ge 0$ such that for all $k \ge K$, it holds that
$$\sum_{i=1}^{N} \|x^i(k+1) - y^*\|^2 \le \sum_{i=1}^{N} \|x^i(k) - y^*\|^2 - \delta \min_i \alpha^i(k).$$
Following a recursive argument, we obtain for all $k \ge K$
$$\sum_{i=1}^{N} \|x^i(k+1) - y^*\|^2 \le \sum_{i=1}^{N} \|x^i(K) - y^*\|^2 - \delta \sum_{l=K}^{k} \min_i \alpha^i(l). \qquad (8)$$
Recalling that $\sum_{k=0}^{\infty} \alpha^i(k) = \infty$ and that $x^i(k)$ and $y^*$ are bounded, (8) yields a contradiction as $k \to \infty$. That is to say, inequality (7) cannot hold. Therefore, we have the desired result.
Lemma 4.5: It holds that $\lim_{k \to \infty} f(x^i(k)) = f^*$ for all $i$.
Proof: Since the graph is weight-balanced and the consensus step preserves the network average, the following holds for any $k \ge 0$:
$$\bar{x}(k+1) = \bar{x}(k) + \frac{1}{N} \sum_{i=1}^{N} u^i(k),$$
where $\bar{x}(k) = \frac{1}{N} \sum_{i=1}^{N} x^i(k)$ and $u^i(k)$ is the local input of agent $i$. The following can be proven by induction on $k$ for a fixed $s \le k$:
$$\bar{x}(k) = \bar{x}(s) + \frac{1}{N} \sum_{l=s}^{k-1} \sum_{i=1}^{N} u^i(l). \qquad (9)$$
Let $s = 0$ in (9) and recall that the initial states satisfy $x^i(0) \in X^i$ for all $i$. Then we obtain
$$\bar{x}(k) = \bar{x}(0) + \frac{1}{N} \sum_{l=0}^{k-1} \sum_{i=1}^{N} u^i(l). \qquad (10)$$
From (10) and the convergence of the estimates established in Lemma 4.3, we can obtain
$$\lim_{k \to \infty} \bar{x}(k) = \tilde{x}, \qquad (11)$$
where $\tilde{x}$ is the consensus point of Lemma 4.3. Combining (11) with Lemma 4.4 and the continuity of $f$ gives the desired result. Based on the above five lemmas, we complete the proof of Theorem 4.1.
5. Numerical Example
In this section, we study a simple numerical example to illustrate the effectiveness of the proposed distributed projected subgradient algorithm. We consider a network with five agents, whose objective functions are formulated as follows:
where $x$ is the global decision vector. The optimization problem can be described as:
(12)
where local constraint sets are denoted by
We solve problem (12) by employing the distributed projected subgradient algorithm (4) with step-sizes $\alpha^i(k)$ chosen to satisfy Assumption 4.1. Figs. 1 to 3 show the simulation results of the distributed projected subgradient algorithm (4). Fig. 1 shows that the local inputs tend to zero as the agents achieve consensus. It can be seen from Fig. 2 that all the agents asymptotically reach the optimal solution as the iterations proceed. We can observe from Fig. 3 that all the agents asymptotically achieve the optimal value.
6. Conclusion and Future Work
In this paper, we formulated a distributed optimization problem with both local objective functions and local constraint sets private to each agent. We then proposed a novel distributed projected subgradient algorithm for the constrained optimization problem, together with a convergence analysis. The algorithm was shown to asymptotically converge to an optimal solution and the optimal value. A numerical example was presented to demonstrate the performance of our algorithm. Future work will focus on problems with local objective functions together with local equality, inequality, and set constraints. We will also pay attention to the convergence rates of the algorithms in this paper.
Acknowledgements
This work described in this paper was supported in part by the Visiting Scholarship of State Key Laboratory of Power Transmission Equipment & System Security and New Technology (Chongqing University) under Grant 2007DA10512716421, in part by the Fundamental Research Funds for the Central Universities under Grant XDJK2016B016, in part by the Natural Science Foundation Project of Chongqing CSTC under Grants cstc2014jcyjA40016 and cstc2014jcyjA40041, in part by the China Postdoctoral Science Foundation under Grant 2016M590852, in part by the Natural Science Foundation of China under Grant 61403314, in part by the project supported by graduate scientific research and innovation foundation of Chongqing China under Grant CYS16069, and in part by the Scientific and Technological Research Program of Chongqing Municipal Education Commission under KJ1501401.
References