Distributed Subgradient Algorithm for Multi-agent Convex Optimization with Local Constraint Sets

Abstract: This paper considers a distributed constrained optimization problem in which the objective function is the sum of the local objective functions of the nodes of a network, and the estimate of each agent is restricted to its own convex set. To solve this problem, which is not necessarily smooth, we study a novel distributed projected subgradient algorithm for multi-agent optimization with nonidentical constraint sets and switching topologies. Under the algorithm, each agent minimizes its own objective function while exchanging information locally with other agents over a network whose topology is time-varying but satisfies a standard connectivity property. Under the assumption that the network is weight-balanced, the proposed distributed subgradient algorithm is proven to converge. In particular, we allow the agents to use nonuniform step-sizes, in contrast to previous work on multi-agent optimization that makes the worst-case assumption of a common constant step-size.


Introduction
In recent years, multi-agent systems and distributed algorithms have received considerable research attention due to their wide applications in many engineering systems and large-scale networks, including resource allocation in computer networks [16][17][18], distributed estimation in sensor networks [19], the distributed finite-time optimal rendezvous problem [21], and distributed demand response control in smart grids [22]. In many networked systems (see e.g. [22][23][24][25][26][27][28]), the agents are required to solve a distributed convex optimization problem in which the global objective function is the sum of local objective functions, each of which cannot be known by, or shared with, the other agents.
Distributed optimization of a sum of convex functions has received a surge of interest in recent years. Nedić and Ozdaglar [1] presented an analysis of the consensus-based subgradient method for solving the distributed convex optimization problem. A projection-based distributed algorithm was developed by Nedić et al. [3], where each agent is constrained to an individual closed convex set; convergence was analyzed for identical closed convex sets and for uniform weights with nonidentical sets. Further distributed algorithms for set-constrained optimization were investigated by Bianchi and Jakubowicz [7] and Lou et al. [10]. To handle distributed optimization problems with asynchronous step-sizes or inequality-equality constraints, distributed Lagrangian primal-dual subgradient algorithms and penalty primal-dual subgradient algorithms were presented in Zhu et al. [6] and Towfic and Sayed [12], both designed for function-constrained problems. Meanwhile, dual decomposition was applied to separable problems with affine constraints in [4,9]. Recent works [15][16][17][18] have likewise focused on inequality-equality constraints. Zhu et al. [22] proposed a distributed Lagrangian primal-dual subgradient method based on characterizing the primal-dual optimal solutions as the saddle points of the Lagrangian function associated with the problem. Yuan et al. [25] studied a variant of the distributed primal-dual subgradient method in which a multi-step consensus algorithm was employed to simplify the implementation and convergence analysis. To solve the multi-agent optimization problem with more general inequality constraints that couple all the agents' optimization variables, Chang et al. [27] proposed a novel distributed primal-dual perturbed subgradient method and established its convergence.
The implementation of the aforementioned methods in general involves projection steps onto primal and dual constraint sets, respectively.
Our work is inspired by [1,3,33]. In Nedić and Ozdaglar [1], a multi-agent unconstrained convex optimization problem was solved through a novel combination of average consensus algorithms with subgradient methods. Then, Nedić et al. [3] assumed that each agent is constrained to remain in a closed convex set and gave the corresponding convergence analysis for identical closed convex sets and for uniform weights with nonidentical closed convex sets. Furthermore, [22] solved a multi-agent convex optimization problem in which the agents are subject to a global inequality constraint, a global equality constraint and a global constraint set. To handle these constraints, Zhu et al. [22] presented two distributed projection algorithms under the assumptions that the network topology is strongly connected over each time interval of a certain bounded length and that the adjacency matrices are doubly stochastic and non-degenerate.
Contributions: Inspired by the previous studies, this paper proposes a novel distributed subgradient algorithm for multi-agent convex optimization with local constraint sets. Previous work restricts the applicability of distributed algorithms in multi-agent networks; for example, the edge-weight matrices of the graphs are typically required to be doubly stochastic. In contrast, our method does not assume that the adjacency matrices are doubly stochastic; we only require the network to be weight-balanced, which makes the algorithm more practical. More precisely, the contribution of this paper is twofold. First, under the conditions that each agent is restricted to a different convex set and the digraph is weight-balanced, we introduce a novel distributed projected subgradient algorithm with nonuniform step-sizes. Second, we prove the convergence of the algorithm and show that it reaches an optimal point of the sum of the agents' local objective functions while satisfying the local constraint sets.
The remainder of the paper is organized as follows. Some basic preliminaries and concepts are given in Section 2. In Section 3, we present the problem formulation and some preliminaries on distributed subgradient algorithms. In Section 4, we introduce the distributed projected subgradient algorithm with some supporting lemmas and give a convergence analysis. The properties of the algorithm are then illustrated by a numerical example in Section 5. Finally, we conclude the paper with a discussion of future work in Section 6.

Preliminaries and Concepts
In this section, we review some related concepts from algebraic graph theory and convex analysis, recall properties of the projection operation onto a closed convex set, and introduce some useful lemmas (referring to [28,29,31]).

Algebraic Graph Theory
We use a graph to describe the information exchange between the agents. The interaction topology of information exchange between $N$ agents is commonly described by a weighted directed graph $G = (V, E, A)$, where $V = \{1, 2, \ldots, N\}$ is the set of vertices representing the $N$ agents and $E \subseteq V \times V$ is the set of edges of the graph. It is assumed that the graph is simple, i.e., there are no repeated edges or self-loops. The weighted adjacency matrix of $G$ is denoted by $A = [a_{ij}]$. A directed graph is strongly connected if for any two distinct nodes $j$ and $i$ in $V$ there always exists a directed path from node $j$ to node $i$. A graph is called in-degree (or out-degree) balanced if the in-degrees (or out-degrees) of all nodes in the directed graph are equal. A directed graph with $N$ nodes is called a directed tree if it contains $N - 1$ edges and there exists a root node with directed paths to every other node. A directed spanning tree of a directed graph is a directed tree that contains all the nodes of the network.
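The distinction between a doubly stochastic adjacency matrix and a merely weight-balanced one is easy to check numerically. The following Python sketch uses a hypothetical 4-node directed cycle (an illustrative graph, not one from the paper) that is weight-balanced but not doubly stochastic:

```python
import numpy as np

def is_weight_balanced(A, tol=1e-9):
    """Weight-balanced: at every node, the total in-weight equals the
    total out-weight (i-th column sum equals i-th row sum)."""
    return np.allclose(A.sum(axis=0), A.sum(axis=1), atol=tol)

def is_doubly_stochastic(A, tol=1e-9):
    """The stronger condition assumed in much prior work: every row
    and every column of the matrix sums to one."""
    return (np.allclose(A.sum(axis=0), 1.0, atol=tol)
            and np.allclose(A.sum(axis=1), 1.0, atol=tol))

# Hypothetical 4-node directed cycle with uniform edge weight 0.5:
# each node has in-weight 0.5 and out-weight 0.5, so the graph is
# weight-balanced, yet no row sums to one.
N = 4
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = 0.5

assert is_weight_balanced(A)
assert not is_doubly_stochastic(A)
```

This is exactly the gap the weight-balanced assumption exploits: every doubly stochastic matrix is weight-balanced, but not conversely.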

Basic Notations and Concepts
In this paper, we do not assume the functions to be differentiable. A vector $d \in R^n$ is a subgradient of a convex function $F$ at $x \in R^n$ if $F(y) \ge F(x) + d^T (y - x)$ for all $y \in R^n$. The set of all subgradients of $F$ at $x$ is called the subdifferential of $F$ at $x$ and is denoted by $\partial F(x)$. We use $P_X[x]$ to denote the projection of a vector $x$ onto a closed convex set $X$, i.e.,
$$P_X[x] = \arg\min_{z \in X} \| z - x \|.$$
In the subsequent development, the properties of the projection operation onto a closed convex set play an important role. In particular, we use the projection inequality, i.e., for any vector $x \in R^n$ and any $z \in X$,
$$(P_X[x] - x)^T (z - P_X[x]) \ge 0.$$
We also use the standard non-expansiveness property, i.e., for any $x$ and $y$,
$$\| P_X[x] - P_X[y] \| \le \| x - y \|.$$
In addition, we use the property given in the following lemma.
Lemma 2.1: Let $X$ be a nonempty closed convex set in $R^n$. Then, for any $x \in R^n$ and any $y \in X$,
$$\| P_X[x] - y \|^2 \le \| x - y \|^2 - \| P_X[x] - x \|^2.$$
Throughout this paper, the following notation is used: $R^n$ denotes the $n$-dimensional real vector space. Given a set $S$, we denote by $\mathrm{co}(S)$ its convex hull. We write $x^T$ or $A^T$ to denote the transpose of a vector $x$ or a matrix $A$. $\| \cdot \|$ denotes the standard Euclidean norm. The quantities (e.g., functions, scalars and sets) associated with agent $i$ are indexed by the superscript $[i]$.
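The projection inequality and the non-expansiveness property are easy to verify numerically for a set with a closed-form projection. The sketch below uses a box constraint purely for illustration; any closed convex set with a computable projection would serve equally well:

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection P_X[x] onto the box X = [lo, hi]^n,
    a closed convex set whose projection is componentwise clipping."""
    return np.clip(x, lo, hi)

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)
px, py = project_box(x, -1.0, 1.0), project_box(y, -1.0, 1.0)

# Non-expansiveness: ||P_X[x] - P_X[y]|| <= ||x - y||.
assert np.linalg.norm(px - py) <= np.linalg.norm(x - y) + 1e-12

# Projection inequality: (P_X[x] - x)^T (z - P_X[x]) >= 0 for any z in X.
z = project_box(rng.normal(size=3), -1.0, 1.0)  # an arbitrary point of X
assert (px - x) @ (z - px) >= -1e-12
```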

Distributed Constrained Optimization Problem
In this paper, we are interested in solving a distributed constrained convex optimization problem over a multi-agent network. Specifically, we consider a network of agents labeled by $V = \{1, 2, \ldots, N\}$, each endowed with a local convex objective function $f^{[i]}$ and a local constraint set $X^{[i]}$. The network objective is given by
$$\min_x \; f(x) = \sum_{i=1}^{N} f^{[i]}(x) \quad \text{subject to} \quad x \in X := \bigcap_{i=1}^{N} X^{[i]}. \qquad (3)$$
Let $p^*$ denote the optimal value of (3) and let $x^*$ denote an optimizer of (3). We assume that the optimal value $p^*$ is finite. We also denote the optimal solution set by $X^* = \{ x \in X : f(x) = p^* \}$. We assume that, in general, $f$ is non-differentiable and that $X$ has at least one interior point, so that (3) has a finite optimal solution. The following assumptions and lemmas are needed in the analysis of the distributed optimization algorithm throughout this paper.
Assumption 3.2 (Periodic Strong Connectivity): There is a positive integer $B$ such that, for all $k_0 \ge 0$, the directed graph with node set $V$ and edge set $\bigcup_{k=k_0}^{k_0+B-1} E(k)$ is strongly connected. Proof: One can complete the proof by following arguments similar to those of Theorem 3.1 in [30].
Consider the following distributed projected subgradient algorithm proposed in [3]:
$$x^{[i]}(k+1) = P_{X^{[i]}}\Big[ v^{[i]}(k) - \alpha(k)\, d^{[i]}(k) \Big], \qquad v^{[i]}(k) = \sum_{j=1}^{N} a_{ij}(k)\, x^{[j]}(k),$$
where $d^{[i]}(k)$ is a subgradient of $f^{[i]}$ at $v^{[i]}(k)$ and $\alpha(k)$ is the step-size. The following is a slight modification of Lemma 8 and its proof in [3]. Lemma 3.2: Let the weight-balanced Assumption 3.1 and the periodic strong connectivity Assumption 3.2 hold.
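A minimal Python sketch of one synchronous round of this type of update, i.e., consensus mixing followed by a projected subgradient step. The two-agent instance at the bottom (absolute-value objectives, interval constraints, step-size 0.1) is illustrative only, not taken from the paper:

```python
import numpy as np

def projected_subgradient_step(X, A, subgrads, projections, alpha):
    """One synchronous round of the projected subgradient update:
    v_i = sum_j a_ij x_j (mixing), then x_i <- P_{X_i}[v_i - alpha_i d_i],
    where d_i is a subgradient of f_i at v_i.  X is an (N, n) array of
    stacked estimates; subgrads and projections are per-agent callables;
    alpha holds the (possibly different) per-agent step-sizes."""
    V = A @ X                                  # consensus (mixing) step
    X_next = np.empty_like(X)
    for i in range(X.shape[0]):
        d = subgrads[i](V[i])                  # subgradient of f_i at v_i
        X_next[i] = projections[i](V[i] - alpha[i] * d)
    return X_next

# Illustrative two-agent instance: f_1(x) = |x - 1|, f_2(x) = |x + 1|,
# both agents constrained to the interval [-2, 2].
subgrads = [lambda v: np.sign(v - 1.0), lambda v: np.sign(v + 1.0)]
projections = [lambda z: np.clip(z, -2.0, 2.0)] * 2
A = np.array([[0.5, 0.5], [0.5, 0.5]])
X = np.array([[2.0], [-2.0]])
X = projected_subgradient_step(X, A, subgrads, projections, [0.1, 0.1])
```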

Distributed Projected Subgradient Algorithm
In this section, we present a novel distributed projected subgradient algorithm to solve the optimization problem (3), followed by its convergence properties.

Distributed Projected Subgradient Algorithm
Proof: The result follows by modifying the second term on the right-hand side of the above formula.
In the following, we study the convergence behavior of the subgradient algorithm, under which the optimal solution and the optimal value are asymptotically agreed upon.
Remark 4.2: Our distributed subgradient algorithm extends the distributed projected subgradient algorithm in [3] to multi-agent convex optimization problems with local constraint sets in a more general way. We need not split the convergence analysis into the case of identical closed convex sets and the case of uniform weights with nonidentical closed convex sets. Furthermore, unlike other subgradient algorithms, e.g., [33][34][35][36], our distributed projected subgradient algorithm allows nonuniform step-sizes, which makes its application considerably more extensive.

Convergence Analysis
In the following, we prove the convergence properties of the distributed projected subgradient algorithm. First, we rewrite our algorithm in the following form:
$$x^{[i]}(k+1) = \sum_{j=1}^{N} a_{ij}(k)\, x^{[j]}(k) + u^{[i]}(k),$$
where $u^{[i]}(k)$ is the local input which allows agent $i$ to track the variation of the local objective function $f^{[i]}$. In this way, the update rule of each estimate is decomposed into two parts: a convex combination fusing the information of each agent with that of its neighbors, plus a local error or input. With this decomposition, all the update laws take the same form as the dynamic average consensus algorithm, e.g., [30]. This observation allows us to divide the analysis of the distributed projected subgradient algorithm into two steps. First, we show that all the estimates asymptotically reach consensus, using the fact that the local errors and inputs are diminishing. Second, we show that the consensus vectors coincide with the optimal solutions and the optimal value. For the proof of Lemma 4.1, we refer the reader to Lemma 5.1 in [22].
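The diminishing behavior of the local inputs can be sketched numerically. The three-agent instance below (uniform mixing matrix, absolute-value objectives, box projection, diminishing step-size 1/k) is a hypothetical stand-in, not the paper's example; it only illustrates that the local inputs vanish as the step-sizes decrease:

```python
import numpy as np

# Sketch of the decomposition x_i(k+1) = sum_j a_ij x_j(k) + u_i(k):
# the local input u_i collects the step-size term and the projection
# error, and it diminishes as the step-sizes alpha(k) -> 0.
np.random.seed(1)
N = 3
A = np.full((N, N), 1.0 / N)        # assumed weight-balanced mixing matrix
c = np.array([-1.0, 0.0, 1.0])      # hypothetical objectives f_i(x) = |x - c_i|
x = np.random.randn(N)

norms = []
for k in range(1, 200):
    alpha = 1.0 / k                 # diminishing step-size
    v = A @ x                       # consensus part of the update
    x_next = np.clip(v - alpha * np.sign(v - c), -2.0, 2.0)
    u = x_next - v                  # local input of each agent
    norms.append(np.abs(u).max())
    x = x_next

assert norms[-1] < norms[0]         # the local inputs diminish
```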

Numerical Example
In this section, we study a simple numerical example to illustrate the effectiveness of the proposed distributed projected subgradient algorithm. We consider a network of five agents whose local objective functions $f^{[i]}$ are defined through a function $h^{[i]}$ that is a positive linear function of $i$; the optimization problem takes the form of (3) with $N = 5$. Figs. 1 to 3 show the simulation results of the distributed projected subgradient algorithm (4). Fig. 1 shows that the local input $u^{[i]}$ tends to 0 as the agents reach consensus. It can be seen from Fig. 2 that all the agents asymptotically reach the optimal solution within 300 iterations. We observe from Fig. 3 that all the agents asymptotically reach the optimal value.
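Since the paper's exact objective functions are not reproduced here, the following sketch uses hypothetical quadratic objectives and box constraint sets merely to illustrate the qualitative behavior reported in Figs. 1 to 3: five agents reaching consensus on the constrained optimizer within a few hundred iterations.

```python
import numpy as np

# Hypothetical five-agent instance: f_i(x) = (x - b_i)^2 with b_i = i and
# local set X_i = [i - 4, i + 4], so the intersection of the X_i is [1, 5]
# and the optimizer of sum_i f_i over it is x* = mean(b) = 3.
N = 5
b = np.arange(1.0, 6.0)
lo, hi = b - 4.0, b + 4.0                    # local constraint sets X_i
A = np.full((N, N), 1.0 / N)                 # weight-balanced mixing matrix
x = np.zeros(N)

for k in range(1, 301):                      # 300 iterations, as in Fig. 2
    alpha = 1.0 / k                          # diminishing step-size
    v = A @ x                                # consensus (mixing) step
    grad = 2.0 * (v - b)                     # gradient of f_i at v_i
    x = np.clip(v - alpha * grad, lo, hi)    # projected subgradient step

assert np.allclose(x, 3.0, atol=0.05)        # consensus on the optimizer
```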

Conclusion and Future Work
In this paper, we formulated a distributed optimization problem with both local objective functions and local constraint sets private to each agent. We then proposed a novel distributed projected subgradient algorithm for the constrained optimization problem, together with a convergence analysis. The algorithm was shown to asymptotically converge to the optimal solution and the optimal value. A numerical example was presented to demonstrate the performance of our algorithm. Future work will address problems with local objective functions together with local equality, inequality and set constraints. We will also study the convergence rates of the algorithms in this paper.