Applied and Computational Mathematics
Volume 5, Issue 3, June 2016, Pages: 150-159

Distributed Subgradient Algorithm for Multi-agent Convex Optimization with Local Constraint Sets

Qingguo Lü1, Huaqing Li1, 2, *, Li Xiao3

1College of Electronic and Information Engineering, Southwest University, Chongqing, PR China

2State Key Laboratory of Power Transmission Equipment & System Security and New Technology, Chongqing University, Chongqing, PR China

3Department of Mathematics and Information Engineering, Chongqing University of Education, Chongqing, PR China

Email address:

(Huaqing Li)

*Corresponding author

To cite this article:

Qingguo Lü, Huaqing Li, Li Xiao. Distributed Subgradient Algorithm for Multi-agent Convex Optimization with Local Constraint Sets. Applied and Computational Mathematics. Vol. 5, No. 3, 2016, pp. 150-159. doi: 10.11648/j.acm.20160503.19

Received: June 12, 2016; Accepted: June 20, 2016; Published: July 13, 2016

Abstract: This paper considers a distributed constrained optimization problem in which the objective function is the sum of the local objective functions of distributed nodes in a network, and the estimate of each agent is restricted to its own convex constraint set. To solve this optimization problem, which is not necessarily smooth, we study a novel distributed projected subgradient algorithm for multi-agent optimization with nonidentical constraint sets and switching topologies. In the algorithm, each agent minimizes its own objective function while exchanging information locally with other agents over a network with time-varying topologies that satisfy a standard connectivity property. Under the assumption that the network topology is weight-balanced, the novel distributed subgradient algorithm we propose is proven to be convergent. In particular, we allow the step-sizes to vary across agents, in contrast to previous work on multi-agent optimization that makes the worst-case assumption of a constant step-size.

Keywords: Distributed Optimization, Subgradient Algorithm, Multi-agent Network, Weight-Balanced

1. Introduction

In recent years, multi-agent systems and distributed algorithms have received considerable research attention due to their wide applications in many engineering systems and large-scale networks [1-28], including resource allocation in computer networks [16-18], distributed estimation in sensor networks [19], the distributed finite-time optimal rendezvous problem [21], and the distributed demand response control problem in smart grids [22]. In many networked systems (see, e.g., [22-28]), multiple agents must solve a distributed convex optimization problem in which the global objective function is the sum of local objective functions, none of which can be known or shared by other agents.

Distributed optimization of a sum of convex functions has received a surge of interest in recent years. Nedić and Ozdaglar [1] presented an analysis of the consensus-based subgradient method for solving the distributed convex optimization problem. A projection-based distributed algorithm was developed in Nedić et al. [3], where each agent was constrained to an individual closed convex set, with convergence analyses for identical closed convex sets and for uniform weights with non-identical sets. Further distributed algorithms for set-constrained optimization were investigated in Bianchi and Jakubowicz [7] and Lou et al. [10]. To handle distributed optimization problems with asynchronous step-sizes or inequality-equality constraints, a distributed Lagrangian primal-dual subgradient algorithm and a penalty primal-dual subgradient algorithm were proposed in Zhu et al. [6] and Towfic and Sayed [12], respectively, both designed for function-constrained problems. Meanwhile, dual decomposition was applied to separable problems with affine constraints in [4,9]. Recent works [15-18] have likewise focused on inequality-equality constraints. Zhu et al. [22] proposed a distributed Lagrangian primal-dual subgradient method based on the characterization of the primal-dual optimal solutions as the saddle points of the Lagrangian function associated with the problem. Yuan et al. [25] studied a variant of the distributed primal-dual subgradient method in which a multi-step consensus algorithm was employed to simplify the implementation and convergence analysis. To solve the multi-agent optimization problem with more general inequality constraints that couple all the agents' optimization variables, Chang et al. [27] proposed a novel distributed primal-dual perturbed subgradient method and established its convergence. The implementation of the aforementioned methods generally involves projection steps onto primal and dual constraint sets, respectively.

Several works [1,3,33] have directly inspired this paper. Nedić and Ozdaglar [1] solved a multi-agent unconstrained convex optimization problem through a novel combination of average consensus algorithms with subgradient methods. Then, Nedić et al. [3] assumed that each agent is constrained to remain in a closed convex set and gave corresponding convergence analyses for identical closed convex sets and for uniform weights with non-identical closed convex sets. Furthermore, [22] solved a multi-agent convex optimization problem in which the agents were subjected to a global inequality constraint, a global equality constraint, and a global constraint set. To handle these constraints, Zhu et al. [22] presented two distributed projection algorithms under the assumptions that the network topologies were strongly connected over each time interval of a certain bounded length and that the adjacency matrices were doubly stochastic and non-degenerate.

Contributions: Inspired by the previous studies, this paper proposes a novel distributed subgradient algorithm for multi-agent convex optimization with local constraint sets. Previous work often limits the applicability of distributed algorithms in multi-agent networks; for example, the edge weight matrices of the graphs are typically assumed to be doubly stochastic (i.e., every row and every column sums to one). Our method, however, does not assume that the adjacency matrices are doubly stochastic; we only require the network to be weight-balanced, which makes the algorithm more practical. More precisely, the contribution of this paper is twofold. Firstly, under the conditions that each agent is restricted to a different convex set and the digraph is weight-balanced, we introduce a novel distributed projected subgradient algorithm with various (agent-dependent) step-sizes. Secondly, we establish the convergence of the algorithm and prove that it achieves the optimal point of the sum of the agents' local objective functions while satisfying the local constraint sets.

The remainder of the paper is organized as follows. Some basic preliminaries and concepts are given in Section 2. In Section 3, we present the problem formulation and some distributed subgradient algorithm preliminaries. We then introduce the distributed projected subgradient algorithm with some supporting lemmas and continue with a convergence analysis of the algorithm in Section 4. Furthermore, the properties of the algorithm are explored through a numerical example in Section 5. Finally, we conclude the paper with a discussion and future work in Section 6.

2. Preliminaries and Concepts

In this section, we review some useful concepts from algebraic graph theory and convex analysis, recall properties of the projection operation onto a closed convex set, and introduce some useful lemmas (see [28,29,31]).

2.1. Algebraic Graph Theory

We use a graph to describe the information exchange between the agents. The interaction topology of information exchange among  agents is commonly depicted by a weighted directed graph , where  is the set of vertices representing agents and  is the set of edges of the graph. It is assumed that the graph is simple, i.e., there are no repeated edges or self-loops. The weighted adjacency matrix of  is denoted by  with  if  and  otherwise. Note that the diagonal elements  and that  is generally an asymmetric matrix. A directed edge  implies that node  can reach node , or equivalently that node  can receive information from node . If an edge , then node  is called a neighbor of node  and . The neighbor set of node  is denoted by , and  denotes the number of neighbors of node . The Laplacian matrix  associated with the adjacency matrix  is defined by ; , which ensures that . The in-degree and out-degree of node  can be defined via the Laplacian matrix as  and , respectively. A directed path from node  to node  is a sequence of edges  in the directed graph  with distinct nodes . A directed graph is strongly connected if for any two distinct nodes  and  in , there exists a directed path from node  to node . A graph is called in-degree (or out-degree) balanced if the in-degrees (or out-degrees) of all nodes in the directed graph are equal. A directed graph with  nodes is called a directed tree if it contains  edges and there exists a root node with directed paths to every other node. A directed spanning tree of a directed graph is a directed tree that contains all network nodes.
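The Laplacian construction and the balance condition above can be checked numerically. The following sketch uses a small hypothetical weighted digraph (not one from the paper) whose in-weights and out-weights happen to agree at every node, so it is weight-balanced without being symmetric:

```python
import numpy as np

# Weighted adjacency matrix of a hypothetical 4-node digraph (no self-loops).
A = np.array([
    [0.0, 0.3, 0.0, 0.7],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.4, 0.0, 0.2],
    [0.5, 0.3, 0.1, 0.0],
])

# Laplacian: diagonal entries are the row sums of A, off-diagonal entries -a_ij;
# by construction every row of L sums to zero.
L = np.diag(A.sum(axis=1)) - A
assert np.allclose(L @ np.ones(4), 0.0)

# Weight-balanced: each node's total out-weight (row sum of A)
# equals its total in-weight (column sum of A).
out_deg, in_deg = A.sum(axis=1), A.sum(axis=0)
print(np.allclose(out_deg, in_deg))  # True for this example
```

Note that this matrix is not symmetric and its rows do not sum to one, so it is weight-balanced without being doubly stochastic, which is exactly the distinction the paper exploits.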

2.2. Basic Notations and Concepts

In this paper, we do not assume that the function  is differentiable at every point; the subgradient plays the role of the gradient.

Definition 1: For a given convex function f and a point x, a subgradient of the function f at x is a vector g such that the following subgradient inequality holds for any y: f(y) ≥ f(x) + g^T (y − x).

Definition 2: The set of all subgradients of a convex function f at x is called the subdifferential of f at x, denoted by ∂f(x) = { g : f(y) ≥ f(x) + g^T (y − x) for all y }.
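As a concrete illustration of Definitions 1 and 2, consider the classic non-smooth convex function f(x) = |x| on the real line; its subdifferential is

```latex
\partial f(x) =
\begin{cases}
\{-1\}, & x < 0,\\
[-1,\,1], & x = 0,\\
\{+1\}, & x > 0,
\end{cases}
```

and at the kink x = 0 every g ∈ [−1, 1] satisfies the subgradient inequality |y| ≥ g·y for all y, so the subdifferential is a set rather than a single gradient.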

We use P_X(x) to denote the projection of a vector x onto a closed convex set X, i.e., P_X(x) = arg min_{z ∈ X} ||x − z||.

In the subsequent development, the properties of the projection operation onto a closed convex set play an important role. In particular, we use the projection inequality, i.e., for any vector x,

(x − P_X(x))^T (z − P_X(x)) ≤ 0 for all z ∈ X. (1)

We also use the standard non-expansiveness property, i.e., for any x and y,

||P_X(x) − P_X(y)|| ≤ ||x − y||.
In addition, we use the properties given in the following lemma.

Lemma 2.1: Let  be a nonempty closed convex set in . Then, we have for any

(a)  , for all .

(b)  , for all .
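The projection inequality (1) and the non-expansiveness property can be verified directly on a simple closed convex set; the box used below is an illustrative choice, not one from the paper:

```python
import numpy as np

def project_box(v, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n, a closed convex set."""
    return np.clip(v, lo, hi)

x = np.array([2.0, -3.0, 0.5])
y = np.array([-1.0, 4.0, 0.2])
Px, Py = project_box(x), project_box(y)

# Non-expansiveness: ||P_X(x) - P_X(y)|| <= ||x - y||.
assert np.linalg.norm(Px - Py) <= np.linalg.norm(x - y)

# Projection inequality (1): (x - P_X(x))^T (z - P_X(x)) <= 0 for any z in X.
z = np.zeros(3)  # a point inside the box
assert (x - Px) @ (z - Px) <= 1e-12
print("projection properties verified")
```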

Assumption 2.1 (Interior Point): Let , , be nonempty closed convex sets and  be their intersection. There is a relative interior point  of , i.e., there exists a scalar  such that .

Throughout this paper, the following notation is used.  denotes the -dimensional real vector space. Given a set , we denote by  its convex hull. We write  or  to denote the transpose of a vector  or a matrix . Denote  and . For a vector , we denote , while  is the standard Euclidean norm in Euclidean space. In this paper, the quantities (e.g., functions, scalars, and sets) associated with agent  are indexed by the superscript .

3. Problem Formulation

Distributed Constrained Optimization Problem

In this paper, we are interested in solving the distributed constrained convex optimization problem over a multi-agent network. Specifically, we consider a network of agents labeled by , each endowed with a local convex objective function and a local constraint set. The network objective function is given by


where  is the convex objective function of agent ,  is the intersection of the local constraint sets,  is the compact and convex constraint set of agent , and  is a global decision vector. Assume that  and  are known only to agent  and are possibly different across agents. Let  denote the optimal value of (3) and let  denote an optimizer of (3). We assume that the optimal value  is finite. We also denote the optimal solution set by , i.e., . We assume that in general  is non-differentiable and that there exists at least one interior point  of , i.e., (3) has a finite optimal solution. The following assumptions and lemmas are needed in the analysis of the distributed optimization algorithm throughout this paper.

Assumption 3.1 (Weight-balanced):  is weight-balanced if , for all .

Assumption 3.2 (Periodical Strong Connectivity): There is a positive integer  such that, for all , the directed graph  is strongly connected.

Assumption 3.3 (Subgradient Boundedness): There exists a closed convex set  such that  for all . The subgradient set of each  over  is bounded, i.e., there exists a constant  for all  such that , .

Lemma 3.1 (Dynamic Average Consensus Algorithm) [30]: The following is a vector version of the first-order dynamic average consensus algorithm with :

Denote  for . The sequences of  satisfy  and  and periodical strong connectivity, i.e., there exists a constant  such that , and , for , satisfies , for all . Assume that  for all  and all . Then  for all .

Proof: One can complete the proof by following arguments similar to those of Theorem 3.1 in [30].

Consider the following distributed projected subgradient algorithm proposed in [3]: Suppose  is a closed and convex set. Let . Denote . Denote the average estimate . The following is a slight modification of Lemma 8 and its proof in [3].

Lemma 3.2: Let the weight-balanced Assumption 3.1 and the periodic strong connectivity Assumption 3.2 hold. Suppose  is uniformly bounded for each  and  for any ; then there exist  and  such that

Thus we have .

4. Distributed Projected Subgradient Algorithm

In this section, we present a novel distributed projected subgradient algorithm to solve the optimization problem (3), followed by its convergence properties.

4.1. Distributed Projected Subgradient Algorithm

Consider a set  of agents. Each agent chooses any initial state , and . Formally, each agent  at any time  updates according to the following rule:

and generates ,  based on the following iterative procedure:


where the positive scalars  are step-sizes, the scalars  are non-negative weights,  is the projector onto the set , and  is the networked control gain. The vector  is a subgradient of agent 's objective function  at . Note that we will use the natural shorthand  for .
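A minimal single-process simulation of an update of this [3]-style form — a mixing step with weights a_ij, a subgradient step with a diminishing step-size, and a projection onto each agent's local set — can be sketched as follows. The two agents, the quadratic objectives f_i(x) = (x − c_i)², and the box constraints are illustrative assumptions, not the paper's example:

```python
import numpy as np

def projected_subgradient_step(x, A, alpha, subgrads, project):
    """One round of a [3]-style distributed projected subgradient update:
    x_i(k+1) = P_{X_i}( sum_j a_ij x_j(k) - alpha * g_i(k) )."""
    mixed = A @ x                        # consensus (mixing) step
    x_next = np.empty_like(x)
    for i in range(len(x)):
        g = subgrads[i](mixed[i])        # subgradient of f_i at the mixed state
        x_next[i] = project(i, mixed[i] - alpha * g)
    return x_next

# Two illustrative agents with f_i(x) = (x - c_i)^2 and X_i = [-10, 10];
# the minimizer of f_1 + f_2 is x* = (c_1 + c_2) / 2 = 2.
c = np.array([1.0, 3.0])
subgrads = [lambda xi, ci=ci: 2.0 * (xi - ci) for ci in c]
project = lambda i, v: np.clip(v, -10.0, 10.0)
A = np.full((2, 2), 0.5)                 # row-stochastic mixing weights

x = np.zeros(2)
for k in range(2000):
    x = projected_subgradient_step(x, A, 1.0 / (k + 1), subgrads, project)
print(x)  # both estimates close to x* = 2
```

With the diminishing step-size 1/(k+1), the two estimates both approach the common minimizer while remaining in their constraint sets at every iteration.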

Remark 4.1: Since , it follows that

where  is the Laplacian matrix such that .

Proof: Modifying the second term on the right-hand side in the above formula, we then have

Let , one has

Since graph  is balanced, we then have  and  hold if  satisfies . Similarly, we obtain .

Assumption 4.1 (Step-size assumption): The step-sizes sequences , , satisfy  and .
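Assuming Assumption 4.1 imposes the usual diminishing conditions (non-summable but square-summable step-sizes), a canonical family satisfying them is

```latex
\alpha^{i}(k) = \frac{c_i}{k+1}, \quad c_i > 0, \qquad
\sum_{k=0}^{\infty} \alpha^{i}(k) = \infty, \qquad
\sum_{k=0}^{\infty} \bigl(\alpha^{i}(k)\bigr)^{2} = \frac{c_i^{2}\pi^{2}}{6} < \infty,
```

where the constants c_i may differ from agent to agent, consistent with the various (agent-dependent) step-sizes considered in this paper.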

In the following, we study the convergence behavior of the subgradient algorithm, where the optimal solution and the optimal value are asymptotically agreed upon.

Theorem 4.1 (Convergence properties of the distributed projected subgradient algorithm): Consider problem (3). Let the weight-balanced Assumption 3.1 and the periodic strong connectivity Assumption 3.2 hold. Consider the sequences  and  of the distributed projected subgradient algorithm, where the step-size sequences satisfy Assumption 4.1. Then there exists an optimal solution  such that  for all . Furthermore, we have  for all .

Remark 4.2: Our distributed subgradient algorithm extends the distributed projected subgradient algorithm in [3] to multi-agent convex optimization problems with local constraint sets in a more general way. We do not need to restrict the convergence analysis to identical closed convex sets or to uniform weights with non-identical closed convex sets. Furthermore, unlike other subgradient algorithms, e.g., [33-36], our distributed projected subgradient algorithm allows various (agent-dependent) step-sizes, which considerably broadens its applicability.

4.2. Convergence Analysis

In the following, we prove the convergence of the distributed projected subgradient algorithm. First, we rewrite our algorithm in the following form:

where  denotes the projection error described by , and  is the local input that allows agent  to track the variation of the local objective function . In this way, the update rule of each estimate decomposes into two parts: a convex combination that fuses the information of each agent with that of its neighbors, plus a local error or input term. With this decomposition, all the update laws take the same form as the dynamic average consensus algorithm, e.g., [30]. This observation allows us to divide the analysis of the distributed projected subgradient algorithm into two steps. First, we show that all the estimates asymptotically achieve consensus by exploiting the fact that the local errors and inputs are diminishing. Second, we show that the consensus vectors coincide with optimal solutions and the optimal value.

Lemma 4.1 (Convergence properties of weighted sequences): Let . Consider the sequences defined by  , where ,  and .

(a)  If , then .

(b)  If  , then .

The proof of Lemma 4.1 follows that of Lemma 5.1 in [22].

The following lemma provides a basic iteration relation used in the convergence proof of the distributed projected subgradient algorithm.

Lemma 4.2 (Basic iteration relation): Let the weight-balanced Assumption 3.1 and the periodic strong connectivity Assumption 3.2 hold. For any  and all , the following estimate holds:


Proof: By Lemma 2.1, we can deduce that . Let . For any  and all , the following estimate holds:

Summing the preceding relation with , and using , we can derive

One can show (5) by substituting the following subgradient inequality into the above formula:

The following lemma shows that the consensus is asymptotically reached.

Lemma 4.3 (Achieving consensus): Let the weight-balanced Assumption 3.1 and the periodic strong connectivity Assumption 3.2 hold. Consider the sequences  and  of the distributed projected subgradient algorithm with step-size sequences  satisfying Assumption 4.1. Then there exists  such that  for all , and  for all .

Proof: From Lemma 4.2, it follows that

Due to , we can show that

Then it follows from subgradient boundedness Assumption 3.3 that . We have


Notice that  and  are bounded. Since  and  is continuous, we have  and . Taking limits on both sides of (6), one sees that for any ,

and thus  exists for any .

On the other hand, taking limits on both sides of (4) as , we have , and therefore  for all . It follows from Lemma 3.1 that  for all . Combining this with the fact that  exists for any , we deduce that there exists  such that  for all . Since  and  is closed, it follows that  for all  and thus . Since  and  is continuous, . It then follows from Lemma 3.1 that  holds for all .

By Lemma 4.3, the sequences  of the distributed projected algorithm asymptotically agree on some point in . We further denote by  the average of the agents' state estimates . The following lemma shows that  is the optimal solution.

Lemma 4.4 (Optimal Solution):  is the optimal solution of the objective function .

Proof: Denote the maximum deviation  and ; then

We will prove this lemma by contradiction. Suppose that  is not the optimal solution of  over . Then the following inequality holds:


Then, there exists  such that . Consider the sequences of  which converge to . Then, it follows from Lemma 4.2 that


Further, from the subgradient boundedness Assumption 3.3,

Since , , then  and  converge to zero as . Then there exists  such that for all , it holds that

Following a recursive argument, we obtain for all


Recalling that  and , and that  are bounded, (8) yields a contradiction as . That is, inequality (7) cannot hold. Therefore, we have the desired result.

Lemma 4.5: It holds that .

Proof: Since  and , the following holds for any :

The following can be proven by induction on  for a fixed :


Let  in (9) and recall that the initial state  for all . Then we obtain


From (10), we can obtain


Combining (11) with  gives the desired result. Based on the above five lemmas, we complete the proof of Theorem 4.1.

5. Numerical Example

In this section, we study a simple numerical example to illustrate the effectiveness of the proposed distributed projected subgradient algorithm. We consider a network of five agents whose objective functions are formulated as follows:

where the global decision vector . The optimization problem can be described as:


where local constraint sets  are denoted by

We solve problem (12) by employing the distributed projected subgradient algorithm (4) with step-sizes , where  is a positive linear function of . Figs. 1-3 show the simulation results of the distributed projected subgradient algorithm (4). Fig. 1 shows that the local input  tends to  as consensus is achieved. It can be seen from Fig. 2 that all the agents asymptotically reach the optimal solution after  iterations. We observe from Fig. 3 that all the agents asymptotically achieve the optimal value.
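An experiment in the same spirit can be reproduced end-to-end with stand-in data. The five agents, quadratic objectives f_i(x) = (x − i)², box constraint sets X_i = [i − 5, i + 5], and directed-ring communication below are illustrative assumptions, not the paper's actual example:

```python
import numpy as np

# Five agents with f_i(x) = (x - i)^2 and local set X_i = [i - 5, i + 5];
# the minimizer of the sum is x* = mean(1..5) = 3, inside every X_i.
n = 5
c = np.arange(1, n + 1, dtype=float)
lo, hi = c - 5.0, c + 5.0

# Directed-ring mixing: each agent averages itself with one in-neighbor.
A = 0.5 * np.eye(n)
for i in range(n):
    A[i, (i - 1) % n] += 0.5

x = np.zeros(n)                             # feasible initial estimates
for k in range(5000):
    alpha = 1.0 / (k + 1)                   # diminishing step-size
    mixed = A @ x                           # consensus step
    g = 2.0 * (mixed - c)                   # subgradients of f_i
    x = np.clip(mixed - alpha * g, lo, hi)  # projected subgradient step

print(x)  # all estimates close to the optimizer x* = 3
```

Despite the sparse directed topology and the nonidentical constraint sets, all five estimates approach the common optimizer, mirroring the qualitative behavior reported in Figs. 2 and 3.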

Fig. 1. Local input .

Fig. 2. All agents’ states .

Fig. 3. The evolution of states .

6. Conclusion and Future Work

In this paper, we formulated a distributed optimization problem in which both the local objective functions and the local constraint sets are private to each agent. We then proposed a novel distributed projected subgradient algorithm for the constrained optimization problem, together with a convergence analysis. The algorithm was shown to asymptotically converge to the optimal solution and optimal value. A numerical example was presented to demonstrate the performance of our algorithm. Future work will focus on problems with local objective functions, local equality constraints, local inequality constraints, and local constraint sets. We will also study the convergence rates of the algorithms in this paper.


Acknowledgments

The work described in this paper was supported in part by the Visiting Scholarship of State Key Laboratory of Power Transmission Equipment & System Security and New Technology (Chongqing University) under Grant 2007DA10512716421, in part by the Fundamental Research Funds for the Central Universities under Grant XDJK2016B016, in part by the Natural Science Foundation Project of Chongqing CSTC under Grants cstc2014jcyjA40016 and cstc2014jcyjA40041, in part by the China Postdoctoral Science Foundation under Grant 2016M590852, in part by the Natural Science Foundation of China under Grant 61403314, in part by the Graduate Scientific Research and Innovation Foundation of Chongqing, China under Grant CYS16069, and in part by the Scientific and Technological Research Program of Chongqing Municipal Education Commission under Grant KJ1501401.


References

  1. A. Nedić and A. Ozdaglar. "Distributed subgradient methods for multiagent optimization", IEEE Transactions on Automatic Control, 54 (1): 48-61, 2009.
  2. B. Gu, V. S. Sheng, K. Y. Tay, et al. "Incremental support vector learning for ordinal regression," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 7, pp. 1403-1416, 2015.
  3. A. Nedić, A. Ozdaglar, and P. Parrilo. Constrained consensus and optimization in multi-agent networks, IEEE Transactions on Automatic Control, 2010, 55 (4): 922-938.
  4. H. Li, X. Liao, X. Lei, et al. Second-order consensus seeking in multi-agent systems with nonlinear dynamics over random switching directed networks. IEEE Transactions on Circuits and Systems I: Regular Papers, 2013, 60 (6): 1595-1607.
  5. B. Gu, V. S. Sheng, Z. Wang, D. Ho, S. Osman, and S. Li, "Incremental learning for ν-Support Vector Regression," Neural Networks, vol. 67, pp. 140-150, 2015.
  6. M. Zhu and S. Martínez. On distributed convex optimization under inequality and equality constraints via primal-dual subgradient methods, arXiv preprint arXiv:1001.2612, 2010.
  7. P. Bianchi, J. Jakubowicz, Convergence of a multi-agent projected stochastic gradient algorithm for non-convex optimization, IEEE Transactions on Automatic Control, 2013, 58 (2): 391–405.
  8. Z. Xia, X. Wang, X. Sun, and Q. Wang, "A Secure and dynamic multi-keyword ranked search scheme over encrypted cloud data," IEEE Transactions on Parallel and Distributed Systems, vol. 27, no. 2, pp. 340-352, 2015.
  9. I. Necoara, Random coordinate descent algorithms for multi-agent convex optimization over networks, IEEE Transactions on Automatic Control, 2013, 58 (8): 2001-2012.
  10. Y. Lou, G. Shi, K. H. Johansson, Y. Hong, Approximate projected consensus for convex intersection computation: Convergence analysis and critical error angle, IEEE Transactions on Automatic Control, 2014, 59 (7): 1722-1736.
  11. Z. Fu, X. Sun, Q. Liu, L. Zhou, and J. Shu, "Achieving efficient cloud search services: multi-keyword ranked search over encrypted cloud data supporting parallel computing," IEICE Transactions on Communications, vol. E98-B, no. 1, pp. 190-200, 2015.
  12. Z. J. Towfic, A. H. Sayed, Adaptive penalty-based distributed stochastic convex optimization, IEEE Transactions on Signal Process, 2014, 62 (15) 3924-3938.
  13. H. Li, X. Liao, T. Huang, et al. Second-order global consensus in multi-agent networks with random directional link failure. IEEE Transactions on Neural Networks and Learning Systems, 2015, 26 (3): 565-575.
  14. B. Chen, H. Shu, G. Coatrieux, G. Chen, et al. "Color image analysis by quaternion-type moments," Journal of Mathematical Imaging and Vision, vol. 51, no. 1, pp. 124-144, 2015.
  15. A. H. Sayed, Adaptation, learning, and optimization over networks, in: Foundations and Trends in Machine Learning, Vol. 7, NOW Publishers, Boston-Delft, 2014, pp. 4-5.
  16. H. Li, G. Chen, T. Huang, et al. High-performance consensus control in networked systems with limited bandwidth communication and time-varying directed topologies. IEEE Transactions on Neural Networks and Learning Systems, 2016, DOI: 10.1109/TNNLS.2016.2519894.
  17. X. Wen, L. Shao, Y. Xue, and W. Fang, "A rapid learning algorithm for vehicle classification," Information Sciences, vol. 295, no. 1, pp. 395-406, 2015.
  18. B. Johansson, P. Soldati, and M. Johansson, "Mathematical decomposition techniques for distributed cross-layer optimization of data networks," IEEE Journal on Selected Areas in Communications, vol. 24, no. 8, pp. 1535-1547, Aug. 2006.
  19. M. Rabbat, R. Nowak, and J. Bucklew, "Generalized consensus computation in networked systems with erasure links," in IEEE SPAWC, 2005.
  20. Y. Zheng, B. Jeon, D. Xu, Q. M. Jonathan Wu, and H. Zhang, "Image segmentation by generalized hierarchical fuzzy C-means algorithm," Journal of Intelligent and Fuzzy Systems, vol. 28, no. 2, pp. 961-973, 2015.
  21. H. Li, X. Liao, G. Chen, et al. Event-triggered asynchronous intermittent communication strategy for synchronization in complex dynamical networks. Neural Networks, 2015, 66: 1-10.
  22. M. Zhu and S. Martinez, "On distributed convex optimization under inequality and equality constraints," IEEE Transactions on Automatic Control, 2012, vol. 57, no. 1, pp. 151-164.
  23. M. Zhu and S. Martinez, "An approximate dual subgradient algorithm for multi-agent non-convex optimization," IEEE Transactions on Automatic Control, 2013, vol. 58, no. 6, pp. 1534-1539.
  24. Z. Xia, X. Wang, X. Sun, and B. Wang, "Steganalysis of least significant bit matching using multi-order differences," Security and Communication Networks, vol. 7, no. 8, pp. 1283-1291, 2014.
  25. D. Yuan, S. Xu, and H. Zhao, "Distributed primal–dual subgradient method for multiagent optimization via consensus algorithms," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 41, no. 6, pp. 1715-1724, Dec. 2011.
  26. B. Gu, X. Sun, and V. S. Sheng, "Structural minimax probability machine," IEEE Transactions on Neural Networks and Learning Systems, DOI: 10.1109/TNNLS.2016.2544779, 2016.
  27. T. H. Chang, A. Nedic, and A. Scaglione, "Distributed constrained optimization by consensus-based primal–dual perturbation method," IEEE Transactions on Automatic Control, 2014, vol. 59, no. 6, pp. 1524-1538.
  28. H. Li, X. Liao, T. Huang. Second-order locally dynamical consensus of multi-agent systems with arbitrarily fast switching directed topologies. IEEE Transactions on Systems Man and Cybernetics: Systems, 2013, 43 (6): 1343-1353.
  29. C. Godsil and G. Royle, Algebraic Graph Theory, New York: Springer-Verlag, 2001.
  30. M. Zhu, and S. Martínez. Discrete-time dynamic average consensus, Automatica, 2010, 46 (2): 322-329.
  31. H. Li, G. Chen, T. Huang, et al. Event-triggering sampling based leader-following consensus in second-order multi-agent systems. IEEE Transactions on Automatic Control, 2015, 60 (7): 1998-2003.
  32. B. Johansson, A. Speranzon, M. Johansson, and K. H. Johansson, "On decentralized negotiation of optimal consensus," Automatica, pp. 1175-1179, Jul. 2008.
  33. H. Li, G. Chen, X. Liao, et al. Quantized data-based leader-following consensus of general discrete-time multi-agent systems, IEEE Transactions on Circuits and Systems II: Express Briefs, 2016, 63 (4): 401-405.
  34. W. Ren and R. W. Beard. Distributed Consensus in Multi-vehicle Cooperative Control. Springer-Verlag, London, 2008.
  35. H. Li, G. Chen, T. Huang, et al. Event-triggered distributed consensus over directed digital networks with limited bandwidth, IEEE Transactions on Cybernetics, DOI: 10.1109/TCYB.2015.2496977.
  36. J. Shen, H. Tan, J. Wang, J. Wang, and S. Lee, "A novel routing protocol providing good transmission reliability in underwater sensor networks," Journal of Internet Technology, vol. 16, no. 1, pp. 171-178, 2015.
