Academic Editor: Zoltan Szabo
College of Electrical and Information Engineering, Hunan University, Changsha 410082, China
Received 18 January 2014; Revised 31 March 2014; Accepted 31 March 2014; Published 26 May 2014
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
Distributed coordination of multiagent systems has attracted considerable attention recently due to its extensive applications [1, 2]. The implementation of distributed algorithms is significant for multiagent systems with limited resources in real applications. In general, sampled-data control is natural for implementation on a digital platform. Traditionally, periodic sampling works well as long as the sampling frequency is high enough. However, this time-triggered approach is not suitable for large-scale multiagent systems, where energy, computation, and communication constraints should be explicitly addressed. In contrast, event-triggered control limits sensing, control computation, and/or communication to instants when the system needs attention. It offers clear advantages such as reduced information transmission and fewer control updates while still guaranteeing a certain level of performance.
Motivated by these advantages, event-triggered control strategies have been studied for both general dynamic systems and distributed networked dynamic systems. Reference [3] provided an introductory overview of recent work in these areas. In [4] the author proposed an execution rule for the control task based on an ISS-Lyapunov function of the closed-loop system, which is the main idea behind event-triggered control in many subsequent works. The authors in [5] provided a unifying Lyapunov-based framework for the event-triggered control of nonlinear systems, modeled as hybrid systems, which covers the case where the Lyapunov function need not decrease monotonically. In [6] output-based event-triggered control was considered, since full state measurements are not available for feedback in practice. Moreover, resource conservation, the original purpose of introducing event-based control, is defeated if the event-triggering condition has to be monitored continuously. As a result, periodic event-triggered control was proposed in [7], where the event detection occurs periodically. Model-based event-triggered control was proposed in [8], which provides a larger bound on the minimum intersampling time than the use of a zero-order hold, and [9] extended this to model-based periodic event-triggered control with observers.
Resource limitations are even more critical in distributed networked dynamic systems [10-13]. Typically, multiagent systems equipped with resource-limited microprocessors have motivated event-triggering strategies for actuating the control updates in [14-26], among others. Most existing works focus on distributed event-triggered control for single-integrator multiagent systems. In [14], a distributed event-triggered scheme was proposed for a single-integrator model; this scheme was then improved in [15-18] and extended to double-integrator models [19] and general linear models [20, 21]. From these results, one can hardly find distributed event-triggered consensus schemes applicable to double-integrator models via the Lyapunov method, to say nothing of general linear models. This is because the consensus state of double-integrator agents is no longer a constant; that is, it is impossible to find a measurable error without global information with which to design the triggering condition via a Lyapunov function. The triggering thresholds of the measurement errors in most of the aforementioned references are state dependent, which is natural and convenient for constructing an ISS-Lyapunov function. On the other hand, [15, 20] proposed event-triggering schemes taking a constant or a time-dependent variable as the triggering threshold. Such schemes with state-independent triggering thresholds cannot reflect the evolution of the states; however, they do not require monitoring the neighbours' states for event triggering and can easily be extended to double-integrator [15] or general linear models [20].
Most of the references define the measurement error as e_i(t) = x_i(t_k^i) - x_i(t), except [16, 22, 23]. In [16] the authors proposed a combinational measurement approach to event design, by which the control input of each agent is piecewise constant between its own successive events. References [22, 23] exploited relative state errors for the event design, known as edge events. In large-scale multiagent systems, absolute information measurements are unavailable or expensive; consequently, it is more reasonable to define a relative-state-based measurement error instead of the conventional measurement error e_i(t) = x_i(t_k^i) - x_i(t).
Based on the limitations mentioned above, the objective of this paper is to find an event design for general linear dynamics that uses relative information in a distributed fashion to achieve consensus. Firstly, it is notable that the consensus value of general linear models is related to e^{At} and (r^T ⊗ I_n)x(0) [27]. Accordingly, the consensus value of general linear models can be represented as a constant by a change of variable. By virtue of a dynamic controller, the new variable evolves according to the single-integrator model. Consequently, a distributed event-triggering scheme can be obtained based on the relative information of neighbors. This idea of a dynamic controller is inspired by [24] and was also introduced into event design by [25, 26]. Nevertheless, [25] considered the absolute measurement error and a state-independent triggering threshold; moreover, its event-triggering scheme needs an exact model of the real system. Also applying a variable substitution method, [26] solved the consensus problem of a special type of high-order linear multiagent systems via event-triggered control. Secondly, as the full states are not available in practice, a distributed event-triggered observer is proposed using relative output information under a mild assumption. Meanwhile, the event triggering of the dynamic controller is independent of that of the observer. Thirdly, the detections of the triggering conditions of all agents occur at a sequence of times without requiring continuous monitoring.
The rest of this paper is organized as follows. Section 2 presents a formal problem description. In Section 3, a distributed event-triggered dynamic controller is given, which will be extended to the case of output feedback by proposing a distributed event-triggered observer in Section 4. Section 5 gives the simulations to validate the theoretical results. Conclusions are given in Section 6.
Notation. Throughout the paper, R^n and R^{m×n} denote the sets of n-dimensional real vectors and m×n real matrices, respectively. The notation ||·|| refers to the Euclidean norm for vectors and the induced 2-norm for matrices. The superscript "T" stands for transposition. P > 0 means that P is symmetric and positive definite, and I_n represents the identity matrix of dimension n. For any matrix A, λ_min(A), λ_max(A), ρ(A), and σ_max(A) are the minimum eigenvalue, maximum eigenvalue, spectral radius, and maximal singular value of A, respectively.
2. Problem Description
This paper addresses the sampled-data consensus problems for multiagent systems taking into account event-triggered strategies.
We use a graph G=(V,E) to model the network topology of the multiagent system, where V={v_1, v_2, ..., v_N} is the node set and E ⊆ V×V is the edge set. An edge e_ij ∈ E in G denotes that there is a directed information path from agent j to agent i. G is called undirected if e_ij ∈ E ⇔ e_ji ∈ E. A = [a_ij] ∈ R^{N×N} is the weighted adjacency matrix associated with G, where a_ij > 0 if e_ij ∈ E, a_ij = 0 otherwise, and a_ii = 0 for all i = 1, ..., N. The neighbor set of the ith agent is denoted by N_i = {j | a_ij > 0, j ≠ i}, i ∈ V. L = [l_ij] ∈ R^{N×N} is the Laplacian matrix, where l_ij = -a_ij for j ≠ i and l_ii = ∑_{j≠i} a_ij. For undirected graphs, the Laplacian matrix is symmetric and positive semidefinite; hence, the eigenvalues of L are real and can be ordered as λ_1 ≤ λ_2 ≤ ... ≤ λ_N, with λ_1 = 0, and λ_2 is the smallest nonzero eigenvalue for connected graphs. Here, we impose the following assumption.
Assumption 1.
Graph G is fixed, undirected, and connected.
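To make the graph quantities above concrete, the following sketch (Python with numpy; the adjacency matrix shown is a hypothetical six-agent cycle, not the topology used in the examples) builds the Laplacian L = D - A of an undirected graph and extracts λ_2 and λ_N, the eigenvalues that enter the triggering-parameter bounds later on.

```python
import numpy as np

# Hypothetical symmetric 0/1 adjacency matrix of a 6-agent cycle graph;
# this is only an illustration of the definitions, not the paper's topology.
A = np.array([[0, 1, 0, 0, 0, 1],
              [1, 0, 1, 0, 0, 0],
              [0, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 0],
              [0, 0, 0, 1, 0, 1],
              [1, 0, 0, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))     # degree matrix: l_ii = sum_j a_ij
L = D - A                      # Laplacian: l_ij = -a_ij for i != j

eigvals = np.sort(np.linalg.eigvalsh(L))   # real eigenvalues, ascending
lam_1, lam_2, lam_N = eigvals[0], eigvals[1], eigvals[-1]
print(lam_1)   # ~0 for any graph
print(lam_2)   # > 0 iff the graph is connected (algebraic connectivity)
print(lam_N)   # largest eigenvalue, used in the bounds of Lemma 3
```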
The dynamics of the ith agent are described by the general linear time-invariant differential equation ẋ_i(t) = A x_i(t) + B u_i(t), y_i(t) = C x_i(t), (1) where x_i ∈ R^n is the state, y_i ∈ R^q is the output, and u_i ∈ R^m is the input. A, B, and C are real constant matrices of appropriate dimensions satisfying the following assumption.
Assumption 2.
(A, B) is stabilizable, (A, C) is detectable, and all the eigenvalues of A lie in the closed left-half plane.
We say that the multiagent system (1) solves a consensus problem asymptotically under given u_i(t), i = 1, ..., N, if, for any initial states and any i, j = 1, ..., N, lim_{t→∞} ||x_i(t) - x_j(t)|| = 0. When the information transmissions among all agents are continuous, a general consensus protocol for the ith agent takes the form u_i(t) = K ∑_{j∈N_i} a_ij (x_j(t) - x_i(t)), (2) where K is the feedback gain matrix. A majority of references have facilitated the implementation of the above control law by event-triggered strategies; unfortunately, these existing results are generally not applicable to general linear dynamics.
In this paper, we denote by ξ_i(t) = ∑_{j∈N_i} a_ij (x_j(t) - x_i(t)) the relative state information of agent i and its neighbors and by ζ_i(t) = ∑_{j∈N_i} a_ij (v_j(t) - v_i(t)) the corresponding difference of controller states between agent i and its neighbors. Instead of using continuous event detectors, we implement event-triggered control with discrete event detection. That is, the event detections of all agents occur at a sequence of times t_0, t_1, t_2, ..., which are periodic in the sense that t_{k+1} - t_k = h, k ∈ N, for some properly chosen sampling interval h > 0.
Consequently, we discretize (1) with sampling interval h and propose the following discrete-time event-triggered dynamic control law for each agent i in the case C = I: [figure omitted; refer to PDF] where v_i(t_k) ∈ R^n is the state of the dynamic controller, K ∈ R^{m×n} is the control gain matrix to be designed, and [figure omitted; refer to PDF] where f_i(ε_i(t_k), ζ_i(t_k), ξ_i(t_k)) is the triggering condition to be designed and ε_i(t_k) is the measurement error at time t_k, defined as [figure omitted; refer to PDF]
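Since the displayed equations (3)-(6) are omitted in this version, the sketch below only illustrates the discretization step itself. It assumes the standard zero-order-hold pair G = e^{Ah} and H = (∫_0^h e^{As} ds)B for the sampled model, which is consistent with the matrices G and H appearing in Section 3 but is an assumption rather than a statement of the paper's exact equations.

```python
import numpy as np
from scipy.linalg import expm

def zoh_discretize(A, B, h):
    """Zero-order-hold discretization of x' = Ax + Bu with sampling interval h.

    Returns (G, H) with G = exp(A*h) and H = (integral_0^h exp(A*s) ds) * B,
    computed via the standard block-matrix exponential trick.
    """
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    E = expm(M * h)
    return E[:n, :n], E[:n, n:]

# Hypothetical double-integrator agent with h = 0.002 (cf. Example 1).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
G, H = zoh_discretize(A, B, h=0.002)
```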
The main objective of this paper is to design an event-triggered scheme for the dynamic controller (4) with respect to the measurement error ε_i(t_k), detected at the time instants t_k, k ∈ N, such that the multiagent system (1) achieves consensus asymptotically.
3. Distributed Event-Triggered Control of Multiagent Systems
In this section, we focus on the case where state feedback is available. According to (3) and (4), we define z_i(t_k) = x_i(t_k) - v_i(t_k) and obtain the following equation by combining (3), (4), and (6): [figure omitted; refer to PDF] Introducing the changes of variables θ_i(t_k) = G^{-k} z_i(t_k) and ε̃_i(t_k) = G^{-k} ε_i(t_k) leads to [figure omitted; refer to PDF] Thus we obtain the discrete-time system (8), with θ(t_k) = [θ_1^T(t_k), θ_2^T(t_k), ..., θ_N^T(t_k)]^T and ε̃(t_k) = [ε̃_1^T(t_k), ε̃_2^T(t_k), ..., ε̃_N^T(t_k)]^T, in the compact form [figure omitted; refer to PDF]
Before giving the main result of this section, a lemma is presented as follows.
Lemma 3.
Consider the system (9) under Assumption 1. If the triggering condition [figure omitted; refer to PDF] holds and [figure omitted; refer to PDF] then the states of system (9) asymptotically converge to a common value.
Proof.
Consider the following ISS-Lyapunov function for system (9): [figure omitted; refer to PDF] and let the time t_k = hk be briefly denoted by k.
For any a > 0 and any x, y ∈ R^n, [figure omitted; refer to PDF] Thus, for any k ∈ N, [figure omitted; refer to PDF] where φ_i(k) = ∑_{j∈N_i} a_ij (θ_i(k) - θ_j(k)).
Using the triggering condition (10) and choosing a_i1 = 1/2 and a_i2 = 1/λ_N for all i ∈ N, we bound V(k+1) - V(k) as [figure omitted; refer to PDF] with h ≤ 3/(4λ_N).
Hence, V(k+1) - V(k) < 0 if 0 < h ≤ 3/(4λ_N) and 0 < σ_i < (3 - 4hλ_N)/(4 + 4hλ_N), i = 1, ..., N, which implies that the states of system (9) converge to a common value.
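As a quick numerical check of these bounds as stated above, using λ_N = 4.5616 and h = 0.002, the values later reported in Example 1:

```python
lam_N = 4.5616          # largest Laplacian eigenvalue (cf. Example 1)
h = 0.002               # sampling period of event detection

h_max = 3.0 / (4.0 * lam_N)                            # ~0.1644, so h = 0.002 is admissible
sigma_max = (3 - 4 * h * lam_N) / (4 + 4 * h * lam_N)  # ~0.734

assert 0 < h <= h_max
assert 0.4 < sigma_max   # the choice sigma_i = 0.4 in Example 1 satisfies the bound
```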
Remark 4.
It is known that the consensus states of general linear dynamics are x_i(t) = (r^T ⊗ e^{At}) x(0), i ∈ {1, ..., N}, which are not a common constant value except in the case of single-integrator dynamics. Thus the usual Lyapunov function with respect to x(t) for general linear dynamics, which is exploited in the design of distributed event-triggered controllers, becomes invalid, as it does not converge to zero. Notice that the design of a distributed event-triggering scheme for general linear dynamics is converted to the single-integrator case by virtue of the dynamic controller (4). Nevertheless, whether the consensus of system (9) implies the consensus of the original system (1) depends on the stability of the state matrix A, which is explicated by the next result.
Theorem 5.
Consider the system (1) with the dynamic control law (4) and suppose that Assumptions 1 and 2 hold. The triggering condition of (4) is determined by (16) with (11): [figure omitted; refer to PDF] In addition, |Im[μ_r - μ_s]| ≠ 2πl/h, l = 1, 2, ..., whenever Re[μ_r - μ_s] = 0, for all r, s = 1, 2, ..., n, where μ_r, μ_s denote the eigenvalues of A. Then for any initial states there exists a matrix K such that all agents asymptotically achieve the consensus state (r^T ⊗ e^{At}) x(t_0).
Proof.
There is a gap to fill before deriving the definite consensus conclusion for system (1) from (9), although the consensus condition of system (9) has been established by Lemma 3. From Lemma 3, θ_i(t_k), i = 1, 2, ..., N, in system (9) exponentially converge to (r^T ⊗ I_n) x(t_0) as k → ∞, where r is the left eigenvector of L associated with the zero eigenvalue; that is, there exist constants α > 0 and β > 0 such that [figure omitted; refer to PDF] For systems (3) and (4), this means that [figure omitted; refer to PDF] which implies that the consensus of system (3) depends not only on the network topology and the control strategy but also on the dynamics of the isolated agents. It is known from [28] that there exists a positive number γ such that, for any k ≥ n, ||G^k||_F ≤ γ k^{n-1} if ρ(G) ≤ 1, while ||G^k||_F ≤ γ k^{n-1} ρ(G)^k if ρ(G) > 1. Thus, for agents without exponentially unstable eigenvalues, the convergence rate to reach consensus always dominates the instability of the eigenvalues on the imaginary axis. Therefore, we can verify that ||z_i(t_k) - (r^T ⊗ G^k) x(t_0)|| → 0 as k → ∞ and obtain the triggering condition (16) from (10) directly. Based on this fact, lim_{k→∞} (ζ̂_i(t_k) - ξ̂_i(t_k)) = 0 in (4) by the definitions of z_i(t_k), ξ_i(t_k), and ζ_i(t_k). It follows that lim_{k→∞} v_i(t_k) = 0 as long as there exists a matrix K such that G + HK is Schur stable, which is ensured provided |Im[μ_r - μ_s]| ≠ 2πl/h, l = 1, 2, ..., whenever Re[μ_r - μ_s] = 0, for all r, s = 1, 2, ..., n, where μ_r, μ_s denote the eigenvalues of A [29]. Thus, ||x_i(t_k) - (r^T ⊗ G^k) x(t_0)|| → 0 as k → ∞, i = 1, ..., N, and stable intersample behaviour is also guaranteed, which concludes that ||x_i(t) - (r^T ⊗ e^{At}) x(t_0)|| → 0 as t → ∞, i = 1, ..., N.
Remark 6.
As observed from the triggering condition (16), we do not need an exact model of the real system for the event detection, in contrast to [25], even though changes of variables involving the system matrix are introduced.
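Theorem 5 only requires the existence of some K rendering G + HK Schur stable; the paper does not prescribe a particular construction. One conventional choice, sketched here purely as an illustration (the LQR weights Q and R are arbitrary assumptions, not values from the paper), is a discrete-time LQR gain computed from the sampled pair (G, H).

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def schur_stabilizing_gain(G, H, Q=None, R=None):
    """Return K such that G + H K is Schur stable, via a discrete-time LQR design.

    This is one possible construction (an assumption, not the paper's specific
    design); Theorem 5 only requires the existence of such a K.
    """
    n, m = G.shape[0], H.shape[1]
    Q = np.eye(n) if Q is None else Q
    R = np.eye(m) if R is None else R
    P = solve_discrete_are(G, H, Q, R)
    # Standard LQR feedback u = -K_lqr x gives the closed loop G - H K_lqr,
    # which is Schur stable; hence K = -K_lqr makes G + H K Schur stable.
    K_lqr = np.linalg.solve(R + H.T @ P @ H, H.T @ P @ G)
    return -K_lqr

# Example: a hypothetical double-integrator pair (G, H) sampled at h = 0.002.
G = np.array([[1.0, 0.002], [0.0, 1.0]])
H = np.array([[0.002**2 / 2], [0.002]])
K = schur_stabilizing_gain(G, H)
assert np.max(np.abs(np.linalg.eigvals(G + H @ K))) < 1.0
```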
4. Distributed Event-Triggered Observer-Based Control of Multiagent Systems
In many applications, full state measurements are not available for feedback, and absolute output measurements of each agent are also impractical. In this section, we design a distributed event-triggered observer using relative output information under the following assumption.
Assumption 7.
There exists at least one agent in the graph G that knows its own absolute output information.
Remark 8.
Assumption 7 is not a very strong restriction on the system; for example, in practical applications of large-scale multirobot systems it suffices that only one robot is equipped with a high-performance GPS.
We denote by x_i^o(t) ∈ R^n the estimate of the state x_i(t) and by y_i^o(t) = C x_i^o(t) the corresponding estimate of the output y_i(t). We then denote by ξ_yi(t) = ∑_{j∈N_i} a_ij (y_j(t) - y_i(t)) the relative output information of agent i and its neighbors and by ξ_yi^o(t) = C ∑_{j∈N_i} a_ij (x_j^o(t) - x_i^o(t)) its estimate. The distributed event-triggered observer is designed in the following form: [figure omitted; refer to PDF] where [figure omitted; refer to PDF] where f_i^o(ε_i^o(t_k), ξ_yi(t_k), ξ_yi^o(t_k), w_i, y_i(t_k), y_i^o(t_k)) is the triggering condition to be designed, w_i = 1 if agent i knows its own absolute output information, and w_i = 0 otherwise. Moreover, ε_i^o(t_k) ∈ R^q is the measurement error at time t_k, defined as [figure omitted; refer to PDF] Based on the above estimate of x_i(t), ξ_i(t_k) in (4), (6), and (16) can be replaced by ξ_i^o(t_k) = ∑_{j∈N_i} a_ij (x_j^o(t_k) - x_i^o(t_k)). Then, we can derive the following result.
Theorem 9.
Consider the system (3) with the observer (19) and suppose that Assumptions 1, 2, and 7 hold. The triggering condition of (19) is determined by [figure omitted; refer to PDF] where t_{k+1} - t_k = h, k ∈ N, is identical to the detection period of the controller updates in Theorem 5. Then there exist a matrix F and a coupling gain c such that the estimation error dynamics (23) are asymptotically stable.
Proof.
Denote by x̃_i(t_k) = x_i(t_k) - x_i^o(t_k) the state estimation error for agent i; then subtracting (19) from (3) gives the dynamics of the state estimation error in the compact form [figure omitted; refer to PDF] where x̃ = [x̃_1^T, x̃_2^T, ..., x̃_N^T]^T, ε^o = [ε_1^{oT}, ε_2^{oT}, ..., ε_N^{oT}]^T, L̃ = L + W, and W = diag(w_1, w_2, ..., w_N). By Assumption 7, we can easily conclude that all the eigenvalues of the matrix L̃, denoted by λ̃_i, i = 1, ..., N, are real and positive.
Since L̃ > 0, there exists an orthogonal matrix T ∈ R^{N×N} such that T L̃ T^T = diag(λ̃_1, λ̃_2, ..., λ̃_N). Introduce the state transformation χ̃ = (T ⊗ I_n) x̃; then (23) becomes [figure omitted; refer to PDF] where t_k = hk, G̃ = I_N ⊗ G - diag(λ̃_1, λ̃_2, ..., λ̃_N) ⊗ cFC, and F̃ = T ⊗ cF.
It is known from [30] that ρ(G - c λ̃_i F C) < 1 for all eigenvalues λ̃_i, i = 1, ..., N, of the matrix L̃ when the gain F = G P C^T (C P C^T)^{-1} is chosen and there exists a covering circle centered at c_0 with radius r_0 containing all eigenvalues of L̃ such that c = 1/c_0 < 1/(r_0 σ_max(Q^{-1/2} G P C^T (C P C^T)^{-1} C P G^T Q^{-1/2})), where P > 0 is a solution of the DARE G P G^T - P + Q - G P C^T (C P C^T)^{-1} C P G^T = 0 with Q > 0; then ρ(G̃) < 1 can be concluded.
Now we show that there exists an ISS-Lyapunov function V(χ̃(k)) = χ̃^T(k) P̃ χ̃(k), P̃ > 0, for the system (24), which satisfies [figure omitted; refer to PDF] where α̲ and ᾱ are class K_∞ functions, 0 < a < 1, and b > 0. We know that V(χ̃(k)) = χ̃^T(k) P̃ χ̃(k) satisfies (25) with α̲(||χ̃(k)||) = λ_min(P̃) ||χ̃(k)||^2 and ᾱ(||χ̃(k)||) = λ_max(P̃) ||χ̃(k)||^2. Moreover, (26) with (24) is equivalent to the following LMI: [figure omitted; refer to PDF] By the Schur complement, (27) is equivalent to [figure omitted; refer to PDF] Since ρ(G̃) < 1, there always exist a P̃ > 0 and a constant 0 < a < 1 such that [figure omitted; refer to PDF] Hence, (28) holds provided b is chosen large enough.
From (26), we get [figure omitted; refer to PDF] Applying inequality (30) repeatedly yields [figure omitted; refer to PDF] According to the triggering condition (22) and condition (25), inequality (31) yields [figure omitted; refer to PDF] Thus, [figure omitted; refer to PDF] which concludes that lim_{k→∞} x̃(t_k) = 0.
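As a small numerical companion to the gain construction recalled in the proof above (F = G P C^T (C P C^T)^{-1} with coupling gain c), the snippet below merely checks the spectral-radius condition ρ(G - c λ̃_i F C) < 1 over a set of eigenvalues of L̃. The matrices, gains, and eigenvalues used here are hypothetical placeholders for illustration, not the values of the paper's examples.

```python
import numpy as np

def observer_gains_admissible(G, C, F, c, lam_tilde):
    """Check rho(G - c * lam_i * F @ C) < 1 for every eigenvalue lam_i of L~."""
    return all(
        np.max(np.abs(np.linalg.eigvals(G - c * lam_i * (F @ C)))) < 1.0
        for lam_i in lam_tilde
    )

# Hypothetical data for illustration only.
G = np.array([[1.0, 0.002], [0.0, 1.0]])     # sampled double integrator
C = np.array([[1.0, 0.0]])                   # position measurement
F = np.array([[1.002], [0.999]])             # candidate observer gain
c = 0.15                                     # candidate coupling gain
lam_tilde = [0.5, 1.2, 2.0, 3.1, 4.0, 5.6]   # placeholder eigenvalues of L~
print(observer_gains_admissible(G, C, F, c, lam_tilde))
```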
Corollary 10.
Under Assumptions 1, 2, and 7, consider the system (1) with the dynamic control law (4) and its triggering condition (16), where ξ_i(t_k) in (4) and (16) is estimated by the observer (19) with its triggering condition (22). Assume that the sampling period of the event detections and the parameters of the triggering conditions satisfy the conditions stated in Theorem 5. Then, for any initial states, there exist matrices K, F and a coupling gain c such that all agents achieve consensus asymptotically.
Remark 11.
Notice that the events of the dynamic controller and of the observer are triggered independently, although both require the observer states for event detection. Theorem 9 proves that the state of each agent can be estimated by the proposed event-triggered observer; thus, Corollary 10 follows from the separation principle. A drawback of the triggering condition (22), however, is that it is not clear how the state-independent triggering function should best be designed.
5. Examples
In this section, two agent models, double-integrator dynamics and the linearized dynamics of the Caltech multivehicle wireless testbed (CMVWT) vehicles, are considered to illustrate the theoretical results. It is shown that consensus of general linear dynamics can be achieved by the distributed event-triggered dynamic controller, for both the state-based and observer-based cases.
Example 1.
Consider system (1) with six agents and the following system states and matrices: [figure omitted; refer to PDF] where x_i1, x_i2 are the positions of the ith agent along the x and y coordinates. The initial states of the agents are x_1(t_0) = [60, 80]^T, x_2(t_0) = [30, 60]^T, x_3(t_0) = [0, 0]^T, x_4(t_0) = [-10, -5]^T, x_5(t_0) = [-30, -10]^T, and x_6(t_0) = [-120, -15]^T, respectively. Let x_i(t_0) = x̂_i(t_0) and v_i(t_0) = v̂_i(t_0) = 0 for all i ∈ N, which also applies to the subsequent examples.
The fixed network topology in Figure 1 is chosen. By calculation, λ_N = 4.5616; the sampling period of event detection and the parameters of the triggering conditions for all agents are then chosen as h = 0.002 and σ_1 = σ_2 = ... = σ_N = 0.4, which satisfy all the conditions required by Theorem 5. The gain matrix K = [500 500]^T is then obtained. Using the dynamic controller (4) and the triggering condition (16), the state trajectories of the six agents during the time interval [0, 10] are shown in Figure 2. The event-triggering instants and control inputs of the six agents are shown in Figures 3 and 4, respectively. It can easily be seen that consensus is achieved with discrete-time detection of events; moreover, the controller of each agent updates only at its own event-triggering instants.
Figure 1: Network topology.
[figure omitted; refer to PDF]
Figure 2: The evolution of states of each agent with double-integrator dynamics.
[figures (a), (b) omitted; refer to PDF]
Figure 3: Event-triggering instants of each agent with double-integrator dynamics.
[figure omitted; refer to PDF]
Figure 4: Control inputs of each agent with double-integrator dynamics.
[figure omitted; refer to PDF]
Example 2.
A linearized model of the Caltech multivehicle wireless testbed (CMVWT) vehicles in [31] is considered here. The system states and matrices of the six agents are described as [figure omitted; refer to PDF] where x_i1, x_i2 are the positions of the ith agent along the x and y coordinates, respectively, and x_i3 is the orientation of the ith agent. The initial states of the agents are x_1(t_0) = [5, 0, 2, 5, 5, 0]^T, x_2(t_0) = [10, 0, 4, 5, 2, 0]^T, x_3(t_0) = [15, 0, 6, 2, 5, 0]^T, x_4(t_0) = [20, 0, 8, 3, 5, 0]^T, x_5(t_0) = [25, 0, 10, 5, 3, 0]^T, and x_6(t_0) = [30, 0, 12, -3, -3, 0]^T, respectively. The network topology, sampling period, and parameters of the triggering conditions are the same as in Example 1, and [figure omitted; refer to PDF] It should be noted that [26] is incapable of dealing with the above system matrices because they do not satisfy rank(AB) = rank(A) as required in [26]. The computer simulations illustrate the validity of the proposed dynamic controller (4) and triggering condition (16) on the system (37). The state trajectories of the agents are depicted in Figure 5. Also, the event-triggering instants and control inputs of the six agents are shown in Figures 6 and 7, respectively.
Figure 5: The evolution of states of the CMVWT vehicles.
[figures (a), (b), (c) omitted; refer to PDF]
Figure 6: Event-triggering instants of the CMVWT vehicles.
[figure omitted; refer to PDF]
Figure 7: Control inputs of the CMVWT vehicles.
[figure omitted; refer to PDF]
The simulation results of Figures 3 and 6 are summarized in Tables 1 and 2, respectively, which show that both the actuation and communication updates are reduced considerably. In addition, a minimum positive interevent interval is guaranteed by the sampling period of the event detectors.
Table 1: Mean event intervals for the agents during t∈[0,10] in Example 1.
Agents | v 1 | v 2 | v 3 | v 4 | v 5 | v 6 |
Number of events | 17 | 27 | 23 | 17 | 18 | 17 |
Mean intervals | 0.5882 | 0.3704 | 0.5556 | 0.5882 | 0.5556 | 0.5882 |
Table 2: Mean event intervals for the agents during t∈[0,20] in Example 2.
Agents | v 1 | v 2 | v 3 | v 4 | v 5 | v 6 |
Number of events | 29 | 56 | 34 | 32 | 32 | 24 |
Mean intervals | 0.6897 | 0.3571 | 0.5882 | 0.6250 | 0.6250 | 0.8333 |
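Each mean interval in the tables is the simulation horizon divided by the corresponding number of events; a quick check for Table 2:

```python
horizon = 20.0                         # simulation horizon of Example 2
events = [29, 56, 34, 32, 32, 24]      # number of events per agent (Table 2)

mean_intervals = [round(horizon / n, 4) for n in events]
print(mean_intervals)                  # [0.6897, 0.3571, 0.5882, 0.625, 0.625, 0.8333]
```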
Example 3.
Consider the case of output feedback for the dynamics of Example 1 with C = [1 0]. From Theorem 9, we obtain F = [1.002 0.999]^T and c = 0.15. The parameters of the triggering condition (22) are chosen as α = 0.3 and β = 0.5. The initial states of the agents, the design of the dynamic controller with its triggering condition, and the network topology are also the same as in Example 1. As expected, the simulation results demonstrate consensus. As observed from Figure 8, the convergence of the estimation errors is fast, so the evolutions of the agent states approximate those of Example 1. As shown in Figure 9, the mean event intervals are affected only slightly by the estimated states, while the number of observer events under the triggering condition (22) is considerable. Unfortunately, it remains an open problem how to reduce the unnecessary updates triggered by the time-dependent triggering function.
Figure 8: The evolutions of the state and estimation error of each agent.
[figures (a)-(d) omitted; refer to PDF]
Figure 9: Event-triggering instants of the controller and observer of each agent.
[figures (a), (b) omitted; refer to PDF]
6. Conclusion
This paper studies the distributed event-triggered control of multiagent systems under a fixed undirected network topology. In order to find distributed event-triggering schemes applicable to general linear dynamics, a dynamic controller is employed to convert the general linear dynamics to the single-integrator model by a change of variable. A distributed event-triggering scheme using only the relative state information of neighbors is thereby obtained to update the control law, where the triggering condition is detected periodically. The result is then extended to the design of an event-triggered observer-based controller using relative output information. These theoretical results have been verified by simulations. Further work will focus on event-triggered consensus of general linear dynamics in multiagent systems with switching topologies and/or time delays, and on other issues in multiagent systems such as event-triggered formation and/or containment control.
Acknowledgment
This work was partly supported by the National Natural Science Foundation of China (Grant nos. 61174140, 61174050, and 61203016).
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
[1] W. Ren, Y. C. Cao Distributed Coordination of Multi-Agent Networks , Springer, London, UK, 2011.
[2] W. Ren, R. W. Beard Distributed Consensus in Multi-Vehicle Cooperative Control , Springer, London, UK, 2008.
[3] W. P. M. H. Heemels, K. H. Johansson, P. Tabuada, "An introduction to event-triggered and self-triggered control," in Proceedings of the 51st IEEE Conference on Decision and Control, pp. 3270-3285, Maui, Hawaii, USA, 2012.
[4] P. Tabuada, "Event-triggered real-time scheduling of stabilizing control tasks," IEEE Transactions on Automatic Control , vol. 52, no. 9, pp. 1680-1685, 2007.
[5] R. Postoyan, A. Anta, D. Nesic, P. Tabuada, "A unifying Lyapunov-based framework for the event-triggered control of nonlinear systems," in Proceedings of the 50th IEEE Conference on Decision and Control and European Control Conference, pp. 2565-2570, Orlando, Fla, USA, 2011.
[6] M. C. F. Donkers, W. P. M. H. Heemels, "Output-based event-triggered control with guaranteed L∞-gain and improved and decentralized event-triggering," IEEE Transactions on Automatic Control, vol. 57, no. 6, pp. 1362-1376, 2012.
[7] W. P. M. H. Heemels, M. C. F. Donkers, A. R. Teel, "Periodic event-triggered control for linear systems," IEEE Transactions on Automatic Control , vol. 58, no. 4, pp. 847-861, 2013.
[8] J. Lunze, D. Lehmann, "A state-feedback approach to event-based control," Automatica , vol. 46, no. 1, pp. 211-215, 2010.
[9] W. P. M. H. Heemels, M. C. F. Donkers, "Model-based periodic event-triggered control for linear systems," Automatica , vol. 49, no. 3, pp. 698-711, 2013.
[10] X. Wang, M. D. Lemmon, "Event-triggering in distributed networked control systems," IEEE Transactions on Automatic Control , vol. 56, no. 3, pp. 586-601, 2011.
[11] M. Mazo Jr., P. Tabuada, "Decentralized event-triggered control over wireless sensor actuator networks," IEEE Transactions on Automatic Control , vol. 56, no. 10, pp. 2456-2461, 2011.
[12] M. Guinaldo, D. V. Dimarogonas, K. H. Johansson, J. Sanchez, S. Dormido, "Distributed event-based control strategies for interconnected linear systems," IET Control Theory and Applications , vol. 7, no. 6, pp. 877-886, 2013.
[13] E. Garcia, P. J. Antsaklis, "Decentralized model-based event-triggered control of networked systems," in Proceedings of the American Control Conference, pp. 6485-6490, Montreal, Canada, 2012.
[14] D. V. Dimarogonas, K. H. Johansson, "Event-triggered control for multi-agent systems," in Proceedings of the 48th IEEE Conference on Decision and Control (CDC '09), pp. 7131-7136, Shanghai, China, December 2009.
[15] G. S. Seyboth, D. V. Dimarogonas, K. H. Johansson, "Event-based broadcasting for multi-agent average consensus," Automatica , vol. 49, no. 1, pp. 245-252, 2013.
[16] Y. Fan, G. Feng, Y. Wang, C. Song, "Distributed event-triggered control of multi-agent systems with combinational measurements," Automatica , vol. 49, no. 2, pp. 671-675, 2013.
[17] E. Garcia, Y. C. Cao, H. Yu, P. Antsaklis, D. Casbeer, "Decentralized event-triggered cooperative control with limited communication," International Journal of Control , vol. 86, no. 9, pp. 1479-1488, 2013.
[18] X. Y. Meng, T. W. Chen, "Event based agreement protocols for multi-agent networks," Automatica, vol. 49, no. 7, pp. 2125-2132, 2013.
[19] J. P. Hu, Y. L. Zhou, Y. S. Lin, "Second-order multiagent systems with event-driven consensus control," Abstract and Applied Analysis , vol. 2013, 2013.
[20] O. Demir, J. Lunze, "Event-based synchronization of multi-agent systems," in Proceedings of the 4th IFAC Conference on Analysis and Design of Hybrid Systems, pp. 1-6, Eindhoven, The Netherlands, 2012.
[21] W. Zhu, Z. P. Jiang, G. Feng, "Event-based consensus of multi-agent systems with general linear models," Automatica , vol. 50, no. 2, pp. 552-558, 2014.
[22] F. Xiao, X. Y. Meng, T. W. Chen, "Average sampled-data consensus driven by edge events," in Proceeding of the 31st Chinese Control Conference, pp. 6239-6244, Hefei, China, 2012.
[23] D. Liuzza, D. V. Dimarogonas, M. D. Bernardo, K. H. Johansson, "Distributed model based event-triggered control for synchronization of multi-agent systems," in Proceedings of the 9th IFAC Symposium on Nonlinear Control Systems, pp. 329-334, Toulouse, France, 2013.
[24] L. Scardovi, R. Sepulchre, "Synchronization in networks of identical linear systems," Automatica , vol. 45, no. 11, pp. 2557-2562, 2009.
[25] S. Golfinger Event-triggered control for synchronization [M.S. thesis] , 2012.
[26] Z. Q. Zhang, F. Hao, L. Zhang, L. Wang, "Consensus of linear multi-agent systems via event-triggered control," International Journal of Control , vol. 87, no. 6, pp. 1243-1251, 2014.
[27] Z. K. Li, Z. S. Duan, G. R. Chen, L. Huang, "Consensus of multiagent systems and synchronization of complex networks: a unified viewpoint," IEEE Transactions on Circuits and Systems I , vol. 57, no. 1, pp. 213-224, 2010.
[28] J. H. Qin, H. J. Gao, C. B. Yu, "On discrete-time convergence for general linear multi-agent systems under dynamic topology," IEEE Transactions on Automatic Control , vol. 59, no. 4, pp. 1054-1059, 2014.
[29] C. T. Chen Linear System Theory and Design , Oxford University Press, 1999.
[30] K. Hengster-Movric, F. Lewis, "Cooperative observers and regulators for discrete-time multiagent systems," International Journal of Robust and Nonlinear Control , vol. 23, no. 14, pp. 1545-1562, 2013.
[31] V. Gupta, B. Hassibi, R. M. Murray, "A sub-optimal algorithm to synthesize control laws for a network of dynamic agents," International Journal of Control , vol. 78, no. 16, pp. 1302-1313, 2005.
Copyright © 2014 Xieyan Zhang and Jing Zhang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
This paper discusses the event-triggered consensus problem of multiagent systems. To investigate distributed event-triggering strategies applicable to general linear dynamics, we employ a dynamic controller to convert the general linear dynamics to the single-integrator model by a change of variable. The consensus value of the new states is a constant, so a distributed event-triggering scheme is obtained under periodic event detection, in which agents with general linear dynamics require knowledge only of the relative states of their neighbors. Further, an event-triggered observer is proposed to address the case where only relative output information is available. Hence, consensus in both the state-based and observer-based cases is achieved by the distributed event-triggered dynamic controller. Finally, numerical simulations are provided to demonstrate the effectiveness of the theoretical results.