Liyuan Hou 1 and Hong Zhu 1 and Shouming Zhong 2,3 and Yong Zeng 1 and Lin Shi 1
Academic Editor: Qiankun Song
1 School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
2 School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
3 Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu 611731, China
Received 14 September 2013; Revised 7 December 2013; Accepted 7 December 2013; Published 18 February 2014
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
In the past decades, recurrent neural networks (RNNs) have been widely studied owing to their broad applications in areas such as pattern recognition, associative memory, combinatorial optimization, and signal processing. Dynamical behaviors (e.g., stability, instability, periodic oscillation, and chaos) of neural networks are known to be crucial in applications; in particular, the stability of a neural network is a prerequisite for many optimization problems. It is well known that many biological and artificial neural networks contain inherent time delays in signal transmission due to the finite speed of information processing, which may cause oscillation, divergence, and instability. In recent years, a great number of papers have been published on various networks with time delays [1-10].
On one hand, a delay-dependent stability condition for continuous-time RNNs with time-varying delays was derived by defining a new Lyapunov functional, and the obtained condition includes some existing delay-independent ones; see [11, 12]. However, when a computer is used to simulate, experiment with, or compute continuous-time RNNs, the continuous-time networks must be discretized to formulate a discrete-time system. The study of the dynamics of discrete-time neural networks is therefore crucially needed. In particular, the stability of discrete-time neural networks (DNNs) has been studied in [13-18], since DNNs play an even more important role than their continuous-time counterparts in today's digital world.
On the other hand, in many applications the neuron states are seldom fully available in the network outputs, so it becomes important to estimate the neuron states through the available measurements. Recently, the state estimation problem for neural networks has attracted considerable attention, and the delay-dependent state estimation problem has been studied widely for NNs; see [19-26].
Stochastic disturbances are mostly inevitable owing to thermal noise in electronic implementations, and it has been revealed that certain stochastic inputs can make a neural network unstable.
Summarizing the above discussion, in this paper the stability problem is considered for discrete-time neural networks with discrete and distributed delays. Firstly, the mathematical models are established. Secondly, a new and less conservative stability criterion is derived by using a novel Lyapunov-Krasovskii functional. Thirdly, a numerical example is provided to show the effectiveness of the main result. The main technical difficulty of this paper lies in the partition of the distributed time-varying delays. The novel contribution of this work with respect to the existing literature is the construction of a novel Lyapunov-Krasovskii functional adapted to this partition of the distributed time-varying delays. In Corollary 11, by using Lemma 7, which is proved in Section 2, we obtain a further stability criterion.
Notation. Throughout this paper, ℝ^n and ℝ^{n×m} denote, respectively, the n-dimensional Euclidean space and the set of all n × m real matrices. The superscript T denotes matrix transposition, and X ≥ Y (resp., X > Y), where X and Y are symmetric matrices, means that X - Y is positive semidefinite (resp., positive definite). In symmetric block matrices, the symbol * is used as an ellipsis for terms induced by symmetry. |·| stands for the Euclidean vector norm in ℝ^n. Sym(M) is defined as Sym(M) = M + M^T. ℤ_{≥0} denotes the set of nonnegative integers. E{x} and E{x | y} denote the expectation of x and the expectation of x conditional on y, respectively. (Ω, ℱ, 𝒫) is a probability space, where Ω is the sample space, ℱ is the σ-algebra of subsets of the sample space, and 𝒫 is the probability measure on ℱ.
2. Preliminaries
Consider the following discrete-time recurrent neural network with time-varying delays described by [figure omitted; refer to PDF] where x(k) = [x_1(k), x_2(k), ..., x_n(k)]^T is the neural state vector at time k; C = diag[c_1, c_2, ..., c_n] with |c_i| < 1 is the state feedback coefficient matrix; the n × n matrices A = [a_ij], B = [b_ij], and D = [d_ij] are the connection weight matrix, the discretely delayed connection weight matrix, and the distributively delayed connection weight matrix, respectively; J = [J_1, J_2, ..., J_n]^T is the exogenous input; F(x(k)), G(x(k)), and H(x(k)) are the neuron activation functions, with F(x(k)) = [f_1(x_1(k)), ..., f_n(x_n(k))]^T, G(x(k)) = [g_1(x_1(k)), ..., g_n(x_n(k))]^T, and H(x(k)) = [h_1(x_1(k)), ..., h_n(x_n(k))]^T; τ(k) and d(k) denote, respectively, the discrete and distributed time-varying delays; and ω(k) is a scalar Wiener process on a probability space (Ω, ℱ, 𝒫) with E{ω(k)} = 0, E{ω²(k)} = 1, and E{ω(i)ω(j)} = 0 (i ≠ j).
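To make the model concrete, the following minimal Python sketch simulates a network of this kind. Since the displayed equation (1) was lost in extraction, the update rule and all numerical values in the sketch are assumptions chosen only to match the verbal description above (state feedback C with |c_i| < 1, a discretely delayed term, a distributively delayed sum, and a scalar Wiener process ω(k)).

```python
import numpy as np

# Simulation of a discrete-time stochastic RNN with mixed delays.  The
# displayed equation (1) was lost in extraction, so the update below uses a
# form common in this literature and should be read as an assumption:
#   x(k+1) = C x(k) + A F(x(k)) + B G(x(k - tau(k)))
#            + D sum_{i=1}^{d(k)} H(x(k - i)) + J + delta(k, x(k)) w(k)
rng = np.random.default_rng(0)
n = 2
C = np.diag([0.3, 0.2])                    # state feedback, |c_i| < 1
A = 0.1 * rng.standard_normal((n, n))      # placeholder connection weights
B = 0.1 * rng.standard_normal((n, n))
D = 0.05 * rng.standard_normal((n, n))
J = np.zeros(n)                            # exogenous input
F = G = H = np.tanh                        # sector-bounded activations
delta = lambda x: np.sin(x)                # noise intensity (Assumption 3)
tau = lambda k: 1 + (k % 3)                # bounded discrete delay
d = lambda k: 1 + (k % 2)                  # bounded distributed delay

steps = 100
x = np.zeros((steps + 1, n))
x[0] = [-1.0, 0.5]
for k in range(steps):
    x_delayed = x[max(k - tau(k), 0)]
    dist = sum(H(x[max(k - i, 0)]) for i in range(1, d(k) + 1))
    w = rng.standard_normal()              # E{w(k)} = 0, E{w(k)^2} = 1
    x[k + 1] = (C @ x[k] + A @ F(x[k]) + B @ G(x_delayed)
                + D @ dist + J + delta(x[k]) * w)
```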
Assumption 1.
For any x, y ∈ ℝ (x ≠ y) and i ∈ {1, 2, ..., n}, the activation functions satisfy [figure omitted; refer to PDF] where f_i^-, f_i^+, g_i^-, g_i^+, h_i^-, and h_i^+ are constants.
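The displayed condition was lost in extraction; judging from the sector bounds restated in the proof of Theorem 10 (Section 3), it presumably takes the familiar form below, for all x, y ∈ ℝ with x ≠ y (a reconstruction, not a verbatim copy):

```latex
f_i^{-} \le \frac{f_i(x)-f_i(y)}{x-y} \le f_i^{+}, \qquad
g_i^{-} \le \frac{g_i(x)-g_i(y)}{x-y} \le g_i^{+}, \qquad
h_i^{-} \le \frac{h_i(x)-h_i(y)}{x-y} \le h_i^{+}.
```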
Remark 2.
The condition on the activation functions in Assumption 1 was originally employed in [27] and has subsequently been used in recent papers on the stability of neural networks; see [5, 6, 11, 28, 29], for example.
Assumption 3.
The noise intensity function vector δ(·,·) : ℤ_{≥0} × ℝ^n → ℝ^n satisfies the Lipschitz condition; that is, there exists a constant ξ such that for any k ∈ ℤ_{≥0} the following inequality holds: [figure omitted; refer to PDF]
Assumption 4.
The time-varying delays τ(k) and d(k) are bounded, 0 < τ_m ≤ τ(k) ≤ τ_M and 0 < d_m ≤ d(k) ≤ d_M, and their probability distributions can be observed. Assume that τ(k) takes values in [τ_0, τ_1] ∪ ⋯ ∪ (τ_{n_1 - 1}, τ_{n_1}] and Prob{τ(k) ∈ [τ_{i-1}, τ_i)} = ρ_i = 1 - ρ̃_i, where 0 ≤ ρ_i ≤ 1, ∑_i ρ_i = 1, τ_0 = τ_m, and τ_{n_1} = τ_M. Similarly, d(k) takes values in [d_0, d_1] ∪ ⋯ ∪ (d_{n_2 - 1}, d_{n_2}], and Prob{d(k) ∈ [d_{i-1}, d_i)} = ξ_i = 1 - ξ̃_i, where 0 ≤ ξ_i ≤ 1, ∑_i ξ_i = 1, d_0 = d_m, and d_{n_2} = d_M.
Remark 5.
It is noted that the binary stochastic variable was first introduced in [6].
To describe the probability distribution of the time-varying delays, we define the sets 𝒯_i = (τ_{i-1}, τ_i], i = 1, 2, ..., n_1, and 𝒟_i = (d_{i-1}, d_i], i = 1, 2, ..., n_2, and define the mapping functions [figure omitted; refer to PDF]
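As an illustration, a small Python sketch of such interval-indicator functions follows. The names make_indicators and rho and the breakpoint values are hypothetical, since the displayed mapping definitions were omitted; the sketch only shows that, at each step, exactly one subinterval indicator fires.

```python
# Sketch of the delay-partition indicator ("mapping") functions used in
# Assumption 4 and Remark 6: rho(i, delay) = 1 exactly when the delay falls
# in the i-th subinterval (t_{i-1}, t_i], so exactly one indicator equals 1
# at each time step and E{rho_i(k)} equals the occurrence probability rho_i.
def make_indicators(breakpoints):
    """breakpoints = [t0, t1, ..., tn] partitioning the delay range."""
    def rho(i, delay):                       # i runs from 1 to n
        lo, hi = breakpoints[i - 1], breakpoints[i]
        return 1 if lo < delay <= hi else 0
    return rho

rho = make_indicators([1, 3, 5])             # e.g., tau_m=1, tau_1=3, tau_M=5
assert sum(rho(i, 4) for i in (1, 2)) == 1   # exactly one interval fires
```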
Remark 6.
Consider Prob{ρ_i(k) = 1} = E{ρ_i(k)} = ρ_i and Prob{ρ_i(k) = 0} = ρ̃_i, [figure omitted; refer to PDF]
Similarly, Prob{ξ_i(k) = 1} = E{ξ_i(k)} = ξ_i and Prob{ξ_i(k) = 0} = ξ̃_i, [figure omitted; refer to PDF]
Proof.
When i ≠ j and τ(k) ∈ [τ_{i-1}, τ_i), since Prob{τ(k) ∈ [τ_{i-1}, τ_i)} = ρ_i = 1 - ρ̃_i, we can easily deduce that ρ_i(k) = 1 and ρ_j(k) = 0, so [figure omitted; refer to PDF] Similarly, the result for ξ_i(k) can be deduced. The proof is complete.
The system (1) can be rewritten as [figure omitted; refer to PDF]
As mentioned before, it is very difficult or even impossible to acquire complete information on the neuron states in relatively large-scale neural networks. The objective of this study is therefore to present an efficient algorithm for estimating the neuron states via the available network outputs. It is assumed that the measured network outputs are of the form [figure omitted; refer to PDF] where y(k) ∈ ℝ^m is the measured output, E is a known constant matrix with appropriate dimensions, and O : ℤ_{≥0} × ℝ^n → ℝ^m is a nonlinear disturbance on the network outputs satisfying [figure omitted; refer to PDF]
As a matter of fact, the activation functions F(·) are known. In order to fully utilize the information of the activation functions, the state estimator for the neural network is constructed as [figure omitted; refer to PDF] where x̂(k+1) is the estimate of the neuron state and K ∈ ℝ^{n×m} is the estimator gain matrix to be determined. Defining the error signal e(k+1) = x(k+1) - x̂(k+1), we obtain the error-state system [figure omitted; refer to PDF]
Denote f(k) = F(x(k)) - F(x̂(k)), g(k - τ_i(k)) = G(x(k - τ_i(k))) - G(x̂(k - τ_i(k))), h(k + i) = H(x(k + i)) - H(x̂(k + i)), and o(k) = O(k, x(k)) - O(k, x̂(k)); then (12) can be rewritten as [figure omitted; refer to PDF]
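A minimal sketch of one estimator update is given below. Because the displayed equations (11)-(13) were lost in extraction, the output-injection form used here (an open-loop copy of the plant plus a gain K acting on the innovation y(k) - E x̂(k) - O(k, x̂(k))) is an assumption, chosen to be consistent with the error terms f(k), g(k - τ_i(k)), h(k + i), and o(k) defined above; all function and parameter names are placeholders.

```python
import numpy as np

def estimator_step(x_hat_hist, y_k, k, C, A, B, D, K, E_mat, O, F, G, H, tau, d):
    """One update x_hat(k+1); x_hat_hist[j] stores the estimate at time j."""
    x_hat = x_hat_hist[k]
    x_hat_delayed = x_hat_hist[max(k - tau(k), 0)]
    dist = sum(H(x_hat_hist[max(k - i, 0)]) for i in range(1, d(k) + 1))
    # Output injection: correct the open-loop prediction by the innovation,
    # i.e., the measured output minus the predicted output E x_hat + O(k, x_hat).
    innovation = y_k - E_mat @ x_hat - O(k, x_hat)
    return (C @ x_hat + A @ F(x_hat) + B @ G(x_hat_delayed)
            + D @ dist + K @ innovation)
```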
The initial condition associated with the error system (13) is given as [figure omitted; refer to PDF] where ς_M = max{τ_M, d_M} and ||φ|| = sup_{-ς_M ≤ k ≤ 0} |φ(k)|² < ∞.
By defining ê(k) = [x^T(k), e^T(k)]^T and combining (8) and (13) with J = 0, we obtain the following system: [figure omitted; refer to PDF] where [figure omitted; refer to PDF] Then it is easy to show the following equations: [figure omitted; refer to PDF]
Lemma 7.
For any constant matrix M ∈ ℝ^{n×n} satisfying M = M^T > 0, any integers γ_2 ≥ γ_1, and any vector function ω : {γ_1, γ_1 + 1, ..., γ_2} → ℝ^n such that the sums below are well defined, [figure omitted; refer to PDF] where the matrix F and the vector ζ(k), both independent of γ_1 and γ_2, are arbitrary with appropriate dimensions.
Proof.
It is well known that [figure omitted; refer to PDF] where a and b are vectors of appropriate dimensions and W > 0. From this, we can get [figure omitted; refer to PDF] which is equivalent to (18).
Lemma 8 (Zhu and Yang [28]).
For any constant matrix M ∈ ℝ^{n×n} satisfying M = M^T > 0, any integers γ_2 ≥ γ_1, and any vector function ω : {γ_1, γ_1 + 1, ..., γ_2} → ℝ^n such that the sums below are well defined, [figure omitted; refer to PDF]
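The display of Lemma 8 was lost in extraction. Reference [28] is a Jensen-inequality approach, and in its standard discrete form the result reads (∑_{i=γ_1}^{γ_2} ω(i))^T M (∑_{i=γ_1}^{γ_2} ω(i)) ≤ (γ_2 - γ_1 + 1) ∑_{i=γ_1}^{γ_2} ω^T(i) M ω(i); this reconstruction, not a verbatim copy of the paper's display, is what the following quick numerical check illustrates.

```python
# Numerical sanity check of the discrete Jensen inequality (standard form
# of the result attributed to Zhu and Yang [28]):
#   (sum_i w(i))^T M (sum_i w(i)) <= (g2 - g1 + 1) sum_i w(i)^T M w(i)
# for any symmetric positive definite M.
import numpy as np

rng = np.random.default_rng(1)
n, g1, g2 = 3, 2, 7
Q = rng.standard_normal((n, n))
M = Q @ Q.T + n * np.eye(n)                  # symmetric positive definite
w = rng.standard_normal((g2 - g1 + 1, n))    # samples w(g1), ..., w(g2)

s = w.sum(axis=0)
lhs = s @ M @ s
rhs = (g2 - g1 + 1) * sum(wi @ M @ wi for wi in w)
assert lhs <= rhs + 1e-9                     # holds for every sample
```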
3. New Stability Criteria
In this section, we will establish new stability criteria for system (1). Since the system in (8) involves a stochastic parameter, to investigate its stability, we need the following definition.
Definition 9.
The system (11) is said to be a globally asymptotic state estimator of the system (8) if the estimation error system (13) is globally asymptotically stable in mean square; that is, [figure omitted; refer to PDF]
Theorem 10.
Under Assumptions 1, 3, and 4, the system (15) is globally asymptotically stable in mean square if there exist matrices P = P^T = diag{P_1, P_2} > 0, R_i = diag{R_{i1}, R_{i2}} ≥ 0 (i = 1, ..., n_2 - 1), S_i ≥ 0 (i = 1, ..., n_1), X > 0, and T_i ≥ 0 (i = 1, ..., n_2), positive diagonal matrices U = diag(u_1, ..., u_n), V = diag(v_1, ..., v_n), and W = diag(w_1, ..., w_n), and scalars σ > 0 and ϑ > 0 such that the following LMI holds: [figure omitted; refer to PDF]
The estimator gain can then be designed as K = P_2^{-1} X.
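Theorem 10's LMI blocks were lost in extraction and cannot be reproduced here, so the following cvxpy sketch instead illustrates the same computational workflow on a miniature problem: declare the matrix variables, impose an LMI feasibility condition, and recover the gain through the linearizing substitution X = PK (mirroring K = P_2^{-1}X above). The LMI used, a Schur-complement form of (C - KE)^T P (C - KE) - P < 0 for the delay-free part of the error dynamics, is only an illustration of the technique, not the theorem's actual condition; C and E_mat are placeholder data.

```python
import numpy as np
import cvxpy as cp

# Placeholder data: a Schur-stable C (|c_i| < 1) and an output matrix E.
C = np.diag([0.3, 0.2])
E_mat = np.array([[1.0, 0.0]])
n, m = C.shape[0], E_mat.shape[0]

P = cp.Variable((n, n), symmetric=True)      # plays the role of P_2
X = cp.Variable((n, m))                      # X = P K, the linearizing change
lmi = cp.bmat([[-P, (P @ C - X @ E_mat).T],
               [P @ C - X @ E_mat, -P]])
S = cp.Variable((2 * n, 2 * n), symmetric=True)

prob = cp.Problem(cp.Minimize(0),
                  [P >> np.eye(n),           # P positive definite
                   S == lmi,                 # S carries the block LMI
                   S << -1e-6 * np.eye(2 * n)])
prob.solve()
K = np.linalg.solve(P.value, X.value)        # recover the gain: K = P^{-1} X
```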
Proof.
We construct a new Lyapunov-Krasovskii functional as [figure omitted; refer to PDF] where [figure omitted; refer to PDF]
Taking the difference of the functional along the solution of the system, we obtain [figure omitted; refer to PDF] From Remark 6, we can get [figure omitted; refer to PDF]
Similarly, the following equation can be deduced: [figure omitted; refer to PDF] It is also easy to deduce that B̂^T ρ̂_1^T P ρ̂_2 B̂ = 0, B̂^T ρ̂_1^T P ξ̂_2 D̂ = 0, and D̂^T ξ̂_1^T P ρ̂_2 B̂ = 0. Consider [figure omitted; refer to PDF] Then, by using Lemma 7 and d_i(k) ∈ (d_{i-1}, d_i], we have [figure omitted; refer to PDF] Applying Lemma 7 again, we have [figure omitted; refer to PDF] Letting α_1(k) = (ê^T(k - τ_1(k)), ê^T(k - τ_2(k)), ..., ê^T(k - τ_{n_1}(k)), ê^T(k - τ_M))^T, we have [figure omitted; refer to PDF]
From Assumption 1, f_i^- ≤ (f_i(x) - f_i(y))/(x - y) ≤ f_i^+ and g_i^- ≤ (g_i(x) - g_i(y))/(x - y) ≤ g_i^+, so we have (f(k) - f_i^- e(k))(f(k) - f_i^+ e(k)) ≤ 0 and (g(k - τ_i(k)) - g_i^- e(k - τ_i(k)))(g(k - τ_i(k)) - g_i^+ e(k - τ_i(k))) ≤ 0.
It can be deduced that there exist U = diag[u_1, u_2, ..., u_n] > 0, W = diag[w_1, w_2, ..., w_n] > 0, and V = diag[v_1, v_2, ..., v_n] > 0 such that [figure omitted; refer to PDF] where e_i denotes the unit column vector having a one in its i-th row and zeros elsewhere.
According to (10), o^T(k)o(k) - e^T(k)L^T L e(k) ≤ 0; hence the following inequality holds, where ϑ is a positive scalar: [figure omitted; refer to PDF] From Assumption 3, we can obtain, for a positive scalar σ, [figure omitted; refer to PDF] Combining (27)-(36), we obtain [figure omitted; refer to PDF] where [figure omitted; refer to PDF]
Using the Schur complement, the condition Γ̂ + Φ^T Λ Φ < 0 is equivalent to [figure omitted; refer to PDF] Pre- and postmultiplying (39) by diag{I, ΛP^{-1}} and its transpose, respectively, yields [figure omitted; refer to PDF] Then, by denoting K = P_2^{-1} X, one gets that the LMI condition (40) guarantees (23).
It is obvious that [figure omitted; refer to PDF] Summing both sides of (41) from 1 to N (where N is a positive integer) yields [figure omitted; refer to PDF] So [figure omitted; refer to PDF] We can conclude that ∑_{k=1}^{+∞} E{||ê(k)||²} is convergent and [figure omitted; refer to PDF] This completes the proof.
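For readers who want the final step spelled out, here is a sketch of the standard summation argument, under the assumption (usual in such proofs, but not verifiable from the lost displays) that the combined inequality provides a uniform decay rate λ > 0:

```latex
\mathbb{E}\{V(k+1)\} - \mathbb{E}\{V(k)\} \le -\lambda\,\mathbb{E}\{\|\hat{e}(k)\|^{2}\}
\quad\Longrightarrow\quad
\sum_{k=1}^{N}\mathbb{E}\{\|\hat{e}(k)\|^{2}\}
\le \frac{\mathbb{E}\{V(1)\}-\mathbb{E}\{V(N+1)\}}{\lambda}
\le \frac{\mathbb{E}\{V(1)\}}{\lambda}.
```

Letting N → ∞ shows that the series converges, and hence E{||ê(k)||²} → 0, which is exactly the mean-square stability required by Definition 9.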
Based on Theorem 10, a further improved delay-dependent stability criterion of the system (15) is given in the following corollary by using Lemma 8.
Corollary 11.
Under Assumptions 1, 3, and 4, the system (15) is globally asymptotically stable in mean square if there exist matrices P = P^T = diag{P_1, P_2} > 0, Q_i = Q_i^T ≥ 0 (i = 1, 2, 3, 4), S > 0, R + S > 0, and W = diag(w_1, w_2, ..., w_n) such that the following LMI holds: [figure omitted; refer to PDF] where Γ is defined in Theorem 10 and E_6 = (0_{2n}, 0_{(n_1+1)n}, 0_{n_2 n}, 0_{2n}, 0_{n_1 n}, I_{n_2}, 0_{n_1 n}, 0_{n_2 n}, 0_n, 0_n, 0_n, 0_n, 0_n)^T.
Proof.
From Theorem 10, we know that [figure omitted; refer to PDF] Then, using Lemma 8 and d_i(k) ∈ (d_{i-1}, d_i], we can deduce that, for any matrices Q_i with appropriate dimensions, [figure omitted; refer to PDF]
4. Examples
In this section, a numerical example is given to illustrate the effectiveness and benefits of the developed methods.
Example 1.
We consider the delayed stochastic DNN (1) with the following parameters: [figure omitted; refer to PDF] The activation functions satisfy Assumption 1 with [figure omitted; refer to PDF] For the parameters listed above, letting τ_m = 1, τ_M = 5, τ_1 = 3, and ρ_1 = 0.89, we obtain a feasible solution, which shows that our method is effective. Owing to space limitations, we provide only part of the feasible solution here: [figure omitted; refer to PDF]
Therefore, according to Theorem 10, the gain matrix of the desired estimator can be obtained as [figure omitted; refer to PDF] When we define f_1(s) = g_1(s) = h_1(s) = tanh(-0.4s), f_2(s) = g_2(s) = h_2(s) = tanh(0.4s), O(k, x(k)) = 0.2 sin(x(k)), and δ(k, x(k)) = sin(x(k)), we obtain Figures 1 and 2, which show the trajectories of x(k) and of the state estimator x̂(k) with the initial conditions x(k) = [-1, 0.5] and x̂(k) = [-0.5, -0.5]. From Theorem 10, it follows that (46) is indeed a state estimator of the delayed neural network (1). Figure 3 further confirms that the estimation error e(k) tends to zero as k → ∞.
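The following Python sketch reproduces the spirit of this simulation. The example's weight matrices and the computed gain K were lost in extraction, so the matrices below are placeholders; only the activation functions, the disturbances O(k, x(k)) = 0.2 sin(x(k)) and δ(k, x(k)) = sin(x(k)), the delay bounds τ_m = 1 and τ_M = 5, and the initial conditions are taken from the text.

```python
import numpy as np

rng = np.random.default_rng(2)
C = np.diag([0.3, 0.2])                      # placeholder system matrices
A = np.array([[0.1, -0.05], [0.05, 0.1]])
B = np.array([[0.05, 0.02], [-0.02, 0.05]])
D = np.array([[0.02, 0.0], [0.0, 0.02]])
E_mat = np.eye(2)
K = 0.1 * np.eye(2)                          # placeholder estimator gain

f = lambda x: np.array([np.tanh(-0.4 * x[0]), np.tanh(0.4 * x[1])])
O = lambda x: 0.2 * np.sin(x)                # output disturbance (from text)
delta = lambda x: np.sin(x)                  # noise intensity (from text)
tau = lambda k: 1 + (k % 5)                  # tau_m = 1, tau_M = 5
d = lambda k: 1 + (k % 2)

N = 200
x = np.zeros((N + 1, 2))
xh = np.zeros((N + 1, 2))
x[0] = [-1.0, 0.5]                           # initial conditions from text
xh[0] = [-0.5, -0.5]
for k in range(N):
    w = rng.standard_normal()
    xd, xhd = x[max(k - tau(k), 0)], xh[max(k - tau(k), 0)]
    dist = sum(f(x[max(k - i, 0)]) for i in range(1, d(k) + 1))
    disth = sum(f(xh[max(k - i, 0)]) for i in range(1, d(k) + 1))
    x[k + 1] = (C @ x[k] + A @ f(x[k]) + B @ f(xd)
                + D @ dist + delta(x[k]) * w)
    y = E_mat @ x[k] + O(x[k])               # measured output
    xh[k + 1] = (C @ xh[k] + A @ f(xh[k]) + B @ f(xhd) + D @ disth
                 + K @ (y - E_mat @ xh[k] - O(xh[k])))
e = x - xh                                   # estimation error (Figure 3)
```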
Figure 1: The trajectories of x ( k ) with J = 0 .
[figure omitted; refer to PDF]
Figure 2: The trajectories of x̃(k) with J = 0.
[figure omitted; refer to PDF]
Figure 3: The evolution of estimation errors e ( k ) .
[figure omitted; refer to PDF]
5. Conclusions
The robust stability of stochastic discrete-time NNs with mixed delays has been investigated in this paper via the Lyapunov functional method. By employing delay partitioning and introducing a new Lyapunov functional, more general LMI conditions for the stability of stochastic discrete-time NNs are established. Finally, the feasibility and effectiveness of the developed methods, and their reduced conservatism compared with most existing results, have been shown by a numerical simulation example. The foregoing results have the potential to be useful for the study of stochastic discrete-time NNs, and they can also be extended to complex networks with mixed time-varying delays.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (2010CB732501) and the Fund of Sichuan Provincial Key Laboratory of Signal and Information Processing (SZJJ2009-002).
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
[1] S. Lakshmanan, V. Vembarasan, P. Balasubramaniam, "Delay decomposition approach to state estimation of neural networks with mixed time-varying delays and Markovian jumping parameters," Mathematical Methods in the Applied Sciences , vol. 36, no. 4, pp. 395-412, 2013.
[2] Z. Wang, Y. Liu, X. Liu, Y. Shi, "Robust state estimation for discrete-time stochastic neural networks with probabilistic measurement delays," Neurocomputing , vol. 74, no. 1-3, pp. 256-264, 2010.
[3] Y. Liu, Z. Wang, A. Serrano, X. Liu, "Discrete-time recurrent neural networks with time-varying delays: exponential stability analysis," Physics Letters A , vol. 362, no. 5-6, pp. 480-488, 2007.
[4] Q. Song, Z. Wang, "A delay-dependent LMI approach to dynamics analysis of discrete-time recurrent neural networks with time-varying delays," Physics Letters A , vol. 368, no. 1-2, pp. 134-145, 2007.
[5] Y. Liu, Z. Wang, X. Liu, "State estimation for discrete-time Markovian jumping neural networks with Markov-mode-dependent lower and upper bounds on the distributed delays," Neural Processing Letters , vol. 36, no. 1, pp. 1-19, 2012.
[6] J. Liang, J. Lam, "Robust state estimation for stochastic genetic regulatory networks," International Journal of Systems Science , vol. 41, no. 1, pp. 47-63, 2010.
[7] Q. Song, Z. Wang, "New results on passivity analysis of uncertain neural networks with time-varying delays," International Journal of Computer Mathematics , vol. 87, no. 3, pp. 668-678, 2010.
[8] J. Cheng, H. Zhu, S. Zhong, Y. Zhang, "Robust stability of switched delay systems with average dwell time under asynchronous switching," Journal of Applied Mathematics , vol. 2012, 2012.
[9] J. Cheng, H. Zhu, S. Zhong, G. Li, "Novel delay-dependent robust stability criteria for neutral systems with mixed time-varying delays and nonlinear perturbations," Applied Mathematics and Computation , vol. 219, no. 14, pp. 7741-7753, 2013.
[10] J. Cheng, H. Zhu, S. Zhong, Y. Zhang, Y. Zeng, "Improved delay-dependent stability criteria for continuous system with two additive time-varying delay components," Communications in Nonlinear Science and Numerical Simulation , vol. 19, no. 1, pp. 210-215, 2014.
[11] P. Balasubramaniam, R. Rakkiyappan, "Delay-dependent robust stability analysis of uncertain stochastic neural networks with discrete interval and distributed time-varying delays," Neurocomputing , vol. 72, no. 13-15, pp. 3231-3237, 2009.
[12] X.-L. Zhu, G.-H. Yang, "New delay-dependent stability results for neural networks with time-varying delay," IEEE Transactions on Neural Networks , vol. 19, no. 10, pp. 1783-1791, 2008.
[13] B. Zhang, S. Xu, Y. Zou, "Improved delay-dependent exponential stability criteria for discrete-time recurrent neural networks with time-varying delays," Neurocomputing , vol. 72, no. 1-3, pp. 321-330, 2008.
[14] C. Song, H. Gao, W. X. Zheng, "A new approach to stability analysis of discrete-time recurrent neural networks with time-varying delay," Neurocomputing , vol. 72, no. 10-12, pp. 2563-2568, 2009.
[15] Y. Tang, J.-A. Fang, M. Xia, D. Yu, "Delay-distribution-dependent stability of stochastic discrete-time neural networks with randomly mixed time-varying delays," Neurocomputing , vol. 72, no. 16-18, pp. 3830-3838, 2009.
[16] Y. Ou, H. Liu, Y. Si, Z. Feng, "Stability analysis of discrete-time stochastic neural networks with time-varying delays," Neurocomputing , vol. 73, no. 4-6, pp. 740-748, 2010.
[17] Y. Liu, Z. Wang, X. Liu, "Robust stability of discrete-time stochastic neural networks with time-varying delays," Neurocomputing , vol. 71, no. 4-6, pp. 823-833, 2008.
[18] H. Wang, Q. Song, "Synchronization for an array of coupled stochastic discrete-time neural networks with mixed delays," Neurocomputing , vol. 74, no. 10, pp. 1572-1584, 2011.
[19] T. Wang, Y. Ding, L. Zhang, K. Hao, "Robust state estimation for discrete-time stochastic genetic regulatory networks with probabilistic measurement delays," Neurocomputing , vol. 111, pp. 1-12, 2013.
[20] X. Kan, Z. Wang, H. Shu, "State estimation for discrete-time delayed neural networks with fractional uncertainties and sensor saturations," Neurocomputing , vol. 117, pp. 64-71, 2013.
[21] S. Lakshmanan, J. H. Park, H. Y. Jung, P. Balasubramaniam, S. M. Lee, "Design of state estimator for genetic regulatory networks with time-varying delays and randomly occurring uncertainties," Biosystems , vol. 111, no. 1, pp. 51-70, 2013.
[22] T. H. Lee, J. H. Park, O. M. Kwon, S. M. Lee, "Stochastic sampled-data control for state estimation of time-varying delayed neural networks," Neural Networks , vol. 46, pp. 99-108, 2013.
[23] H. Huang, T. Huang, X. Chen, "A mode-dependent approach to state estimation of recurrent neural networks with Markovian jumping parameters and mixed delays," Neural Networks , vol. 46, pp. 50-61, 2013.
[24] Y. Chen, W. X. Zheng, "Stochastic state estimation for neural networks with distributed delays and Markovian jump," Neural Networks , vol. 25, pp. 14-20, 2012.
[25] H. Bao, J. Cao, "Delay-distribution-dependent state estimation for discrete-time stochastic neural networks with random delay," Neural Networks , vol. 24, no. 1, pp. 19-28, 2011.
[26] H. Wang, Q. Song, "State estimation for neural networks with mixed interval time-varying delays," Neurocomputing , vol. 73, no. 7-9, pp. 1281-1288, 2010.
[27] Z. Wang, H. Shu, Y. Liu, D. W. C. Ho, X. Liu, "Robust stability analysis of generalized neural networks with discrete and distributed time delays," Chaos, Solitons and Fractals , vol. 30, no. 4, pp. 886-896, 2006.
[28] X.-L. Zhu, G.-H. Yang, "Jensen inequality approach to stability analysis of discrete-time systems with time-varying delay," in Proceedings of the American Control Conference (ACC '08), pp. 1644-1649, Seattle, Wash, USA, June 2008.
[29] Y. Tang, J.-A. Fang, M. Xia, D. Yu, "Delay-distribution-dependent stability of stochastic discrete-time neural networks with randomly mixed time-varying delays," Neurocomputing , vol. 72, no. 16-18, pp. 3830-3838, 2009.
Copyright © 2014 Liyuan Hou et al.
Abstract
This paper investigates the stability analysis problem for discrete-time neural networks (NNs) with discrete and distributed time delays. Stability theory and a linear matrix inequality (LMI) approach are developed to establish sufficient conditions for the NNs to be globally asymptotically stable and to design a state estimator for the discrete-time neural networks. A delay-interval decomposition approach is employed for both the discrete and distributed delays, and Lyapunov-Krasovskii functionals (LKFs) are constructed on these intervals, so that a new stability criterion is proposed in terms of LMIs. Numerical examples are given to demonstrate the effectiveness and applicability of the proposed method.