Li Wan,1 Qinghua Zhou,2 and Jizi Li3
Recommended by Xiaodi Li
1 School of Mathematics and Computer Science, Wuhan Textile University, Wuhan 430073, China
2 Department of Mathematics, Zhaoqing University, Zhaoqing 526061, China
3 School of Management, Wuhan Textile University, Wuhan 430073, China
Received 22 August 2012; Accepted 24 September 2012
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
Cohen and Grossberg proposed and investigated Cohen-Grossberg neural networks in 1983 [1]. Hopfield neural networks, recurrent neural networks, cellular neural networks, and bidirectional associative memory neural networks are all special cases of this model. Since then, Cohen-Grossberg neural networks have been widely studied; see, for example, [2-12] and the references therein.
Strictly speaking, diffusion effects cannot be avoided in neural networks when electrons move in asymmetric electromagnetic fields. Therefore, the activations must be allowed to vary in space as well as in time. In [13-19], the authors gave some stability conditions for reaction-diffusion neural networks, but these conditions were independent of the diffusion effects.
On the other hand, it has been well recognized that stochastic disturbances are ubiquitous and inevitable in various systems, ranging from electronic implementations to biochemical systems; they are caused mainly by thermal noise, environmental fluctuations, and the different orders of ongoing events in the overall systems [20, 21]. Therefore, considerable attention has been paid to the dynamics of stochastic neural networks, and many results on the stability of stochastic neural networks have been reported; see, for example, [22-38] and the references therein.
The above references mainly considered the stability of the equilibrium point of neural networks. What can be studied when no equilibrium point exists? Besides stability, boundedness and attractors are also foundational concepts of dynamical systems, playing an important role in investigating the uniqueness of equilibria, global asymptotic stability, global exponential stability, the existence of periodic solutions, and so on [39, 40]. Recently, results on the ultimate boundedness and attractors of several classes of neural networks with time delays have been reported. In [41], the globally robust ultimate boundedness of integrodifferential neural networks with uncertainties and varying delays was studied. Some sufficient criteria for the ultimate boundedness of deterministic neural networks with both varying and unbounded delays were derived in [42]. In [43, 44], a series of criteria on the boundedness, global exponential stability, and existence of periodic solutions for nonautonomous recurrent neural networks were established. In [45, 46], some criteria on the ultimate boundedness and attractors of stochastic neural networks were derived. To the best of our knowledge, there are few results on the ultimate boundedness and attractors of stochastic reaction-diffusion neural networks.
Therefore, the questions of ultimate boundedness, attractor, and stability for stochastic reaction-diffusion Cohen-Grossberg neural networks with time-varying delays are both important and meaningful.
The rest of the paper is organized as follows. Some preliminaries are given in Section 2, and the main results are presented in Section 3. A numerical example and conclusions are given in Sections 4 and 5, respectively.
2. Model Description and Assumptions
Consider the following stochastic Cohen-Grossberg neural networks with delays and diffusion terms: [equation omitted; refer to PDF] for 1 ≤ i ≤ n and t ≥ 0. In the above model, n ≥ 2 is the number of neurons in the network; x_i is the space variable; y_i(t, x) is the state variable of the i-th neuron at time t and in space x; f_j(y_j(t, x)) and g_j(y_j(t, x)) denote the activation functions of the j-th unit at time t and in space x; the constants D_ik ≥ 0; d_i(y_i(t, x)) represents an amplification function; c_i(y_i(t, x)) is an appropriately behaved function; a_ij and b_ij denote the connection strengths of the j-th unit on the i-th unit; τ_j(t) corresponds to the transmission delay and satisfies 0 ≤ τ_j(t) ≤ τ; J_i denotes the external bias on the i-th unit; σ_ij(·, ·, x) is the diffusion function; X is a compact set with smooth boundary ∂X and measure mes X > 0 in R^l; ξ_i(s, x) is the initial boundary value; w(t) = (w_1(t), ..., w_m(t))^T is an m-dimensional Brownian motion defined on a complete probability space (Ω, ℱ, ℙ) with a natural filtration {ℱ_t}_{t≥0} generated by {w(s) : 0 ≤ s ≤ t}, where Ω is the canonical space generated by all {w_i(t)} and ℱ denotes the associated σ-algebra generated by {w(t)} with probability measure ℙ.
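The concrete coefficient matrices of (2.1) are omitted in this extraction, so they cannot be reproduced here. Purely to illustrate the structure of such a model, the following sketch simulates a hypothetical scalar (one-neuron) reaction-diffusion SDE on X = [0, π] with zero-flux (Neumann) boundary conditions, using finite differences in space and Euler-Maruyama in time; all parameter values and the functions d, c, f below are our assumptions, not taken from the paper.

```python
import math
import random

random.seed(1)

# Hypothetical scalar reaction-diffusion SDE (illustration only):
#   dy = [D y_xx - d(y)(c(y) - a f(y) - b f(y(t - tau)) - J)] dt + s y dw
nx = 50
dx = math.pi / nx                           # grid spacing on X = [0, pi]
dt, steps, tau_steps = 1e-3, 2000, 100      # time step, horizon, delay in steps
D, a, b, J, s = 0.5, 0.1, 0.1, 0.2, 0.05    # assumed parameters
d = lambda u: 0.3 + 0.1 * math.cos(u)       # amplification, bounded in [0.2, 0.4]
c = lambda u: 2.0 * u                       # behaved function c(y) = gamma * y
f = lambda u: 0.1 * math.tanh(u)            # bounded activation

y = [[math.sin(i * dx) for i in range(nx + 1)]]   # initial profile / history
for _ in range(steps):
    cur = y[-1]
    delayed = y[max(0, len(y) - 1 - tau_steps)]   # state at t - tau
    dw = random.gauss(0.0, math.sqrt(dt))         # one Brownian motion (m = 1)
    new = list(cur)
    for i in range(1, nx):
        lap = (cur[i - 1] - 2.0 * cur[i] + cur[i + 1]) / dx ** 2
        drift = D * lap - d(cur[i]) * (c(cur[i]) - a * f(cur[i])
                                       - b * f(delayed[i]) - J)
        new[i] = cur[i] + drift * dt + s * cur[i] * dw
    new[0], new[nx] = new[1], new[nx - 1]         # zero-flux (Neumann) boundary
    y.append(new)

# Paths stay bounded, consistent with stochastic ultimate boundedness.
print(max(abs(v) for row in y for v in row))
```

The explicit scheme is stable here since D·dt/dx² ≈ 0.13 < 1/2; the delayed term is read off the stored history, which is how the initial data ξ_i(s, x) on [−τ, 0] enters the simulation.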
System (2.1) can be written in the following matrix form: [equation omitted; refer to PDF] where [equation omitted; refer to PDF]
Let L²(X) be the space of real Lebesgue-measurable functions on X; it is a Banach space under the L²-norm ||u||₂ = (∫_X u²(x) dx)^{1/2}. Note that ξ = {(ξ_1(s, x), ..., ξ_n(s, x))^T : −τ ≤ s ≤ 0} is a C([−τ, 0] × R^l; R^n)-valued, ℱ_0-measurable random variable, where ℱ_s = ℱ_0 on [−τ, 0] and C([−τ, 0] × R^l; R^n) is the space of all continuous R^n-valued functions defined on [−τ, 0] × R^l, with norm ||ξ_i(t)||₂² = ∫_X ξ_i²(t, x) dx.
The following assumptions and lemmas will be used in establishing our main results.
(A1) There exist constants l_i⁻, l_i⁺, m_i⁻, and m_i⁺ such that [equation omitted; refer to PDF]
(A2) There exist constants μ and γ_i > 0 such that [equation omitted; refer to PDF]
(A3) d_i is bounded, positive, and continuous; that is, there exist constants d̲_i, d̄_i such that 0 < d̲_i ≤ d_i(u) ≤ d̄_i for all u ∈ R, i = 1, 2, ..., n.
Lemma 2.1 (Poincaré inequality, [ 47]).
Assume that a real-valued function w(x): X → R satisfies w(x) ∈ D = {w(x) ∈ L²(X), ∂w/∂x_i ∈ L²(X) (1 ≤ i ≤ l), (∂w(x)/∂ν)|_∂X = 0}, where X is a bounded domain of R^l with a smooth boundary ∂X. Then λ₁ ∫_X w²(x) dx ≤ ∫_X |∇w(x)|² dx, where λ₁ is the lowest positive eigenvalue of the Neumann boundary problem: [equation omitted; refer to PDF] Here ∇ = (∂/∂x_1, ..., ∂/∂x_l) is the gradient operator and Δ = Σ_{k=1}^{l} ∂²/∂x_k² is the Laplace operator.
Remark 2.2.
Assumption (A1) is less conservative than those in [26, 28], since the constants l_i⁻, l_i⁺, m_i⁻, and m_i⁺ are allowed to be positive, negative, or zero; that is, the activation functions in (A1) are not required to be monotonic, differentiable, or bounded. Assumption (A2) is weaker than those given in [23, 27, 30], since μ is not required to be zero or smaller than 1 and may take any value.
Remark 2.3.
According to the eigenvalue theory of elliptic operators, the lowest positive eigenvalue λ₁ is determined only by X [47]. For example, if X = [0, L], then λ₁ = (π/L)²; if X = (0, a) × (0, b), then λ₁ = min{(π/a)², (π/b)²}.
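The closed-form values of λ₁ quoted in this remark can be checked directly (a small sketch; the helper names are ours):

```python
import math

# First positive Neumann eigenvalue of -Laplace on an interval [0, L]:
# lambda_1 = (pi / L)^2, with eigenfunction cos(pi x / L).
def lambda1_interval(L):
    return (math.pi / L) ** 2

# On a rectangle (0, a) x (0, b) the Neumann eigenvalues are
# (k pi / a)^2 + (j pi / b)^2 for integers k, j >= 0; the smallest
# positive one is min((pi/a)^2, (pi/b)^2).
def lambda1_rectangle(a, b):
    return min((math.pi / a) ** 2, (math.pi / b) ** 2)

# X = [0, pi], as in the example of Section 4, gives lambda_1 = 1.
print(lambda1_interval(math.pi))
print(lambda1_rectangle(math.pi, 2 * math.pi))
```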
The notation A > 0 (resp., A ≥ 0) means that the matrix A is symmetric positive definite (resp., positive semidefinite). A^T denotes the transpose of the matrix A, and λ_min(A) the minimum eigenvalue of A. ||y(t)||² = ∫_X y^T(t, x) y(t, x) dx = Σ_{i=1}^{n} ||y_i(t)||₂².
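As a numerical companion to this norm (a sketch; the grid, the profile sin x, and the helper name l2_norm are our assumptions, not from the paper), the L²(X)-norm on X = [0, π] can be approximated by the trapezoidal rule:

```python
import math

# Trapezoidal-rule approximation of the L^2(X)-norm on a uniform grid.
def l2_norm(values, dx):
    sq = [v * v for v in values]
    integral = dx * (sum(sq) - 0.5 * (sq[0] + sq[-1]))
    return math.sqrt(integral)

n = 10_000
dx = math.pi / n
grid = [i * dx for i in range(n + 1)]
y = [math.sin(x) for x in grid]          # hypothetical state profile

# For y(x) = sin x on [0, pi]: int sin^2 = pi/2, so ||y||_2 = sqrt(pi/2).
print(l2_norm(y, dx))
```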
3. Main Results
Theorem 3.1.
Suppose that assumptions (A1)-(A3) hold and that there exist matrices P = diag(p_1, ..., p_n) > 0, Q_i ≥ 0, σ_i > 0, V_i = diag(v_i1, ..., v_in) ≥ 0 (i = 1, 2), U_j = diag(u_j1, ..., u_jn) ≥ 0 (j = 1, 2, 3), and σ_3 such that the following linear matrix inequality holds:
(A4) [LMI omitted; refer to PDF]
where x ∈ X, * denotes the symmetric term, and [equation omitted; refer to PDF]
Then system (2.1) is stochastically ultimately bounded; that is, for any ε ∈ (0, 1), there is a positive constant C = C(ε) such that the solution y(t, x) of system (2.1) satisfies [equation omitted; refer to PDF]
Proof.
If μ ≤ 1, then it follows from (A4) that there exists a sufficiently small λ > 0 such that [equation omitted; refer to PDF] where [equation omitted; refer to PDF]
If μ > 1, then it follows from (A4) that there exists a sufficiently small λ > 0 such that [equation omitted; refer to PDF] where Δ_1, Δ_3, and Δ_4 are the same as in (3.4) and [equation omitted; refer to PDF]
Consider the following Lyapunov functional: [figure omitted; refer to PDF]
Applying the Itô formula in [48] to V(y(t)) along (2.2), one obtains [equation omitted; refer to PDF]
From assumptions (A1)-(A4), one obtains [equation omitted; refer to PDF]
From the boundary condition and Lemma 2.1, one obtains [equation omitted; refer to PDF] where "·" denotes the inner product, D_i = min_{1≤k≤l} D_ik, and [equation omitted; refer to PDF]
Substituting (3.10) and (3.11) into (3.9), we have [equation omitted; refer to PDF] where h(μ) = e^{−λτ} if μ ≤ 1 and h(μ) = 1 if μ > 1.
In addition, it follows from (A1) that [equation omitted; refer to PDF] Similarly, one obtains [equation omitted; refer to PDF]
From (3.13)-(3.15), one derives [equation omitted; refer to PDF] or [equation omitted; refer to PDF] where η(t, x) = (y^T(t, x), y^T(t − τ(t), x), f^T(y(t, x)), g^T(y(t, x)), g^T(y(t − τ(t), x)))^T and [equation omitted; refer to PDF] Thus, one obtains [equation omitted; refer to PDF]
For any ε > 0, set C = λ⁻¹C_1/(λ_min(P)ε). By Chebyshev's inequality and (3.20), we obtain [equation omitted; refer to PDF] which implies [equation omitted; refer to PDF] The proof is completed.
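The Chebyshev step used here can be illustrated with a short Monte Carlo sketch: for a state X with finite second moment, P{|X| > C} ≤ E|X|²/C². The standard normal stand-in for the state below is an assumption made only to demonstrate the bound.

```python
import random

random.seed(0)

# Draw samples of a hypothetical scalar state X ~ N(0, 1).
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]
C = 2.0

# Chebyshev/Markov: P(|X| > C) <= E[X^2] / C^2.
second_moment = sum(x * x for x in samples) / len(samples)
tail_prob = sum(1 for x in samples if abs(x) > C) / len(samples)
bound = second_moment / C ** 2

print(tail_prob, bound)     # the empirical tail respects the bound
```

In the proof the same inequality is applied to ||y(t)||², with the moment estimate (3.20) supplying the right-hand side.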
Theorem 3.1 shows that there exists t_0 > 0 such that for any t ≥ t_0, P{||y(t)|| ≤ C} ≥ 1 − ε. Denote B_C by [equation omitted; refer to PDF] Clearly, B_C is closed, bounded, and invariant. Moreover, [equation omitted; refer to PDF] holds with probability no less than 1 − ε, which means that B_C attracts the solutions infinitely many times with probability no less than 1 − ε; thus we may say that B_C is a weak attractor for the solutions.
Theorem 3.2.
Suppose that all conditions of Theorem 3.1 hold. Then there exists a weak attractor B_C for the solutions of system (2.1).
Theorem 3.3.
Suppose that all conditions of Theorem 3.1 hold and c(0) = f(0) = g(0) = J = 0. Then the zero solution of system (2.1) is mean-square exponentially stable.
Remark 3.4.
Assumption (A4) depends on λ₁ and μ, so the criteria on stability, ultimate boundedness, and the weak attractor depend on the diffusion effects and the derivative of the delays, but are independent of the magnitude of the delays.
4. An Example
In this section, a numerical example is presented to demonstrate the validity and effectiveness of our theoretical results.
Example 4.1.
Consider the following system: [equation omitted; refer to PDF] where n = 2, l = m = 1, X = [0, π], D_11 = D_21 = 0.5, d_1(y_1(t)) = 0.3 + 0.1 cos y_1(t), d_2(y_2(t)) = 0.3 + 0.1 sin y_2(t), c(y(t)) = γy(t), f(y) = g(y) = 0.1 tanh(y), [matrices omitted; refer to PDF] and w(t) is a one-dimensional Brownian motion. Then we compute λ₁ = 1, D = diag(0.5, 0.5), L_1 = M_1 = 0, L_2 = M_2 = M_3 = diag(0.1, 0.1), d̲ = diag(0.2, 0.2), d̄ = diag(0.4, 0.4), σ_1 = G^T P G, σ_2 = H^T P H, and σ_3 = G^T P H. By using the Matlab LMI Toolbox, for μ = 0.1, Theorem 3.1 shows that the system is stochastically ultimately bounded when [equation omitted; refer to PDF]
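The LMI (A4) was checked in the paper with the Matlab LMI Toolbox; its concrete blocks are omitted in this extraction, so they cannot be reproduced here. The feasibility test itself reduces to verifying positive (semi)definiteness of candidate matrices, which can be sketched as follows (P and Q below are hypothetical 2×2 stand-ins, not the paper's matrices):

```python
import numpy as np

def is_positive_definite(A, tol=0.0):
    """Check A = A^T > 0 via the eigenvalues of its symmetric part."""
    A = 0.5 * (A + A.T)                       # symmetrize defensively
    return bool(np.all(np.linalg.eigvalsh(A) > tol))

# Hypothetical candidates standing in for the omitted LMI blocks:
P = np.diag([1.0, 2.0])                       # P = diag(p1, p2) > 0
Q = np.array([[2.0, -0.5],
              [-0.5, 1.0]])                   # a symmetric candidate Q >= 0

print(is_positive_definite(P), is_positive_definite(Q))
```

An LMI solver (e.g. the `feasp` routine of the Matlab LMI Toolbox, as used in the paper) searches for P, Q_i, V_i, U_j making all such definiteness constraints hold simultaneously; the check above is only the final verification step.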
5. Conclusion
In this paper, new sufficient criteria on ultimate boundedness, weak attractor, and stability have been established for stochastic reaction-diffusion Cohen-Grossberg neural networks with delays by using the Lyapunov method, the Poincaré inequality, and matrix techniques. The criteria depend on the diffusion effects and the derivative of the delays and are independent of the magnitude of the delays.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (nos. 11271295, 10926128, 11047114, and 71171152), Science and Technology Research Projects of Hubei Provincial Department of Education (nos. Q20111607 and Q20111611) and Young Talent Cultivation Projects of Guangdong (LYM09134).
References
[1] M. A. Cohen, S. Grossberg, "Absolute stability of global pattern formation and parallel memory storage by competitive neural networks," IEEE Transactions on Systems, Man, and Cybernetics , vol. 13, no. 5, pp. 815-826, 1983.
[2] Z. Chen, J. Ruan, "Global dynamic analysis of general Cohen-Grossberg neural networks with impulse," Chaos, Solitons & Fractals , vol. 32, no. 5, pp. 1830-1837, 2007.
[3] T. Huang, A. Chan, Y. Huang, J. Cao, "Stability of Cohen-Grossberg neural networks with time-varying delays," Neural Networks , vol. 20, no. 8, pp. 868-873, 2007.
[4] T. Huang, C. Li, G. Chen, "Stability of Cohen-Grossberg neural networks with unbounded distributed delays," Chaos, Solitons & Fractals , vol. 34, no. 3, pp. 992-996, 2007.
[5] Z. W. Ping, J. G. Lu, "Global exponential stability of impulsive Cohen-Grossberg neural networks with continuously distributed delays," Chaos, Solitons & Fractals , vol. 41, no. 1, pp. 164-174, 2009.
[6] J. Li, J. Yan, "Dynamical analysis of Cohen-Grossberg neural networks with time-delays and impulses," Neurocomputing , vol. 72, no. 10-12, pp. 2303-2309, 2009.
[7] M. Tan, Y. Zhang, "New sufficient conditions for global asymptotic stability of Cohen-Grossberg neural networks with time-varying delays," Nonlinear Analysis: Real World Applications , vol. 10, no. 4, pp. 2139-2145, 2009.
[8] M. Gao, B. Cui, "Robust exponential stability of interval Cohen-Grossberg neural networks with time-varying delays," Chaos, Solitons & Fractals , vol. 40, no. 4, pp. 1914-1928, 2009.
[9] C. Li, Y. K. Li, Y. Ye, "Exponential stability of fuzzy Cohen-Grossberg neural networks with time delays and impulsive effects," Communications in Nonlinear Science and Numerical Simulation , vol. 15, no. 11, pp. 3599-3606, 2010.
[10] Y. K. Li, L. Yang, "Anti-periodic solutions for Cohen-Grossberg neural networks with bounded and unbounded delays," Communications in Nonlinear Science and Numerical Simulation , vol. 14, no. 7, pp. 3134-3140, 2009.
[11] X. D. Li, "Exponential stability of Cohen-Grossberg-type BAM neural networks with time-varying delays via impulsive control," Neurocomputing , vol. 73, no. 1-3, pp. 525-530, 2009.
[12] J. Yu, C. Hu, H. Jiang, Z. Teng, "Exponential synchronization of Cohen-Grossberg neural networks via periodically intermittent control," Neurocomputing , vol. 74, no. 10, pp. 1776-1782, 2011.
[13] J. Liang, J. Cao, "Global exponential stability of reaction-diffusion recurrent neural networks with time-varying delays," Physics Letters A , vol. 314, no. 5-6, pp. 434-442, 2003.
[14] Z. J. Zhao, Q. K. Song, J. Y. Zhang, "Exponential periodicity and stability of neural networks with reaction-diffusion terms and both variable and unbounded delays," Computers & Mathematics with Applications , vol. 51, no. 3-4, pp. 475-486, 2006.
[15] X. Lou, B. Cui, "Boundedness and exponential stability for nonautonomous cellular neural networks with reaction-diffusion terms," Chaos, Solitons & Fractals , vol. 33, no. 2, pp. 653-662, 2007.
[16] K. Li, Z. Li, X. Zhang, "Exponential stability of reaction-diffusion generalized Cohen-Grossberg neural networks with both variable and distributed delays," International Mathematical Forum , vol. 2, no. 29-32, pp. 1399-1414, 2007.
[17] R. Wu, W. Zhang, "Global exponential stability of delayed reaction-diffusion neural networks with time-varying coefficients," Expert Systems with Applications , vol. 36, no. 6, pp. 9834-9838, 2009.
[18] Z. A. Li, K. L. Li, "Stability analysis of impulsive Cohen-Grossberg neural networks with distributed delays and reaction-diffusion terms," Applied Mathematical Modelling , vol. 33, no. 3, pp. 1337-1348, 2009.
[19] J. Pan, S. M. Zhong, "Dynamical behaviors of impulsive reaction-diffusion Cohen-Grossberg neural network with delays," Neurocomputing , vol. 73, no. 7-9, pp. 1344-1351, 2010.
[20] M. Kærn, T. C. Elston, W. J. Blake, J. J. Collins, "Stochasticity in gene expression: from theories to phenotypes," Nature Reviews Genetics , vol. 6, no. 6, pp. 451-464, 2005.
[21] K. Sriram, S. Soliman, F. Fages, "Dynamics of the interlocked positive feedback loops explaining the robust epigenetic switching in Candida albicans," Journal of Theoretical Biology , vol. 258, no. 1, pp. 71-88, 2009.
[22] C. Huang, J. D. Cao, "On p th moment exponential stability of stochastic Cohen-Grossberg neural networks with time-varying delays," Neurocomputing , vol. 73, no. 4-6, pp. 986-990, 2010.
[23] M. Dong, H. Zhang, Y. Wang, "Dynamics analysis of impulsive stochastic Cohen-Grossberg neural networks with Markovian jumping and mixed time delays," Neurocomputing , vol. 72, no. 7-9, pp. 1999-2004, 2009.
[24] Q. Song, Z. Wang, "Stability analysis of impulsive stochastic Cohen-Grossberg neural networks with mixed time delays," Physica A , vol. 387, no. 13, pp. 3314-3326, 2008.
[25] C. H. Wang, Y. G. Kao, G. W. Yang, "Exponential stability of impulsive stochastic fuzzy reaction-diffusion Cohen-Grossberg neural networks with mixed delays," Neurocomputing , vol. 89, pp. 55-63, 2012.
[26] H. Huang, G. Feng, "Delay-dependent stability for uncertain stochastic neural networks with time-varying delay," Physica A , vol. 381, no. 1-2, pp. 93-103, 2007.
[27] H. Y. Zhao, N. Ding, L. Chen, "Almost sure exponential stability of stochastic fuzzy cellular neural networks with delays," Chaos, Solitons & Fractals , vol. 40, no. 4, pp. 1653-1659, 2009.
[28] W. H. Chen, X. M. Lu, "Mean square exponential stability of uncertain stochastic delayed neural networks," Physics Letters A , vol. 372, no. 7, pp. 1061-1069, 2008.
[29] C. Huang, J. D. Cao, "Almost sure exponential stability of stochastic cellular neural networks with unbounded distributed delays," Neurocomputing , vol. 72, no. 13-15, pp. 3352-3356, 2009.
[30] C. Huang, P. Chen, Y. He, L. Huang, W. Tan, "Almost sure exponential stability of delayed Hopfield neural networks," Applied Mathematics Letters , vol. 21, no. 7, pp. 701-705, 2008.
[31] C. Huang, Y. He, H. Wang, "Mean square exponential stability of stochastic recurrent neural networks with time-varying delays," Computers & Mathematics with Applications , vol. 56, no. 7, pp. 1773-1778, 2008.
[32] R. Rakkiyappan, P. Balasubramaniam, "Delay-dependent asymptotic stability for stochastic delayed recurrent neural networks with time varying delays," Applied Mathematics and Computation , vol. 198, no. 2, pp. 526-533, 2008.
[33] Y. Sun, J. D. Cao, " p th moment exponential stability of stochastic recurrent neural networks with time-varying delays," Nonlinear Analysis: Real World Applications , vol. 8, no. 4, pp. 1171-1185, 2007.
[34] Z. Wang, J. Fang, X. Liu, "Global stability of stochastic high-order neural networks with discrete and distributed delays," Chaos, Solitons & Fractals , vol. 36, no. 2, pp. 388-396, 2008.
[35] X. D. Li, "Existence and global exponential stability of periodic solution for delayed neural networks with impulsive and stochastic effects," Neurocomputing , vol. 73, no. 4-6, pp. 749-758, 2010.
[36] Y. Ou, H. Y. Liu, Y. L. Si, Z. G. Feng, "Stability analysis of discrete-time stochastic neural networks with time-varying delays," Neurocomputing , vol. 73, no. 4-6, pp. 740-748, 2010.
[37] Q. Zhu, J. Cao, "Exponential stability of stochastic neural networks with both Markovian jump parameters and mixed time delays," IEEE Transactions on Systems, Man, and Cybernetics B , vol. 41, no. 2, pp. 341-353, 2011.
[38] Q. Zhu, C. Huang, X. Yang, "Exponential stability for stochastic jumping BAM neural networks with time-varying and distributed delays," Nonlinear Analysis: Hybrid Systems , vol. 5, no. 1, pp. 52-77, 2011.
[39] P. Wang, D. Li, Q. Hu, "Bounds of the hyper-chaotic Lorenz-Stenflo system," Communications in Nonlinear Science and Numerical Simulation , vol. 15, no. 9, pp. 2514-2520, 2010.
[40] P. Wang, D. Li, X. Wu, J. Lü, X. Yu, "Ultimate bound estimation of a class of high dimensional quadratic autonomous dynamical systems," International Journal of Bifurcation and Chaos , vol. 21, no. 9, pp. 2679-2694, 2011.
[41] X. Y. Lou, B. Cui, "Global robust dissipativity for integro-differential systems modeling neural networks with delays," Chaos, Solitons & Fractals , vol. 36, no. 2, pp. 469-478, 2008.
[42] Q. Song, Z. Zhao, "Global dissipativity of neural networks with both variable and unbounded delays," Chaos, Solitons & Fractals , vol. 25, no. 2, pp. 393-401, 2005.
[43] H. Jiang, Z. Teng, "Global exponential stability of cellular neural networks with time-varying coefficients and delays," Neural Networks , vol. 17, no. 10, pp. 1415-1425, 2004.
[44] H. Jiang, Z. Teng, "Boundedness, periodic solutions and global stability for cellular neural networks with variable coefficients and infinite delays," Neurocomputing , vol. 72, no. 10-12, pp. 2455-2463, 2009.
[45] L. Wan, Q. H. Zhou, "Attractor and ultimate boundedness for stochastic cellular neural networks with delays," Nonlinear Analysis: Real World Applications , vol. 12, no. 5, pp. 2561-2566, 2011.
[46] L. Wan, Q. H. Zhou, P. Wang, J. Li, "Ultimate boundedness and an attractor for stochastic Hopfield neural networks with time-varying delays," Nonlinear Analysis: Real World Applications , vol. 13, no. 2, pp. 953-958, 2012.
[47] R. Temam, Infinite Dimensional Dynamical Systems in Mechanics and Physics , Springer, New York, NY, USA, 1998.
[48] X. Mao, Stochastic Differential Equations and Applications , Horwood Publishing Limited, 1997.
Copyright © 2012 Li Wan et al.
Abstract
This paper investigates the dynamical behaviors of stochastic Cohen-Grossberg neural networks with delays and reaction-diffusion terms. By employing the Lyapunov method, the Poincaré inequality, and matrix techniques, some sufficient criteria on ultimate boundedness, weak attractor, and asymptotic stability are obtained. Finally, a numerical example is given to illustrate the correctness and effectiveness of the theoretical results.