Chuangxia Huang 1 and Xinsong Yang 2 and Yigang He 3 and Lehua Huang 1
Recommended by Binggen Zhang
1 College of Mathematics and Computing Science, Changsha University of Science and Technology, Changsha, Hunan 410076, China
2 Department of Mathematics, Honghe University, Mengzi, Yunnan 661100, China
3 College of Electrical and Information Engineering, Hunan University, Changsha, Hunan 410082, China
Received 5 September 2010; Accepted 11 January 2011
1. Introduction
For decades, recurrent neural networks (RNNs) have been studied intensively because of their successful hardware implementation and their wide range of applications, such as classification, associative memory, parallel computation, optimization, signal processing, and pattern recognition; see, for example, [1-3]. These applications rely crucially on an analysis of the dynamical behavior of the networks. It has been realized that axonal signal transmission delays occur in many neural networks and may cause undesirable dynamic behaviors such as oscillation and instability. Consequently, stability analysis problems for delayed recurrent neural networks (DRNNs) have drawn considerable attention, and a great many results on DRNNs have been reported in the literature; see, for example, [4-9] and the references therein. To a large extent, however, the existing theoretical studies of DRNNs concern deterministic differential equations, and the literature dealing with the inherent randomness of signal transmission is scarce. Such studies are important for understanding the dynamics of neurons in random environments, for two reasons: (i) in real nervous systems, and in implementations of artificial neural networks, synaptic transmission is a noisy process driven by random fluctuations in neurotransmitter release and other probabilistic causes, so noise is unavoidable and should be taken into account in modeling [10]; (ii) a neural network can be stabilized or destabilized by certain stochastic effects [11, 12].
Although systems are perturbed by many types of environmental "noise", one reasonable interpretation of such a perturbation is white noise dω(t)/dt, where ω(t) is a Brownian motion, also called a Wiener process [12, 13]. More detailed discussions of stochastic effects on the interaction of neurons and on analog circuit implementations can be found in [13, 14]. However, because Brownian motion ω(t) is nowhere differentiable, the derivative dω(t)/dt cannot be defined in the ordinary way, and the stability analysis of stochastic neural networks is therefore difficult. Some initial results have appeared; see, for example, [11, 15-23]. In [11, 15], Liao and Mao discussed the exponential stability of stochastic recurrent neural networks (SRNNs); in [16], the authors continued this line of research, using the nonnegative semimartingale convergence theorem to establish almost sure exponential stability for a class of stochastic cellular neural networks with discrete delays; in [18], exponential stability of SRNNs was investigated via Razumikhin-type theorems; in [17], Wan and Sun studied mean square exponential stability of stochastic delayed Hopfield neural networks (HNNs); in [19], Zhao and Ding studied almost sure exponential stability of SRNNs; and in [20], Sun and Cao investigated p-th moment exponential stability of stochastic recurrent neural networks with time-varying delays.
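The nowhere-differentiability mentioned above can be illustrated numerically: Wiener increments over a step h have standard deviation √h, so the difference quotient (ω(t+h)−ω(t))/h has typical magnitude of order h^(−1/2) and blows up as h → 0. The following Python sketch demonstrates this (the function name `brownian_path` and all numerical parameters are ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_path(n_steps, dt):
    """Simulate a standard Wiener process on [0, n_steps*dt]:
    independent Gaussian increments with variance dt."""
    dw = rng.normal(0.0, np.sqrt(dt), size=n_steps)
    return np.concatenate(([0.0], np.cumsum(dw)))

# The difference quotient (w(t+h) - w(t))/h has standard deviation
# 1/sqrt(h), so it diverges as h -> 0: dω/dt is not an ordinary derivative.
for h in (1e-1, 1e-2, 1e-3):
    w = brownian_path(200_000, h)
    quotients = np.diff(w) / h
    print(h, quotients.std())   # grows like 1/sqrt(h)
```

This is why the model must be written as a stochastic differential equation driven by dω(t) rather than an ordinary differential equation containing dω(t)/dt.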
The delays in the above-mentioned papers are largely restricted to being discrete. As is well known, constant fixed delays in models of delayed feedback provide a good approximation in simple circuits consisting of a small number of cells. Neural networks, however, usually have a spatial extent due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths, so there is a distribution of conduction velocities along these pathways and hence a distribution of propagation delays. In these circumstances signal propagation is not instantaneous and cannot be modeled with discrete delays; a more appropriate approach is to incorporate continuously distributed delays. For instance, in [24], Tank and Hopfield designed an analog neural circuit with distributed delays that can solve a general problem of recognizing patterns in a time-dependent signal; for further work with continuously distributed delays we refer to [5, 25, 26]. In [27], Wang et al. developed a linear matrix inequality (LMI) approach to the stability of SRNNs with mixed delays. To the best of the authors' knowledge, few authors have investigated the convergence dynamics of SRNNs with unbounded distributed delays. On the other hand, if an RNN depends only on time, or on the instantaneous time and a time delay, the model is in fact an ordinary or functional differential equation. In practice, however, diffusion phenomena cannot be ignored in neural networks and electric circuits once electrons transport in a nonuniform electromagnetic field, so it is essential to let the state variables vary with both time and space. Neural networks with diffusion terms are commonly expressed by partial differential equations [28-30].
Keeping this in mind, in this paper we consider the SRNNs described by the following stochastic reaction-diffusion RNNs: [figure omitted; refer to PDF] In this model, n≥2 is the number of neurons in the network, x is the space variable, yi(t,x) is the state of the i-th neuron at time t and position x, and fj(yj(t,x)) and gj(yj(t,x)) denote the outputs of the j-th unit at time t and position x. The coefficients ci, aij, and bij are constants: ci is a positive constant representing the rate with which the i-th unit resets its potential to the resting state in isolation when disconnected from the network and the external stochastic perturbation, while aij and bij weight the strength of the j-th unit on the i-th unit at time t. Moreover, {wil(t): i=1,...,n, l∈ℕ} are independent scalar standard Wiener processes on the complete probability space (Ω,ℱ,P) with the natural filtration {ℱt}t≥0 generated by the standard Wiener process {w(s): 0≤s≤t}, which is independent of wil(t); here we associate Ω with the canonical space generated by w(t), and denote by ℱ the associated σ-algebra generated by w(t) with the probability measure P. Furthermore, we assume the following boundary condition:
(A0 ) : the smooth function Dik =Dik (t,x,y)≥0 is a diffusion operator, and X is a compact set in Rm with smooth boundary ∂X and measure mes X>0 . Here ∂yi /∂n|∂X =0 , t≥0 , and ξi (s,x) are the boundary value and initial value, respectively.
For the sake of convenience, some of the standing definitions and assumptions are formulated below:
(A1 ) : hi : R → R is differentiable, γi =inf x∈R {hi′ (x)}>0 , and hi (0)=0 for i∈{1,...,n} ,
(A2 ) : fj , gj , and σil are Lipschitz continuous with positive Lipschitz constants αj , βj , and Lil , respectively, with fj (0)=gj (0)=σil (0)=0 and ∑l=1∞Lil <∞ for i,j∈{1,...,n} , l∈ℕ ,
(A3 ) : For i,j∈{1,...,n} , there is a positive constant μ , such that [figure omitted; refer to PDF]
Remark 1.1.
The authors of [28] (see (H1 ) , (H2 ) , (H3 ) ) and of [30] (see (H1 ) , (H2 ) , (H3′ ) ) also studied the convergence dynamics of (1.1), under the following conditions:
(H1 ) : the kernels Kij , i,j=1,...,n are real-valued nonnegative piecewise continuous functions defined on [0,∞) and satisfy: ∫0∞Kij (t)dt=1 ,
(H2 ) : for each i,j , we have ∫0∞ tKij (t)dt<∞ ,
(H3 ) : for each i,j , there is a positive constant μ such that [figure omitted; refer to PDF]
(H3′ ) : for each i,j , there is a positive constant μ such that [figure omitted; refer to PDF]
Take as an example the widely applied delay kernels of [31-33], given by Kij (s)=(sr /r!)γijr+1e-γij s for s∈[0,∞) , where γij ∈(0,∞) , r∈{0,1,...,n} . One easily checks that the kernel conditions imposed in [28, 30] are not all satisfied at the same time. The applicability of the main results in [28, 30] is therefore somewhat limited by these restrictive assumptions on the kernels. The main purpose of this paper is to further investigate the convergence of stochastic reaction-diffusion RNNs with more general kernels.
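For this gamma-type family, conditions (H1) and (H2) can be verified numerically: each kernel integrates to 1 and has mean delay (r+1)/γij. A minimal Python sketch (function names and grid parameters are our own choices, not from the paper):

```python
import math
import numpy as np

def gamma_kernel(s, r, gamma):
    """Delay kernel K(s) = (s**r / r!) * gamma**(r+1) * exp(-gamma*s)."""
    return (s ** r / math.factorial(r)) * gamma ** (r + 1) * np.exp(-gamma * s)

def kernel_moments(r, gamma, s_max=80.0, n=800_000):
    """Trapezoid approximations of the integral of K over [0, inf)
    (condition (H1): equals 1) and of the mean delay, the integral of
    s*K(s) (condition (H2): equals (r+1)/gamma for this family)."""
    s = np.linspace(0.0, s_max, n + 1)
    k = gamma_kernel(s, r, gamma)
    ds = s[1] - s[0]
    trap = lambda v: float(np.sum((v[:-1] + v[1:]) * 0.5) * ds)
    return trap(k), trap(s * k)

for r, gamma in [(0, 2.0), (1, 3.0), (3, 1.0)]:
    total, mean_delay = kernel_moments(r, gamma)
    print(r, gamma, round(total, 4), round(mean_delay, 4))
```

Both moments are finite for every r and γij > 0, which is what makes this family a natural test case for kernel assumptions.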
2. Preliminaries
Let u=(u1 ,...,un)T , and let L2 (X) be the space of real-valued Lebesgue measurable functions on X , a Banach space with the L2 -norm ||ν||2 =(∫X |ν(x)|2 dx)1/2 , ν∈L2 (X) . We then define the norm ||u|| by ||u||=(∑i=1n ||ui||22)1/2 . Assume ξ={(ξ1 (s,x),...,ξn (s,x))T :-∞<s≤0}∈C((-∞,0];L2 (X,Rn )) is an ℱ0 -measurable Rn -valued random variable, where ℱ0 =ℱs restricted to (-∞,0] , and C((-∞,0];L2 (X,Rn )) is the space of all continuous Rn -valued functions defined on (-∞,0]×X with norm ||ξ||c =sup -∞<s≤0 {||ξ(s,x)||2 } . Clearly, (1.1) admits the equilibrium solution x(t)≡0 .
Definition 2.1.
Equation (1.1) is said to be almost surely exponentially stable if there exists a positive constant λ such that, for each pair of t0 and ξ , [figure omitted; refer to PDF]
Definition 2.2.
Equation (1.1) is said to be exponentially stable in mean square if there exists a pair of positive constants λ and K such that [figure omitted; refer to PDF]
Lemma 2.3 (semimartingale convergence theorem [12]).
Let A(t) and U(t) be two continuous adapted increasing processes on t≥0 with A(0)=U(0)=0 a.s. Let M(t) be a real-valued continuous local martingale with M(0)=0 a.s. Let ξ be a nonnegative ℱ0 -measurable random variable. Define [figure omitted; refer to PDF] If X(t) is nonnegative, then [figure omitted; refer to PDF] where B⊂D a.s. means P(B∩Dc )=0 . In particular, if lim t→∞ A(t)<∞ a.s., then for almost all ω∈Ω [figure omitted; refer to PDF] that is, both X(t) and U(t) converge to finite random variables.
Lemma 2.4 (see [34]).
A matrix A=(aij ) being a nonsingular M -matrix is equivalent to the following property: there exist rj >0 such that ∑j=1naijrj >0 , 1≤i≤n .
Furthermore, an M -matrix is a Z -matrix whose eigenvalues all have positive real parts. The Z -matrices are those matrices whose off-diagonal entries are less than or equal to zero, that is, a Z -matrix Z satisfies [figure omitted; refer to PDF]
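Lemma 2.4 suggests a simple programmatic test: verify that the matrix is a Z-matrix and that all of its eigenvalues have positive real part. The following Python sketch illustrates the check (the function name and the sample matrices are hypothetical, not taken from the paper):

```python
import numpy as np

def is_nonsingular_m_matrix(a, tol=1e-12):
    """Check the characterization used in Lemma 2.4: A must be a
    Z-matrix (off-diagonal entries <= 0) all of whose eigenvalues
    have positive real part."""
    a = np.asarray(a, dtype=float)
    off_diag = a - np.diag(np.diag(a))
    if np.any(off_diag > tol):
        return False          # not a Z-matrix
    return bool(np.all(np.linalg.eigvals(a).real > tol))

# A hypothetical 2x2 Z-matrix with a dominant diagonal: an M-matrix.
print(is_nonsingular_m_matrix([[3.0, -1.0], [-2.0, 4.0]]))   # True
# Destroying the diagonal dominance destroys the property.
print(is_nonsingular_m_matrix([[1.0, -2.0], [-3.0, 1.0]]))   # False
```

This eigenvalue test is the one applied (via Matlab) to the matrix Q in the illustrative example of Section 4.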
3. Main Results
Theorem 3.1.
Assume (A0 ) , (A1 ) , (A2 ) , (A3 ) , and (A4 ) : Cγ-C¯-A+ α-B+ β is a nonsingular M -matrix, where [figure omitted; refer to PDF] Then the trivial solution of system (1.1) is almost surely exponentially stable and also exponentially stable in mean square.
Proof.
From (A4 ) and Lemma 2.4, there exist constants qi >0 , 1≤i≤n , such that [figure omitted; refer to PDF] This can equivalently be expressed as follows: [figure omitted; refer to PDF] From assumption (A3 ) , one can choose a constant 0<λ<<μ such that [figure omitted; refer to PDF] [figure omitted; refer to PDF] Consider the following Lyapunov functional: [figure omitted; refer to PDF] From the boundary condition (A0 ) , we get [figure omitted; refer to PDF] Using the Itô formula, for T>0 , we have [figure omitted; refer to PDF] Notice that [figure omitted; refer to PDF] Substituting inequality (3.9) into (3.8), it is easy to calculate that [figure omitted; refer to PDF] On the other hand, we observe that [figure omitted; refer to PDF] From inequality (3.4), we have [figure omitted; refer to PDF] Hence, V(0,u(0)) is bounded. Integrating both sides of (3.10) with respect to x , we obtain [figure omitted; refer to PDF] Therefore, [figure omitted; refer to PDF] The right-hand side of (3.14) is a nonnegative martingale, so from Lemma 2.3 it can easily be seen that [figure omitted; refer to PDF] that is, [figure omitted; refer to PDF] On the other hand, since E(∫0T 2e2λt∑i=1n∑l=1∞∫Xqiyi (t,x)σil (yi (t,x))dxdωil (t)dt)=0 , taking expectations on both sides of equality (3.14) yields [figure omitted; refer to PDF] From (3.16) and (3.17), the trivial solution of system (1.1) is almost surely exponentially stable and also exponentially stable in mean square. This completes the proof.
Removing the reaction-diffusion term from system (1.1), we obtain the following stochastic recurrent neural network with unbounded distributed delays: [figure omitted; refer to PDF] For system (3.18) we have the following Corollary 3.2. Its conditions for almost sure exponential stability and mean square exponential stability are byproducts of Theorem 3.1, so the proof is trivial and we omit it.
Corollary 3.2.
Assume (A1 ) , (A2 ) , (A3 ) , and (A4 ) : Cγ-C¯-A+ α-B+ β is a nonsingular M -matrix, where [figure omitted; refer to PDF] Then the trivial solution of system (3.18) is almost surely exponentially stable and also exponentially stable in mean square.
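To give a feel for how a system of type (3.18) behaves when strong self-regulation dominates the weights, the following toy simulation treats a single neuron with an exponential delay kernel γe^(−γs). For that kernel the distributed-delay term z(t) = ∫ γe^(−γ(t−s)) g(y(s)) ds satisfies the auxiliary ODE z′ = γ(g(y) − z) (the linear chain trick), so no history storage is needed. All numerical values below are hypothetical, chosen only so that the self-regulation rate c dominates the connection weights and noise intensity, in the spirit of the M-matrix condition:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scalar analogue of a network like (3.18): one neuron with an
# exponential delay kernel K(s) = gam * exp(-gam * s). The term
# z(t) = integral of gam*exp(-gam*(t-s))*g(y(s)) ds obeys the
# auxiliary ODE z' = gam * (g(y) - z), avoiding history storage.
# Hypothetical parameters: self-regulation c dominates weights a, b.
c, a, b, gam, sigma = 10.0, 0.2, 0.3, 2.0, 0.1
f = g = np.tanh  # 1-Lipschitz activations, as in assumption (A2)

def simulate(y0, T=2.0, dt=1e-3):
    """Explicit Euler-Maruyama step; both updates use the pre-step
    state (the tuple assignment evaluates its right side first)."""
    y, z = float(y0), 0.0
    for _ in range(int(T / dt)):
        dw = rng.normal(0.0, np.sqrt(dt))
        drift = -c * y + a * f(y) + b * z
        y, z = y + drift * dt + sigma * y * dw, z + gam * (g(y) - z) * dt
    return y

print(abs(simulate(1.0)) < 0.1)   # the state decays toward 0
```

This is only a one-dimensional sketch under a single exponential kernel; the corollary itself covers n neurons and general kernels satisfying (A3).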
Furthermore, if we also remove the noise from the system, then system (3.18) reduces to the following deterministic system: [figure omitted; refer to PDF] To investigate the stability of model (3.20), we modify assumption (A2 ) as follows:
(A2′ ) : fj and gj are Lipschitz continuous with positive Lipschitz constants αj and βj , respectively, and fj (0)=gj (0)=0 for j∈{1,...,n} .
The conditions for exponential stability of system (3.20) can be obtained directly from Corollary 3.2.
Corollary 3.3.
Assume (A1 ) , (A2′ ) , (A3 ) , and (A4 ) : Cγ-C¯-A+ α-B+ β is a nonsingular M -matrix, where [figure omitted; refer to PDF] Then the trivial solution of system (3.20) is exponentially stable.
4. An Illustrative Example
In this section, a numerical example is presented to illustrate the correctness of our main result.
Example 4.1.
Consider the following two-dimensional stochastic reaction-diffusion recurrent neural network with unbounded distributed delays: [figure omitted; refer to PDF] where c1 =10 , c2 =15 ; h1 (y)=h2 (y)=y ; a11 =1/5 , a12 =2/5 , a21 =3/5 , a22 =1 ; b11 =4/5 , b12 =1/5 , b21 =1 , b22 =2/5 ; f1 (u)=f2 (u)=tanh(u) ; g1 (u)=g2 (u)=(1/2)(|u-1|-|u+1|) ; k11 (s)=2e-2s , k12 (s)=4e-4s , k21 (s)=3e-3s , k22 (s)=5e-5s ; σ11 (y)=y , σ12 (y)=y/2 , σ21 (y)=σ22 (y)=3y/2 , and σij (y)=0 for i,j≠1,2 . In this example, let each Dik be a positive constant. According to Theorem 3.1, a simple computation gives [figure omitted; refer to PDF] Therefore, [figure omitted; refer to PDF] With the help of Matlab, one quickly obtains the eigenvalues of the matrix Q , namely λ1 =0.5361 and λ2 =2.7139 ; by Lemma 2.4, Q is a nonsingular M -matrix. Therefore all conditions of Theorem 3.1 hold, that is, the trivial solution of system (4.1) is almost surely exponentially stable and also exponentially stable in mean square. Choosing x≡constant , these conclusions are verified by the numerical simulations in Figures 1-4.
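The matrix Q itself appears in the omitted display, so we do not recompute it here, but the kernel-side assumptions of the example can be checked numerically: each kij integrates to 1, and since the smallest decay rate is 2, any μ ∈ (0,2) makes the exponentially weighted integrals required by (A3) finite. A sketch (function and variable names are ours):

```python
import numpy as np

# Exponential kernels from Example 4.1: k_ij(s) = gam_ij * exp(-gam_ij*s).
rates = {"k11": 2.0, "k12": 4.0, "k21": 3.0, "k22": 5.0}

def weighted_integral(gamma, mu, s_max=60.0, n=600_000):
    """Trapezoid approximation of the integral over [0, inf) of
    exp(mu*s) * gamma * exp(-gamma*s), which equals
    gamma/(gamma - mu) whenever 0 <= mu < gamma."""
    s = np.linspace(0.0, s_max, n + 1)
    vals = np.exp((mu - gamma) * s) * gamma
    ds = s[1] - s[0]
    return float(np.sum((vals[:-1] + vals[1:]) * 0.5) * ds)

# Each kernel integrates to 1 (mu = 0), and mu = 1 < 2 keeps every
# exponentially weighted integral finite, as assumption (A3) requires.
for name, gamma in rates.items():
    print(name, round(weighted_integral(gamma, 0.0), 4),
          round(weighted_integral(gamma, 1.0), 4))
```

The closed form γ/(γ−μ) confirms the numerics, so the existence of the constant μ in (A3) is immediate for this example.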
Figure 1: Numerical simulation for y1 (t) .
[figure omitted; refer to PDF]
Figure 2: Numerical simulation for y2 (t) .
[figure omitted; refer to PDF]
Figure 3: Numerical simulation for the mean square of y1 (t) .
[figure omitted; refer to PDF]
Figure 4: Numerical simulation for the mean square of y2 (t) .
[figure omitted; refer to PDF]
5. Conclusions
In this paper, stochastic recurrent neural networks with unbounded distributed delays and reaction-diffusion terms have been investigated, with both stochastic effects and reaction-diffusion taken into account. The proposed results generalize and improve some earlier published results. The conditions obtained are independent of the magnitude of the delays and of the diffusion effect, which implies that strong self-regulation is dominant in the networks. If the noise and reaction-diffusion terms are removed from the system, the derived stability conditions for general deterministic neural networks can be viewed as byproducts of our results.
Acknowledgments
The authors are extremely grateful to Professor Binggen Zhang and the anonymous reviewers for their constructive and valuable comments, which have contributed a lot to the improved presentation of this paper. This work was supported by the Key Project of Chinese Ministry of Education (2011), the Foundation of Chinese Society for Electrical Engineering (2008), the Excellent Youth Foundation of Educational Committee of Hunan Province of China (10B002), the National Natural Science Funds of China for Distinguished Young Scholar (50925727), the National Natural Science Foundation of China (60876022), and the Scientific Research Fund of Yunnan Province of China (2010ZC150).
[1] S. Arik, "A note on the stability of dynamical neural networks," IEEE Transactions on Circuits and Systems. I , vol. 49, no. 4, pp. 502-504, 2002.
[2] J. Cao, "A set of stability criteria for delayed cellular neural networks," IEEE Transactions on Circuits and Systems. I , vol. 48, no. 4, pp. 494-498, 2001.
[3] J. Cao, J. Liang, "Boundedness and stability for Cohen-Grossberg neural network with time-varying delays," Journal of Mathematical Analysis and Applications , vol. 296, no. 2, pp. 665-685, 2004.
[4] T. Chen, S. I. Amari, "New theorems on global convergence of some dynamical systems," Neural Networks , vol. 14, no. 3, pp. 251-255, 2001.
[5] Y. Chen, "Global asymptotic stability of delayed Cohen-Grossberg neural networks," IEEE Transactions on Circuits and Systems. I , vol. 53, no. 2, pp. 351-357, 2006.
[6] S. Guo, L. Huang, "Stability analysis of Cohen-Grossberg neural networks," IEEE Transactions on Neural Networks , vol. 17, no. 1, pp. 106-117, 2006.
[7] Z. Yuan, L. Huang, D. Hu, B. Liu, "Convergence of nonautonomous Cohen-Grossberg-type neural networks with variable delays," IEEE Transactions on Neural Networks , vol. 19, no. 1, pp. 140-147, 2008.
[8] J. H. Park, O. M. Kwon, "Further results on state estimation for neural networks of neutral-type with time-varying delay," Applied Mathematics and Computation , vol. 208, no. 1, pp. 69-75, 2009.
[9] J. H. Park, O. M. Kwon, "Delay-dependent stability criterion for bidirectional associative memory neural networks with interval time-varying delays," Modern Physics Letters B , vol. 23, no. 1, pp. 35-46, 2009.
[10] S. Haykin Neural Networks , Prentice Hall, Upper Saddle River, NJ, USA, 1994.
[11] X. X. Liao, X. Mao, "Exponential stability and instability of stochastic neural networks," Stochastic Analysis and Applications , vol. 14, no. 2, pp. 165-185, 1996.
[12] X. Mao Stochastic Differential Equations and Applications , Horwood Publishing, Chichester, UK, 2nd edition, 2008.
[13] C. Turchetti, M. Conti, P. Crippa, S. Orcioni, "On the approximation of stochastic processes by approximate identity neural networks," IEEE Transactions on Neural Networks , vol. 9, no. 6, pp. 1069-1085, 1998.
[14] C. Huang, Y. He, L. Huang, W. Zhu, " p th moment stability analysis of stochastic recurrent neural networks with time-varying delays," Information Sciences , vol. 178, no. 9, pp. 2194-2203, 2008.
[15] X. X. Liao, X. Mao, "Stability of stochastic neural networks," Neural, Parallel & Scientific Computations , vol. 4, no. 2, pp. 205-224, 1996.
[16] S. Blythe, X. Mao, X. Liao, "Stability of stochastic delay neural networks," Journal of the Franklin Institute , vol. 338, no. 4, pp. 481-495, 2001.
[17] L. Wan, J. Sun, "Mean square exponential stability of stochastic delayed Hopfield neural networks," Physics Letters Section A , vol. 343, no. 4, pp. 306-318, 2005.
[18] X. Li, J. Cao, "Exponential stability of stochastic Cohen-Grossberg neural networks with time-varying delays," in Proceedings of the 2nd International Symposium on Neural Networks: Advances in Neural Networks (ISNN '05), vol. 3496, of Lecture Notes in Computer Science, pp. 162-167, 2005.
[19] H. Zhao, N. Ding, "Dynamic analysis of stochastic Cohen-Grossberg neural networks with time delays," Applied Mathematics and Computation , vol. 183, no. 1, pp. 464-470, 2006.
[20] Y. Sun, J. Cao, " p th moment exponential stability of stochastic recurrent neural networks with time-varying delays," Nonlinear Analysis. Real World Applications , vol. 8, no. 4, pp. 1171-1185, 2007.
[21] C. Li, X. Liao, "Robust stability and robust periodicity of delayed recurrent neural networks with noise disturbance," IEEE Transactions on Circuits and Systems I , vol. 53, no. 10, pp. 2265-2273, 2006.
[22] C. Li, L. Chen, K. Aihara, "Stochastic stability of genetic networks with disturbance attenuation," IEEE Transactions on Circuits and Systems II , vol. 54, no. 10, pp. 892-896, 2007.
[23] H. Zhang, Y. Wang, "Stability analysis of Markovian Jumping stochastic Cohen-Grossberg neural networks with mixed time delays," IEEE Transactions on Neural Networks , vol. 19, no. 2, pp. 366-370, 2008.
[24] D. W. Tank, J. J. Hopfield, "Neural computation by concentrating information in time," Proceedings of the National Academy of Sciences of the United States of America , vol. 84, no. 7, pp. 1896-1900, 1987.
[25] S. Mohamad, "Global exponential stability in DCNNs with distributed delays and unbounded activations," Journal of Computational and Applied Mathematics , vol. 205, no. 1, pp. 161-173, 2007.
[26] P. Balasubramaniam, R. Rakkiyappan, "Global asymptotic stability of stochastic recurrent neural networks with multiple discrete delays and unbounded distributed delays," Applied Mathematics and Computation , vol. 204, no. 2, pp. 680-686, 2008.
[27] Z. Wang, Y. Liu, M. Li, X. Liu, "Stability analysis for stochastic Cohen-Grossberg neural networks with mixed time delays," IEEE Transactions on Neural Networks , vol. 17, no. 3, pp. 814-820, 2006.
[28] Y. Lv, W. Lv, J. Sun, "Convergence dynamics of stochastic reaction-diffusion recurrent neural networks in continuously distributed delays," Nonlinear Analysis. Real World Applications , vol. 9, no. 4, pp. 1590-1606, 2008.
[29] L. Wan, Q. Zhou, "Exponential stability of stochastic reaction-diffusion Cohen-Grossberg neural networks with delays," Applied Mathematics and Computation , vol. 206, no. 2, pp. 818-824, 2008.
[30] Z. Liu, J. Peng, "Delay-independent stability of stochastic reaction-diffusion neural networks with Dirichlet boundary conditions," Neural Computing and Applications , vol. 19, no. 1, pp. 151-158, 2010.
[31] D. W. Tank, J. J. Hopfield, "Neural computation by concentrating information in time," Proceedings of the National Academy of Sciences of the United States of America , vol. 84, no. 7, pp. 1896-1900, 1987.
[32] K. Gopalsamy, X. Z. He, "Stability in asymmetric Hopfield nets with transmission delays," Physica D. Nonlinear Phenomena , vol. 76, no. 4, pp. 344-358, 1994.
[33] S. Mohamad, K. Gopalsamy, "Dynamics of a class of discrete-time neural networks and their continuous-time counterparts," Mathematics and Computers in Simulation , vol. 53, no. 1-2, pp. 1-39, 2000.
[34] J. Cao, "Exponential stability and periodic solutions of delayed cellular neural networks," Science in China. Series E. Technological Sciences , vol. 43, no. 3, pp. 328-336, 2000.
Copyright © 2011 Chuangxia Huang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
The stability of reaction-diffusion recurrent neural networks (RNNs) with continuously distributed delays and stochastic influences is considered. New sufficient conditions guaranteeing the almost sure exponential stability and the mean square exponential stability of an equilibrium solution are obtained. Our approach uses Lyapunov functionals, M-matrix properties, inequality techniques, and the nonnegative semimartingale convergence theorem. The conclusions obtained improve some published results.