Chuangxia Huang 1 and Lehua Huang 1 and Yigang He 2
Recommended by Juan J. Nieto
1, College of Mathematics and Computing Science, Changsha University of Science and Technology, Changsha, Hunan 410114, China
2, College of Electrical and Information Engineering, Hunan University, Changsha, Hunan 410082, China
Received 5 July 2010; Accepted 1 September 2010
1. Introduction
Consider the Cohen-Grossberg neural network (CGNN) described by a system of ordinary differential equations
\[ \frac{dx_i(t)}{dt} = -a_i(x_i(t))\Big[\bar{b}_i(x_i(t)) - \sum_{j=1}^{n} c_{ij} f_j(x_j(t)) - I_i\Big], \quad i = 1,\dots,n, \tag{1.1} \]
where t ≥ 0 and n ≥ 2; n corresponds to the number of units in the network; x_i(t) denotes the potential (or voltage) of cell i at time t; f_j(·) denotes a nonlinear output function between cells i and j; a_i(·) > 0 represents an amplification function; b̄_i(·) represents an appropriately behaved function; I_i denotes the external input to cell i; the n×n connection matrix C = (c_ij)_{n×n} denotes the strengths of connectivity between cells, and if the output from neuron j excites (resp., inhibits) neuron i, then c_ij ≥ 0 (resp., c_ij ≤ 0).
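To fix ideas, the following Python sketch integrates a two-neuron instance of (1.1) with a forward-Euler scheme; the amplification, behaved function, connection weights, and inputs are illustrative stand-ins, not values taken from this paper.

```python
import numpy as np

# Forward-Euler integration of the deterministic CGNN (1.1). All parameter
# choices below are illustrative stand-ins, not values from this paper.
n = 2
C = np.array([[0.1, -0.2], [0.3, 0.1]])    # connection strengths c_ij
I = np.array([0.5, -0.5])                  # constant external inputs I_i

a = lambda x: 1.0 + 0.5 / (1.0 + x**2)     # amplification a_i(x) > 0
b = lambda x: 2.0 * x                      # "appropriately behaved" b̄_i
f = np.tanh                                # output function f_j

def rhs(x):
    # dx_i/dt = -a_i(x_i) [ b̄_i(x_i) - sum_j c_ij f_j(x_j) - I_i ]
    return -a(x) * (b(x) - C @ f(x) - I)

x, dt = np.array([1.0, -1.0]), 1e-3
for _ in range(20000):                     # integrate to t = 20
    x = x + dt * rhs(x)
print("state near t = 20:", x)             # settles near an equilibrium
```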
During hardware implementation, time delays exist due to the finite switching speed of the amplifiers and communication time; it is therefore important to incorporate delays in neural network models. Take the delayed cellular neural network as an example: it has been successfully applied to moving image processing problems [1]. For model (1.1), Ye et al. [2] introduced delays by considering the following delay differential equations:
\[ \frac{dx_i(t)}{dt} = -a_i(x_i(t))\Big[\bar{b}_i(x_i(t)) - \sum_{k=0}^{K}\sum_{j=1}^{n} c_{ij}^{(k)} f_j(x_j(t-\tau_k)) - I_i\Big], \quad 0 = \tau_0 < \tau_1 < \cdots < \tau_K. \tag{1.2} \]
Guo and Huang [3] generalized model (1.1) to the following delay differential equations:
\[ \frac{dx_i(t)}{dt} = -a_i(x_i(t))\Big[\bar{b}_i(x_i(t)) - \sum_{j=1}^{n} c_{ij} f_j(x_j(t)) - \sum_{j=1}^{n} d_{ij} g_j(x_j(t-\tau_{ij})) - I_i\Big]. \tag{1.3} \]
Some other, more detailed justifications for introducing delays into model equations of neural networks can be found in [4, 5] and the references therein.
The delays in all of the above-mentioned papers are restricted to being discrete. As is well known, the use of constant fixed delays in models of delayed feedback provides a good approximation in simple circuits consisting of a small number of cells. However, neural networks usually have a spatial extent due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths, so there will be a distribution of conduction velocities along these pathways and a distribution of propagation delays. In these circumstances, signal propagation is not instantaneous and cannot be modeled with discrete delays; a more appropriate way is to incorporate continuously distributed delays. For instance, in [6], Tank and Hopfield designed an analog neural circuit with distributed delays, which can solve a general problem of recognizing patterns in a time-dependent signal. For further models incorporating continuously distributed delays, we refer to [7-11]. Then model (1.3) can be modified into a system of integro-differential equations of the form
\[ \frac{dx_i(t)}{dt} = -a_i(x_i(t))\Big[\bar{b}_i(x_i(t)) - \sum_{j=1}^{n} c_{ij} f_j(x_j(t)) - \sum_{j=1}^{n} d_{ij}\int_{-\infty}^{t} K_{ij}(t-s)\, g_j(x_j(s))\, ds - I_i\Big], \quad i = 1,\dots,n, \tag{1.4} \]
with initial values given by x_i(s) = ψ_i(s) for s ∈ (−∞, 0], where each ψ_i(·) is bounded and continuous on (−∞, 0].
In the past few years, the dynamical behaviors of stochastic neural networks have emerged as a new subject of research, mainly for two reasons: (i) in real nervous systems and in the implementation of artificial neural networks, synaptic transmission is a noisy process brought on by random fluctuations in the release of neurotransmitters and other probabilistic causes; hence, noise is unavoidable and should be taken into consideration in modeling [12-14]; (ii) it has been realized that a neural network can be stabilized or destabilized by certain stochastic effects [15-17]. Although systems are perturbed by various types of environmental "noise" [12-14, 18], one of the reasonable interpretations of the "noise" perturbation is the so-called white noise dω(t)/dt, where ω(t) is a Brownian motion, also called a Wiener process [17, 19]. A more detailed account of the stochastic effects on the interaction of neurons can be found in [19]. However, because Brownian motion ω(t) is nowhere differentiable, the derivative dω(t)/dt cannot be defined in the ordinary way, and the stability analysis of stochastic neural networks is difficult. In [12], by constructing a novel Lyapunov-Krasovskii functional, Zhu and Cao obtained several novel sufficient conditions ensuring the exponential stability of the trivial solution in the mean square. In [13], using a linear matrix inequality (LMI) approach, Zhu et al. investigated the asymptotical mean square stability of Cohen-Grossberg neural networks with random delay. In [14], by utilizing the Poincaré inequality, Pan and Zhong derived some sufficient conditions for checking the almost sure exponential stability and mean square exponential stability of stochastic reaction-diffusion Cohen-Grossberg neural networks. In [20], Wang et al. developed an LMI approach to study the stability of SCGNN with mixed delays. To the best of the authors' knowledge, the convergence dynamics of stochastic Cohen-Grossberg neural networks with unbounded distributed delays have not been studied yet and remain a challenging task.
Keeping this in mind, in this paper we consider the SCGNN described by the following stochastic nonlinear integro-differential equations:
\[ dx_i(t) = -a_i(x_i(t))\Big[\bar{b}_i(x_i(t)) - \sum_{j=1}^{n} c_{ij} f_j(x_j(t)) - \sum_{j=1}^{n} d_{ij}\int_{-\infty}^{t} K_{ij}(t-s)\, g_j(x_j(s))\, ds - I_i\Big] dt + \sum_{j=1}^{n}\sigma_{ij}(x_j(t))\, d\omega_j(t), \tag{1.5} \]
where σ = (σ_ij(·))_{n×n} is the diffusion coefficient matrix and ω(t) = (ω_1(t),...,ω_n(t))^T is an n-dimensional Brownian motion defined on a complete probability space (Ω, 𝓕, P) with a natural filtration {𝓕_t}_{t≥0} (i.e., 𝓕_t = σ{ω(s) : 0 ≤ s ≤ t}).
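As a companion to the deterministic sketch above, the following hedged Euler–Maruyama sketch simulates a two-neuron instance of (1.5): the infinite-history convolution is truncated at a finite memory horizon and discretized by a left-endpoint rule, and all network data are again hypothetical.

```python
import numpy as np

# Euler-Maruyama sketch of the SCGNN (1.5). The infinite-memory term
# ∫_{-inf}^t K(t-s) g(x(s)) ds is truncated at a horizon T_mem and
# discretized by a left-endpoint rule. All parameters are illustrative
# stand-ins, not values from this paper.
rng = np.random.default_rng(0)
n, dt, T_mem = 2, 1e-3, 5.0
m = int(T_mem / dt)                               # history samples kept

C = np.array([[0.1, -0.2], [0.3, 0.1]])           # c_ij
D = np.array([[0.05, 0.1], [-0.1, 0.05]])         # d_ij
I = np.array([0.5, -0.5])                         # external inputs I_i
gamma = 2.0
K = gamma * np.exp(-gamma * dt * np.arange(m))    # gamma kernel with r = 0

a = lambda x: 1.0 + 0.5 / (1.0 + x**2)            # amplification, bounded away from 0
b = lambda x: 2.0 * x
f = g = np.tanh
sigma = lambda x: 0.1 * x                         # diagonal diffusion sigma_ii

x = np.array([1.0, -1.0])
hist = np.zeros((m, n))                           # zero pre-history on (-T_mem, 0]
for _ in range(20000):                            # integrate to t = 20
    conv = (K[:, None] * g(hist)).sum(axis=0) * dt   # ≈ ∫ K(s) g(x(t-s)) ds
    drift = -a(x) * (b(x) - C @ f(x) - D @ conv - I)
    x = x + drift * dt + sigma(x) * np.sqrt(dt) * rng.standard_normal(n)
    hist = np.vstack([x, hist[:-1]])              # shift the delay buffer
print("sample state near t = 20:", x)
```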
Obviously, model (1.5) is quite general, and it includes several well-known neural network models as special cases, such as Hopfield neural networks, cellular neural networks, and bidirectional associative memory neural networks [21, 22].
The remainder of this paper is organized as follows. In Section 2, the basic notations and assumptions are introduced. In Section 3, some criteria are proposed to determine mean square exponential stability for (1.5); furthermore, we also establish some sufficient conditions for checking exponential stability for Cohen-Grossberg neural networks with unbounded distributed delays. In Section 4, an illustrative example is given. We conclude this paper in Section 5.
2. Preliminaries
Note that a vector x = (x_1,...,x_n)^T ∈ R^n can be equipped with the common norms
\[ \|x\|_p = \Big(\sum_{i=1}^{n} |x_i|^p\Big)^{1/p}, \quad p \ge 1. \tag{2.1} \]
For the sake of convenience, some standing definitions and assumptions are formulated below.
Definition 2.1 (see [17]).
The trivial solution of (1.5) is said to be mean square exponentially stable if there is a pair of positive constants λ and G such that
\[ E\|x(t; x_0)\|^2 \le G\,\|x_0\|^2 e^{-\lambda t}, \quad t \ge 0, \tag{2.2} \]
for all x_0 ∈ R^n, where λ is also called the convergence rate.
One also assumes that
(H1): there exist positive constants L_j, G_j, j = 1,...,n, such that
\[ |f_j(u) - f_j(v)| \le L_j |u - v|, \qquad |g_j(u) - g_j(v)| \le G_j |u - v|, \quad \forall u, v \in \mathbb{R}; \tag{2.3} \]
(H2): there exist positive constants b_j such that
\[ \frac{\bar{b}_j(u) - \bar{b}_j(v)}{u - v} \ge b_j, \quad \forall u, v \in \mathbb{R},\ u \ne v; \tag{2.4} \]
(H3): there exist positive constants α̲_i, ᾱ_i such that
\[ \underline{\alpha}_i \le a_i(u) \le \bar{\alpha}_i, \quad \forall u \in \mathbb{R}; \tag{2.5} \]
(H4): σ_ij(x_j^*) = 0 (with x^* = (x_1^*,...,x_n^*)^T to be determined later), and there exist positive constants M_ij, i, j = 1,...,n, such that
\[ |\sigma_{ij}(u) - \sigma_{ij}(v)| \le M_{ij} |u - v|, \quad \forall u, v \in \mathbb{R}. \tag{2.6} \]
Remark 2.2.
The activation functions are typically assumed to be continuous, bounded, differentiable, and monotonically increasing, as for functions of sigmoid type; these conditions are no longer needed in this paper. For example, when neural networks are designed to solve optimization problems in the presence of constraints (linear, quadratic, or more general programming problems), unbounded activations modelled by diode-like exponential-type functions are needed to impose constraint satisfaction [3]. In this paper, the activation functions f_i(·), g_i(·) also include some typical functions widely used in circuit design, such as the nondifferentiable piecewise linear output function f(u) = (1/2)(|u−1| − |u+1|) and nonmonotonic functions such as the Gaussian and inverse Gaussian functions; see [4, 5, 23] and the references therein.
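For instance, the global Lipschitz constants required by (H1) can be estimated numerically for such activations; the sketch below does this for the piecewise linear function above and for a Gaussian, both in their usual textbook forms, assumed here purely for illustration.

```python
import numpy as np

# Numerical estimate of the Lipschitz constants required by (H1) for two of
# the activation classes mentioned above; both functional forms are the
# usual textbook ones, assumed here for illustration.
f_pw = lambda u: 0.5 * (np.abs(u - 1) - np.abs(u + 1))   # piecewise linear
f_gauss = lambda u: np.exp(-u**2)                        # Gaussian (nonmonotone)

u = np.linspace(-10, 10, 200001)
for name, fn in [("piecewise linear", f_pw), ("Gaussian", f_gauss)]:
    slopes = np.abs(np.diff(fn(u)) / np.diff(u))         # finite-difference slopes
    print(f"{name}: empirical Lipschitz constant ~ {slopes.max():.3f}")
# piecewise linear -> 1.0; Gaussian -> sqrt(2/e) ~ 0.858
```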
Using the variable substitution (t−s) → s′, we get
\[ \int_{-\infty}^{t} K_{ij}(t-s)\, g_j(x_j(s))\, ds = \int_{0}^{\infty} K_{ij}(s')\, g_j(x_j(t-s'))\, ds'; \tag{2.7} \]
therefore, system (1.5) can for convenience be put in the form
\[ dx_i(t) = -a_i(x_i(t))\Big[\bar{b}_i(x_i(t)) - \sum_{j=1}^{n} c_{ij} f_j(x_j(t)) - \sum_{j=1}^{n} d_{ij}\int_{0}^{\infty} K_{ij}(s)\, g_j(x_j(t-s))\, ds - I_i\Big] dt + \sum_{j=1}^{n}\sigma_{ij}(x_j(t))\, d\omega_j(t). \tag{2.8} \]
As usual, the initial conditions for system (2.8) are x(t) = φ(t), −∞ < t ≤ 0, φ ∈ L²_{𝓕₀}((−∞,0]; R^n), where L²_{𝓕₀}((−∞,0]; R^n) is the family of all 𝓕₀-measurable R^n-valued random variables satisfying E{sup_{−∞<s≤0} |φ(s)|²} < ∞.
The conditions (H1) and (H4) imply that (2.8) has a unique global solution on t ≥ 0 for the above initial conditions [17].
If V ∈ C^{1,2}(R × R^n; R^+), define an operator ℒV associated with (2.8) by
\[ \mathcal{L}V(t,x) = V_t(t,x) + V_x(t,x)\, F(t, x_t) + \tfrac{1}{2}\operatorname{trace}\big[\sigma^{T} V_{xx}(t,x)\,\sigma\big], \tag{2.9} \]
where F(t, x_t) denotes the drift term of (2.8), V_t(t,x) = ∂V(t,x)/∂t, V_x(t,x) = (∂V(t,x)/∂x_1, ..., ∂V(t,x)/∂x_n), and V_xx(t,x) = (∂²V(t,x)/∂x_i∂x_j)_{n×n}.
We always assume that the delay kernels K_ij, i, j = 1,...,n, are real-valued nonnegative functions defined on [0, ∞) that satisfy
\[ \int_{0}^{\infty} K_{ij}(s)\, ds = 1, \qquad \int_{0}^{\infty} K_{ij}(s)\, e^{\mu s}\, ds < \infty \tag{2.10} \]
for some positive constant μ. A typical example of such a delay kernel is given by K_ij(s) = (s^r / r!) γ_ij^{r+1} e^{−γ_ij s} for s ∈ [0, ∞), where γ_ij ∈ (0, ∞) and r ∈ {0, 1, ..., n}. These kernels have been used in [6, 9, 24] for various stability investigations of neural network models; in [24], they are called the gamma memory filter.
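Both properties in (2.10) are easy to verify numerically for the gamma kernel; the following sketch checks the unit mass and the exponential moment for illustrative values of r, γ_ij, and μ < γ_ij.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

# Numerical check that the gamma memory kernel satisfies (2.10): unit mass
# and a finite exponential moment for mu < gamma. The values of r, gamma,
# and mu below are illustrative.
r, gam, mu = 2, 3.0, 1.0
K = lambda s: (s**r / factorial(r)) * gam**(r + 1) * np.exp(-gam * s)

mass, _ = quad(K, 0, np.inf)
moment, _ = quad(lambda s: K(s) * np.exp(mu * s), 0, np.inf)
print("∫ K(s) ds          =", round(mass, 6))               # -> 1.0
print("∫ K(s) e^(mu s) ds =", round(moment, 6))             # finite since mu < gamma
print("closed form        =", (gam / (gam - mu))**(r + 1))  # (3/2)^3 = 3.375
```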
3. Main Results
A point x^* = (x_1^*,...,x_n^*)^T in R^n is said to be an equilibrium (or trivial solution) of system (1.4) if x^* satisfies
\[ \bar{b}_i(x_i^*) = \sum_{j=1}^{n} c_{ij} f_j(x_j^*) + \sum_{j=1}^{n} d_{ij} g_j(x_j^*) + I_i, \quad i = 1,\dots,n, \tag{3.1} \]
where the kernel normalization in (2.10) has been used. Different from the bounded activation function case, where the existence of an equilibrium point is always guaranteed [25], for unbounded activations it may happen that there is no equilibrium point [26]. In fact, we have the following theorem.
Theorem 3.1.
Under assumptions (H1) and (H2), if there exists a positive diagonal matrix Q = diag(q_1,...,q_n) such that the inequality [figure omitted; refer to PDF] (3.2) holds, then for every input I = (I_1,...,I_n)^T, system (1.4) has a unique equilibrium x^* = (x_1^*,...,x_n^*)^T.
Proof of Theorem 3.1.
A point x^* = (x_1^*,...,x_n^*)^T is said to be an equilibrium of system (1.5) if x^* satisfies
\[ \bar{b}_i(x_i^*) = \sum_{j=1}^{n} c_{ij} f_j(x_j^*) + \sum_{j=1}^{n} d_{ij} g_j(x_j^*) + I_i, \qquad \sigma_{ij}(x_j^*) = 0. \tag{3.3} \]
Let H(y) = (H_1(y),...,H_n(y))^T with
\[ H_i(y) = -\bar{b}_i(y_i) + \sum_{j=1}^{n} c_{ij} f_j(y_j) + \sum_{j=1}^{n} d_{ij} g_j(y_j) + I_i. \tag{3.4} \]
Similar to the proofs of Theorems 1, 2, and 3 of [25], one can show that H is injective on R^n and that lim_{||y||→∞} ||H(y)|| = ∞. Then H is a homeomorphism of R^n; therefore, H has a unique root on R^n. This completes the proof.
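Numerically, the root of H can be found with a standard solver. The sketch below applies scipy.optimize.fsolve to a hypothetical two-neuron instance of (1.4) satisfying (H1)-(H2); all data are illustrative, not taken from this paper.

```python
import numpy as np
from scipy.optimize import fsolve

# Solving H(x) = 0 numerically for the equilibrium of (1.4), on a
# hypothetical two-neuron instance satisfying (H1)-(H2); the data are
# illustrative only.
C = np.array([[0.1, -0.2], [0.3, 0.1]])
D = np.array([[0.05, 0.1], [-0.1, 0.05]])
I = np.array([0.5, -0.5])
b = lambda x: 2.0 * x                      # (b̄_j(u) - b̄_j(v))/(u - v) >= 2
f = g = np.tanh                            # Lipschitz with L_j = G_j = 1

# H_i(x) = -b̄_i(x_i) + sum_j c_ij f_j(x_j) + sum_j d_ij g_j(x_j) + I_i,
# using the kernel normalization ∫K_ij = 1 from (2.10).
H = lambda x: -b(x) + C @ f(x) + D @ g(x) + I
x_star = fsolve(H, np.zeros(2))
print("equilibrium x* =", x_star, " residual:", H(x_star))
```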
Obviously, inequality (3.5) below implies inequality (3.2); hence, if the assumptions (H1), (H2), (H4) are true, system (2.8) admits an equilibrium solution x(t) ≡ x^*. We have the following theorem on the stochastic stability of the unique equilibrium x^* = (x_1^*,...,x_n^*)^T.
Theorem 3.2.
Under the assumptions (H1)-(H4), if there exists a positive diagonal matrix Q = diag(q_1,...,q_n) such that the inequality [figure omitted; refer to PDF] (3.5) holds, then the trivial solution of system (2.8) is mean square exponentially stable.
Proof of Theorem 3.2.
Let y_i(t) = x_i(t) − x_i^*, φ̃_i(t) = φ_i(t) − x_i^*, y(t) = (y_1(t),...,y_n(t))^T, and φ̃(t) = (φ̃_1(t),...,φ̃_n(t))^T. Using (3.3), model (2.8) takes the following form:
\[ dy_i(t) = -\tilde{a}_i(y_i(t))\Big[\tilde{b}_i(y_i(t)) - \sum_{j=1}^{n} c_{ij}\tilde{f}_j(y_j(t)) - \sum_{j=1}^{n} d_{ij}\int_{0}^{\infty} K_{ij}(s)\,\tilde{g}_j(y_j(t-s))\,ds\Big] dt + \sum_{j=1}^{n}\tilde{\sigma}_{ij}(y_j(t))\,d\omega_j(t), \tag{3.6} \]
with the initial condition y(t) = φ̃(t), t ∈ (−∞, 0], where ã_i(y_i(t)) = a_i(y_i(t) + x_i^*), b̃_i(y_i(t)) = b̄_i(y_i(t) + x_i^*) − b̄_i(x_i^*), f̃_j(y_j(t)) = f_j(y_j(t) + x_j^*) − f_j(x_j^*), g̃_j(y_j(t)) = g_j(y_j(t) + x_j^*) − g_j(x_j^*), and σ̃_ij(y_j(t)) = σ_ij(y_j(t) + x_j^*) − σ_ij(x_j^*).
Then the assumptions (H1), (H2), (H4) can be transformed as follows:
(H1′): f̃_j(0) = g̃_j(0) = 0, and there exist positive constants L_j, G_j, j = 1,...,n, such that
\[ |\tilde{f}_j(u)| \le L_j |u|, \qquad |\tilde{g}_j(u)| \le G_j |u|, \quad \forall u \in \mathbb{R}; \tag{3.7} \]
(H2′): b̃_j(0) = 0, and there exist positive constants b_j such that
\[ u\,\tilde{b}_j(u) \ge b_j u^2, \quad \forall u \in \mathbb{R}; \tag{3.8} \]
(H4′): σ̃_ij(0) = 0, and there exist positive constants M_ij, i, j = 1,...,n, such that
\[ |\tilde{\sigma}_{ij}(u)| \le M_{ij} |u|, \quad \forall u \in \mathbb{R}. \tag{3.9} \]
From assumptions (2.10) and (3.5), we can pick a constant λ with 0 < λ < μ satisfying the following requirements: [figure omitted; refer to PDF] [figure omitted; refer to PDF] Define the following Lyapunov functional: [figure omitted; refer to PDF] Since V fails to be differentiable at most at the single point y = 0, V_y and V_yy can be computed there using the Dini derivative. We first compute the operator ℒV_1 associated with (3.6): [figure omitted; refer to PDF] On the other hand, by directly applying the elementary inequality 2ab ≤ a² + b², we have [figure omitted; refer to PDF] At the same time, computing the operator ℒV_2 associated with (3.6) gives [figure omitted; refer to PDF] From (3.14) and (3.15), it is easy to get [figure omitted; refer to PDF] Using the Itô formula, for δ > 0, from inequalities (3.11) and (3.16) we have [figure omitted; refer to PDF] On the other hand, we observe that [figure omitted; refer to PDF] From inequality (3.10), we have [figure omitted; refer to PDF] Hence, V(0, y(0)) is bounded. According to stochastic analysis theory [17], we get [figure omitted; refer to PDF] Taking expectations on both sides of (3.17) yields [figure omitted; refer to PDF] That is to say, [figure omitted; refer to PDF] Therefore, there exists a positive constant G such that [figure omitted; refer to PDF] Then the trivial solution of system (3.6) is mean square exponentially stable; that is to say, the trivial solution of system (2.8) is mean square exponentially stable. This completes the proof.
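The mean square decay asserted by Theorem 3.2 can be illustrated numerically. The sketch below uses the scalar test equation dy = −by dt + sy dω(t) as a stand-in for (3.6) (not the paper's system), since for it E[y(t)²] = y₀² e^{(−2b+s²)t} is known in closed form; the coefficients are illustrative.

```python
import numpy as np

# Monte Carlo illustration of mean square exponential stability on the
# scalar test equation dy = -b y dt + s y dω(t), a stand-in for (3.6);
# here E[y(t)^2] = y0^2 exp((-2b + s^2) t) is known in closed form.
rng = np.random.default_rng(1)
b_, s_, y0 = 2.0, 0.5, 1.0                 # illustrative coefficients
dt, T, paths = 1e-3, 2.0, 20000

y = np.full(paths, y0)
for _ in range(int(T / dt)):
    y = y + (-b_ * y) * dt + s_ * y * np.sqrt(dt) * rng.standard_normal(paths)

print("empirical  E|y(T)|^2 :", np.mean(y**2))
print("theoretical          :", y0**2 * np.exp((-2 * b_ + s_**2) * T))
# The decay rate lambda = 2b - s^2 is positive here, so the noise slows
# but does not destroy the exponential convergence in mean square.
```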
Furthermore, if we remove the noise from the system, then system (2.8) reduces to system (1.4), and the derived conditions for exponential stability of system (1.4) can be viewed as byproducts of our results on the general SCGNN. For convenience of reading, we provide the definition of exponential stability.
Definition 3.3.
The trivial solution of the CGNN model (1.4) is said to be globally exponentially stable if there exist positive constants G, λ, independent of the initial values of the system, such that
\[ \|x(t) - x^*\| \le G\, e^{-\lambda t} \sup_{-\infty < s \le 0} \|\psi(s) - x^*\|, \quad t \ge 0, \]
where the initial values are given by the continuous functions x_i(s) = ψ_i(s), s ∈ (−∞, 0], i = 1,...,n.
We have the following Corollary 3.4 for system (1.4). Since the main idea comes from Theorem 3.2, the proof is straightforward and we omit it.
Corollary 3.4.
Under the assumptions (H1), (H2), (H3), if there exists a positive diagonal matrix Q = diag(q_1,...,q_n) such that [figure omitted; refer to PDF] then the trivial solution of system (1.4) is globally exponentially stable.
Remark 3.5.
To the best of our knowledge, few authors have considered exponential stability for recurrent neural networks with unbounded distributed delays; [27] is one paper in this direction. However, it is assumed in [27] that K_ij(·) satisfies (i) ∫_0^∞ K_ij(s) ds = 1; (ii) ∫_0^∞ K_ij(s) e^{μs} ds = 1; (iii) ∫_0^∞ s K_ij(s) ds < ∞. Obviously, these requirements are strong, and our paper relaxes the assumptions on K_ij(·). In fact, using assumption (2.10) instead of assumptions (i), (ii), (iii) and choosing the same Lyapunov functionals as in Theorem 3.2 of this paper, one can get the same results as those in [27].
Remark 3.6.
We notice that Wang et al. developed a linear matrix inequality (LMI) approach to study the stability of SCGNN with mixed delays and obtained some novel results in a very recent paper [20], where the considered model includes both discrete and distributed delays. Yet the distributed delays there are bounded. In fact, a neural network model with unbounded distributed delays is more general than one with discrete and bounded distributed delays, because a distributed delay becomes a discrete delay when the delay kernel is a δ function concentrated at a certain time.
Remark 3.7.
The results in this paper show that the convergence criteria for SCGNN with unbounded distributed delays are independent of the delays but dependent on the magnitude of the noise; therefore, noisy fluctuations should be taken into account adequately when designing SCGNN.
Remark 3.8.
Comparing our results with the previous results derived in the literature for the usual continuously distributed delay CNN without stochastic perturbation, we can find that the corresponding main results obtained in [8-10] follow from Corollary 3.4 as special cases.
4. Illustrative Examples
In this section, an example is presented to demonstrate the correctness and effectiveness of the main results obtained in this paper.
Example 4.1.
Consider the following stochastic neural network with distributed delays: [figure omitted; refer to PDF] Figure 1 shows the schematic of the entire delayed neural network, where the nonlinear neuron transfer function (S) is constructed using voltage operational amplifiers. The time delay is achieved by using a digital signal processor (DSP) with an analog-to-digital converter (ADC) and a digital-to-analog converter (DAC). In the experiment, the time-delay circuit consists of a TMS320F2812 device and a DAC7724 device. Here, the white noise is generated by an Agilent E8257D signal generator.
Figure 1: An analog circuit implementation of system (4.1).
[figure omitted; refer to PDF]
In the example, let [figure omitted; refer to PDF] and f_i(u) = tanh(u), g_i(u) = (1/2)(|u−1| − |u+1|). Notice that each delay kernel K_ij(·) satisfies (2.10), and each f_j(·), g_j(·), σ_ij(·) satisfies assumptions (H1), (H4). In the example, let p = 3 and q_i = 1; by simple computation, one can easily get that [figure omitted; refer to PDF] It follows from Theorem 3.2 that system (4.1) is mean square exponentially stable.
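Since the concrete coefficient matrices of (4.1) are not reproduced above, the following Monte Carlo sketch only mimics the structure of the example with hypothetical two-neuron data (a_i ≡ 1, b̄_i(u) = 2u, r = 0 gamma kernels, zero inputs, tanh and piecewise linear activations, linear diffusion); it is meant to illustrate the predicted mean square decay, not to reproduce the paper's computation.

```python
import numpy as np

# A stand-in Monte Carlo simulation in the spirit of Example 4.1: the
# concrete matrices of (4.1) are not reproduced above, so all data here
# are hypothetical (a_i = 1, b̄_i(u) = 2u, r = 0 gamma kernels, I_i = 0).
rng = np.random.default_rng(2)
n, dt, T_mem, paths = 2, 2e-3, 1.0, 100
m = int(T_mem / dt)
C = np.array([[0.2, -0.1], [0.1, 0.2]])
D = np.array([[0.1, 0.05], [0.05, 0.1]])
gamma = 2.0
K = gamma * np.exp(-gamma * dt * np.arange(m))
f = np.tanh
g = lambda u: 0.5 * (np.abs(u - 1) - np.abs(u + 1))   # piecewise linear
sigma = lambda x: 0.1 * x                             # sigma(0) = 0, as in (H4)

x = np.ones((paths, n))                               # all paths start at (1, 1)
hist = np.zeros((paths, m, n))                        # zero pre-history
for _ in range(int(2.0 / dt)):                        # integrate to t = 2
    conv = (K[None, :, None] * g(hist)).sum(axis=1) * dt
    drift = -(2.0 * x - f(x) @ C.T - conv @ D.T)
    x = x + drift * dt + sigma(x) * np.sqrt(dt) * rng.standard_normal((paths, n))
    hist = np.concatenate([x[:, None, :], hist[:, :-1, :]], axis=1)
print("empirical E||x(2)||^2 =", np.mean(np.sum(x**2, axis=1)))  # decays toward 0
```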
5. Conclusions
In this paper, we studied nonlinear continuous-time stochastic Cohen-Grossberg neural networks (SCGNN) with unbounded distributed delays. Without assuming smoothness, monotonicity, or boundedness of the activation functions, by applying the Lyapunov functional method, stochastic analysis techniques, and inequality techniques, some novel sufficient conditions for mean square exponential stability of SCGNN are given. Furthermore, as byproducts of our main results, we also establish some sufficient conditions for checking exponential stability of Cohen-Grossberg neural networks with unbounded distributed delays. The significance of this paper is that it offers a wider selection of network parameters with which the desired convergence can be achieved in practice.
Acknowledgments
The authors are extremely grateful to Professor Juan J. Nieto and the anonymous reviewers for their constructive and valuable comments, which have contributed much to the improved presentation of this paper. This work was supported by the Foundation of the Chinese Society for Electrical Engineering (2008), the Excellent Youth Foundation of the Educational Committee of Hunan Province (10B002), the National Natural Science Funds of China for Distinguished Young Scholars (50925727), and the National Natural Science Foundation of China (60876022).
[1] L. O. Chua, T. Roska, "CNN: a new paradigm of nonlinear dynamics in space," World Congress of Nonlinear Analysts '92, Vol. I-IV (Tampa, FL, 1992) , pp. 2979-2990, de Gruyter, Berlin, Germany, 1996.
[2] H. Ye, A. N. Michel, K. Wang, "Qualitative analysis of Cohen-Grossberg neural networks with multiple delays," Physical Review E , vol. 51, no. 3, part B, pp. 2611-2618, 1995.
[3] S. Guo, L. Huang, "Stability analysis of Cohen-Grossberg neural networks," IEEE Transactions on Neural Networks, vol. 17, no. 1, pp. 106-117, 2006.
[4] J. Cao, J. Liang, "Boundedness and stability for Cohen-Grossberg neural network with time-varying delays," Journal of Mathematical Analysis and Applications , vol. 296, no. 2, pp. 665-685, 2004.
[5] Z. Yuan, L. Huang, D. Hu, B. Liu, "Convergence of nonautonomous Cohen-Grossberg-type neural networks with variable delays," IEEE Transactions on Neural Networks, vol. 19, no. 1, pp. 140-147, 2008.
[6] D. W. Tank, J. J. Hopfield, "Neural computation by concentrating information in time," Proceedings of the National Academy of Sciences of the United States of America , vol. 84, no. 7, pp. 1896-1900, 1987.
[7] Y. Chen, "Global asymptotic stability of delayed Cohen-Grossberg neural networks," IEEE Transactions on Circuits and Systems. I. Regular Papers , vol. 53, no. 2, pp. 351-357, 2006.
[8] Y. Chen, "Global stability of neural networks with distributed delays," Neural Networks, vol. 15, no. 7, pp. 867-871, 2002.
[9] S. Mohamad, K. Gopalsamy, "Dynamics of a class of discrete-time neural networks and their continuous-time counterparts," Mathematics and Computers in Simulation , vol. 53, no. 1-2, pp. 1-39, 2000.
[10] Q. Zhang, X. Wei, J. Xu, "Global exponential stability of Hopfield neural networks with continuously distributed delays," Physics Letters A , vol. 315, no. 6, pp. 431-436, 2003.
[11] S. Mohamad, "Global exponential stability in DCNNs with distributed delays and unbounded activations," Journal of Computational and Applied Mathematics , vol. 205, no. 1, pp. 161-173, 2007.
[12] Q. Zhu, J. Cao, "Robust exponential stability of Markovian jump impulsive stochastic Cohen-Grossberg neural networks with mixed time delays," IEEE Transactions on Neural Networks, vol. 21, no. 8, pp. 1314-1325, 2010.
[13] E. Zhu, H. Zhang, J. Zou, "Asymptotical mean square stability of Cohen-Grossberg neural networks with random delay," Journal of Inequalities and Applications , vol. 2010, 2010.
[14] J. Pan, S. Zhong, "Dynamic analysis of stochastic reaction-diffusion Cohen-Grossberg neural networks with delays," Advances in Difference Equations , vol. 2009, 2009.
[15] S. Haykin, Neural Networks, Prentice-Hall, Upper Saddle River, NJ, USA, 1994.
[16] X. X. Liao, X. Mao, "Exponential stability and instability of stochastic neural networks," Stochastic Analysis and Applications , vol. 14, no. 2, pp. 165-185, 1996.
[17] X. Mao, Stochastic Differential Equations and Applications, 2nd edition, Horwood, Chichester, UK, 2008.
[18] W.-S. Li, Y.-K. Chang, J. J. Nieto, "Solvability of impulsive neutral evolution differential inclusions with state-dependent delay," Mathematical and Computer Modelling , vol. 49, no. 9-10, pp. 1920-1927, 2009.
[19] C. Turchetti, M. Conti, P. Crippa, S. Orcioni, "On the approximation of stochastic processes by approximate identity neural networks," IEEE Transactions on Neural Networks , vol. 9, no. 6, pp. 1069-1085, 1998.
[20] Z. Wang, Y. Liu, M. Li, X. Liu, "Stability analysis for stochastic Cohen-Grossberg neural networks with mixed time delays," IEEE Transactions on Neural Networks, vol. 17, no. 3, pp. 814-820, 2006.
[21] S. Arik, "A note on the global stability of dynamical neural networks," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications , vol. 49, no. 4, pp. 502-504, 2002.
[22] J. Cao, "A set of stability criteria for delayed cellular neural networks," IEEE Transactions on Circuits and Systems. I. Fundamental Theory and Applications , vol. 48, no. 4, pp. 494-498, 2001.
[23] Z. Liu, A. Chen, J. Cao, L. Huang, "Existence and global exponential stability of periodic solution for BAM neural networks with periodic coefficients and time-varying delays," IEEE Transactions on Circuits and Systems. I. Fundamental Theory and Applications , vol. 50, no. 9, pp. 1162-1173, 2003.
[24] J. Principe, J. Kuo, S. Celebi, "An analysis of the gamma memory in dynamic neural networks," IEEE Transactions on Neural Networks, vol. 5, no. 2, pp. 337-361, 1994.
[25] H. Zhao, J. Cao, "New conditions for global exponential stability of cellular neural networks with delays," Neural Networks, vol. 18, no. 10, pp. 1332-1340, 2005.
[26] L. Huang, C. Huang, B. Liu, "Dynamics of a class of cellular neural networks with time-varying delays," Physics Letters A, vol. 345, no. 4-6, pp. 330-344, 2005.
[27] Y. Lv, W. Lv, J. Sun, "Convergence dynamics of stochastic reaction-diffusion recurrent neural networks in continuously distributed delays," Nonlinear Analysis. Real World Applications , vol. 9, no. 4, pp. 1590-1606, 2008.
Copyright © 2010 Chuangxia Huang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
This paper addresses the issue of mean square exponential stability of stochastic Cohen-Grossberg neural networks (SCGNN), whose state variables are described by stochastic nonlinear integro-differential equations. With the help of the Lyapunov functional method, stochastic analysis techniques, and inequality techniques, some novel sufficient conditions for mean square exponential stability of SCGNN are given. Furthermore, we also establish some sufficient conditions for checking exponential stability of Cohen-Grossberg neural networks with unbounded distributed delays.