O. M. Kwon 1 and M. J. Park 1 and Ju H. Park 2 and S. M. Lee 3 and E. J. Cha 4
Academic Editor: He Huang
1 School of Electrical Engineering, Chungbuk National University, 52 Naesudong-Ro, Cheongju 361-763, Republic of Korea
2 Nonlinear Dynamics Group, Department of Electrical Engineering, Yeungnam University, 280 Daehak-Ro, Kyongsan 712-749, Republic of Korea
3 School of Electronics Engineering, Daegu University, Gyungsan 712-714, Republic of Korea
4 Department of Biomedical Engineering, School of Medicine, Chungbuk National University, 52 Naesudong-Ro, Cheongju 361-763, Republic of Korea
Received 14 April 2014; Accepted 29 May 2014; Published 17 June 2014
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
Neural networks are networks of interconnected elements that behave like biological neurons and can be described mathematically by difference or differential equations. For this reason, over the past few decades, neural networks have been applied extensively in many areas such as reconstruction of moving images, signal processing, pattern recognition, associative memories, and fixed-point computations [1-10]. Moreover, stability analysis of the concerned neural networks is an important prerequisite, since the application of neural networks depends heavily on the dynamic behavior of their equilibrium points.
On the other hand, keen attention must be paid to time delays and passivity. It is well known that time delay is a natural concomitant of the finite speed of information processing in the implementation of networks and often causes undesirable dynamic behaviors such as oscillation and instability. In various scientific and engineering problems, stability issues are often linked to the theory of dissipative systems, which postulates that the energy dissipated inside a dynamic system is less than the energy supplied from the external source [11]. Based on this concept of energy, passivity is a property of dynamical systems that describes the energy flow through the system. It is also an input/output characterization and is related to the Lyapunov method. In the field of nonlinear control, the concept of dissipativeness was first introduced by Willems [12] in the form of an inequality involving the supply rate and the storage function. The main idea of passivity theory is that the passive properties of a system can keep the system internally stable. Parametric uncertainties, which sometimes affect the stability of systems, are also undesirable dynamics in the hardware implementation of neural networks, because the connection weights of the neurons depend on the values of certain resistances and capacitances, which are subject to variations and fluctuations [20]. For these reasons, passivity analysis for uncertain neural networks with time delay has been widely investigated in [13-19]. In [16], two types of time-varying delays were considered in the passivity analysis of uncertain neural networks. Recently, in [17], by considering some useful terms that were ignored in the previous literature and utilizing free-weighting matrix techniques, an enlarged feasible region of the passivity criteria was obtained.
In [18], improved conditions for passivity of neural networks were presented by proposing a complete delay-decomposing approach and utilizing a segmentation technique. In the authors' previous work [19], some less conservative conditions for passivity of neural networks were derived by taking more information on the states into account. All of the works [13-19] demonstrate the advantages of their proposed methods by comparing maximum delay bounds with previous results, since the delay bound guaranteeing the passivity of the concerned networks is recognized as one of the most important indices of the conservatism of a criterion. Very recently, one of the most remarkable methods for reducing the conservatism of stability criteria has been the Wirtinger-based integral inequality [21], which effectively reduces Jensen's gap. Therefore, there is room for further improvement in the passivity analysis of neural networks with both time delay and parameter uncertainties.
Motivated by the discussion above, this paper addresses the problem of passivity for uncertain neural networks with time-varying delays. In Theorem 6, by utilizing the Wirtinger-based integral inequality [21], a passivity condition for neural networks with time-varying delays and parameter uncertainties is introduced within the framework of linear matrix inequalities (LMIs). Based on the result of Theorem 6, a newly constructed Lyapunov-Krasovskii functional is introduced and a further improved result is derived in Theorem 7. Inspired by the works [22, 23], the reciprocally convex approach and some zero equalities are utilized in Theorems 6 and 7. Finally, two numerical examples show that Theorems 6 and 7 yield less conservative results.
Notation. Throughout this paper, the notations used are standard. R^n is the n-dimensional Euclidean vector space and R^{m×n} denotes the set of all m × n real matrices. For symmetric matrices X and Y, X > Y means that the matrix X - Y is positive definite, whereas X ≥ Y means that X - Y is nonnegative definite. I_n, 0_n, and 0_{m×n} denote the n × n identity matrix and the n × n and m × n zero matrices, respectively. diag{·} denotes a block diagonal matrix. For a square matrix X, sym{X} means the sum of X and its transpose X^T; that is, sym{X} = X + X^T. X[f(t)] ∈ R^{m×n} means that the elements of the matrix X[f(t)] include the scalar value f(t); that is, X[f_0] = X[f(t) = f_0].
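As a quick illustration of this notation (a sketch only; the matrices below are arbitrary examples, not from the paper), sym{X} and the ordering X > Y can be checked numerically:

```python
import numpy as np

# sym{X} = X + X^T, as defined in the Notation paragraph
def sym(X):
    return X + X.T

# X > Y means X - Y is positive definite; check via eigenvalues
def is_pd(M):
    return bool(np.all(np.linalg.eigvalsh(M) > 0))

X = np.array([[3.0, 1.0], [0.0, 2.0]])
S = sym(X)                      # [[6, 1], [1, 4]], symmetric by construction
Y = np.eye(2)
print(is_pd(S - Y))             # S - I has eigenvalues 5.41, 2.59 > 0, so S > Y
```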
2. Preliminaries and Problem Statement
Consider the following uncertain neural networks with time-varying delays: [figure omitted; refer to PDF] where n denotes the number of neurons in the neural network, x(t) = [x_1(t), x_2(t), ..., x_n(t)]^T ∈ R^n is the neuron state vector, f(x(t)) = [f(x_1(t)), f(x_2(t)), ..., f(x_n(t))]^T ∈ R^n denotes the neuron activation function vector, y(t) ∈ R^n is the output vector, u(t) ∈ R^n is the input vector, A = diag{a_1, ..., a_n} ∈ R^{n×n} is a positive diagonal matrix, W_i ∈ R^{n×n} (i = 0, 1) are the interconnection weight matrices, C_i ∈ R^{n×n} (i = 1, 2) are known constant matrices, and ΔA(t) and ΔW_i(t) (i = 0, 1) are the parameter uncertainties of the form [figure omitted; refer to PDF] where D ∈ R^{n×l}, E_a ∈ R^{l×n}, E_0 ∈ R^{l×n}, and E_1 ∈ R^{l×n} are constant matrices and F(t) ∈ R^{l×l} is a time-varying nonlinear function satisfying [figure omitted; refer to PDF] The delay h(t) is a time-varying function satisfying [figure omitted; refer to PDF] where h_M and h_D are known positive scalars.
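The displayed equations are omitted in this extraction; based on the stated dimensions and the standard norm-bounded uncertainty model used throughout [13-19], they presumably take the form

```latex
% standard norm-bounded uncertainty structure (reconstruction, not the
% paper's own display, which is omitted in this extraction)
\Delta A(t) = D\,F(t)\,E_a, \qquad
\Delta W_i(t) = D\,F(t)\,E_i \quad (i = 0,1), \qquad
F^{T}(t)\,F(t) \le I_l,
% with the time-varying delay satisfying
0 \le h(t) \le h_M, \qquad \dot{h}(t) \le h_D .
```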
It is assumed that the neuron activation functions satisfy the following condition.
Assumption 1.
The neuron activation functions f_i(·) (i = 1, ..., n) are continuous, bounded, and satisfy [figure omitted; refer to PDF] where k_i^+ and k_i^- are constants.
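The omitted display is presumably the standard sector condition on the activations, which in the notation of Assumption 1 reads

```latex
k_i^{-} \;\le\; \frac{f_i(u) - f_i(v)}{u - v} \;\le\; k_i^{+},
\qquad u \ne v, \quad i = 1, \ldots, n .
```

Setting v = 0 (and using f_i(0) = 0, as implied by the text below) recovers the bound k_i^- ≤ f_i(u)/u ≤ k_i^+ of condition (6).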
From (5), if v = 0, then we have [figure omitted; refer to PDF] Also, the conditions (5) and (6) are, respectively, equivalent to [figure omitted; refer to PDF] The system (1) can be rewritten as [figure omitted; refer to PDF]
The objective of this paper is to investigate delay-dependent passivity conditions for system (9). Before deriving our main results, the following definition and lemmas are introduced.
Definition 2.
The system (1) is called passive if there exists a scalar γ ≥ 0 such that [figure omitted; refer to PDF] for all t_p ≥ 0 and for all solutions of (1) with x(0) = 0.
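The displayed inequality is omitted in this extraction; in passivity papers of this line (e.g., [16, 17]) it takes the standard form

```latex
2\int_{0}^{t_p} y^{T}(s)\,u(s)\,ds
\;\ge\; -\,\gamma \int_{0}^{t_p} u^{T}(s)\,u(s)\,ds ,
```

so the energy supplied to the network is bounded from below by a quadratic penalty on the input.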
Lemma 3 (see [21]).
For a given matrix M > 0, the following inequality holds for any continuously differentiable function x : [a, b] → R^n: [figure omitted; refer to PDF] where ξ_1(t) = x(b) - x(a) and ξ_2(t) = x(b) + x(a) - (2/(b - a)) ∫_a^b x(s) ds.
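The omitted inequality is, in the notation of [21], ∫_a^b ẋ^T(s) M ẋ(s) ds ≥ (1/(b-a))[ξ_1^T M ξ_1 + 3 ξ_2^T M ξ_2], which tightens Jensen's bound by the extra 3 ξ_2^T M ξ_2 term. A quick numerical sanity check in the scalar case (M = 1, x(s) = s^3 on [0, 1]; a sketch, not part of the paper):

```python
import numpy as np

def integrate(f, s):
    """Trapezoidal rule on a grid, written out explicitly."""
    return float(np.sum((f[:-1] + f[1:]) / 2 * np.diff(s)))

a, b = 0.0, 1.0
s = np.linspace(a, b, 200001)
x = s**3                 # test function x(s) = s^3
xdot = 3 * s**2          # its derivative

lhs = integrate(xdot**2, s)                          # exact value 9/5
xi1 = x[-1] - x[0]                                   # x(b) - x(a) = 1
xi2 = x[-1] + x[0] - (2/(b - a)) * integrate(x, s)   # = 1 - 2*(1/4) = 1/2
jensen = xi1**2 / (b - a)                            # Jensen's bound: 1
wirtinger = (xi1**2 + 3 * xi2**2) / (b - a)          # tighter bound: 1.75

print(lhs >= wirtinger >= jensen)  # True: Wirtinger sharpens Jensen
```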
Lemma 4 (see [22]).
For any vectors x_1, x_2, constant matrices M, S, and a real scalar 0 ≤ α ≤ 1 satisfying [M, S; S^T, M] ≥ 0, the following inequality holds: [figure omitted; refer to PDF]
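The omitted inequality is the reciprocally convex bound of [22], which in this notation reads

```latex
\frac{1}{\alpha}\, x_1^{T} M x_1 \;+\; \frac{1}{1-\alpha}\, x_2^{T} M x_2
\;\ge\;
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}^{T}
\begin{bmatrix} M & S \\ S^{T} & M \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.
```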
Lemma 5 (see [24]).
Let ζ ∈ R^n, Φ = Φ^T ∈ R^{n×n}, and B ∈ R^{m×n} such that rank(B) < n. Then, the following statements are equivalent:
(i) ζ^T Φ ζ < 0, B ζ = 0, ζ ≠ 0;
(ii) (B^⊥)^T Φ B^⊥ < 0, where B^⊥ is a right orthogonal complement of B;
(iii) there exists X ∈ R^{n×m} such that Φ + X B + (X B)^T < 0.
3. Main Results
In this section, new passivity criteria for the system (9) will be proposed in Theorems 6 and 7.
For the sake of simplicity in the matrix representation, e_i ∈ R^{(13n+l)×n} (i = 1, 2, ..., 13) and e_14 ∈ R^{(13n+l)×l} are defined as block entry matrices. For example, e_5 = [0_{n×4n}, I_n, 0_{n×8n}, 0_{n×l}]^T and e_14 = [0_{l×13n}, I_l]^T. The notations of several matrices are defined as [figure omitted; refer to PDF]
Then, the following theorem is given as the first main result.
Theorem 6.
For given positive scalars h_M, h_D and diagonal matrices K^- = diag{k_1^-, ..., k_n^-} and K^+ = diag{k_1^+, ..., k_n^+}, the system (9) is passive for 0 ≤ h(t) ≤ h_M and ḣ(t) ≤ h_D, if there exist positive scalars ε and γ, positive diagonal matrices Λ = diag{λ_1, ..., λ_n} ∈ R^{n×n}, Δ = diag{δ_1, ..., δ_n} ∈ R^{n×n}, H_i = diag{h_{1i}, ..., h_{ni}} (i = 1, 2, ..., 5), positive definite matrices P ∈ R^{4n×4n}, Q_1 ∈ R^{3n×3n}, Q_2 ∈ R^{2n×2n}, R_i ∈ R^{n×n} (i = 1, 2, 3), R_4 ∈ R^{2n×2n}, any symmetric matrices Z_i ∈ R^{n×n} (i = 1, 2), and any matrices S_1 ∈ R^{2n×2n}, S_2 ∈ R^{n×n} satisfying the following LMIs: [figure omitted; refer to PDF] where the notation 0 in (14)-(16) means a zero matrix of appropriate dimension.
Proof.
Let us consider the following Lyapunov-Krasovskii functional candidate: [figure omitted; refer to PDF] where [figure omitted; refer to PDF] The time derivatives of V_1, V_2, and V_3 can be calculated as [figure omitted; refer to PDF] By the use of Lemma 3, V̇_4 is bounded as [figure omitted; refer to PDF] where [figure omitted; refer to PDF] Furthermore, if Ψ_2 > 0, then applying Lemma 4 to (22) leads to [figure omitted; refer to PDF] where 1/α(t) = h_M/h(t) and S_1 is any 2n × 2n matrix.
By the use of Lemma 3 and Jensen's inequality [25], if Ψ_3 > 0, then V̇_5 can be bounded as [figure omitted; refer to PDF] where S_2 is any n × n matrix.
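Jensen's inequality [25], invoked here and in the estimates below, has the well-known form (for M > 0 and an integrable vector function x):

```latex
(b-a)\int_{a}^{b} x^{T}(s)\,M\,x(s)\,ds
\;\ge\;
\Bigl(\int_{a}^{b} x(s)\,ds\Bigr)^{T} M \Bigl(\int_{a}^{b} x(s)\,ds\Bigr).
```

The Wirtinger-based inequality of Lemma 3 refines this bound by the additional term 3 ξ_2^T M ξ_2.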
An upper bound of V̇_6 with Jensen's inequality [25] can be obtained as [figure omitted; refer to PDF] Before estimating V̇_7, inspired by the work of [23], the following zero equalities with any symmetric matrices Z_1 and Z_2 are considered as a tool for reducing the conservatism of the criterion: [figure omitted; refer to PDF] Adding (27) to V̇_7 yields [figure omitted; refer to PDF] Here, the bound of V̇_7 presented in (28) is valid when Ψ_i > 0 (i = 4, 5) hold.
By utilizing the authors' work [26], from (7), choosing [ u , v ] as [ x ( t ) , x ( t - h ( t ) ) ] and [ x ( t - h ( t ) ) , x ( t - h M ) ] leads to [figure omitted; refer to PDF] where H 1 and H 2 are positive diagonal matrices.
Also, from (8), the following inequality holds [figure omitted; refer to PDF] where H 3 , H 4 , and H 5 are positive diagonal matrices.
Finally, with the relational expression between p(t) and q(t), namely, p^T(t)p(t) ≤ q^T(t)q(t), from the system (9) there exists a scalar ε > 0 satisfying the following inequality: [figure omitted; refer to PDF] From (19)-(31) and by applying the S-procedure [27], an upper bound of V̇ - 2y^T(t)u(t) - γu^T(t)u(t) can be obtained as [figure omitted; refer to PDF] By applying (i) and (iii) of Lemma 5, ζ^T(t)(Ξ[h(t)] + Ω_1 + Ω_2 + Γ + Θ)ζ(t) < 0 with Υζ(t) = 0 is equivalent to [figure omitted; refer to PDF] for any free matrix X with appropriate dimension.
Then, by utilizing (ii) and (iii) of Lemma 5, one can confirm that the inequality (33) is equivalent to [figure omitted; refer to PDF] Therefore, if the LMIs (14), (15), and (16) hold, then (34) holds, which means [figure omitted; refer to PDF] Integrating (35) with respect to t over the time period from 0 to t_p gives [figure omitted; refer to PDF] for x(0) = 0. Since V(x(0)) = 0, the inequality (10) in Definition 2 holds. This implies that the neural network (1) is passive in the sense of Definition 2. This completes the proof.
Next, an improved passivity criterion for the system (9) will be derived in Theorem 7 by utilizing a modified V_3. The notations of several matrices are defined for simplicity: [figure omitted; refer to PDF] The other notations of Theorem 6 will also be used in Theorem 7.
Theorem 7.
For given positive scalars h_M, h_D and diagonal matrices K^- = diag{k_1^-, ..., k_n^-} and K^+ = diag{k_1^+, ..., k_n^+}, the system (9) is passive for 0 ≤ h(t) ≤ h_M and ḣ(t) ≤ h_D, if there exist positive scalars ε and γ, positive diagonal matrices Λ ∈ R^{n×n}, Δ ∈ R^{n×n}, H_i = diag{h_{1i}, ..., h_{ni}} (i = 1, 2, ..., 5), positive definite matrices P ∈ R^{4n×4n}, Q_1 ∈ R^{3n×3n}, Q_2 = [Q_{2,ij}]_{3×3} ∈ R^{3n×3n}, R_i ∈ R^{n×n} (i = 1, 2, 3), R_4 ∈ R^{2n×2n}, any symmetric matrices Z_i ∈ R^{n×n} (i = 1, 2), and any matrices S_1 ∈ R^{2n×2n}, S_2 ∈ R^{n×n} satisfying the LMIs (16) and [figure omitted; refer to PDF] where Ξ̂[h(t)] = Ξ_1[h(t)] + Ξ_2 + Ξ̂_3[h(t)] + Ξ_4 + Ξ_5 + Ξ_6[h(t)] + Ξ_7.
Proof.
By choosing V_3 as [figure omitted; refer to PDF] a new Lyapunov-Krasovskii functional is given by [figure omitted; refer to PDF] Its new upper bound can be calculated as [figure omitted; refer to PDF] where the inequality [figure omitted; refer to PDF] is used in (41). The other terms are handled exactly as in the proof of Theorem 6, so the details are omitted.
Remark 8.
In Theorem 6, Lemma 3 (the Wirtinger-based integral inequality) was applied only to the integral term -h_M ∫_{t-h_M}^{t} ẋ^T(s) R_1 ẋ(s) ds obtained by calculating the time derivative of V_4. The other integral terms, such as -h_M ∫_{t-h_M}^{t} f^T(x(s)) R_2 f(x(s)) ds and -h_M ∫_{t-h_M}^{t} x^T(s) R_3 x(s) ds, were estimated using Jensen's inequality. In the authors' future work, further improved stability or passivity criteria for neural networks with time-varying delays will be proposed by utilizing Lemma 3 in estimating these other integral terms.
Remark 9.
Unlike in Theorem 6, by utilizing V̂_3 as one of the terms of the Lyapunov-Krasovskii functional, some new cross terms such as [figure omitted; refer to PDF] are included, which may reduce the conservatism of the criterion of Theorem 6. In the next section, the effectiveness of the proposed Lyapunov-Krasovskii functional will be shown by comparing the maximum delay bounds which guarantee the passivity of the numerical examples.
Remark 10.
When the information on ḣ(t) is unknown, Theorems 6 and 7 can still provide passivity criteria for the system (9) by choosing Q_2 = 0.
4. Numerical Examples
In this section, two numerical examples are introduced to show the improvements achieved by the proposed theorems. In the examples, MATLAB, YALMIP, and SeDuMi 1.3 are used to solve the LMI problems.
Example 11.
Consider the neural networks (1) where [figure omitted; refer to PDF]
The maximum delay bounds guaranteeing the passivity of the above neural networks for different values of h_D, obtained by Theorems 6 and 7, are listed in Table 1. One can see that, for this example, Theorem 6 gives larger maximum delay bounds than those of [13-15, 19]. This indicates that the presented sufficient conditions reduce the conservatism caused by the time delay and parameter uncertainties. Furthermore, Theorem 7 provides larger delay bounds than Theorem 6. This means that the newly constructed Lyapunov-Krasovskii functional plays an important role in reducing the conservatism of Theorem 6.
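The system matrices of Example 11 are omitted in this extraction, so as an illustration of Definition 2 only, the sketch below simulates a simple scalar delayed network with hypothetical parameters (a, w, h, γ are invented for this sketch) and checks the passivity inequality along one trajectory. This is necessary evidence only, not a proof of passivity:

```python
import math

# Hypothetical scalar delayed network (NOT the matrices of Example 11,
# which are omitted here):  xdot = -a*x + w*tanh(x(t - h)) + u,  y = x.
a, w, h, gamma = 2.0, 0.5, 0.4, 1.0
dt, tp = 1e-3, 10.0
n = int(tp / dt)
d = int(h / dt)

x_hist = [0.0] * (n + 1)           # x(0) = 0, zero initial history
supply = 0.0                       # accumulates 2*y*u + gamma*u^2
for k in range(n):
    t = k * dt
    u = math.sin(t)                                # test input
    x = x_hist[k]
    x_delayed = x_hist[k - d] if k >= d else 0.0
    y = x
    supply += (2 * y * u + gamma * u * u) * dt     # integrand of Definition 2
    # forward-Euler step of the delay differential equation
    x_hist[k + 1] = x + dt * (-a * x + w * math.tanh(x_delayed) + u)

# Definition 2 requires 2*int(y^T u) >= -gamma*int(u^T u), i.e. supply >= 0
print(supply >= 0.0)   # True along this trajectory
```

A single trajectory can only falsify passivity; the LMI conditions of Theorems 6 and 7 certify it for all admissible inputs, delays, and uncertainties at once.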
Table 1: Maximum delay bounds h M with different h D (Example 11).
h D | 0.3 | 0.5 | 0.7 | 0.9 | Unknown |
[13] | 0.4197 | 0.4145 | 0.4117 | 0.4082 | 0.3994 |
[14] | 0.5624 | 0.5580 | 0.5565 | 0.5523 | 0.5420 |
[15] | 0.5763 | 0.5679 | 0.5566 | 0.5273 | 0.5129 |
[19] | 1.1921 | 1.1590 | 1.1297 | 1.1081 | 1.1008 |
Theorem 6 | 2.2044 | 2.1798 | 2.1504 | 2.1262 | 2.1209 |
Theorem 7 | 2.3290 | 2.2442 | 2.1718 | 2.1335 | 2.1209 |
Example 12.
Consider the neural networks (1) where [figure omitted; refer to PDF]
In Table 2, the maximum allowable delay bounds for guaranteeing passivity are compared with those of the existing works. From Table 2, it can be seen that the maximum delay bounds for guaranteeing the passivity of the above neural networks are significantly larger than those of [16-18].
Table 2: Maximum delay bounds h M with different h D (Example 12).
h D | 0.5 | 0.9 | Unknown |
[16] | 0.5227 | 0.4613 | 0.4613 |
[17] | 1.3752 | 1.3027 | 1.3027 |
[18] ( m = 2 ) * | 1.4693 | 1.4243 | 1.4240 |
Theorem 6 | 3.3289 | 3.0700 | 3.0534 |
Theorem 7 | 3.4305 | 3.0770 | 3.0534 |
* m is a delay-partitioning number.
5. Conclusions
In this paper, two passivity criteria for neural networks with time-varying delays and parameter uncertainties have been proposed by the use of the Lyapunov method and the LMI framework. In Theorem 6, by constructing a suitable Lyapunov-Krasovskii functional and utilizing the Wirtinger-based inequality, a sufficient condition for passivity of the concerned networks was derived. Based on the result of Theorem 6, an improved criterion for the networks was proposed in Theorem 7 by introducing a newly augmented Lyapunov-Krasovskii functional. Via two numerical examples that were also treated in previous works, the improvements of the proposed passivity criteria have been verified. Based on the proposed methods, future work will focus on solving various problems such as state estimation [28, 29], passivity analysis for neural networks [30], stabilization for BAM neural networks [31], synchronization for complex networks [32], and stability analysis and filtering for dynamic systems with time delays [33-37]. Moreover, in [38], to reduce the conservatism of sufficient stability conditions, a triple-integral form of the Lyapunov-Krasovskii functional was proposed and its effectiveness was shown. Thus, by grafting such an approach onto the idea proposed in this paper, further improved results will be investigated in the near future.
Acknowledgments
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2008-0062611) and by a grant of the Korea Healthcare Technology R & D Project, Ministry of Health & Welfare, Republic of Korea (A100054).
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
[1] L. O. Chua, L. Yang, "Cellular neural networks: applications," IEEE Transactions on Circuits and Systems , vol. 35, no. 10, pp. 1273-1290, 1988.
[2] A. Cichocki, R. Unbehauen, Neural Networks for Optimization and Signal Processing , John Wiley & Sons, Hoboken, NJ, USA, 1993.
[3] G. Joya, M. A. Atencia, F. Sandoval, "Hopfield neural networks for optimization: study of the different dynamics," Neurocomputing , vol. 43, pp. 219-237, 2002.
[4] W.-J. Li, T. Lee, "Hopfield neural networks for affine invariant matching," IEEE Transactions on Neural Networks , vol. 12, no. 6, pp. 1400-1410, 2001.
[5] F. Beaufays, Y. Abdel-Magid, B. Widrow, "Application of neural networks to load-frequency control in power systems," Neural Networks , vol. 7, no. 1, pp. 183-194, 1994.
[6] M. Galicki, H. Witte, J. Dörschel, M. Eiselt, G. Griessbach, "Common optimization of adaptive preprocessing units and a neural network during the learning period. Application in EEG pattern recognition," Neural Networks , vol. 10, no. 6, pp. 1153-1163, 1997.
[7] M. Zhenjiang, Y. Baozong, "Analysis and optimal design of continuous neural networks with applications to associative memory," Neural Networks , vol. 12, no. 2, pp. 259-271, 1999.
[8] Z. Waszczyszyn, L. Ziemianski, "Neural networks in mechanics of structures and materials--new results and prospects of applications," Computers and Structures , vol. 79, no. 22-25, pp. 2261-2276, 2001.
[9] A. Rawat, R. N. Yadav, S. C. Shrivastava, "Neural network applications in smart antenna arrays: a review," International Journal of Electronics and Communications , vol. 66, no. 11, pp. 903-912, 2012.
[10] O. Faydasicok, S. Arik, "Equilibrium and stability analysis of delayed neural networks under parameter uncertainties," Applied Mathematics and Computation , vol. 218, no. 12, pp. 6716-6726, 2012.
[11] J. H. Park, "Further results on passivity analysis of delayed cellular neural networks," Chaos, Solitons and Fractals , vol. 34, no. 5, pp. 1546-1551, 2007.
[12] J. C. Willems, "Dissipative dynamical systems. I. General theory," Archive for Rational Mechanics and Analysis , vol. 45, pp. 321-351, 1972.
[13] B. Chen, H. Li, C. Lin, Q. Zhou, "Passivity analysis for uncertain neural networks with discrete and distributed time-varying delays," Physics Letters A: General, Atomic and Solid State Physics , vol. 373, no. 14, pp. 1242-1248, 2009.
[14] Y. Chen, W. Li, W. Bi, "Improved results on passivity analysis of uncertain neural networks with time-varying discrete and distributed delays," Neural Processing Letters , vol. 30, no. 2, pp. 155-169, 2009.
[15] J. Fu, H. Zhang, T. Ma, Q. Zhang, "On passivity analysis for stochastic neural networks with interval time-varying delay," Neurocomputing , vol. 73, no. 4-6, pp. 795-801, 2010.
[16] S. Xu, W. X. Zheng, Y. Zou, "Passivity analysis of neural networks with time-varying delays," IEEE Transactions on Circuits and Systems II: Express Briefs , vol. 56, no. 4, pp. 325-329, 2009.
[17] H.-B. Zeng, Y. He, M. Wu, S.-P. Xiao, "Passivity analysis for neural networks with a time-varying delay," Neurocomputing , vol. 74, no. 5, pp. 730-734, 2011.
[18] H. B. Zeng, Y. He, M. Wu, H. Q. Xiao, "Improved conditions for passivity of neural networks with a time-varying delay," IEEE Transactions on Cybernetics , vol. 44, no. 6, pp. 785-792, 2014.
[19] O. M. Kwon, M. J. Park, J. H. Park, S. M. Lee, E. J. Cha, "Passivity analysis of uncertain neural networks with mixed time-varying delays," Nonlinear Dynamics , vol. 73, no. 4, pp. 2175-2189, 2013.
[20] Y. Chen, W. X. Zheng, "Stability analysis of time-delay neural networks subject to stochastic perturbations," IEEE Transactions on Cybernetics , vol. 43, pp. 2122-2133, 2013.
[21] A. Seuret, F. Gouaisbaut, "Wirtinger-based integral inequality: application to time-delay systems," Automatica , vol. 49, no. 9, pp. 2860-2866, 2013.
[22] P. G. Park, J. W. Ko, C. Jeong, "Reciprocally convex approach to stability of systems with time-varying delays," Automatica , vol. 47, no. 1, pp. 235-238, 2011.
[23] S. H. Kim, P. Park, C. K. Jeong, "Robust H ∞ stabilisation of networked control systems with packet analyser," IET Control Theory and Applications , vol. 4, no. 9, pp. 1828-1837, 2010.
[24] M. C. de Oliveira, R. E. Skelton, "Stability tests for constrained linear systems," Perspectives in Robust Control , pp. 241-257, Springer, Berlin, Germany, 2001.
[25] K. Gu, "A further refinement of discretized Lyapunov functional method for the stability of time-delay systems," International Journal of Control , vol. 74, no. 10, pp. 967-976, 2001.
[26] O. M. Kwon, J. H. Park, S. M. Lee, E. J. Cha, "Analysis on delay-dependent stability for neural networks with time-varying delays," Neurocomputing , vol. 103, pp. 114-120, 2013.
[27] S. Boyd, L. El Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory , vol. 15 of SIAM Studies in Applied Mathematics, SIAM, Philadelphia, Pa, USA, 1994.
[28] X. Liu, J. Cao, "Robust state estimation for neural networks with discontinuous activations," IEEE Transactions on Systems, Man, and Cybernetics B, Cybernetics , vol. 40, no. 6, pp. 1425-1437, 2010.
[29] X. Liu, J. Cao, "On periodic solutions of neural networks via differential inclusions," Neural Networks , vol. 22, no. 4, pp. 329-334, 2009.
[30] X. Liu, T. Chen, J. Cao, W. Lu, "Dissipativity and quasi-synchronization for neural networks with discontinuous activations and parameter mismatches," Neural Networks , vol. 24, no. 10, pp. 1013-1021, 2011.
[31] X. Liu, N. Jiang, J. Cao, S. Wang, Z. Wang, "Finite-time stochastic stabilization for BAM neural networks with uncertainties," Journal of the Franklin Institute , vol. 350, no. 8, pp. 2109-2123, 2013.
[32] X. Liu, J. Cao, W. Yu, "Filippov systems and quasi-synchronization control for switched networks," Chaos , vol. 22, no. 3, 2012.
[33] R. Lu, Y. Xu, A. Xue, " H ∞ filtering for singular systems with communication delays," Signal Processing , vol. 90, no. 4, pp. 1240-1248, 2010.
[34] R. Lu, H. Li, Y. Zhu, "Quantized H ∞ filtering for singular time-varying delay systems with unreliable communication channel," Circuits, Systems, and Signal Processing , vol. 31, no. 2, pp. 521-538, 2012.
[35] R. Lu, H. Wu, J. Bai, "New delay-dependent robust stability criteria for uncertain neutral systems with mixed delays," Journal of the Franklin Institute , vol. 351, no. 3, pp. 1386-1399, 2014.
[36] R. Lu, H. Su, J. Chu, A. Xue, "A simple approach to robust d-stability analysis for uncertain singular delay systems," Asian Journal of Control , vol. 11, no. 4, pp. 411-419, 2009.
[37] R. Lu, X. Dai, H. Su, J. Chu, A. Xue, "Delay-dependant robust stability and stabilization conditions for a class of Lur'e singular time-delay systems," Asian Journal of Control , vol. 10, no. 4, pp. 462-469, 2008.
[38] J. Sun, G. P. Liu, J. Chen, "Delay-dependent stability and stabilization of neutral time-delay systems," International Journal of Robust and Nonlinear Control , vol. 19, no. 10, pp. 1364-1375, 2009.
Copyright © 2014 O. M. Kwon et al.
Abstract
The problem of passivity analysis for neural networks with time-varying delays and parameter uncertainties is considered. By the consideration of newly constructed Lyapunov-Krasovskii functionals, improved sufficient conditions to guarantee the passivity of the concerned networks are proposed with the framework of linear matrix inequalities (LMIs), which can be solved easily by various efficient convex optimization algorithms. The enhancement of the feasible region of the proposed criteria is shown via two numerical examples by the comparison of maximum allowable delay bounds.