Rajchakit Advances in Difference Equations 2013, 2013:241. http://www.advancesindifferenceequations.com/content/2013/1/241
R E S E A R C H Open Access
Delay-dependent optimal guaranteed cost control of stochastic neural networks with interval nondifferentiable time-varying delays
Grienggrai Rajchakit*
*Correspondence: [email protected]
Division of Mathematics, Faculty of Science, Maejo University, Chiangmai, 50290, Thailand
Abstract
This paper studies the problem of guaranteed cost control for a class of stochastic delayed neural networks. The time delay is a continuous function belonging to a given interval, but it is not necessarily differentiable. A cost function is considered as a nonlinear performance measure for the closed-loop system. The stabilizing controllers to be designed must satisfy some mean square exponential stability constraints on the closed-loop poles. By constructing a set of augmented Lyapunov-Krasovskii functionals, a guaranteed cost controller is designed via memoryless state feedback control, and new sufficient conditions for the existence of the guaranteed cost state feedback for the system are given in terms of linear matrix inequalities (LMIs). A numerical example is given to illustrate the effectiveness of the obtained result.
Keywords: stochastic neural networks; guaranteed cost control; mean square stabilization; interval time-varying delays; Lyapunov function; linear matrix inequalities
1 Introduction
Stability and control of neural networks with time delay have attracted considerable attention in recent years []. In many practical systems, it is desirable to design neural networks that are not only asymptotically or exponentially stable but can also guarantee an adequate level of system performance. Delayed neural networks have many useful applications in the areas of control, signal processing, pattern recognition, and image processing. Some of these applications require that the equilibrium points of the designed network be stable. In both biological and artificial neural systems, time delays due to integration and communication are ubiquitous and often become a source of instability. The time delays in electronic neural networks are usually time-varying, and sometimes vary violently with respect to time, due to the finite switching speed of amplifiers and faults in the electrical circuitry. A guaranteed cost control problem [] has the advantage of providing an upper bound on a given system performance index; thus, the system performance degradation incurred by the uncertainties or time delays is guaranteed to be less than this bound. The Lyapunov-Krasovskii functional technique has been among the most popular and effective tools in the design of guaranteed cost controls for neural networks with time delay. Nevertheless, despite such diversity
© 2013 Rajchakit; licensee Springer. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
of results available, most existing works assume that the time delays are constant or differentiable []. Although, in some cases, the delay-dependent guaranteed cost control for systems with time-varying delays was considered in [, , ], the approach used there cannot be applied to systems with interval, nondifferentiable time-varying delays. To the best of our knowledge, the guaranteed cost control and state feedback stabilization for stochastic neural networks with interval, nondifferentiable time-varying delays have not been fully studied yet (see, e.g., [] and the references therein); these problems are important in both theory and applications. This motivates our research.
In this paper, we investigate the guaranteed cost control problem for stochastic delayed neural networks. The novel features here are that the delayed neural network under consideration has various globally Lipschitz continuous activation functions, and the time-varying delay function is interval and nondifferentiable. A nonlinear cost function is considered as a performance measure for the closed-loop system. The stabilizing controllers to be designed must satisfy some mean square exponential stability constraints on the closed-loop poles. Based on the construction of a set of augmented Lyapunov-Krasovskii functionals, new delay-dependent criteria for guaranteed cost control via memoryless feedback control are established in terms of LMIs, which allow simultaneous computation of two bounds that characterize the mean square exponential stability rate of the solution and can be easily determined by utilizing MATLAB's LMI Control Toolbox.
The outline of the paper is as follows. Section 2 presents definitions and some well-known technical propositions needed for the proof of the main result. LMI delay-dependent criteria for the guaranteed cost control and a numerical example showing the effectiveness of the result are presented in Section 3. The paper ends with the conclusions and cited references.
2 Preliminaries
The following notation will be used in this paper. $R^+$ denotes the set of all real non-negative numbers; $R^n$ denotes the $n$-dimensional space with the scalar product $\langle x, y\rangle$ or $x^Ty$ of two vectors $x, y$, and the vector norm $\|\cdot\|$; $M^{n\times r}$ denotes the space of all matrices of $(n\times r)$-dimensions. $A^T$ denotes the transpose of matrix $A$; $A$ is symmetric if $A = A^T$; $I$ denotes the identity matrix; $\lambda(A)$ denotes the set of all eigenvalues of $A$; $\lambda_{\max}(A) = \max\{\operatorname{Re}\lambda : \lambda \in \lambda(A)\}$. $x_t := \{x(t+s) : s \in [-h, 0]\}$, $\|x_t\| = \sup_{s\in[-h,0]}\|x(t+s)\|$; $C^1([0, t], R^n)$ denotes the set of all $R^n$-valued continuously differentiable functions on $[0, t]$; $L_2([0, t], R^m)$ denotes the set of all $R^m$-valued square integrable functions on $[0, t]$.
Matrix $A$ is called semi-positive definite ($A \ge 0$) if $\langle Ax, x\rangle \ge 0$ for all $x \in R^n$; $A$ is positive definite ($A > 0$) if $\langle Ax, x\rangle > 0$ for all $x \ne 0$; $A > B$ means $A - B > 0$. The notation $\operatorname{diag}\{\ldots\}$ stands for a block-diagonal matrix. The symmetric term in a matrix is denoted by $*$.
Consider the following stochastic neural networks with interval time-varying delay:

$dx(t) = \bigl[-Ax(t) + W_0 f(x(t)) + W_1 g(x(t-h(t))) + Bu(t)\bigr]\,dt + \sigma\bigl(t, x(t), x(t-h(t))\bigr)\,d\omega(t), \quad t \ge 0,$
$x(t) = \phi(t), \quad t \in [-h_2, 0],$ (2.1)
where $x(t) = [x_1(t), x_2(t), \ldots, x_n(t)]^T \in R^n$ is the state of the neurons, $u(\cdot) \in L_2([0, t], R^m)$ is the control, $n$ is the number of neurons, and

$f(x(t)) = \bigl[f_1(x_1(t)), f_2(x_2(t)), \ldots, f_n(x_n(t))\bigr]^T, \qquad g(x(t)) = \bigl[g_1(x_1(t)), g_2(x_2(t)), \ldots, g_n(x_n(t))\bigr]^T$

are the activation functions; $A = \operatorname{diag}(a_1, a_2, \ldots, a_n)$, $a_i > 0$, represents the self-feedback term; $B \in R^{n\times m}$ is the control input matrix; $W_0$, $W_1$ denote the connection weights and the delayed connection weights, respectively.
$\omega(t)$ is a scalar Wiener process (Brownian motion) on $(\Omega, \mathcal{F}, P)$ with

$E\{\omega(t)\} = 0, \qquad E\{\omega^2(t)\} = t, \qquad E\{\omega(i)\omega(j)\} = 0 \ (i \ne j),$ (2.2)

and $\sigma : R^n \times R^n \times R \to R^n$ is a continuous function assumed to satisfy

$\sigma^T\bigl(t, x(t), x(t-h(t))\bigr)\,\sigma\bigl(t, x(t), x(t-h(t))\bigr) \le \rho_1 x^T(t)x(t) + \rho_2 x^T(t-h(t))x(t-h(t)),$
$x(t), x(t-h(t)) \in R^n,$ (2.3)

where $\rho_1 > 0$ and $\rho_2 > 0$ are known constant scalars. For simplicity, we denote $\sigma(t, x(t), x(t-h(t)))$ by $\sigma$.
The time-varying delay function $h(t)$ satisfies the condition

$0 \le h_1 \le h(t) \le h_2.$
The initial functions $\phi(t) \in C^1([-h_2, 0], R^n)$, with the norm

$\|\phi\| = \sup_{t\in[-h_2, 0]} \sqrt{\|\phi(t)\|^2 + \|\dot{\phi}(t)\|^2}.$
In this paper, we consider various activation functions and assume that the activation functions $f(\cdot)$, $g(\cdot)$ are Lipschitzian with the Lipschitz constants $f_i, e_i > 0$:

$|f_i(\xi_1) - f_i(\xi_2)| \le f_i|\xi_1 - \xi_2|, \qquad |g_i(\xi_1) - g_i(\xi_2)| \le e_i|\xi_1 - \xi_2|,$
$i = 1, 2, \ldots, n, \ \forall \xi_1, \xi_2 \in R.$ (2.4)
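The Lipschitz condition above holds for the standard sigmoidal activations; for instance, $\tanh$ has derivative $1 - \tanh^2(x)$, bounded by 1, and is therefore Lipschitz with constant 1. A minimal stdlib-Python sanity check (the sampling grid and tolerance are illustrative choices, not part of the paper):

```python
import math

def lipschitz_holds(f, L, points, tol=1e-12):
    """Check |f(a) - f(b)| <= L*|a - b| on every sampled pair."""
    return all(abs(f(a) - f(b)) <= L * abs(a - b) + tol
               for a in points for b in points)

# tanh has derivative 1 - tanh(x)^2, bounded by 1, so it satisfies
# the Lipschitz condition with constant f_i = 1.
grid = [x / 10.0 for x in range(-50, 51)]
print(lipschitz_holds(math.tanh, 1.0, grid))   # prints True
```

A finite sample cannot prove the bound, but a single violating pair would disprove it, which makes this a cheap smoke test when experimenting with other activation choices.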
The performance index associated with the system (2.1) is the following function:

$J = \int_0^\infty f^0\bigl(t, x(t), x(t-h(t)), u(t)\bigr)\,dt,$ (2.5)

where $f^0(t, x(t), x(t-h(t)), u(t)) : R^+ \times R^n \times R^n \times R^m \to R^+$ is a nonlinear cost function satisfying

$\exists Q_1, Q_2, R :\ f^0(t, x, y, u) \le \langle Q_1x, x\rangle + \langle Q_2y, y\rangle + \langle Ru, u\rangle$ (2.6)

for all $(t, x, y, u) \in R^+ \times R^n \times R^n \times R^m$, where $Q_1, Q_2 \in R^{n\times n}$ and $R \in R^{m\times m}$ are given symmetric positive definite matrices. The objective of this paper is to design a memoryless state
feedback controller $u(t) = Kx(t)$ for the system (2.1) and the cost function (2.5) such that the resulting closed-loop system

$dx(t) = \bigl[-(A - BK)x(t) + W_0 f(x(t)) + W_1 g(x(t-h(t)))\bigr]\,dt + \sigma\bigl(t, x(t), x(t-h(t))\bigr)\,d\omega(t)$ (2.7)

is mean square exponentially stable, and the closed-loop value of the cost function (2.5) is minimized.
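To build intuition for mean square exponential stability of a closed-loop system of this form, one can run an Euler-Maruyama Monte-Carlo simulation of a scalar toy instance. All numerical data below (the coefficients a, b, w1, the noise gain c, the delay h, and the gains k) are illustrative assumptions, not the paper's system:

```python
import math, random

def simulate_ms(k, trials=200, T=10.0, dt=0.01, h=0.5, seed=1):
    """Monte-Carlo estimate of E{x(T)^2} for a scalar toy analogue of the
    closed-loop system: dx = [-(a - b*k)x + w1*tanh(x(t-h))]dt + c*x*dw.
    All numerical data are illustrative assumptions, not the paper's example."""
    a, b, w1, c = 2.0, 1.0, 0.5, 0.3
    lag = int(round(h / dt))              # delay measured in time steps
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        hist = [1.0] * (lag + 1)          # constant initial function phi = 1
        for _ in range(int(T / dt)):
            x, xd = hist[-1], hist[0]     # current and delayed state x(t-h)
            dw = rng.gauss(0.0, math.sqrt(dt))
            x_next = x + (-(a - b * k) * x + w1 * math.tanh(xd)) * dt + c * x * dw
            hist.append(x_next)
            hist.pop(0)                   # keep a fixed-length delay buffer
        total += hist[-1] ** 2
    return total / trials

# A strongly damping gain keeps the mean square state near zero, while a
# destabilizing gain lets it grow.
print(simulate_ms(k=-1.0) < simulate_ms(k=3.0))   # prints True
```

The fixed-length history list plays the role of the initial function segment $\phi$ on $[-h, 0]$; such a simulation illustrates but of course does not replace the LMI certificate constructed below.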
Definition 2.1 Given $\alpha > 0$. The zero solution of the closed-loop system (2.7) is $\alpha$-stabilizable in the mean square if there exists a positive number $N > 0$ such that every solution $x(t, \phi)$ satisfies the following condition:

$E\{\|x(t, \phi)\|\} \le E\{N e^{-\alpha t}\|\phi\|\}, \quad \forall t \ge 0.$
Definition 2.2 Consider the control system (2.1). If there exist a memoryless state feedback control law $u^*(t) = Kx(t)$ and a positive number $J^*$ such that the zero solution of the closed-loop system (2.7) is mean square exponentially stable and the cost function (2.5) satisfies $J \le J^*$, then the value $J^*$ is a guaranteed cost and $u^*(t)$ is a guaranteed cost control law of the system and its corresponding cost function.
We introduce the following well-known technical proposition, which will be used in the proof of our results.
Proposition 2.1 (Integral matrix inequality []) For any symmetric positive definite matrix $M > 0$, scalar $\gamma > 0$ and vector function $\omega : [0, \gamma] \to R^n$ such that the integrations concerned are well defined, the following inequality holds:

$\Bigl(\int_0^\gamma \omega(s)\,ds\Bigr)^T M \Bigl(\int_0^\gamma \omega(s)\,ds\Bigr) \le \gamma \int_0^\gamma \omega^T(s) M \omega(s)\,ds.$
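In the scalar case with $M = 1$, Proposition 2.1 reduces to $(\int_0^\gamma \omega(s)\,ds)^2 \le \gamma\int_0^\gamma \omega^2(s)\,ds$, a Cauchy-Schwarz bound. A quick stdlib-Python check with Riemann sums (the test function here is an arbitrary illustrative choice):

```python
import math

def riemann(f, a, b, n=10000):
    """Left Riemann sum approximating the integral of f over [a, b]."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

gamma = 2.0
omega = lambda s: math.sin(3.0 * s) + 0.5 * s   # arbitrary test function

lhs = riemann(omega, 0.0, gamma) ** 2                  # (integral of omega)^2
rhs = gamma * riemann(lambda s: omega(s) ** 2, 0.0, gamma)
print(lhs <= rhs)   # prints True
```

Equality holds exactly when $\omega$ is constant, which is also easy to confirm numerically with this helper.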
3 Design of guaranteed cost controller
In this section, we give a design of memoryless guaranteed cost feedback control for the stochastic neural networks (2.1). Let us set

$W_{11} = -AP - PA^T + 2\alpha P - BB^T + 0.25BRB^T + \sum_{i=1}^2 G_i - \sum_{i=1}^2 e^{-2\alpha h_i}H_i + PFD_1FP + PQ_1P + \rho_1 I,$
$W_{12} = -P + AP + 0.5BB^T,$
$W_{13} = e^{-2\alpha h_1}H_1 + 0.5BB^T + AP,$
$W_{14} = e^{-2\alpha h_2}H_2 + 0.5BB^T + AP,$
$W_{15} = -P + 0.5BB^T + AP,$
$W_{22} = W_0D_1^{-1}W_0^T + W_1D_2^{-1}W_1^T + \sum_{i=1}^2 h_i^2H_i + (h_2 - h_1)^2U - 2P - BB^T,$
$W_{23} = -P, \qquad W_{24} = -P, \qquad W_{25} = -P,$
$W_{33} = -e^{-2\alpha h_1}G_1 - e^{-2\alpha h_1}H_1 - e^{-2\alpha h_2}U + W_0D_1^{-1}W_0^T + W_1D_2^{-1}W_1^T,$
$W_{34} = 0, \qquad W_{35} = e^{-2\alpha h_2}U,$
$W_{44} = W_0D_1^{-1}W_0^T + W_1D_2^{-1}W_1^T - e^{-2\alpha h_2}U - e^{-2\alpha h_2}G_2 - e^{-2\alpha h_2}H_2,$
$W_{45} = e^{-2\alpha h_2}U,$
$W_{55} = -2e^{-2\alpha h_2}U + W_0D_1^{-1}W_0^T + W_1D_2^{-1}W_1^T + PED_2EP + PQ_2P + \rho_2 I,$
$E = \operatorname{diag}\{e_i, i = 1, \ldots, n\}, \qquad F = \operatorname{diag}\{f_i, i = 1, \ldots, n\}, \qquad \lambda_1 = \lambda_{\min}(P^{-1}),$
$\lambda_2 = \lambda_{\max}(P^{-1}) + h_2\sum_{i=1}^2 \lambda_{\max}(P^{-1}G_iP^{-1}) + h_2^2\sum_{i=1}^2 \lambda_{\max}(P^{-1}H_iP^{-1}) + (h_2 - h_1)^2\lambda_{\max}(P^{-1}UP^{-1}).$
Theorem 3.1 Consider the control system (2.1) and the cost function (2.5). Given $\alpha > 0$. If there exist symmetric positive definite matrices $P$, $U$, $G_1$, $G_2$, $H_1$, $H_2$, and diagonal positive definite matrices $D_i$, $i = 1, 2$, satisfying the following LMI:

$W = \begin{pmatrix} W_{11} & W_{12} & W_{13} & W_{14} & W_{15}\\ * & W_{22} & W_{23} & W_{24} & W_{25}\\ * & * & W_{33} & W_{34} & W_{35}\\ * & * & * & W_{44} & W_{45}\\ * & * & * & * & W_{55} \end{pmatrix} < 0,$ (3.1)

then

$u(t) = -\tfrac{1}{2}B^TP^{-1}x(t), \quad t \ge 0,$ (3.2)

is a guaranteed cost control, and the guaranteed cost value is given by

$J^* = \lambda_2 E\{\|\phi\|^2\}.$

Moreover, the solution $x(t, \phi)$ of the system satisfies

$E\{\|x(t, \phi)\|\} \le \sqrt{\lambda_2/\lambda_1}\, E\{\|\phi\|\} e^{-\alpha t}, \quad t \ge 0.$
Proof Let $Y = P^{-1}$, $y(t) = Yx(t)$. Using the feedback control (3.2), we consider the following Lyapunov-Krasovskii functional, taking the mathematical expectation:

$E\{V(t, x_t)\} = E\{\sum_{i=1}^6 V_i(t, x_t)\},$
$E\{V_1\} = E\{x^T(t)Yx(t)\},$
$E\{V_2\} = E\{\int_{t-h_1}^t e^{2\alpha(s-t)}x^T(s)YG_1Yx(s)\,ds\},$
$E\{V_3\} = E\{\int_{t-h_2}^t e^{2\alpha(s-t)}x^T(s)YG_2Yx(s)\,ds\},$
$E\{V_4\} = E\{h_1\int_{-h_1}^0\int_{t+s}^t e^{2\alpha(\tau-t)}\dot{x}^T(\tau)YH_1Y\dot{x}(\tau)\,d\tau\,ds\},$
$E\{V_5\} = E\{h_2\int_{-h_2}^0\int_{t+s}^t e^{2\alpha(\tau-t)}\dot{x}^T(\tau)YH_2Y\dot{x}(\tau)\,d\tau\,ds\},$
$E\{V_6\} = E\{(h_2-h_1)\int_{-h_2}^{-h_1}\int_{t+s}^t e^{2\alpha(\tau-t)}\dot{x}^T(\tau)YUY\dot{x}(\tau)\,d\tau\,ds\}.$

It is easy to check that

$E\{\lambda_1\|x(t)\|^2\} \le E\{V(t, x_t)\} \le E\{\lambda_2\|x_t\|^2\}, \quad t \ge 0.$ (3.3)
Taking the derivative of $V_i$, $i = 1, 2, \ldots, 6$, and taking the mathematical expectation, we have

$E\{\dot{V}_1\} = E\{2x^T(t)Y\dot{x}(t)\} = E\{y^T(t)[-PA^T - AP]y(t) - y^T(t)BB^Ty(t)\} + E\{2y^T(t)W_0f(\cdot) + 2y^T(t)W_1g(\cdot) + 2y^T(t)\sigma(t)\};$
$E\{\dot{V}_2\} = E\{y^T(t)G_1y(t) - e^{-2\alpha h_1}y^T(t-h_1)G_1y(t-h_1)\} - 2\alpha E\{V_2\};$
$E\{\dot{V}_3\} = E\{y^T(t)G_2y(t) - e^{-2\alpha h_2}y^T(t-h_2)G_2y(t-h_2)\} - 2\alpha E\{V_3\};$
$E\{\dot{V}_4\} = E\{h_1^2\dot{y}^T(t)H_1\dot{y}(t) - h_1e^{-2\alpha h_1}\int_{t-h_1}^t \dot{y}^T(s)H_1\dot{y}(s)\,ds\} - 2\alpha E\{V_4\};$
$E\{\dot{V}_5\} = E\{h_2^2\dot{y}^T(t)H_2\dot{y}(t) - h_2e^{-2\alpha h_2}\int_{t-h_2}^t \dot{y}^T(s)H_2\dot{y}(s)\,ds\} - 2\alpha E\{V_5\};$
$E\{\dot{V}_6\} = E\{(h_2-h_1)^2\dot{y}^T(t)U\dot{y}(t) - (h_2-h_1)e^{-2\alpha h_2}\int_{t-h_2}^{t-h_1}\dot{y}^T(s)U\dot{y}(s)\,ds\} - 2\alpha E\{V_6\}.$
Applying Proposition 2.1 and the Leibniz-Newton formula $\int_s^t \dot{y}(\tau)\,d\tau = y(t) - y(s)$, we have for $i = 1, 2$:

$E\{h_i\int_{t-h_i}^t \dot{y}^T(s)H_i\dot{y}(s)\,ds\} \ge E\{\Bigl(\int_{t-h_i}^t \dot{y}(s)\,ds\Bigr)^T H_i \Bigl(\int_{t-h_i}^t \dot{y}(s)\,ds\Bigr)\}$
$= E\{[y(t) - y(t-h_i)]^T H_i [y(t) - y(t-h_i)]\}$
$= E\{y^T(t)H_iy(t) - 2y^T(t)H_iy(t-h_i) + y^T(t-h_i)H_iy(t-h_i)\}.$ (3.4)
Note that

$E\{\int_{t-h_2}^{t-h_1}\dot{y}^T(s)U\dot{y}(s)\,ds\} = E\{\int_{t-h_2}^{t-h(t)}\dot{y}^T(s)U\dot{y}(s)\,ds\} + E\{\int_{t-h(t)}^{t-h_1}\dot{y}^T(s)U\dot{y}(s)\,ds\}.$

Applying Proposition 2.1 gives

$E\{[h_2 - h(t)]\int_{t-h_2}^{t-h(t)}\dot{y}^T(s)U\dot{y}(s)\,ds\} \ge E\{\Bigl(\int_{t-h_2}^{t-h(t)}\dot{y}(s)\,ds\Bigr)^T U \Bigl(\int_{t-h_2}^{t-h(t)}\dot{y}(s)\,ds\Bigr)\}$
$= E\{[y(t-h(t)) - y(t-h_2)]^T U [y(t-h(t)) - y(t-h_2)]\}.$

Since $h_2 - h(t) \le h_2 - h_1$, we have

$E\{[h_2 - h_1]\int_{t-h_2}^{t-h(t)}\dot{y}^T(s)U\dot{y}(s)\,ds\} \ge E\{[y(t-h(t)) - y(t-h_2)]^T U [y(t-h(t)) - y(t-h_2)]\},$

then

$E\{-[h_2 - h_1]\int_{t-h_2}^{t-h(t)}\dot{y}^T(s)U\dot{y}(s)\,ds\} \le -E\{[y(t-h(t)) - y(t-h_2)]^T U [y(t-h(t)) - y(t-h_2)]\}.$

Similarly, we have

$E\{-(h_2 - h_1)\int_{t-h(t)}^{t-h_1}\dot{y}^T(s)U\dot{y}(s)\,ds\} \le -E\{[y(t-h_1) - y(t-h(t))]^T U [y(t-h_1) - y(t-h(t))]\}.$
Then, we have

$E\{\dot{V}(\cdot) + 2\alpha V(\cdot)\} \le E\{y^T(t)[-PA^T - AP]y(t) - y^T(t)BB^Ty(t) + 2y^T(t)W_0f(\cdot)\}$
$\quad + E\{2y^T(t)W_1g(\cdot) + 2y^T(t)\sigma(t) + y^T(t)\Bigl[\sum_{i=1}^2 G_i\Bigr]y(t) + 2\alpha\langle Py(t), y(t)\rangle\}$
$\quad + E\{\dot{y}^T(t)\Bigl[\sum_{i=1}^2 h_i^2H_i\Bigr]\dot{y}(t) + (h_2 - h_1)^2\dot{y}^T(t)U\dot{y}(t)\}$
$\quad - E\{\sum_{i=1}^2 e^{-2\alpha h_i}y^T(t-h_i)G_iy(t-h_i)\}$
$\quad - E\{e^{-2\alpha h_1}[y(t) - y(t-h_1)]^T H_1 [y(t) - y(t-h_1)]\}$
$\quad - E\{e^{-2\alpha h_2}[y(t) - y(t-h_2)]^T H_2 [y(t) - y(t-h_2)]\}$
$\quad - E\{e^{-2\alpha h_2}[y(t-h(t)) - y(t-h_2)]^T U [y(t-h(t)) - y(t-h_2)]\}$
$\quad - E\{e^{-2\alpha h_2}[y(t-h_1) - y(t-h(t))]^T U [y(t-h_1) - y(t-h(t))]\}.$ (3.5)
Using equation (2.7) with $x(t) = Py(t)$, we have the identity

$P\dot{y}(t) + APy(t) - W_0f(\cdot) - W_1g(\cdot) + 0.5BB^Ty(t) - \sigma(t) = 0.$

Multiplying both sides by $2[y(t), \dot{y}(t), y(t-h_1), y(t-h_2), y(t-h(t)), \sigma(t)]^T$ and taking the mathematical expectation, we obtain six zero-valued identities; in particular,

$E\{2y^T(t)P\dot{y}(t) + 2y^T(t)APy(t) - 2y^T(t)W_0f(\cdot) - 2y^T(t)W_1g(\cdot)\} + E\{y^T(t)BB^Ty(t) - 2y^T(t)\sigma(t)\} = 0,$
$E\{-2\dot{y}^T(t)P\dot{y}(t) - 2\dot{y}^T(t)APy(t) + 2\dot{y}^T(t)W_0f(\cdot)\} + E\{2\dot{y}^T(t)W_1g(\cdot) - \dot{y}^T(t)BB^Ty(t) + 2\dot{y}^T(t)\sigma(t)\} = 0,$ (3.6)

and the remaining identities, with the multipliers $2y^T(t-h_1)$, $2y^T(t-h_2)$, $2y^T(t-h(t))$ and $2\sigma^T(t)$, are obtained in the same way.
Adding all the zero items of (3.6) and $f^0(t, x(t), x(t-h(t)), u(t)) - f^0(t, x(t), x(t-h(t)), u(t)) = 0$ into (3.5), applying assumptions (2.3), (2.4), using condition (2.6) for the following estimations, and taking the mathematical expectation:

$E\{f^0(t, x(t), x(t-h(t)), u(t))\} \le E\{\langle Q_1x(t), x(t)\rangle + \langle Q_2x(t-h(t)), x(t-h(t))\rangle\} + E\{\langle Ru(t), u(t)\rangle\}$
$= E\{\langle PQ_1Py(t), y(t)\rangle + \langle PQ_2Py(t-h(t)), y(t-h(t))\rangle\} + E\{0.25\langle BRB^Ty(t), y(t)\rangle\},$
$E\{2\langle W_0f(x), y\rangle\} \le E\{\langle W_0D_1^{-1}W_0^Ty, y\rangle + \langle D_1f(x), f(x)\rangle\},$
$E\{2\langle W_1g(z), y\rangle\} \le E\{\langle W_1D_2^{-1}W_1^Ty, y\rangle + \langle D_2g(z), g(z)\rangle\},$
$E\{\langle D_1f(x), f(x)\rangle\} \le E\{\langle FD_1Fx, x\rangle\},$
$E\{\langle D_2g(z), g(z)\rangle\} \le E\{\langle ED_2Ez, z\rangle\},$

where $z = x(t-h(t))$,
we obtain

$E\{\dot{V}(\cdot) + 2\alpha V(\cdot)\} \le E\{\zeta^T(t)W\zeta(t) - f^0(t, x(t), x(t-h(t)), u(t))\},$ (3.7)

where $\zeta(t) = [y(t), \dot{y}(t), y(t-h_1), y(t-h_2), y(t-h(t))]$ and $W$ is the matrix defined in the LMI (3.1). Therefore, by condition (3.1), we obtain from (3.7) that

$E\{\dot{V}(t, x_t)\} \le -2\alpha E\{V(t, x_t)\}, \quad t \ge 0.$ (3.8)
Integrating both sides of (3.8) from $0$ to $t$, we obtain

$E\{V(t, x_t)\} \le E\{V(0, x_0)\}e^{-2\alpha t}, \quad t \ge 0.$

Furthermore, taking condition (3.3) into account, we have

$\lambda_1 E\{\|x(t, \phi)\|^2\} \le E\{V(t, x_t)\} \le E\{V(0, x_0)\}e^{-2\alpha t} \le \lambda_2 e^{-2\alpha t}E\{\|\phi\|^2\},$

then

$E\{\|x(t, \phi)\|\} \le \sqrt{\lambda_2/\lambda_1}\,E\{\|\phi\|\}e^{-\alpha t}, \quad t \ge 0,$
which concludes the mean square exponential stability of the closed-loop system (2.7). To prove the optimal level of the cost function (2.5), we derive from (3.7) and (3.1) that

$E\{\dot{V}(t, z_t)\} \le -E\{f^0(t, x(t), x(t-h(t)), u(t))\}, \quad t \ge 0.$ (3.9)

Integration of both sides of (3.9) from $0$ to $t$ leads to

$E\{\int_0^t f^0(t, x(t), x(t-h(t)), u(t))\,dt\} \le E\{V(0, z_0) - V(t, z_t)\} \le E\{V(0, z_0)\},$

due to $E\{V(t, z_t)\} \ge 0$. Hence, letting $t \to +\infty$, we have

$J = E\{\int_0^\infty f^0(t, x(t), x(t-h(t)), u(t))\,dt\} \le E\{V(0, z_0)\} \le \lambda_2 E\{\|\phi\|^2\} = J^*.$
This completes the proof of the theorem.
Example 3.1 Consider the stochastic neural networks with interval time-varying delays (2.1), where

$A = [.\ .], \quad W_0 = [.\ .\,;\ .\ .], \quad W_1 = [.\ .\,;\ .\ .], \quad B = [.\ .], \quad E = [.\ .], \quad F = [.\ .],$
$Q_1 = [.\ .\,;\ .\ .], \quad Q_2 = [.\ .\,;\ .\ .], \quad R = [.\ .\,;\ .\ .],$

and the delay function is

$h(t) = . + .\sin^2 t$ if $t \in I = \bigcup_{k\ge 0}[2k\pi, (2k+1)\pi]$, $\quad h(t) = 0$ if $t \in R^+ \setminus I$.

Note that $h(t)$ is nondifferentiable; therefore, the stability criteria proposed in [, , ] are not applicable to this system. Given $\alpha = .$, $\rho_1 = .$, $\rho_2 = .$, $h_1 = .$, $h_2 = .$, by using the MATLAB LMI toolbox, we can solve for $P$, $U$, $G_1$, $G_2$, $H_1$, $H_2$, $D_1$, and $D_2$, which satisfy condition (3.1) in Theorem 3.1. A set of solutions is
$P = [.\ .\,;\ .\ .], \quad U = [.\ .\,;\ .\ .], \quad G_1 = [.\ .\,;\ .\ .], \quad G_2 = [.\ .\,;\ .\ .],$
$H_1 = [.\ .\,;\ .\ .], \quad H_2 = [.\ .\,;\ .\ .], \quad D_1 = \operatorname{diag}(., .), \quad D_2 = \operatorname{diag}(., .).$
Then

$u(t) = -\tfrac{1}{2}B^TP^{-1}x(t), \quad t \ge 0,$

is a guaranteed cost control law, and the guaranteed cost value is

$J^* = \lambda_2 E\{\|\phi\|^2\}.$

Moreover, the solution $x(t, \phi)$ of the system satisfies

$E\{\|x(t, \phi)\|\} \le \sqrt{\lambda_2/\lambda_1}\,E\{\|\phi\|\}e^{-\alpha t}, \quad t \ge 0.$
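The feasibility that the MATLAB LMI toolbox certifies can be spot-checked by hand in low dimension: a symmetric $2\times 2$ matrix is negative definite iff its trace is negative and its determinant is positive. The stdlib-Python sketch below checks the classical deterministic Lyapunov inequality $A_{cl}^TP + PA_{cl} < 0$ for an illustrative closed loop $A_{cl} = -(A - BK)$; the matrices are hypothetical stand-ins, not the example's data:

```python
def lyapunov_certificate(A_cl, P):
    """Check the Lyapunov inequality A_cl^T P + P A_cl < 0 for 2x2 matrices,
    using the fact that a symmetric 2x2 matrix is negative definite iff
    its trace is negative and its determinant is positive."""
    AtP = [[sum(A_cl[k][i] * P[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]                        # A_cl^T P
    PA = [[sum(P[i][k] * A_cl[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]                         # P A_cl
    M = [[AtP[i][j] + PA[i][j] for j in range(2)] for i in range(2)]
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return tr < 0 and det > 0

# Hypothetical closed loop A_cl = -(A - B*K) with A = diag(2, 3),
# B = [1, 1]^T, K = [0.5, 0.5]; P = I is tried as a certificate.
A_cl = [[-1.5, 0.5], [0.5, -2.5]]
P = [[1.0, 0.0], [0.0, 1.0]]
print(lyapunov_certificate(A_cl, P))   # prints True
```

This only verifies a mean-value Lyapunov condition for a fixed candidate $P$; the full stochastic, delay-dependent LMI (3.1) of Theorem 3.1 still needs a semidefinite solver such as the MATLAB LMI toolbox.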
4 Conclusion
In this paper, the problem of guaranteed cost control for stochastic neural networks with interval nondifferentiable time-varying delay has been studied. A nonlinear quadratic cost function is considered as a performance measure for the closed-loop system. The stabilizing controllers to be designed must satisfy some mean square exponential stability
constraints on the closed-loop poles. By constructing a set of time-varying Lyapunov-Krasovskii functionals, a memoryless state feedback guaranteed cost controller design has been presented, and sufficient conditions for the existence of the guaranteed cost state feedback for the system have been derived in terms of LMIs.
Competing interests
The author declares that they have no competing interests.
Acknowledgements
This work was supported by the Thailand Research Fund Grant, the Higher Education Commission and the Faculty of Science, Maejo University, Thailand. The author thanks the anonymous reviewers for valuable comments and suggestions, which helped to improve the paper.
Received: 6 February 2013 Accepted: 24 July 2013 Published: 8 August 2013
References
1. Hopfield, JJ: Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 79, 2554-2558 (1982)
2. Kevin, G: An Introduction to Neural Networks. CRC Press, Boca Raton (1997)
3. Wu, M, He, Y, She, JH: Stability Analysis and Robust Control of Time-Delay Systems. Springer, Berlin (2010)
4. Arik, S: An improved global stability result for delayed cellular neural networks. IEEE Trans. Circuits Syst. 49, 1211-1218 (2002)
5. He, Y, Wang, QG, Wu, M: LMI-based stability criteria for neural networks with multiple time-varying delays. Physica D 112, 126-131 (2005)
6. Kwon, OM, Park, JH: Exponential stability analysis for uncertain neural networks with interval time-varying delays. Appl. Math. Comput. 212, 530-541 (2009)
7. Phat, VN, Trinh, H: Exponential stabilization of neural networks with various activation functions and mixed time-varying delays. IEEE Trans. Neural Netw. 21, 1180-1185 (2010)
8. Botmart, T, Niamsup, P: Robust exponential stability and stabilizability of linear parameter dependent systems with delays. Appl. Math. Comput. 217, 2551-2566 (2010)
9. Chen, WH, Guan, ZH, Lua, X: Delay-dependent output feedback guaranteed cost control for uncertain time-delay systems. Automatica 40, 1263-1268 (2004)
10. Palarkci, MN: Robust delay-dependent guaranteed cost controller design for uncertain neutral systems. Appl. Math. Comput. 215, 2939-2946 (2009)
11. Park, JH, Kwon, OM: On guaranteed cost control of neutral systems by retarded integral state feedback. Appl. Math. Comput. 165, 393-404 (2005)
12. Park, JH, Choi, K: Guaranteed cost control of nonlinear neutral systems via memory state feedback. Chaos Solitons Fractals 24, 183-190 (2005)
13. Fridman, E, Orlov, Y: Exponential stability of linear distributed parameter systems with time-varying delays. Automatica 45, 194-201 (2009)
14. Xu, S, Lam, J: A survey of linear matrix inequality techniques in stability analysis of delay systems. Int. J. Syst. Sci. 39(12), 1095-1113 (2008)
15. Xie, JS, Fan, BQ, Young, SL, Yang, J: Guaranteed cost controller design of networked control systems with state delay. Acta Autom. Sin. 33, 170-174 (2007)
16. Yu, L, Gao, F: Optimal guaranteed cost control of discrete-time uncertain systems with both state and input delays. J. Franklin Inst. 338, 101-110 (2001)
17. Gu, K, Kharitonov, V, Chen, J: Stability of Time-Delay Systems. Birkhäuser, Berlin (2003)

doi:10.1186/1687-1847-2013-241
Cite this article as: Rajchakit: Delay-dependent optimal guaranteed cost control of stochastic neural networks with interval nondifferentiable time-varying delays. Advances in Difference Equations 2013, 2013:241.