Chih-Hong Kao 1 and Chun-Fei Hsu 2 and Chih-Hu Wang 2 and Hon-Son Don 1
Recommended by E. E. N. Macau
1, Department of Electrical Engineering, National Chung-Hsing University, Taichung 402, Taiwan
2, Department of Electrical Engineering, Chung Hua University, Hsinchu 300, Taiwan
Received 22 July 2010; Accepted 20 January 2011
1. Introduction
Radial basis function (RBF) networks are characterized by a simple structure, rapid computation, and superior adaptive performance [1]. There has been considerable interest in applying RBF networks to deal with nonlinearity and uncertainty in control systems [2-5]. One main advantage of these RBF-based adaptive neural controllers is that the online parameter adaptive laws are derived without any requirement for offline training. Although favorable control performance is achieved in [2-5], the structure of the RBF network must be determined by a trial-and-error tuning procedure, and it is difficult to balance the number of hidden neurons against the desired performance. To solve this problem, dynamic RBF (DRBF) networks were proposed for structural adaptation of the RBF network [6-9]. However, some of these structural learning algorithms are complex, and others cannot prevent the RBF network structure from growing unboundedly.
Another drawback of RBF-based adaptive neural controllers is the difficulty of choosing the learning rates of the parameter adaptive laws. With small learning rates, convergence of the tracking error is easily guaranteed, but the convergence speed is slow. With large learning rates, the parameter adaptive laws may render the system unstable. To address this problem, variable learning rates were studied in [10-13]. A discrete-type Lyapunov function was utilized to determine the optimal learning rates in [10, 11]; however, the Jacobian term cannot be calculated exactly because the control dynamics are unknown. A genetic algorithm and a particle swarm optimization algorithm were used to determine the optimal learning rates in [12, 13]; however, the computational load is heavy and these schemes lack real-time adaptation ability.
In the last decade, control and synchronization of chaotic systems have become an important topic. Chaos synchronization can be applied in vast areas of physics and engineering, such as chemical reactions, power converters, biological systems, information processing, and secure communication [14-16]. Many different methods have been applied to synchronize chaotic systems. Chang and Yan [17] proposed an adaptive robust PID controller using the sliding-mode approach; however, chattering appears. An adaptive sliding mode control was proposed to cope with fully unknown system parameters [18]; a continuous control law is used to eliminate the chattering, but then system stability cannot be guaranteed. Adaptive control techniques were applied to chaos synchronization in [19]; however, adaptive control requires structural knowledge of the chaotic dynamic functions. Yau [20] proposed a nonlinear rule-based controller for chaos synchronization, but the fuzzy rules must be preconstructed by a time-consuming trial-and-error tuning procedure to achieve the required performance.
This paper proposes an adaptive dynamic neural network control (ADNNC) system to synchronize two identical nonlinear chaotic gyros. The proposed ADNNC system is composed of a neural controller and a smooth compensator. The neural controller uses a DRBF network to approximate an ideal controller, and the smooth compensator is designed to dispel the approximation error introduced by the neural controller. An online structural learning algorithm with a low computational load is developed for the DRBF network. To speed up the convergence of the tracking error, an analytical method based on a discrete-type Lyapunov function is proposed to determine the variable learning rates of the parameter adaptive laws. Finally, simulations are provided to verify the effectiveness of the proposed ADNNC system.
2. Problem Formulation
In this paper, a symmetric gyro with linear-plus-cubic damping, as shown in Figure 1 [15], is considered. The dynamics of a gyro is a very interesting nonlinear problem in classical mechanics. According to the study by Chen [15], the dynamics of the symmetrical gyro with linear-plus-cubic damping of the angle $\theta$ can be expressed as
$$\ddot{\theta} + c_1\dot{\theta} + c_2\dot{\theta}^3 + \alpha^2\frac{(1-\cos\theta)^2}{\sin^3\theta} - \beta\sin\theta = f\sin(\omega t)\sin\theta,$$
where $\theta$ is the rotation angle, $f\sin(\omega t)$ is the parametric excitation, $c_1\dot{\theta}$ and $c_2\dot{\theta}^3$ are the linear and nonlinear damping terms, respectively, and $\alpha^2(1-\cos\theta)^2/\sin^3\theta - \beta\sin\theta$ is the nonlinear resilience force. The open-loop system was simulated with $\alpha^2 = 100$, $\beta = 1$, $c_1 = 0.5$, $c_2 = 0.05$, and $\omega = 2$ to observe the unpredictable chaotic behavior. For the phase trajectory with $f = 33$, an uncontrolled chaotic trajectory of period-2 motion can be found, and for the phase trajectory with $f = 36$, quasiperiodic motion occurs in the uncontrolled trajectory [15]. The time responses of the uncontrolled chaotic gyro with initial condition $(1, 1)$ for $f = 33$ and $f = 36$ are shown in Figures 2(a) and 2(b), respectively. They show that the uncontrolled chaotic gyro exhibits different types of trajectories for different system parameters.
Figure 1: A schematic diagram of a symmetric gyroscope.
[figure omitted; refer to PDF]
Figure 2: Uncontrolled chaotic trajectories for different system parameters.
(a) [figure omitted; refer to PDF]
(b) [figure omitted; refer to PDF]
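The uncontrolled behavior described above can be reproduced with a short numerical integration. The right-hand side below is reconstructed from the terms named in the text (damping, resilience force, parametric excitation), so the exact equation form is an assumption; a fixed-step RK4 integrator is used as a sketch.

```python
import math

# Chaotic gyro with linear-plus-cubic damping; the equation form is an
# assumption reconstructed from the terms named in the text:
#   th'' = -c1*th' - c2*th'^3
#          - [a2*(1-cos th)^2/sin(th)^3 - b*sin th] + f*sin(w*t)*sin th
A2, B, C1, C2, W, F = 100.0, 1.0, 0.5, 0.05, 2.0, 33.0

def accel(t, th, dth):
    resilience = A2 * (1.0 - math.cos(th)) ** 2 / math.sin(th) ** 3 - B * math.sin(th)
    return -C1 * dth - C2 * dth ** 3 - resilience + F * math.sin(W * t) * math.sin(th)

def rk4_step(t, th, dth, h):
    def f(t, s):  # s = (theta, dtheta)
        return (s[1], accel(t, s[0], s[1]))
    s = (th, dth)
    k1 = f(t, s)
    k2 = f(t + h / 2, (s[0] + h / 2 * k1[0], s[1] + h / 2 * k1[1]))
    k3 = f(t + h / 2, (s[0] + h / 2 * k2[0], s[1] + h / 2 * k2[1]))
    k4 = f(t + h, (s[0] + h * k3[0], s[1] + h * k3[1]))
    return (s[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            s[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

# Uncontrolled response from the initial condition (1, 1) used in Figure 2(a).
theta, dtheta, t, h = 1.0, 1.0, 0.0, 1e-3
trajectory = []
for _ in range(20000):  # 20 s of the open-loop motion
    theta, dtheta = rk4_step(t, theta, dtheta, h)
    t += h
    trajectory.append(theta)
```

Plotting `trajectory` against time (or against the velocity, for the phase portrait) reproduces the kind of irregular bounded motion shown in Figure 2.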
Generally, the two chaotic systems in a synchronization scheme are called the drive system and the response system, respectively. The central problem in chaos synchronization is how to design a controller that drives the response chaotic gyro to track the drive chaotic gyro closely. Consider the following two nonlinear gyros, where the drive system and response system are denoted by $x$ and $y$, respectively. The systems are given as
Drive System
[figure omitted; refer to PDF]
Response System
[figure omitted; refer to PDF]
where $u$ is the control input and $F(\dot{x},\dot{y})$ is the coupling term. To achieve the control objective, the tracking error between the response system (2.3) and the drive system (2.2) is defined as [figure omitted; refer to PDF] The error dynamic equation can be obtained as [figure omitted; refer to PDF] If the system dynamics $g(x,\dot{x})$, $g(y,\dot{y})$, and $F(\dot{x},\dot{y})$ were available, an ideal controller would exist as [21] [figure omitted; refer to PDF] where $k_1$ and $k_2$ are nonzero constants. Applying the ideal controller (2.6) to the error dynamic equation (2.5) yields [figure omitted; refer to PDF] If $k_1$ and $k_2$ are chosen to be the coefficients of a Hurwitz polynomial, then $\lim_{t\to\infty} e(t) = 0$ [21]. However, the dynamics of these chaotic systems are generally unknown, so the ideal controller $u^*$ cannot be implemented.
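The Hurwitz argument can be checked numerically. The closed-loop error dynamics under the ideal controller are assumed here to take the form $\ddot{e} + k_1\dot{e} + k_2 e = 0$ (the ordering of $k_1$, $k_2$ in the polynomial is an assumption); with the gains $k_1 = 2$, $k_2 = 1$ used later in the simulations, the characteristic polynomial $\lambda^2 + 2\lambda + 1$ has a double root at $-1$, so the error decays to zero.

```python
import math

# Error dynamics under the ideal controller, assumed to be
#   e'' + k1*e' + k2*e = 0   (ordering of k1, k2 is an assumption).
k1, k2 = 2.0, 1.0
e, de, h = 1.0, 0.0, 1e-3
for _ in range(10000):  # explicit Euler over 10 s
    e, de = e + h * de, de + h * (-k1 * de - k2 * e)

# Analytic solution for e(0) = 1, e'(0) = 0 is e(t) = (1 + t)*exp(-t),
# so e(10) = 11*exp(-10).
e_exact = 11.0 * math.exp(-10.0)
```

The numerically integrated error matches the analytic solution and is already near zero after ten seconds, which is the behavior the Hurwitz condition guarantees.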
3. Design of the ADNNC System
In this paper, an adaptive dynamic neural network control (ADNNC) system, shown in Figure 3, is introduced, where a sliding surface is defined as [figure omitted; refer to PDF] with $k_1$ and $k_2$ being nonzero positive constants. The proposed ADNNC system is composed of a neural controller and a smooth compensator, that is, [figure omitted; refer to PDF] where the neural controller $u_{nc}$ uses a DRBF network to mimic the ideal controller and the smooth compensator $u_{sc}$ is designed to compensate for the difference between the ideal controller and the neural controller. The output of the DRBF network with $n$ hidden neurons is given as [figure omitted; refer to PDF] where $\alpha_i$ represents the connection weight between the $i$th hidden neuron and the output layer, and $\theta_i$, $m_i$, and $\sigma_i$ are the firing weight, center, and width of the $i$th hidden neuron, respectively.
Figure 3: Block diagram of the ADNNC system for the chaos synchronization.
[figure omitted; refer to PDF]
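The DRBF output in (3.3) can be sketched directly. This assumes the common Gaussian form $\theta_i = \exp[-(s - m_i)^2/\sigma_i^2]$ for the firing weight with the scalar sliding surface $s$ as input (consistent with the $(s-m_i)/\sigma_i$ terms in the appendix), which is a modeling assumption on our part.

```python
import math

# Forward pass of the DRBF network: u_nc = sum_i alpha_i * theta_i, where
# theta_i = exp(-((s - m_i)/sigma_i)^2) is the Gaussian firing weight
# (assumed form) of the i-th hidden neuron for the scalar input s.
def firing(s, m, sigma):
    return math.exp(-((s - m) / sigma) ** 2)

def drbf_output(s, alphas, centers, widths):
    return sum(a * firing(s, m, w) for a, m, w in zip(alphas, centers, widths))

# Two-neuron example: the first neuron is centered exactly at the input.
u = drbf_output(0.5, alphas=[1.0, -2.0], centers=[0.5, 3.0], widths=[1.0, 1.0])
```

The first neuron fires with weight 1 and the distant second neuron contributes only a tiny amount, illustrating the local-receptive-field character of the RBF network.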
3.1. Structural Learning of DRBF Network
To attack the problem of structure determination in RBF networks, this paper proposes a simple structural learning algorithm. In the growing process, the existing hidden neurons can be described mathematically as clusters [1, 22]. If a new input datum falls within the boundary of an existing cluster, the DRBF network does not generate a new hidden neuron but instead updates the parameters of the existing hidden neurons. The maximum firing degree $\theta_{\max}$ is defined as [1] [figure omitted; refer to PDF] The maximum degree $\theta_{\max}$ becomes smaller as the incoming datum moves farther from the existing hidden neurons. If $\theta_{\max} \le \theta_{th}$, where $\theta_{th} \in (0,1)$ is a pregiven threshold, a new hidden neuron is generated. The center and width of the new hidden neuron and its output action strength are selected as follows: [figure omitted; refer to PDF] where $\bar{\sigma}$ is a prespecified constant. Next, the structural learning phase determines whether to cancel existing hidden neurons and weights that have become inappropriate. A significance index measuring the importance of the $i$th hidden neuron is given as follows [22]: [figure omitted; refer to PDF] where $N$ denotes the number of iterations, $I_i$ is the significance index of the $i$th hidden neuron with initial value 1, $\rho$ is the reduction threshold value, and $\tau$ is the reduction speed constant. If $I_i \le I_{th}$, where $I_{th}$ is a pregiven threshold, the $i$th hidden neuron and its weight are cancelled. If the computational load is an important issue for practical implementation, $I_{th}$ and $\rho$ can be chosen large so that more hidden neurons and weights are cancelled.
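The grow-and-prune cycle above can be sketched as follows. The growing test ($\theta_{\max} \le \theta_{th}$, new center at the datum, width $\bar{\sigma}$) follows the text; the significance-index decay rule used here (decay by $e^{-\tau}$ whenever a neuron's firing weight falls below $\rho$) is one plausible realization and an assumption, not the paper's exact formula.

```python
import math

class DRBF:
    """Structural-learning sketch. The significance-index decay rule is a
    plausible assumption, not the paper's exact expression."""

    def __init__(self, theta_th=0.6, sigma_bar=2.5, I_th=0.01, rho=0.2, tau=0.01):
        self.centers, self.widths, self.alphas, self.I = [], [], [], []
        self.theta_th, self.sigma_bar = theta_th, sigma_bar
        self.I_th, self.rho, self.tau = I_th, rho, tau

    def firing(self, s):
        return [math.exp(-((s - m) / w) ** 2)
                for m, w in zip(self.centers, self.widths)]

    def structure_step(self, s):
        th = self.firing(s)
        # Growing: if no existing neuron fires strongly enough, add a neuron
        # centered at the incoming datum with the prespecified width.
        if not th or max(th) <= self.theta_th:
            self.centers.append(s)
            self.widths.append(self.sigma_bar)
            self.alphas.append(0.0)
            self.I.append(1.0)
            th = self.firing(s)
        # Pruning: decay the significance index of weakly firing neurons and
        # cancel any neuron whose index drops below I_th.
        for i, t in enumerate(th):
            if t < self.rho:
                self.I[i] *= math.exp(-self.tau)
        keep = [i for i in range(len(self.I)) if self.I[i] > self.I_th]
        self.centers = [self.centers[i] for i in keep]
        self.widths = [self.widths[i] for i in keep]
        self.alphas = [self.alphas[i] for i in keep]
        self.I = [self.I[i] for i in keep]

net = DRBF()
net.structure_step(0.0)   # first datum creates the first neuron
net.structure_step(10.0)  # a distant datum triggers growth of a second neuron
```

Feeding data inside an existing cluster leaves the structure unchanged, while a neuron that stays insignificant long enough is eventually cancelled, keeping the network size bounded.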
3.2. Parameter Learning of DRBF Network
Substituting (3.2) into (2.5) and using (2.6) yield [figure omitted; refer to PDF] Multiplying both sides of (3.7) by $s$ gives [figure omitted; refer to PDF] According to the gradient descent method, the weights $\alpha_i$ are updated by [23] [figure omitted; refer to PDF] where $\eta_\alpha$ is the learning rate. Moreover, the centers and widths of the hidden neurons are adjusted by the following equations to increase the learning capability: [figure omitted; refer to PDF] where $\eta_m$ and $\eta_\sigma$ are the learning rates. For small learning rates, convergence can be guaranteed, but the convergence speed of the tracking error is slow. On the other hand, if the learning rates are too large, the algorithm becomes unstable. To determine the learning rates of the parameter adaptive laws, a cost function is defined as [figure omitted; refer to PDF] According to the gradient descent method, the adaptive law of the weight can be represented as [figure omitted; refer to PDF] Comparing (3.9) with (3.12) yields the Jacobian term of the system, $\partial C/\partial u_{nc} = -s$. The convergence analysis in the following theorems then derives the variable learning rates that ensure convergence of the output tracking error.
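A single adaptation step can be sketched from the Jacobian identity $\partial C/\partial u_{nc} = -s$ and the chain rule. The Gaussian-derivative expressions below are the standard ones for this RBF form and are assumed to match (3.9) and (3.10), whose exact statements are not reproduced in this extraction.

```python
import math

# One gradient-descent step for a single hidden neuron. Uses the Jacobian
# dC/du_nc = -s from the text; the derivative expressions are the standard
# Gaussian-RBF ones and assumed to match (3.9)-(3.10).
def update_neuron(s, alpha, m, sigma, eta_a, eta_m, eta_s):
    theta = math.exp(-((s - m) / sigma) ** 2)          # firing weight
    du_da = theta                                       # du_nc/d_alpha
    du_dm = alpha * theta * 2.0 * (s - m) / sigma ** 2  # du_nc/d_m
    du_ds = alpha * theta * 2.0 * (s - m) ** 2 / sigma ** 3  # du_nc/d_sigma
    # delta_p = -eta * (dC/du_nc) * (du_nc/dp) = eta * s * du_nc/dp
    return (alpha + eta_a * s * du_da,
            m + eta_m * s * du_dm,
            sigma + eta_s * s * du_ds)

a1, m1, s1 = update_neuron(s=0.5, alpha=1.0, m=0.0, sigma=1.0,
                           eta_a=0.1, eta_m=0.1, eta_s=0.1)
```

With a positive sliding surface and a positive firing weight, the update increases the weight and pulls the center toward the input, which is the direction that increases $u_{nc}$ and drives $s$ toward zero.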
Theorem 3.1.
Let $\eta_\alpha$ be the learning rate for the weights of the DRBF network and define $P_{\alpha\max} \equiv \max\|P_\alpha\|$, where $P_\alpha = \partial u_{nc}/\partial\alpha_i$ and $\|\cdot\|$ is the Euclidean norm. Then the convergence of the tracking error is guaranteed if $\eta_\alpha$ is chosen as [figure omitted; refer to PDF]
Theorem 3.2.
Let $\eta_m$ and $\eta_\sigma$ be the learning rates of the centers and widths of the DRBF network, respectively. Define $P_{m\max} \equiv \max\|P_m\|$ and $P_{\sigma\max} \equiv \max\|P_\sigma\|$, where $P_m = \partial u_{nc}/\partial m_i$ and $P_\sigma = \partial u_{nc}/\partial\sigma_i$. The convergence of the tracking error is guaranteed if $\eta_m$ and $\eta_\sigma$ are chosen as [figure omitted; refer to PDF] where $\alpha_{\max} = \max|\alpha_i|$ and $\sigma_{\min} = \min|\sigma_i|$.
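The theorems bound each learning rate by $0 < \eta < 2/P_{\max}^2$. A variable rate that always respects this bound can be computed per step; the specific formula $\eta^* = \lambda/(\|P\|^2 + \gamma)$ used below is an assumption modeled on the $\lambda = 1$, $\gamma = 0.001$ choice reported in Section 4, not the paper's exact expression.

```python
# Variable learning rate satisfying the Theorem 3.1 bound 0 < eta < 2/P_max^2.
# The formula eta = lam/(||P||^2 + gamma) is an assumption consistent with the
# lam = 1, gamma = 0.001 values reported in Section 4.
def variable_rate(P, lam=1.0, gamma=0.001):
    norm_sq = sum(p * p for p in P)
    return lam / (norm_sq + gamma)

# Example: P_alpha is the vector of firing weights theta_i at the current step.
P_alpha = [0.9, 0.4, 0.1]
eta = variable_rate(P_alpha)
bound = 2.0 / sum(p * p for p in P_alpha)
```

Because $\lambda/(\|P\|^2 + \gamma) < 2/\|P\|^2$ for any $\lambda \in (0, 2)$ and $\gamma > 0$, the computed rate stays inside the convergence region at every iteration while growing automatically when the gradient vector is small.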
3.3. Stability Analysis
Since the number of hidden neurons in the DRBF network is finite in real-time practical applications, an approximation error is inevitable. The ideal controller can therefore be reformulated as [figure omitted; refer to PDF] where $u_{nc}^*$ is the optimal neural controller and $\varepsilon$ denotes the approximation error between the ideal controller and the optimal neural controller. This paper proposes a smooth compensator [figure omitted; refer to PDF] where $\hat{\varepsilon}$ denotes the estimated value of the approximation error and $\delta$ is a small positive constant. Substituting (3.15) and (3.16) into (3.7) yields [figure omitted; refer to PDF] Then, define a Lyapunov function candidate of the form [figure omitted; refer to PDF] where $\eta_\varepsilon$ is a positive constant learning rate and $\tilde{\varepsilon} = \varepsilon - \hat{\varepsilon}$. Differentiating (3.18) with respect to time and using (3.17) gives [figure omitted; refer to PDF] To achieve $\dot{V} \le 0$, the error estimation law is designed as [figure omitted; refer to PDF] so that (3.19) can be rewritten as [figure omitted; refer to PDF] Since $\dot{V}(s,\tilde{\varepsilon},t)$ is negative semidefinite, that is, $V(s,\tilde{\varepsilon},t) \le V(s,\tilde{\varepsilon},0)$, it follows that $s$ and $\tilde{\varepsilon}$ are bounded. Let $\Omega(t) \equiv \delta s^2 \le -\dot{V}(s,\tilde{\varepsilon},t)$ and integrate $\Omega(t)$ with respect to time to obtain [figure omitted; refer to PDF] Because $V(s,\tilde{\varepsilon},0)$ is bounded and $V(s,\tilde{\varepsilon},t)$ is nonincreasing and bounded, the following result can be obtained: [figure omitted; refer to PDF] Moreover, since $\dot{\Omega}(t)$ is bounded, Barbalat's Lemma [21] gives $\lim_{t\to\infty}\Omega(t) = 0$, that is, $s \to 0$ as $t \to \infty$. As a result, the stability of the proposed ADNNC system is guaranteed. In summary, the design steps of the ADNNC are as follows.
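A sketch of the compensator and its estimation law follows. The boundary-layer form $u_{sc} = \hat{\varepsilon}\, s/(|s| + \delta)$ is an assumption (a common smooth replacement for $\hat{\varepsilon}\,\mathrm{sgn}(s)$), and the estimation law $\dot{\hat{\varepsilon}} = \eta_\varepsilon s^2/(|s| + \delta)$ is a typical choice that keeps $\dot{V} \le 0$; neither is necessarily the paper's exact (3.16) or (3.20).

```python
# Smooth compensator and approximation-error estimation law (both assumed
# forms): the boundary layer s/(|s| + delta) smooths the discontinuous
# sign(s) term, and eps_hat only grows while |s| is large.
def smooth_compensator(s, eps_hat, delta=0.5):
    return eps_hat * s / (abs(s) + delta)

def eps_hat_step(s, eps_hat, eta_eps=0.1, delta=0.5, h=1e-3):
    return eps_hat + h * eta_eps * s * s / (abs(s) + delta)

u = smooth_compensator(s=1.0, eps_hat=2.0)
```

Note that $|u_{sc}| \le \hat{\varepsilon}$ always, and the compensator's sign follows the sliding surface, so it opposes the approximation error without chattering.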
Step 1.
Initialize the predefined parameters of the DRBF network.
Step 2.
Compute the tracking error e and the sliding surface s from (2.4) and (3.1), respectively.
Step 3.
Determine whether to add a new hidden neuron via the condition θmax ≤ θth and whether to cancel an existing neuron via the significance index Ii.
Step 4.
Compute the control law (3.2), in which the neural controller and the smooth compensator are given by (3.3) and (3.16), respectively.
Step 5.
Determine the variable learning rates ηα*, ηm*, and ησ* from (3.13) and (3.14).
Step 6.
Update the parameters of the neural controller by (3.9) and (3.10), and update the parameter of the smooth compensator by (3.20).
Step 7.
Return to Step 2.
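The seven steps can be wired together on a toy problem. Everything plant-side below is illustrative: the surface dynamics $\dot{s} = d(t) - u$ with unknown disturbance $d(t) = \sin t$, the fixed learning rate standing in for the variable-rate formula, and the compensator form are all assumptions; only the step ordering mirrors the design procedure above.

```python
import math

# End-to-end sketch of Steps 1-7 on toy surface dynamics s' = d(t) - u.
centers, widths, alphas = [], [], []           # Step 1: empty DRBF network
eps_hat, delta, h = 0.0, 0.5, 1e-3
s, t = 2.0, 0.0

def firing(x):
    return [math.exp(-((x - m) / w) ** 2) for m, w in zip(centers, widths)]

for _ in range(30000):                          # 30 s
    th = firing(s)                              # Step 2: sliding surface s given
    if not th or max(th) <= 0.6:                # Step 3: grow when no neuron fires
        centers.append(s); widths.append(2.5); alphas.append(0.0)
        th = firing(s)
    u_nc = sum(a * f for a, f in zip(alphas, th))
    u_sc = eps_hat * s / (abs(s) + delta)       # Step 4: u = u_nc + u_sc
    u = u_nc + u_sc
    eta = 0.05                                  # Step 5: fixed rate stands in
                                                # for the variable-rate formula
    # Step 6: parameter adaptation (Jacobian dC/du_nc = -s) and error estimation.
    alphas = [a + h * eta * s * f for a, f in zip(alphas, th)]
    eps_hat += h * 1.0 * s * s / (abs(s) + delta)
    s += h * (math.sin(t) - u)                  # plant step; Step 7: repeat
    t += h
```

Despite the unknown disturbance, the compensator gain grows until it dominates and the surface is driven into a small neighborhood of zero, which is the qualitative behavior the stability analysis predicts.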
4. Simulation Results
In this section, the proposed ADNNC system is applied to synchronize two identical chaotic gyros. To investigate the effectiveness of the proposed ADNNC system, two simulation cases including parameter variation and initial variation are considered as follows.
Case 1.
$(x, \dot{x}, y, \dot{y}) = (1, 1, -1, -1)$, $f_x = 33$, and $f_y = 33$.
Case 2.
$(x, \dot{x}, y, \dot{y}) = (1, 1, 1, 1)$, $f_x = 33$, and $f_y = 36$.
According to Theorems 3.1 and 3.2, the variable learning rates are chosen as [figure omitted; refer to PDF] where $\lambda = 1$ and $\gamma = 0.001$. The control parameters are chosen as $k_1 = 2$, $k_2 = 1$, $\eta_\varepsilon = 0.1$, $\delta = 0.5$, $\bar{\sigma} = 2.5$, $\theta_{th} = 0.6$, $\tau = 0.01$, $\rho = 0.2$, and $I_{th} = 0.01$. All gains are chosen with the requirement of stability in mind. The simulation results of the proposed ADNNC system with variable learning rates are shown in Figures 4 and 5 for Cases 1 and 2, respectively. The tracking responses of the states $(x, y)$ are shown in Figures 4(a) and 5(a), the tracking responses of the states $(\dot{x}, \dot{y})$ in Figures 4(b) and 5(b), the associated control efforts in Figures 4(c) and 5(c), and the numbers of hidden neurons in Figures 4(d) and 5(d). The results show that the proposed ADNNC system with variable learning rates not only achieves favorable synchronization performance but also obtains an appropriate DRBF network size, because the proposed self-structuring mechanism and online learning algorithms are applied. To demonstrate the robustness of the proposed ADNNC system with variable learning rates, a coupling term $F(\dot{x},\dot{y}) = 0.2[\exp(\dot{x}-\dot{y}) - 1]$ is examined. The simulation results with this coupling term are shown in Figures 6 and 7 for Cases 1 and 2, respectively. The tracking responses of the states $(x, y)$ are shown in Figures 6(a) and 7(a), the tracking responses of the states $(\dot{x}, \dot{y})$ in Figures 6(b) and 7(b), the associated control efforts in Figures 6(c) and 7(c), and the numbers of hidden neurons in Figures 6(d) and 7(d). These results show that the proposed ADNNC system with variable learning rates achieves favorable synchronization performance even in the presence of the coupling term.
Simulation results of the ADNNC system with variable learning rates for Case 1.
(a) [figure omitted; refer to PDF]
(b) [figure omitted; refer to PDF]
(c) [figure omitted; refer to PDF]
(d) [figure omitted; refer to PDF]
Simulation results of the ADNNC system with variable learning rates for Case 2.
(a) [figure omitted; refer to PDF]
(b) [figure omitted; refer to PDF]
(c) [figure omitted; refer to PDF]
(d) [figure omitted; refer to PDF]
Simulation results of the proposed ADNNC system for Case 1 with a coupling term.
(a) [figure omitted; refer to PDF]
(b) [figure omitted; refer to PDF]
(c) [figure omitted; refer to PDF]
(d) [figure omitted; refer to PDF]
Simulation results of the proposed ADNNC system for Case 2 with a coupling term.
(a) [figure omitted; refer to PDF]
(b) [figure omitted; refer to PDF]
(c) [figure omitted; refer to PDF]
(d) [figure omitted; refer to PDF]
In addition, since the selection of the learning rates ($\eta_\alpha$, $\eta_m$, and $\eta_\sigma$) for the online training of the DRBF network has a significant effect on the network performance, the performance measures for various learning rates are summarized in Table 1. The proposed ADNNC system with variable learning rates possesses the most accurate synchronization performance. To verify the effect of learning rates chosen outside the convergence range, the simulation results of the proposed ADNNC system with $\eta_\alpha = \eta_m = \eta_\sigma = 0.4$ are shown in Figures 8 and 9 for Cases 1 and 2, respectively. The tracking responses of the states $(x, y)$ are shown in Figures 8(a) and 9(a), the tracking responses of the states $(\dot{x}, \dot{y})$ in Figures 8(b) and 9(b), the associated control efforts in Figures 8(c) and 9(c), and the numbers of hidden neurons in Figures 8(d) and 9(d). The simulation results show that unstable tracking responses are induced when the learning rates are selected outside the convergence region.
Table 1: Performance measures.
(a)
Method | Average error | Error standard deviation
Adaptive sliding mode control [18] | 0.1082 | 0.2526
ADNNC with ηα = ηm = ησ = 0.01 | 0.3021 | 0.2369
ADNNC with ηα = ηm = ησ = 0.1 | 0.1186 | 0.1373
ADNNC with ηα = ηm = ησ = 0.2 | 0.0958 | 0.1218
ADNNC with variable learning rates | 0.0930 | 0.1245
(b)
Method | Average error | Error standard deviation
Adaptive sliding mode control [18] | 0.1170 | 0.2740
ADNNC with ηα = ηm = ησ = 0.01 | 0.3036 | 0.2680
ADNNC with ηα = ηm = ησ = 0.1 | 0.1129 | 0.1068
ADNNC with ηα = ηm = ησ = 0.2 | 0.0663 | 0.0810
ADNNC with variable learning rates | 0.0401 | 0.0851
Simulation results of the ADNNC system with large learning rates for Case 1.
(a) [figure omitted; refer to PDF]
(b) [figure omitted; refer to PDF]
(c) [figure omitted; refer to PDF]
(d) [figure omitted; refer to PDF]
Simulation results of the ADNNC system with large learning rates for Case 2.
(a) [figure omitted; refer to PDF]
(b) [figure omitted; refer to PDF]
(c) [figure omitted; refer to PDF]
(d) [figure omitted; refer to PDF]
5. Conclusion
In this paper, an adaptive dynamic neural network control (ADNNC) system is proposed to synchronize chaotic symmetric gyros with linear-plus-cubic damping. The proposed ADNNC system is composed of a neural controller and a smooth compensator. The neural controller uses a dynamic radial basis function (DRBF) network to mimic an ideal controller in which the DRBF network can automatically grow and prune the network structure. The smooth compensator is designed to dispel the approximation error between the ideal controller and neural controller. Moreover, to speed up the convergence of tracking error, a discrete-type Lyapunov function is utilized to determine the variable learning rates of the adaptation laws. Numerical simulations have verified the effectiveness of the proposed ADNNC method.
Acknowledgments
The authors appreciate the partial financial support from the National Science Council of the Republic of China under Grant NSC 98-2221-E-216-040. The authors are grateful to the reviewers for their valuable comments.
[1] C. T. Lin, C. S. G. Lee Neural Fuzzy Systems: A Neuro-Fuzzy Synergism to Intelligent Systems , Prentice-Hall, Englewood Cliffs, NJ, USA, 1996.
[2] Y. Li, S. Qiang, X. Zhuang, O. Kaynak, "Robust and adaptive backstepping control for nonlinear systems using RBF neural networks," IEEE Transactions on Neural Networks , vol. 15, no. 3, pp. 693-701, 2004.
[3] S. Kumarawadu, T. T. Lee, "Neuroadaptive combined lateral and longitudinal control of highway vehicles using RBF networks," IEEE Transactions on Intelligent Transportation Systems , vol. 7, no. 4, pp. 500-512, 2006.
[4] Y. S. Yang, X. F. Wang, "Adaptive H∞ tracking control for a class of uncertain nonlinear systems using radial-basis-function neural networks," Neurocomputing , vol. 70, no. 4-6, pp. 932-941, 2007.
[5] S. Wang, D. L. Yu, "Adaptive RBF network for parameter estimation and stable air-fuel ratio control," Neural Networks , vol. 21, no. 1, pp. 102-112, 2008.
[6] G. B. Huang, P. Saratchandran, N. Sundararajan, "A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation," IEEE Transactions on Neural Networks , vol. 16, no. 1, pp. 57-67, 2005.
[7] J. Lian, Y. Lee, S. D. Sudhoff, S. H. Zak, "Self-organizing radial basis function network for real-time approximation of continuous-time dynamical systems," IEEE Transactions on Neural Networks , vol. 19, no. 3, pp. 460-474, 2008.
[8] C. F. Hsu, "Adaptive growing-and-pruning neural network control for a linear piezoelectric ceramic motor," Engineering Applications of Artificial Intelligence , vol. 21, no. 8, pp. 1153-1163, 2008.
[9] M. Bortman, M. Aladjem, "A growing and pruning method for radial basis function networks," IEEE Transactions on Neural Networks , vol. 20, no. 6, pp. 1039-1045, 2009.
[10] C. M. Lin, Y. F. Peng, "Adaptive CMAC-based supervisory control for uncertain nonlinear systems," IEEE Transactions on Systems, Man, and Cybernetics, Part B , vol. 34, no. 2, pp. 1248-1260, 2004.
[11] C. H. Wang, C. S. Cheng, T. T. Lee, "Dynamical optimal training for interval type-2 fuzzy neural network (T2FNN)," IEEE Transactions on Systems, Man, and Cybernetics, Part B , vol. 34, no. 3, pp. 1462-1477, 2004.
[12] F. J. Lin, P. K. Huang, W. D. Chou, "Recurrent-fuzzy-neural-network-controlled linear induction motor servo drive using genetic algorithms," IEEE Transactions on Industrial Electronics , vol. 54, no. 3, pp. 1449-1461, 2007.
[13] R.-J. Wai, K.-L. Chuang, J.-D. Lee, "On-line supervisory control design for maglev transportation system via total sliding-mode approach and particle swarm optimization," IEEE Transactions on Automatic Control , vol. 55, no. 7, pp. 1544-1559, 2010.
[14] L. M. Pecora, T. L. Carroll, "Synchronization in chaotic systems," Physical Review Letters , vol. 64, no. 8, pp. 821-824, 1990.
[15] H. K. Chen, "Chaos and chaos synchronization of a symmetric gyro with linear-plus-cubic damping," Journal of Sound and Vibration , vol. 255, no. 4, pp. 719-740, 2003.
[16] H. K. Chen, Z. M. Ge, "Bifurcations and chaos of a two-degree-of-freedom dissipative gyroscope," Chaos, Solitons & Fractals , vol. 24, no. 1, pp. 125-136, 2005.
[17] W. D. Chang, J. J. Yan, "Adaptive robust PID controller design based on a sliding mode for uncertain chaotic systems," Chaos, Solitons & Fractals , vol. 26, no. 1, pp. 167-175, 2005.
[18] J. J. Yan, M. L. Hung, T. L. Liao, "Adaptive sliding mode control for synchronization of chaotic gyros with fully unknown parameters," Journal of Sound and Vibration , vol. 298, no. 1-2, pp. 298-306, 2006.
[19] J. H. Park, "Synchronization of Genesio chaotic system via backstepping approach," Chaos, Solitons & Fractals , vol. 27, no. 5, pp. 1369-1375, 2006.
[20] H. T. Yau, "Nonlinear rule-based controller for chaos synchronization of two gyros with linear-plus-cubic damping," Chaos, Solitons & Fractals , vol. 34, no. 4, pp. 1357-1365, 2007.
[21] J. J. E. Slotine, W. P. Li Applied Nonlinear Control , Prentice Hall, Englewood Cliffs, NJ, USA, 1991.
[22] C. F. Hsu, "Self-organizing adaptive fuzzy neural control for a class of nonlinear systems," IEEE Transactions on Neural Networks , vol. 18, no. 4, pp. 1232-1241, 2007.
[23] C. M. Lin, C. F. Hsu, "Supervisory recurrent fuzzy neural network control of wing rock for slender delta wings," IEEE Transactions on Fuzzy Systems , vol. 12, no. 5, pp. 733-742, 2004.
Appendices
A. Proof of Theorem 3.1
Since [figure omitted; refer to PDF] a discrete-type Lyapunov function is selected as [figure omitted; refer to PDF] The change in the Lyapunov function is expressed as [figure omitted; refer to PDF] Moreover, the sliding surface difference can be represented by [figure omitted; refer to PDF] where $\Delta s(N)$ is the change in the sliding surface and $\Delta\alpha_i$ represents the change of the weights in the DRBF network. Using (3.11), (3.12), and (A.1), [figure omitted; refer to PDF] Then (A.4) becomes [figure omitted; refer to PDF] Thus, [figure omitted; refer to PDF] From (A.3) and (A.7), $\Delta V_A$ can be rewritten as [figure omitted; refer to PDF] If $\eta_\alpha$ is chosen such that $0 < \eta_\alpha < 2/P_{\alpha\max}^2$, then $V_A > 0$ and $\Delta V_A < 0$, so discrete-type Lyapunov stability is guaranteed and the output tracking error converges to zero as $t \to \infty$. This completes the proof of Theorem 3.1.
B. Proof of Theorem 3.2
To prove Theorem 3.2, the following lemmas are used [9].
Lemma B.1.
Let $f(r) = r\exp(-r^2)$; then $|f(r)| < 1$ for all $r \in \mathbb{R}$.
Lemma B.2.
Let $g(r) = r^2\exp(-r^2)$; then $|g(r)| < 1$ for all $r \in \mathbb{R}$.
(1) According to Lemma B.1, $|[(s-m_i)/\sigma_i]\exp\{-[(s-m_i)/\sigma_i]^2\}| < 1$, since [figure omitted; refer to PDF] Moreover, the sliding surface difference can be represented by [figure omitted; refer to PDF] where $\Delta m_i$ represents the change of the center of the $i$th hidden neuron. Using (3.11), (3.12), and (B.1), [figure omitted; refer to PDF] Then, using (B.3), (B.2) becomes [figure omitted; refer to PDF] Thus, [figure omitted; refer to PDF] If $0 < \eta_m < 2/P_{m\max}^2 = \sigma_{\min}^2/(2\alpha_{\max}^2)$, the term $\|1 - P_m^T\eta_m s(N)P_m\|$ in (B.5) is less than 1. Therefore, $V_A > 0$ and $\Delta V_A < 0$ by (A.2) and (A.3), and discrete-type Lyapunov stability is guaranteed.
(2) According to Lemma B.2, $|[(s-m_i)/\sigma_i]^2\exp\{-[(s-m_i)/\sigma_i]^2\}| < 1$, since [figure omitted; refer to PDF] Moreover, the sliding surface difference can be represented by [figure omitted; refer to PDF] where $\Delta\sigma_i$ represents the change of the width of the $i$th hidden neuron. Using (3.11), (3.12), and (B.6), [figure omitted; refer to PDF] Then, using (B.8), (B.7) becomes [figure omitted; refer to PDF] Thus, [figure omitted; refer to PDF] If $0 < \eta_\sigma < 2/P_{\sigma\max}^2 = \sigma_{\min}^2/(2\alpha_{\max}^2)$, the term $\|1 - P_\sigma(N)^T\eta_\sigma s P_\sigma(N)\|$ in (B.10) is less than 1. Therefore, $V_A > 0$ and $\Delta V_A < 0$ by (A.2) and (A.3), and discrete-type Lyapunov stability is guaranteed.
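Both lemmas can be verified numerically: $f(r) = r\exp(-r^2)$ peaks at $r = 1/\sqrt{2}$ with value $e^{-1/2}/\sqrt{2} \approx 0.429$, and $g(r) = r^2\exp(-r^2)$ peaks at $r = 1$ with value $e^{-1} \approx 0.368$, both strictly below 1.

```python
import math

# Numeric check of Lemmas B.1 and B.2 on a dense grid over r in [-10, 10];
# both functions vanish rapidly outside this range.
grid = [i / 1000.0 for i in range(-10000, 10001)]
f_max = max(abs(r * math.exp(-r * r)) for r in grid)      # Lemma B.1
g_max = max(r * r * math.exp(-r * r) for r in grid)       # Lemma B.2
```

The grid maxima match the analytic peak values to within the grid resolution, confirming the bounds used in the proofs.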
Copyright © 2011 Chih-Hong Kao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
This paper addresses the synchronization of chaotic gyros with unknown parameters and external disturbance via an adaptive dynamic neural network control (ADNNC) system. The proposed ADNNC system is composed of a neural controller and a smooth compensator. The neural controller uses a dynamic RBF (DRBF) network to approximate an ideal controller online. The DRBF network creates new hidden neurons online when the input data fall outside the range of the existing hidden neurons and prunes insignificant hidden neurons online. The smooth compensator is designed to compensate for the approximation error between the neural controller and the ideal controller. Moreover, the variable learning rates of the parameter adaptation laws are derived based on a discrete-type Lyapunov function to speed up the convergence of the tracking error. Finally, simulation results verify that two identical nonlinear chaotic gyros can be synchronized using the proposed ADNNC scheme.