1. Introduction
It is difficult to obtain an accurate mechanism model of a physical system when its production technologies and processes are very complex. Data-driven control (DDC) [1] relies only on the input/output (I/O) data of control systems and does not require a mechanism model of the system. After several years of development, a number of data-driven control techniques have been investigated, such as proportional-integral-derivative (PID) control [2], fuzzy logic control [3], unfalsified control (UC) [4,5], model-free adaptive control (MFAC) [6,7,8,9,10,11,12], iterative learning control (ILC) [13,14,15], iterative feedback tuning (IFT) [16,17], and control algorithms based on neural networks [18,19,20,21,22,23].
MFAC is a class of DDC; it builds a virtual equivalent dynamic linearized model [24] by using a dynamic linearization technique. The virtual equivalent dynamic linearized model contains some time-varying parameters. In practice, these time-varying parameters are often not easy to obtain, but they can be estimated from the historical data of control systems. They usually contain high nonlinearity implicitly, and the control performance degrades if this nonlinearity is too severe [25]. Traditional methods for estimating the time-varying parameters include the projection algorithm and recursive least squares. Moreover, each time-varying parameter in the virtual equivalent dynamic linearized model can be regarded as a nonlinear function; in [25], a radial basis function neural network (RBFNN) is used to estimate these parameters.
Extreme learning machine (ELM) was developed for offline learning and training of single-hidden-layer feedforward neural networks (SLFNs) [26]. Online sequential extreme learning machine (OS-ELM) [27] is an online learning algorithm: it adjusts the output weights online, while the input weights and hidden biases are randomly chosen. In recent years OS-ELM has attracted considerable interest and has been used to estimate unknown parameters of systems [28,29]. Several improved OS-ELM algorithms have been introduced, such as regularized online sequential extreme learning machine (REOS-ELM) [30] and initial-training-free online extreme learning machine (ITF-OELM) [31]. Although the learning speed of OS-ELM is extremely high, it can yield singular and ill-posed problems. To overcome the adverse effects caused by noisy data in control systems, REOS-ELM was proposed as an improvement of OS-ELM. However, in [30] the stability of REOS-ELM for unknown discrete-time nonlinear systems was not analysed, and it remains unclear how to select the number of hidden nodes.
In this paper, first, an updating formula with dead-zone characteristics for REOS-ELM is introduced. Second, in order to obtain a more compact network structure, the error minimized regularized online sequential extreme learning machine (EMREOS-ELM) is investigated; unlike EMOS-ELM [28], EMREOS-ELM updates the output weights using the pseudoinverse of a partitioned matrix while the network grows. Finally, a novel model-free adaptive control method based on EMREOS-ELM is proposed. This paper is structured as follows. Section 2 briefly introduces REOS-ELM. The dynamic linearization technique and the updating formula with dead-zone characteristics for REOS-ELM are introduced in Section 3. The model-free adaptive control method based on EMREOS-ELM and the stability analysis of the proposed method for unknown discrete-time nonlinear systems are stated in Section 4. Section 5 presents simulation experiments, and Section 6 draws some conclusions.
2. REOS-ELM
In [32], the universal approximation capability of ELM was analysed. ELM was developed for offline learning, but training data may arrive one-by-one or chunk-by-chunk; OS-ELM is an online version of ELM, although it is sensitive to noisy data. REOS-ELM is an improvement of OS-ELM and is introduced in this section.
Suppose that there is an initial training dataset with $N_0$ training patterns $(x_j, t_j) \in \mathbb{R}^n \times \mathbb{R}$, where $x_j = [x_{j1}, x_{j2}, \cdots, x_{jn}]^T$. For a single-hidden-layer neural network with $n$ input nodes and one output node, the output corresponding to the $j$th input pattern is

$$ o_j = \sum_{i=1}^{L} \beta_i G(w_i, b_i, x_j), \qquad j = 1, \cdots, N_0 $$

where $L$ denotes the number of hidden nodes and the subscript $i$ indicates the $i$th hidden node; $G(\cdot)$ and $o_j$ are the activation function of the hidden nodes and the output of the network, respectively; $b_i$ and $\beta_i$ are the bias of the $i$th hidden node and the output weight connecting the $i$th hidden node to the output node, respectively.
OS-ELM owes its appeal as an online learning algorithm to two highlighted features: the input weights and hidden biases are randomly chosen, so the training process only has to determine the output weights. The output weights are obtained by minimizing the error function

$$ \| H_0 \beta_0 - T_0 \|^2. $$
The data collected for training often contain noise, and minimizing only the empirical risk in (2) may lead to poor generalization and over-fitting [30]. REOS-ELM differs from OS-ELM in that it tries to minimize the empirical error while also keeping the norm of the network weight vector small [30]. In [30] the following cost function was considered:

$$ \| H_0 \beta_0 - T_0 \|^2 + \lambda \| \beta_0 \|^2 $$

where $\lambda$ is a regularization factor.
The solution for $\beta_0$ is

$$ \beta_0 = P_0 H_0^T T_0, $$

and

$$ P_0 = (H_0^T H_0 + \lambda I_L)^{-1} $$

where $I_L$ is the $L \times L$ identity matrix, $T_0 = [t_1, \cdots, t_{N_0}]^T$, and

$$ H_0 = \begin{bmatrix} G(w_1, b_1, x_1) & \cdots & G(w_L, b_L, x_1) \\ \vdots & & \vdots \\ G(w_1, b_1, x_{N_0}) & \cdots & G(w_L, b_L, x_{N_0}) \end{bmatrix}_{N_0 \times L}. $$
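As a concrete sketch, the batch initialization above can be written in a few lines of NumPy; the sigmoid activation, the data set, and all sizes below are assumptions chosen only for illustration.

```python
import numpy as np

# Sketch of the initial REOS-ELM batch solution beta_0 = P_0 H_0^T T_0.
# The sigmoid activation, data set, and sizes are assumptions for illustration.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
N0, n, L, lam = 50, 2, 8, 0.1           # initial samples, inputs, hidden nodes, lambda

X = rng.uniform(-1, 1, size=(N0, n))    # initial training inputs x_j
T0 = np.sin(X[:, :1]) + X[:, 1:]        # assumed targets t_j

W = rng.uniform(-1, 1, size=(n, L))     # random input weights w_i (kept fixed)
b = rng.uniform(-1, 1, size=(1, L))     # random hidden biases b_i

H0 = sigmoid(X @ W + b)                 # N0 x L hidden-layer output matrix H_0
P0 = np.linalg.inv(H0.T @ H0 + lam * np.eye(L))
beta0 = P0 @ H0.T @ T0                  # regularized least-squares output weights
```

The regularization term $\lambda I_L$ keeps the inverse well conditioned even when $H_0^T H_0$ is nearly singular, which is the motivation for REOS-ELM over plain OS-ELM.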
When the $k$th chunk of data is received,

$$ N_k = \{(x_i, t_i)\}_{i = \sum_{j=0}^{k-1} N_j + 1}^{\sum_{j=0}^{k} N_j}, $$

the weight updating algorithm used in REOS-ELM takes a form similar to the recursive least squares (RLS) algorithm:

$$ P_k = P_{k-1} - P_{k-1} H_k^T \left( I_{N_k} + H_k P_{k-1} H_k^T \right)^{-1} H_k P_{k-1}, $$

and

$$ \beta_k = \beta_{k-1} + P_k H_k^T (T_k - H_k \beta_{k-1}) $$

where $I_{N_k}$ is the $N_k \times N_k$ identity matrix; $T_k$ and $H_k$ are the targets of the $k$th arriving chunk of training data and the corresponding hidden-layer output matrix, respectively.
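The sequential update can be sketched as follows, continuing the batch initialization above; the data, chunk size, and activation are again assumptions. The recursion yields exactly the batch regularized solution over all data seen so far, without re-inverting from scratch.

```python
import numpy as np

# Sequential REOS-ELM sketch: a batch initialization followed by one chunk
# update. All data, sizes, and the sigmoid activation are assumptions.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
n, L, lam, Nk = 2, 8, 0.1, 5
W = rng.uniform(-1, 1, size=(n, L))     # random input weights (fixed)
b = rng.uniform(-1, 1, size=(1, L))     # random hidden biases (fixed)

# initial batch, as in the previous equations
X0 = rng.uniform(-1, 1, size=(50, n))
T0 = np.sin(X0[:, :1]) + X0[:, 1:]
H0 = sigmoid(X0 @ W + b)
P = np.linalg.inv(H0.T @ H0 + lam * np.eye(L))
beta = P @ H0.T @ T0

# k-th arriving chunk of N_k samples
Xk = rng.uniform(-1, 1, size=(Nk, n))
Tk = np.sin(Xk[:, :1]) + Xk[:, 1:]
Hk = sigmoid(Xk @ W + b)

# P_k = P_{k-1} - P_{k-1} H_k^T (I_{N_k} + H_k P_{k-1} H_k^T)^-1 H_k P_{k-1}
P = P - P @ Hk.T @ np.linalg.solve(np.eye(Nk) + Hk @ P @ Hk.T, Hk @ P)
# beta_k = beta_{k-1} + P_k H_k^T (T_k - H_k beta_{k-1})
beta = beta + P @ Hk.T @ (Tk - Hk @ beta)
```

By the matrix inversion lemma, the updated `P` equals $(\lambda I_L + H_0^T H_0 + H_k^T H_k)^{-1}$, and `beta` equals the regularized batch solution over both chunks.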
3. Dynamic Linearization Technique and the New Updating Formula for REOS-ELM
3.1. Dynamic Linearization Technique
MFAC builds a virtual equivalent dynamic linearized model by using a dynamic linearization technique.
The following unknown discrete-time nonlinear system is considered
$$ y_{k+1} = f(y_k, \cdots, y_{k-L_y}, u_k, \cdots, u_{k-L_u}) $$

where $f(\cdot)$ represents an unknown nonlinear function, and $L_y$ and $L_u$ indicate the orders of the system output and the system input, respectively.
To make further study, the following assumptions are used.
Assumption A1.
The system (10) is observable and controllable.
Assumption A2.
$f(\cdot)$ is a smooth nonlinear function, and the partial derivative of $f(\cdot)$ with respect to $u_k$ is continuous.
Assumption A3.
Suppose that for all $k \in \{0, 1, \cdots, T\}$, if $\Delta u_k \neq 0$, then system (10) satisfies the generalized Lipschitz condition, that is,

$$ |\Delta y_{k+1}| \leq L_b |\Delta u_k|, $$

where $\Delta y_{k+1} = y_{k+1} - y_k$, $\Delta u_k = u_k - u_{k-1}$, and $L_b > 0$ is a constant.
Remark 1.
This assumption means that a bounded change of the input cannot cause an unbounded change of the system output, which is satisfied by many industrial processes [25].
Theorem 1.
For the unknown discrete-time nonlinear system (10) satisfying Assumptions A1–A3, there must exist a parameter $\psi_k$, called the pseudo-partial-derivative (PPD), such that when $\Delta u_k \neq 0$, system (10) can be rewritten as

$$ \Delta y_{k+1} = \psi_k \Delta u_k $$

where $|\psi_k| \leq L_b$.
Proof.
See ([33], Theorem 4.1). □
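A tiny numerical illustration of the theorem: for an assumed toy plant (not the system used later in Section 5), the PPD is simply the ratio of output and input increments whenever $\Delta u_k \neq 0$.

```python
import numpy as np

# Toy illustration of the PPD of Theorem 1. The plant below is an assumed
# example; psi_k = Delta y_{k+1} / Delta u_k exists whenever Delta u_k != 0.

def plant(y, u):
    return 0.5 * y + u + 0.1 * u ** 2

u = [0.0, 0.3, 0.5, 0.2]        # assumed input sequence with nonzero increments
y = [0.0]
for uk in u:
    y.append(plant(y[-1], uk))  # y_{k+1} = f(y_k, u_k)

psi = []
for k in range(1, len(u)):
    du = u[k] - u[k - 1]
    dy = y[k + 1] - y[k]
    psi.append(dy / du)         # pseudo-partial-derivative at step k
```

By construction, the linearized model $\Delta y_{k+1} = \psi_k \Delta u_k$ reproduces every output increment exactly; the point of MFAC is that $\psi_k$ can be estimated from I/O data alone.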
Assumption A4.
For the unknown discrete-time nonlinear system (10) satisfying Assumptions A1–A3, there exists $\beta^*$ such that, when $\Delta u_k \neq 0$, system (10) can be rewritten as

$$ y_{k+1} = y_k + H_k \beta^* \Delta u_k + \Delta_k \Delta u_k $$

where

$$ H_k \beta^* = \sum_{i=1}^{L} \beta_i^* G(w_i, b_i, x_k), $$

$\sup |\Delta_k| \leq \Delta$, and $\Delta$ is a given upper bound; $w_i$ and $b_i$ are randomly generated and then kept constant.
3.2. The Updating Formula Based on Dead-Zone Characteristics
The discrete-time nonlinear system (10) is a special case in which the training data arrive one by one, i.e., $N_k = 1$.
When $H_k \hat{\beta}_k \neq 0$ and $\left| \frac{y_{k+1}^* - y_k}{H_k \hat{\beta}_k} \right| \leq M$, the control law is

$$ u_k = \frac{y_{k+1}^* - y_k}{H_k \hat{\beta}_k} + u_{k-1} $$

where $M > 0$ is a constant, $y_{k+1}^*$ is the desired signal, and $\hat{\beta}_k$ is defined below.
When $H_k \hat{\beta}_k = 0$, or $\left| \frac{y_{k+1}^* - y_k}{H_k \hat{\beta}_k} \right| > M$, or $\Delta u_k = 0$, a new re-learning process begins.
The updating formula with dead-zone characteristics for REOS-ELM is

$$ P_k^{-1} = P_{k-1}^{-1} + H_k^T H_k \sigma_k, $$

and

$$ \hat{\beta}_{k+1} = \hat{\beta}_k + \sigma_k P_k H_k^T e_{k+1}^*, $$

where

$$ \sigma_k = \begin{cases} 1 & \left( \frac{|e_{k+1}^*|}{1 + \sigma_k H_k P_{k-1} H_k^T} \right)^2 > \Delta^2 \\ 0 & \text{otherwise} \end{cases} $$

and

$$ e_{k+1}^* = \frac{y_{k+1} - y_{k+1}^*}{\Delta u_k}. $$
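One step of this dead-zone update can be sketched as follows for a single sample ($N_k = 1$); the hidden-layer row $H_k$, the normalized error $e^*_{k+1}$, and all numeric values are placeholders, and $\sigma_k$ is evaluated by checking the dead-zone test before updating.

```python
import numpy as np

# One dead-zone update step for REOS-ELM with a single sample (N_k = 1).
# H_k, e*_{k+1}, lambda and Delta are assumed placeholder values.

rng = np.random.default_rng(2)
L, lam, Delta = 8, 0.1, 0.002
P = np.eye(L) / lam                      # P_0 = (lambda I_L)^{-1}
beta = rng.normal(size=(L, 1))           # current output weights beta_k
Hk = rng.uniform(0, 1, size=(1, L))      # 1 x L hidden-layer output row H_k
e_star = 0.5                             # assumed (y_{k+1} - y*_{k+1}) / Delta u_k

s = float(Hk @ P @ Hk.T)                 # H_k P_{k-1} H_k^T (a scalar here)
# dead zone: update (sigma_k = 1) only when the normalized error exceeds Delta
sigma = 1.0 if (abs(e_star) / (1.0 + s)) ** 2 > Delta ** 2 else 0.0

if sigma:
    # P_k via Lemma 1, then the beta update uses the new P_k
    P = P - sigma * (P @ Hk.T @ Hk @ P) / (1.0 + sigma * s)
    beta = beta + sigma * (P @ Hk.T) * e_star
```

When the normalized error is already inside the dead zone of radius $\Delta$, $\sigma_k = 0$ and both $P_k$ and $\hat{\beta}_{k+1}$ are left unchanged, which is what makes the stability argument in Section 4 go through.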
Lemma 1.
If
$$ P_k^{-1} = P_{k-1}^{-1} + H_k^T H_k \sigma_k $$

where the scalar $\sigma_k > 0$, then $P_k$ is related to $P_{k-1}$ via

$$ P_k = P_{k-1} - \frac{\sigma_k P_{k-1} H_k^T H_k P_{k-1}}{1 + \sigma_k H_k P_{k-1} H_k^T}, $$

and

$$ P_k H_k^T = \frac{P_{k-1} H_k^T}{1 + \sigma_k H_k P_{k-1} H_k^T}. $$
Proof.
It is a direct conclusion of [33]. □
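Both identities of the lemma are easy to verify numerically; the sizes and the value of $\sigma_k$ below are arbitrary assumptions.

```python
import numpy as np

# Numerical check of the two identities of Lemma 1 for an assumed
# positive-definite P_{k-1}, a random row H_k, and sigma_k = 0.7.

rng = np.random.default_rng(3)
L, sigma = 6, 0.7
A = rng.normal(size=(L, L))
P_prev = A @ A.T + np.eye(L)            # positive-definite P_{k-1}
Hk = rng.normal(size=(1, L))

# P_k from the information-matrix form P_k^{-1} = P_{k-1}^{-1} + sigma H^T H
Pk = np.linalg.inv(np.linalg.inv(P_prev) + sigma * (Hk.T @ Hk))

s = float(Hk @ P_prev @ Hk.T)
# first identity: rank-one (Sherman-Morrison) form of P_k
Pk_lemma = P_prev - sigma * (P_prev @ Hk.T @ Hk @ P_prev) / (1.0 + sigma * s)
print(np.allclose(Pk, Pk_lemma))                                   # True
# second identity: P_k H_k^T = P_{k-1} H_k^T / (1 + sigma H_k P_{k-1} H_k^T)
print(np.allclose(Pk @ Hk.T, P_prev @ Hk.T / (1.0 + sigma * s)))   # True
```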
4. The MFAC Method Based on EMREOS-ELM
In this section, we introduce the MFAC method based on EMREOS-ELM, which is performed in three main phases: parameter initialization, parameter learning, and adjustment of the network structure. EMREOS-ELM is an online learning algorithm that trains a single-hidden-layer neural network and can also adjust the network structure. When the system is initialized or the network structure is adjusted, EMREOS-ELM needs a certain amount of training data for retraining; in this paper, the size of this training set is 200, that is, $K_m = 200$ and $I_m = 200$. The maximum and minimum numbers of hidden nodes are usually selected based on the complexity of the unknown discrete-time nonlinear system. Figure 1 illustrates the process of the proposed algorithm.
4.1. Initialization Phase
-
$u_1$ and $u_2$ are two random values with $|u_1| \leq 1$ and $|u_2| \leq 1$; set $k = 3$.
-
Set $M = 100$, $L_m = 20$, and $L_1 = L_2 = 8$; $L_m$ denotes the maximum number of hidden nodes, and $L_1$ and $L_2$ denote the minimum number of hidden nodes.
-
Measure the outputs $y_2$ and $y_3$ of system (10).
-
Assign random parameters to the hidden nodes $(w_i, b_i)$, where $i = 1, \cdots, L_2$, and set $\lambda = 0.1$.
-
Using the first sample datum $x_1 = [y_2, u_1]$, initialize $P_1$ and $\hat{\beta}_2$:

$$ P_0 = (\lambda I_{L_2})^{-1}, $$

$$ P_1 = (\lambda I_{L_2} + H_1^T H_1)^{-1}, $$

and

$$ \hat{\beta}_1 = P_1 H_1^T e_3^*. $$

Define

$$ \hat{\beta}_2 \triangleq \hat{\beta}_1. $$
-
Using the $k$th sample datum $x_k = [y_k, u_{k-1}]$, calculate $H_k$.
When $H_k \hat{\beta}_k \neq 0$ and $\left| \frac{y_{k+1}^* - y_k}{H_k \hat{\beta}_k} \right| \leq M$, then

$$ u_k = \frac{y_{k+1}^* - y_k}{H_k \hat{\beta}_k} + u_{k-1}. $$

When $H_k \hat{\beta}_k = 0$, or $\left| \frac{y_{k+1}^* - y_k}{H_k \hat{\beta}_k} \right| > M$, or $|\Delta u_k| = 0$, a new re-learning process begins.
-
Measure the output $y_{k+1}$ of system (10).
-
Set $E_m = 0.05$, $E_V = 50$, and calculate $E_k$:

$$ E_k = \sum_{i = k - E_V}^{k-1} \left( y^*(i) - y(i) \right)^2 $$

where $E_V$ and $E_m$ are two values that are important for adjusting the network structure. Appropriate values of $E_V$ and $E_m$ can improve the tracking performance of the system, whereas too small an $E_V$ or too large an $E_m$ makes EMREOS-ELM invalid; these values can be chosen based on the performance requirements of the system.
-
When the tracking error does not meet the requirements of the system, the network structure is adjusted; this adjustment is the core of the proposed EMREOS-ELM. If $E_k \geq E_m \wedge L_k \leq L_m \wedge I \geq I_m \wedge k \geq K_m$, then execute (II); otherwise, execute (I).
(I)
Using the $k$th training datum $x_k = [y_k, u_{k-1}]$ and $e_{k+1}^*$, update the output weights:

$$ P_k = P_{k-1} - \frac{\sigma_k P_{k-1} H_k^T H_k P_{k-1}}{1 + \sigma_k H_k P_{k-1} H_k^T} $$

and

$$ \hat{\beta}_{k+1} = \hat{\beta}_k + \sigma_k P_k H_k^T e_{k+1}^*, $$

where

$$ \sigma_k = \begin{cases} 1 & \left( \frac{|e_{k+1}^*|}{1 + \sigma_k H_k P_{k-1} H_k^T} \right)^2 > \Delta^2 \\ 0 & \text{otherwise} \end{cases} $$

and

$$ e_{k+1}^* = \frac{y_{k+1} - y_{k+1}^*}{\Delta u_k}. $$
-
$k \Leftarrow k+1$, $L_k = L_{k-1}$, $I \Leftarrow I+1$, and go to Section 4.2.
(II)
Go to Section 4.3.
-
Set $L_k = L_{k-1} + 1$, and assign random parameters to the $L_k$th hidden node $(w_{L_k}, b_{L_k})$.
-
When the network structure is adjusted, it is equivalent to appending a new column to $H_k$:

$$ \Delta\delta = G(w_{L_k}, b_{L_k}, x_k). $$

Then

$$ H_k^* = \left[ H_k \mid \Delta\delta \right]. $$
-
The pseudoinverse $(H_k^*)^{\dagger}$ of the new matrix is

$$ (H_k^*)^{\dagger} = \begin{bmatrix} (H_k)^{\dagger} - d\,b \\ b \end{bmatrix} $$

where

$$ d = (H_k)^{\dagger} \Delta\delta, $$

$$ b = \begin{cases} c^{\dagger} & c \neq 0 \\ (1 + d^T d)^{-1} d^T (H_k)^{\dagger} & c = 0, \end{cases} $$

and

$$ c = \Delta\delta - H_k d. $$
-
Calculate $\hat{\beta}_{k+1}$:

$$ \hat{\beta}_{k+1} = \begin{bmatrix} \hat{\beta}_k - d\,b\,\Delta\delta \\ b\,\Delta\delta \end{bmatrix}. $$
-
Initialize $P_k$:

$$ P_k = \lambda \times I_{L_k} $$

where $I_{L_k}$ denotes the $L_k \times L_k$ identity matrix.
-
Set $u_k$ as a random number with $|u_k| \leq 1$.
-
Set $k \Leftarrow k+1$, $I = 1$, and go to Section 4.2.
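The node-addition step above relies on a partitioned-matrix (Greville-type) pseudoinverse update. The sketch below checks that recursion against a direct pseudoinverse; the matrix sizes and random data are assumptions, and the appended column plays the role of $\Delta\delta$.

```python
import numpy as np

# Check of the partitioned-matrix pseudoinverse update used when a hidden
# node is added: append a column delta to H_k and compare the recursion with
# a direct pseudoinverse. Sizes and data are assumptions.

rng = np.random.default_rng(4)
N, L = 20, 8
H = rng.normal(size=(N, L))             # current hidden-layer output matrix
delta = rng.normal(size=(N, 1))         # new column from the added hidden node

Hp = np.linalg.pinv(H)
d = Hp @ delta
c = delta - H @ d                       # part of delta outside the range of H

if np.linalg.norm(c) > 1e-10:
    b_row = np.linalg.pinv(c)           # c^dagger, a 1 x N row
else:
    b_row = (d.T @ Hp) / (1.0 + float(d.T @ d))

# pseudoinverse of [H | delta] built from the partitioned update
H_new_pinv = np.vstack([Hp - d @ b_row, b_row])
print(np.allclose(H_new_pinv, np.linalg.pinv(np.hstack([H, delta]))))  # True
```

Because only the contribution of the new column is computed, this update is much cheaper than recomputing the full pseudoinverse, which is the reason EMREOS-ELM keeps the existing input weights and biases when the network grows.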
Theorem 2.
For system (10), if the updating formulas described by (16) and (17) are adopted, the following results can be obtained.
(1)
$\| \beta^* - \hat{\beta}_k \|^2 \leq \iota_1 \| \beta^* - \hat{\beta}_1 \|^2$, where $\iota_1 = \lambda_{\max}(P_0^{-1}) / \lambda_{\min}(P_0^{-1})$, and $\lambda_{\max}(P_0^{-1})$ and $\lambda_{\min}(P_0^{-1})$ are the maximum and minimum eigenvalues of the matrix $P_0^{-1}$, respectively.
(2)
$$ \lim_{N \to \infty} \sum_{k=1}^{N} \sigma_k \left[ \left( \frac{|e_{k+1}^*|}{1 + \sigma_k H_k P_{k-1} H_k^T} \right)^2 - \Delta^2 \right] < \infty $$

and this implies

(a)

$$ \lim_{k \to \infty} \sigma_k \left[ \left( \frac{|e_{k+1}^*|}{1 + \sigma_k H_k P_{k-1} H_k^T} \right)^2 - \Delta^2 \right] = 0 $$

(b)

$$ \limsup_{k \to \infty} \frac{|e_{k+1}^*|^2}{\left( 1 + \sigma_k H_k P_{k-1} H_k^T \right)^2} \leq \Delta^2 $$
Proof.
Define
$$ \gamma_k \triangleq \frac{1}{1 + \sigma_k H_k P_{k-1} H_k^T}. $$

Equation (42) can be obtained from (22) and (17):

$$ \tilde{\beta}_{k+1} = \tilde{\beta}_k - \sigma_k \gamma_k P_{k-1} H_k^T e_{k+1}^* $$

where $\tilde{\beta}_{k+1} = \beta^* - \hat{\beta}_{k+1}$. Equation (43) can be obtained from Assumption A4 and (42):

$$ \begin{aligned} H_k \tilde{\beta}_{k+1} + \Delta_k &= H_k \tilde{\beta}_{k+1} + \frac{y_{k+1} - y_k}{\Delta u_k} - H_k \beta^* \\ &= H_k \tilde{\beta}_k - H_k \sigma_k \gamma_k P_{k-1} H_k^T e_{k+1}^* + \frac{y_{k+1} - y_k}{\Delta u_k} - H_k \beta^* \\ &= \frac{y_{k+1} - y_k}{\Delta u_k} - H_k \hat{\beta}_k - H_k \sigma_k \gamma_k P_{k-1} H_k^T e_{k+1}^* \\ &= e_{k+1}^* - \frac{\sigma_k H_k P_{k-1} H_k^T e_{k+1}^*}{1 + \sigma_k H_k P_{k-1} H_k^T} \\ &= \gamma_k e_{k+1}^*. \end{aligned} $$
Introducing the Lyapunov candidate $V_{k+1} = \tilde{\beta}_{k+1}^T P_k^{-1} \tilde{\beta}_{k+1}$ and substituting (20) into it, we obtain

$$ \begin{aligned} V_{k+1} &= \tilde{\beta}_{k+1}^T P_k^{-1} \tilde{\beta}_{k+1} \\ &= \tilde{\beta}_{k+1}^T P_{k-1}^{-1} \tilde{\beta}_{k+1} + \sigma_k \tilde{\beta}_{k+1}^T H_k^T H_k \tilde{\beta}_{k+1} \\ &= \left[ \tilde{\beta}_k - \sigma_k \gamma_k P_{k-1} H_k^T e_{k+1}^* \right]^T P_{k-1}^{-1} \left[ \tilde{\beta}_k - \sigma_k \gamma_k P_{k-1} H_k^T e_{k+1}^* \right] + \sigma_k \tilde{\beta}_{k+1}^T H_k^T H_k \tilde{\beta}_{k+1} \\ &= V_k - 2 \sigma_k \gamma_k H_k \tilde{\beta}_k e_{k+1}^* + \sigma_k^2 \gamma_k^2 (e_{k+1}^*)^2 H_k P_{k-1} H_k^T + \sigma_k (H_k \tilde{\beta}_{k+1})^2. \end{aligned} $$

Substituting (43) into the above equation, then

$$ \begin{aligned} V_{k+1} &= V_k - 2 \sigma_k H_k \tilde{\beta}_k (H_k \tilde{\beta}_{k+1} + \Delta_k) + \sigma_k^2 H_k P_{k-1} H_k^T (H_k \tilde{\beta}_{k+1} + \Delta_k)^2 + \sigma_k (H_k \tilde{\beta}_{k+1})^2 \\ &= V_k - 2 \sigma_k H_k \left( \tilde{\beta}_{k+1} + \sigma_k \gamma_k P_{k-1} H_k^T e_{k+1}^* \right) (H_k \tilde{\beta}_{k+1} + \Delta_k) + \sigma_k^2 H_k P_{k-1} H_k^T (H_k \tilde{\beta}_{k+1} + \Delta_k)^2 + \sigma_k (H_k \tilde{\beta}_{k+1})^2 \\ &= V_k - \sigma_k (H_k \tilde{\beta}_{k+1})^2 - 2 \sigma_k H_k \tilde{\beta}_{k+1} \Delta_k - \sigma_k^2 H_k P_{k-1} H_k^T (H_k \tilde{\beta}_{k+1} + \Delta_k)^2 \\ &\leq V_k - \sigma_k (H_k \tilde{\beta}_{k+1})^2 - 2 \sigma_k H_k \tilde{\beta}_{k+1} \Delta_k \\ &= V_k + \sigma_k \Delta_k^2 - \sigma_k (H_k \tilde{\beta}_{k+1} + \Delta_k)^2, \end{aligned} $$

then

$$ \begin{aligned} V_{k+1} &\leq V_k + \sigma_k \Delta_k^2 - \sigma_k (H_k \tilde{\beta}_{k+1} + \Delta_k)^2 = V_k + \sigma_k \Delta_k^2 - \sigma_k \gamma_k^2 (e_{k+1}^*)^2 \\ &= V_k - \sigma_k \left( \gamma_k^2 (e_{k+1}^*)^2 - \Delta_k^2 \right) \leq V_k - \sigma_k \left( \gamma_k^2 (e_{k+1}^*)^2 - \Delta^2 \right), \end{aligned} $$

so

$$ V_{k+1} - V_k \leq -\sigma_k \left( \gamma_k^2 (e_{k+1}^*)^2 - \Delta^2 \right) \leq 0, $$

and $V_k$ is nonnegative.
Then
$$ \tilde{\beta}_{k+1}^T P_k^{-1} \tilde{\beta}_{k+1} \leq \tilde{\beta}_k^T P_{k-1}^{-1} \tilde{\beta}_k \leq \cdots \leq \tilde{\beta}_1^T P_0^{-1} \tilde{\beta}_1. $$
The following inequalities can be obtained from the matrix inversion lemma,
$$ \lambda_{\min}(P_k^{-1}) \geq \lambda_{\min}(P_{k-1}^{-1}) \geq \lambda_{\min}(P_0^{-1}), $$

and this implies

$$ \lambda_{\min}(P_0^{-1}) \| \tilde{\beta}_{k+1} \|^2 \leq \lambda_{\min}(P_k^{-1}) \| \tilde{\beta}_{k+1} \|^2 \leq \tilde{\beta}_{k+1}^T P_k^{-1} \tilde{\beta}_{k+1} \leq \tilde{\beta}_1^T P_0^{-1} \tilde{\beta}_1 \leq \lambda_{\max}(P_0^{-1}) \| \tilde{\beta}_1 \|^2. $$
This establishes part (1).
Summing both sides of (47) from 1 to $N$, with $V_k$ nonnegative, we have

$$ V_{N+1} \leq V_1 - \sum_{k=1}^{N} \sigma_k \left( \gamma_k^2 (e_{k+1}^*)^2 - \Delta^2 \right), $$

and we immediately get

$$ \lim_{N \to \infty} \sum_{k=1}^{N} \sigma_k \left[ \frac{(e_{k+1}^*)^2}{\left( 1 + \sigma_k H_k P_{k-1} H_k^T \right)^2} - \Delta^2 \right] < \infty, $$
so result (2) holds; then

$$ \lim_{k \to \infty} \sigma_k \left[ \frac{(e_{k+1}^*)^2}{\left( 1 + \sigma_k H_k P_{k-1} H_k^T \right)^2} - \Delta^2 \right] = 0, $$
(a) and (b) hold. □
We also immediately have
$$ \limsup_{k \to \infty} \left[ \frac{e_{k+1}^*}{1 + \sigma_k H_k P_{k-1} H_k^T} \right]^2 \leq \Delta^2. $$

Because the activation function $G(w_i, b_i, x_j)$ is bounded, $e_{k+1}^*$ is bounded. Because $|\Delta u_k| = \left| \frac{y_{k+1}^* - y_k}{H_k \hat{\beta}_k} \right| \leq M$, the tracking error $e_{k+1} = y_{k+1} - y_{k+1}^*$ is bounded.
5. Analysis of Experimental Results
In this paper, EMREOS-ELM is adopted to estimate the value $\psi_k$; it is an online learning algorithm that trains a single-hidden-layer neural network and can adjust the network structure. To further show the effectiveness of the MFAC method based on EMREOS-ELM, it is compared with an MFAC method based on another online learning algorithm that adjusts the network structure in an incremental form; for ease of description, this algorithm is named IREOS-ELM. The MFAC method based on IREOS-ELM also has three phases: parameter initialization, parameter learning, and adjustment of the network structure. During the adjustment of the network structure, IREOS-ELM reinitializes all input weights and hidden biases and retrains the network using the $(k-1)$th training data. In contrast, the MFAC method based on EMREOS-ELM only initializes the input weight and hidden bias of the added hidden node, and it updates the output weights using the pseudoinverse of a partitioned matrix.
In this section, a simulation experiment was carried out, and the MFAC algorithm based on EMREOS-ELM is compared with the MFAC algorithm based on RLS ([33], Equation 5.49), the MFAC method based on IREOS-ELM, and an RBF algorithm ([34], Equation 10.22).
In this numerical simulation experiment, the following single-input single-output unknown discrete-time nonlinear system is considered:
$$ y_{k+1} = \frac{y_k}{1 + y_k^2} + u_k^3. $$
The discrete-time nonlinear system (55) comes from [35].
The reference signal is
$$ y_{k+1}^* = 4 + 2 \sin(\pi T_s k) + 1.5 \cos(\pi T_s k) $$

where $T_s$ is the sampling time, and $T_s = 0.01$.
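To give a feel for the experimental setup, the sketch below simulates the plant and reference above in closed loop. Instead of the full EMREOS-ELM estimator, it uses a crude smoothed secant estimate of $\psi_k$ in the control law $u_k = u_{k-1} + (y^*_{k+1} - y_k)/\hat{\psi}_k$, plus an input rate limit for safety; all tuning values are assumptions, so this is only a simplified stand-in for the proposed method.

```python
import numpy as np

# Simplified closed-loop sketch for the simulation plant and reference above.
# A smoothed secant estimate of psi_k replaces the EMREOS-ELM estimator; the
# rate limit, smoothing factor, and initial input are assumed tuning values.

Ts, steps = 0.01, 2000

def plant(y, u):                        # y_{k+1} = y_k/(1 + y_k^2) + u_k^3
    return y / (1.0 + y ** 2) + u ** 3

def ref(k):                             # reference 4 + 2 sin(pi Ts k) + 1.5 cos(pi Ts k)
    return 4.0 + 2.0 * np.sin(np.pi * Ts * k) + 1.5 * np.cos(np.pi * Ts * k)

y = np.zeros(steps + 1)
u = np.zeros(steps)
u[1] = 0.1
y[2] = plant(y[1], u[1])
psi_hat = 1.0
for k in range(2, steps):
    du, dy = u[k - 1] - u[k - 2], y[k] - y[k - 1]
    if abs(du) > 1e-8:                  # secant PPD estimate, only when du != 0
        psi_hat = 0.5 * psi_hat + 0.5 * dy / du
    step = (ref(k + 1) - y[k]) / psi_hat if abs(psi_hat) > 1e-3 else 0.0
    u[k] = u[k - 1] + np.clip(step, -0.05, 0.05)   # rate-limited control move
    y[k + 1] = plant(y[k], u[k])
```

The transient depends on the assumed initialization and rate limit; the point is only to show how the plant, reference, and PPD-based control law fit together.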
The parameters of the MFAC algorithm based on RLS are $\rho_k = 1$, $\lambda_1 = 0.01$, and $a_0 = 0.95$. The activation function of EMREOS-ELM is the sigmoid, $\lambda = 0.1$, and $\Delta = 0.002$. Figures 2–6 show the main results of this numerical simulation experiment. Figure 2 plots the tracking curves, and Figure 3 shows the tracking error. Figure 4 shows the trends of $E_k$ and $L_k$; it indicates that the final value of $L_k$ is 9, and when $L_k = 9$ the tracking error satisfies $E_k \leq E_m$, which shows that EMREOS-ELM is effective in obtaining a compact network structure. Figure 5 shows the curve of $\hat{\psi}_k = H_k \hat{\beta}_k$, the estimated value of $\psi_k$. The stability analysis of the MFAC algorithm based on EMREOS-ELM for unknown discrete-time nonlinear systems was stated in Section 4, yielding inequality (47). In order to verify inequality (47), a new variable is defined as
$$ \Delta v_k \triangleq -\sigma_k \left( \gamma_k^2 (e_{k+1}^*)^2 - \Delta^2 \right). $$

Figure 6 shows the curve of $\Delta v_k$, and it indicates that $\Delta v_k \leq 0$.
In addition, the error curves $e_k^{(1)}$, $e_k^{(2)}$, $e_k^{(3)}$, and $e_k^{(4)}$ are plotted in Figures 7–10, respectively. Here, $e_k^{(1)}$ is the error curve of REOS-ELM with a fixed network structure of 8 hidden nodes; $e_k^{(2)}$ is the error curve of REOS-ELM with a fixed network structure of 20 hidden nodes; $e_k^{(3)}$ is the error curve of the MFAC algorithm based on RLS; and $e_k^{(4)}$ is the error curve of RBF with a fixed network structure of 9 hidden nodes. Figures 7–10 indicate that the tracking performance of EMREOS-ELM is better than that of the four other algorithms.
To further demonstrate the effectiveness of the algorithms, the integral square error (ISE) index of the predicted output is introduced:

$$ e_{ISE} = \sum_{l=5000}^{10000} (y_l^* - y_l)^2. $$
Twenty groups of simulation experiments were carried out, and the ISE values of the six control algorithms are shown in Table 1. The final hidden node numbers are shown in Table 2, which shows that the MFAC method based on EMREOS-ELM is effective in adjusting the network structure. The average ISE values indicate that, compared with the five other control algorithms, the proposed algorithm improves the performance of the control system.
6. Conclusions
In order to analyse the stability of the MFAC algorithm based on REOS-ELM for unknown discrete-time nonlinear systems, an updating formula with dead-zone characteristics is introduced, and EMREOS-ELM is investigated for the purpose of obtaining a more compact network structure and improving the performance of control systems. The proposed MFAC method based on EMREOS-ELM is compared with five other algorithms, and the simulation results show that the proposed algorithm improves the performance of control systems. In this paper, the MFAC method based on EMREOS-ELM is introduced for single-input single-output unknown discrete-time nonlinear systems. In the future, we plan to extend the proposed algorithm to multiple-input multiple-output unknown discrete-time nonlinear systems and to apply it to robot control systems, which are complex nonlinear systems.
Table 1. The ISE values of twenty groups of simulation experiments.
Run | The MFAC Algorithm Based on REOS-ELM (L = 8) | The MFAC Algorithm Based on REOS-ELM (L = 20) | The MFAC Algorithm Based on EMREOS-ELM | RBFNN (L = 9) | The MFAC Algorithm Based on RLS | The MFAC Algorithm Based on IREOS-ELM
---|---|---|---|---|---|---
1 | 0.150672 | 0.002515 | 0.001562 | 0.550053 | 4.449964 | 0.012889 |
2 | 0.019459 | 0.040914 | 0.001705 | 0.502426 | 2.534129 | 0.433985 |
3 | 0.703282 | 0.014989 | 0.005451 | 0.564361 | 2.274406 | 0.016331 |
4 | 0.172610 | 0.150750 | 0.027222 | 0.504968 | 1.909120 | 0.017123 |
5 | 0.083149 | 0.244061 | 0.004806 | 0.402855 | 1.824217 | 0.007621 |
6 | 0.696841 | 0.009112 | 0.006475 | 0.411857 | 2.548398 | 0.004997 |
7 | 1.568039 | 0.051220 | 0.014730 | 0.591908 | 2.222298 | 0.182836 |
8 | 0.322919 | 0.024096 | 0.00295 | 0.584721 | 1.839572 | 0.009986 |
9 | 0.051606 | 0.576331 | 0.00295 | 0.524870 | 1.799829 | 0.048388 |
10 | 0.158293 | 0.678617 | 0.003559 | 0.696158 | 2.231448 | 0.001695 |
11 | 0.055251 | 0.004448 | 0.014255 | 0.551650 | 2.409947 | 0.015570 |
12 | 0.008733 | 0.004131 | 0.019071 | 0.493070 | 1.913370 | 0.046355 |
13 | 0.376944 | 0.008184 | 0.004649 | 0.553147 | 2.946602 | 0.012967 |
14 | 0.297452 | 0.01459 | 0.002863 | 0.633631 | 1.906357 | 0.054175 |
15 | 0.132094 | 0.329650 | 0.002116 | 0.730532 | 2.456959 | 0.016178 |
16 | 0.099835 | 0.194384 | 0.036917 | 0.779908 | 3.446312 | 0.017376 |
17 | 0.222907 | 0.002091 | 0.011495 | 0.431327 | 1.952456 | 0.049845 |
18 | 0.021288 | 0.002629 | 0.005704 | 0.561705 | 2.045576 | 0.049984 |
19 | 0.481443 | 0.004517 | 0.025093 | 0.582250 | 2.380352 | 0.007150 |
20 | 0.018625 | 0.212689 | 0.001724 | 0.664968 | 2.012102 | 0.016439 |
average value | 0.282072 | 0.128496 | 0.009729 | 0.565818 | 2.354829 | 0.051094 |
Table 2. The final hidden node numbers of twenty groups of simulation experiments.
Algorithm | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
The MFAC algorithm based on EMREOS-ELM | 9 | 9 | 10 | 11 | 10 | 14 | 9 | 9 | 10 | 10 | 10 | 10 | 9 | 11 | 10 | 11 | 11 | 11 | 10 | 9 |
The MFAC algorithm based on IREOS-ELM | 10 | 11 | 10 | 9 | 12 | 11 | 11 | 11 | 10 | 10 | 12 | 10 | 9 | 11 | 10 | 14 | 11 | 9 | 9 | 10 |
Author Contributions
Conceptualization and Methodology, X.Z. and H.M.; Writing-draft version, X.Z.; Writing-review editing, X.Z. and H.M.
Funding
This work is partially supported by the Ministry of Science and Technology of the People's Republic of China under Grant No. 2017YFF0205306, and by the National Natural Science Foundation of China under Grant No. 91648117.
Conflicts of Interest
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
MFAC | model free adaptive control |
I/O | input/output |
DDC | data-driven control |
PID | proportional-integral derivative |
UC | unfalsified control |
RBFNN | radial basis function neural network |
EMREOS-ELM | error minimized regularized online sequential extreme learning machine |
ELM | extreme learning machine |
OS-ELM | online sequential extreme learning machine |
REOS-ELM | regularized online sequential extreme learning machine |
EMOS-ELM | error minimized online sequential extreme learning machine |
© 2019. This work is licensed under http://creativecommons.org/licenses/by/3.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Abstract
Model-free adaptive control (MFAC) builds a virtual equivalent dynamic linearized model by using a dynamic linearization technique. This model contains some time-varying parameters, which usually include high nonlinearity implicitly, and the control performance degrades if this nonlinearity is severe. In this paper, first, a novel learning algorithm named error minimized regularized online sequential extreme learning machine (EMREOS-ELM) is investigated. Second, EMREOS-ELM is used to estimate those time-varying parameters; a model-free adaptive control method based on EMREOS-ELM is introduced for single-input single-output unknown discrete-time nonlinear systems, and the stability of the proposed algorithm is guaranteed by theoretical analysis. Finally, the proposed algorithm is compared with five other control algorithms on an unknown discrete-time nonlinear system, and simulation results show that it can improve the performance of control systems.