1. Introduction
Entropy is a measure of the complexity of time series [1,2,3,4,5]. Among entropy measures, those based on Shannon entropy [6] are the most widely used, including permutation entropy (PE) [7] and dispersion entropy (DE) [8]. The definition of PE is based on the ordinal relationships within a time series. The concept of PE is simple and its calculation is fast [9], but its stability is not very good. Dispersion entropy (DE) was therefore proposed as an improved algorithm of PE; it is little influenced by burst signals and has good stability [10]. These two entropies and their improved variants have shown good results in various fields [11,12,13].
PE is one of the most commonly used time series complexity estimators and has clearly proved its usefulness in mechanical engineering, mainly in fault diagnosis. Taking fault diagnosis of rolling bearings as an example, the advantage of PE is that it is not limited by the bearing signals or the length of the permutation samples [14]. However, PE does not take into account differences between amplitude values. To consider the amplitude information of a time series, its complexity can be analyzed by weighted permutation entropy (WPE) [15]. WPE not only has the same advantages as PE, but can also detect the complexity of dynamic mutations by quantifying amplitude information. Other application fields of PE and WPE have also received great attention [16,17,18,19].
As an improved algorithm of PE, DE also introduces amplitude information, distinguishes different types of signals easily, and is fast to compute. DE has been used widely in bearing signal classification. In feature extraction experiments for bearing fault diagnosis, DE can classify bearing faults from short data and has high recognition accuracy with small samples [20]. However, DE cannot evaluate the fluctuations of a time series. Thus, fluctuation information was combined with DE to obtain fluctuation dispersion entropy (FDE) [21]. FDE takes the fluctuations of a time series into account, which allows it to discriminate deterministic from stochastic time series; relative to DE, it also reduces the number of possible dispersion patterns, speeding up the entropy calculation. DE and FDE have since achieved good results in medicine and underwater acoustics [22,23]. To make DE features more significant, fractional calculus has been combined with DE [24], where fractional calculus introduces fractional-order information into the entropy [25]. Similarly, fractional fuzzy entropy combines fractional calculus with fuzzy entropy (FE) [26].
Slope entropy (SlEn) is a time series complexity estimator proposed in recent years. Its concept is simple, being based only on the amplitudes of the time series and five modules. Since it was proposed, it has been used in medicine, hydroacoustics, and fault diagnosis. SlEn was first proposed by David Cuesta-Frau in 2019 and successively applied to the classification of electroencephalographic (EEG) and electromyographic (EMG) records [27], the classification of activity records of patients with bipolar disorder [29], and feature extraction from fever time series [28]. SlEn has also been used to extract features of ship-radiated noise signals [30] and bearing fault signals [31].
SlEn has proven to be a strong feature. However, it has not received the attention it deserves. One major factor is the influence of the two threshold parameter settings on its performance; to solve this problem, we introduce the particle swarm optimization (PSO) algorithm to optimize these two threshold parameters. Another factor is that there is still room for improvement in the basic SlEn; to make the extracted features more significant, we combine fractional calculus with SlEn. Finally, a new algorithm named PSO fractional SlEn (PSO-FrSlEn) is proposed in this paper as an improved time series complexity indicator based on SlEn.
The structure of this paper is divided as follows. Section 2 introduces the algorithm steps of the proposed method in detail. Section 3 exhibits the experimental process of this paper briefly. Section 4 and Section 5 demonstrate the experiment and analysis of single feature extraction and double feature extraction separately. Finally, the innovations of this paper and the conclusions of the experiments are drawn in Section 6.
2. Algorithms
2.1. Slope Entropy Algorithm
For a given time series X = {x_1, x_2, …, x_N} of length N, SlEn is calculated according to the following steps.
-
Step 1:
set an embedding dimension m, where m is greater than 2 and much less than N, which divides the time series into k = N − m + 1 subsequences. The decomposition is as follows:
X_i = {x_i, x_{i+1}, …, x_{i+m−1}}, i = 1, 2, …, N − m + 1 (1)
Here, each subsequence contains m elements; for example, X_1 = {x_1, x_2, …, x_m}.
-
Step 2:
for each pair of adjacent elements in the subsequences obtained in Step 1, subtract the former element from the latter to obtain k difference sequences. The new form is as follows:
D_i = {d_i(1), d_i(2), …, d_i(m−1)}, i = 1, 2, …, N − m + 1 (2)
Here, the element d_i(j) = x_{i+j} − x_{i+j−1}, and each sequence contains m − 1 elements; for example, D_1 = {x_2 − x_1, x_3 − x_2, …, x_m − x_{m−1}}.
-
Step 3:
introduce the two threshold parameters γ and δ of SlEn, where 0 < δ < γ, and compare all elements of the sequences obtained in Step 2 with the positive and negative values of these two thresholds. The values ±γ and ±δ serve as dividing lines, splitting the number line into five modules: 2, 1, 0, −1, and −2. If d > γ, the module is 2; if δ < d ≤ γ, the module is 1; if −δ ≤ d ≤ δ, the module is 0; if −γ ≤ d < −δ, the module is −1; and if d < −γ, the module is −2. The intuitive module division principle is shown by the coordinate axis in Figure 1 below:
The form of the sequences is below:
S_i = {s_i(1), s_i(2), …, s_i(m−1)}, i = 1, 2, …, N − m + 1 (3)
Here, each element of S_i is one of 2, 1, 0, −1, or −2, and identical symbol sequences can occur.
-
Step 4:
the number of modules is 5, so there are at most 5^(m−1) possible types of symbol sequence. For example, when m is 3 there are at most 25 types of S_i, namely (2, 2), (2, 1), …, (0, 0), …, (−2, −1), (−2, −2). The number of occurrences of each type j is recorded as n_j, and the frequency of each type is calculated as follows:
p_j = n_j / (N − m + 1) (4)
-
Step 5:
based on the classical Shannon entropy, the formula of SlEn is defined as follows:
SlEn(m, γ, δ) = −Σ_j p_j ln p_j (5)
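Steps 1–5 above can be condensed into a short, dependency-free sketch; the threshold defaults below (γ = 0.5, δ = 0.05) are illustrative placeholders, not values from this paper:

```python
import math

def slope_entropy(x, m=4, gamma=0.5, delta=0.05):
    """Slope entropy (SlEn) of a 1-D sequence, following Steps 1-5:
    embed with dimension m, take adjacent differences, symbolize each
    difference into one of five modules via the thresholds, then apply
    Shannon entropy to the pattern frequencies."""
    n = len(x)
    counts = {}
    for i in range(n - m + 1):                 # Step 1: sliding subsequences
        pattern = []
        for j in range(i + 1, i + m):          # Step 2: adjacent differences
            d = x[j] - x[j - 1]
            if d > gamma:                      # Step 3: five-module symbolization
                s = 2
            elif d > delta:
                s = 1
            elif d >= -delta:
                s = 0
            elif d >= -gamma:
                s = -1
            else:
                s = -2
            pattern.append(s)
        key = tuple(pattern)
        counts[key] = counts.get(key, 0) + 1   # Step 4: count pattern types
    total = n - m + 1
    return -sum((c / total) * math.log(c / total)   # Step 5: Shannon formula
                for c in counts.values())
```

A constant series yields a single all-zero pattern and therefore zero entropy, while a series whose slopes alternate between large positive and large negative values spreads probability over more patterns and raises the entropy.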
2.2. Fractional Slope Entropy Algorithm
In this paper, the concept of fractional order is introduced into SlEn for the first time, and the calculation formula of the improved algorithm of SlEn (FrSlEn) is obtained through the following steps.
-
Step 1:
Shannon entropy was the first entropy to be generalized with fractional calculus, and its generalized expression is as follows:
S_α = Σ_j p_j {−(p_j^(−α) / Γ(α + 1)) [ln p_j + ψ(1) − ψ(1 − α)]} (6)
Here, α is the order of the fractional derivative, and Γ(·) and ψ(·) are the gamma and digamma functions, respectively. When α = 0, Equation (6) reduces to the classical Shannon entropy.
-
Step 2:
extract the fractional-order information term of order α from Equation (6):
I_α(p_j) = −(p_j^(−α) / Γ(α + 1)) [ln p_j + ψ(1) − ψ(1 − α)] (7)
-
Step 3:
combine the fractional order with SlEn by replacing −ln p_j in Equation (5) with Equation (7). Therefore, the formula of FrSlEn is defined as follows:
FrSlEn(m, γ, δ, α) = Σ_j p_j {−(p_j^(−α) / Γ(α + 1)) [ln p_j + ψ(1) − ψ(1 − α)]} (8)
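As a concrete check of the fractional weighting, the sketch below implements the fractional-order term and verifies that it collapses to the Shannon case at α = 0. The digamma helper is a standard asymptotic approximation, included only because the Python standard library provides `math.gamma` but no digamma:

```python
import math

def digamma(z):
    """Digamma function psi(z) for z > 0, via the recurrence
    psi(z) = psi(z + 1) - 1/z plus the standard asymptotic series."""
    result = 0.0
    while z < 6.0:
        result -= 1.0 / z
        z += 1.0
    inv = 1.0 / z
    inv2 = inv * inv
    result += (math.log(z) - 0.5 * inv
               - inv2 * (1.0 / 12 - inv2 * (1.0 / 120 - inv2 / 252)))
    return result

def fractional_term(p, alpha):
    """Fractional-order information of a probability p: the term that
    replaces -ln(p) inside Shannon entropy (Equation (7))."""
    return (-(p ** -alpha) / math.gamma(alpha + 1)
            * (math.log(p) + digamma(1.0) - digamma(1.0 - alpha)))

def fr_entropy(probs, alpha):
    """Fractional entropy of a distribution (Equation (8), given the
    slope-pattern probabilities p_j from SlEn Step 4)."""
    return sum(p * fractional_term(p, alpha) for p in probs)
```

At α = 0 the weighting collapses to −ln p, so `fr_entropy([0.5, 0.5], 0.0)` equals ln 2; nonzero α in [−0.3, 0.3] reshapes the weighting, matching the range used in the experiments of this paper.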
2.3. Particle Swarm Optimization and Algorithm Process
In order to get a better effect from FrSlEn, we use the particle swarm optimization (PSO) algorithm to optimize the two threshold parameters γ and δ of SlEn. Considering all of the above algorithm steps and conditions, the algorithm flowcharts of SlEn and the three kinds of improved SlEn are shown in Figure 2:
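The PSO step can be sketched as follows. This is a generic PSO minimizer, not the paper's exact configuration: the swarm size, inertia weight, and acceleration coefficients are common defaults, and the demo fitness is a toy quadratic standing in for whatever feature-quality score the authors optimize over (γ, δ):

```python
import random

def pso(fitness, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (minimization) over a box."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Demo: a hypothetical optimum at (gamma, delta) = (0.6, 0.1). The bounds
# reflect Section 3's observation that the thresholds stay below 1 and
# delta is usually below 0.2.
best, best_val = pso(lambda p: (p[0] - 0.6) ** 2 + (p[1] - 0.1) ** 2,
                     bounds=[(0.2, 1.0), (0.01, 0.2)])
```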
3. Proposed Feature Extraction Methods
The experiment of this paper is divided into two parts: single feature extraction and double feature extraction. The specific experimental process of single feature extraction is as follows.
-
Step 1:
the 10 kinds of bearing signals are normalized, which makes the signals neat and regular and keeps the threshold parameters γ and δ below 1, with δ below 0.2 in most cases.
-
Step 2:
the five kinds of single features of these 10 kinds of normalized bearing signals are extracted separately under seven different fractional orders.
-
Step 3:
the distribution of the features is obtained and the degree of mixing between the feature points is observed.
-
Step 4:
the features are classified into one of the 10 bearing signal classes by the K-Nearest Neighbor (KNN) classifier.
-
Step 5:
the classification accuracies of the features are calculated.
The experimental process of double feature extraction is roughly the same as that of single feature extraction. In Step 2 of double feature extraction, any two different fractional orders out of the seven are combined as double features, which yields 21 double feature combinations. The experimental process flowchart is shown in Figure 3:
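As an illustration of the five-step flow (normalize, segment, extract, classify, score), here is a self-contained sketch on synthetic two-class signals. Everything in it is a stand-in: min-max normalization is assumed, the scalar feature is a simple mean absolute slope rather than PSO-FrSlEn, samples are 400 points instead of 4000, and a 1-NN rule replaces the paper's KNN settings:

```python
import math
import random

def normalize(sig):
    """Min-max normalization to [0, 1] (the paper's exact normalization
    method is not stated; min-max is assumed here)."""
    lo, hi = min(sig), max(sig)
    return [(v - lo) / (hi - lo) for v in sig]

def segment(sig, length=400):
    """Cut a signal into non-overlapping samples of `length` points
    (the paper uses 4000-point samples; 400 keeps this demo quick)."""
    return [sig[i:i + length] for i in range(0, len(sig) - length + 1, length)]

def feature(sample):
    """Stand-in scalar feature: mean absolute slope of the sample
    (a placeholder for PSO-FrSlEn, keeping the sketch dependency-free)."""
    return sum(abs(b - a) for a, b in zip(sample, sample[1:])) / (len(sample) - 1)

def nn_classify(train, test_feats):
    """1-NN on scalar features; `train` holds (feature, label) pairs."""
    return [min(train, key=lambda t: abs(t[0] - f))[1] for f in test_feats]

# Two synthetic signal classes: a slowly varying tone and broadband noise.
rng = random.Random(1)
signals = {
    "smooth": [math.sin(0.05 * i) for i in range(12000)],
    "rough": [rng.uniform(-1.0, 1.0) for _ in range(12000)],
}

data = []
for label, sig in signals.items():
    for s in segment(normalize(sig)):      # 30 samples per class
        data.append((feature(s), label))

rng.shuffle(data)
half = len(data) // 2                      # 50/50 train/test split, as in the paper
train, test = data[:half], data[half:]
preds = nn_classify(train, [f for f, _ in test])
accuracy = sum(p == lbl for p, (_, lbl) in zip(preds, test)) / len(test)
```

Because the stand-in feature separates the two synthetic classes cleanly, the sketch classifies every test sample correctly; the experiments below play out the same pipeline with real bearing data and entropy features.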
4. Single Feature Extraction
4.1. Bearing Signals
The object of this paper is the bearing signal. Ten kinds of bearing signals with different fault types and fault diameter sizes under the same working state were randomly selected and downloaded for this paper; all 10 come from the same website [32].
The signal data are measured with a motor load of three horsepower. First, there is a normal bearing signal, coded N-100. The bearing fault signals are divided into three types: inner race faults, ball faults, and outer race faults, where the outer race fault is located at the centered (six o'clock) position. Finally, there are three fault diameter sizes: 0.007 in, 0.014 in, and 0.021 in. According to the three fault types and three fault diameters, the fault signals fall into nine categories, coded IR-108, B-121, OR-133, IR-172, B-188, OR-200, IR-212, B-225, and OR-237.
The data files are in MATLAB format, and each file contains the acceleration time series data of the drive end, fan end, and base. The drive end acceleration time series data are chosen as the experimental data in this paper. The signal data are normalized, and the normalized signals are shown in Figure 4.
4.2. Feature Distribution
In this paper, five kinds of entropies based on Shannon entropy are selected as the features of the above 10 bearing signals for feature extraction. The five kinds of entropies are PE, WPE, DE, FDE, and SlEn, which are renamed FrPE, FrWPE, FrDE, FrFDE, and FrSlEn after combining with the fractional orders.
Parameters shared by different entropies must be set to the same values. FrPE and FrWPE have three parameters, FrDE and FrFDE have five, and FrSlEn has four; the two parameters common to all of them are the embedding dimension (m) and the fractional order (α). All m are set to 4, and α is taken from −0.3 to 0.3, where α = 0 is the case without fractional order. FrPE, FrWPE, FrDE, and FrFDE also share the time lag (τ), and all τ are set to 1. The two parameters specific to FrDE and FrFDE are the number of classes (c) and the mapping approach; the same c is used for both, and the mapping approach for both is the normal cumulative distribution function (NCDF). The two threshold parameters specific to FrSlEn are the large threshold (γ) and the small threshold (δ); they are non-negative and optimized by PSO in this paper, so FrSlEn is renamed PSO-FrSlEn.
According to the sampling lengths of the above signals, most of which are just over 120,000 points, every 4000 sample points are taken as one sample, so there are 30 samples for each kind of bearing signal. Combined with all the parameter settings above, the single features of the 30 samples of each signal are extracted. Taking PSO-FrSlEn under different α as an example, the feature distribution of PSO-FrSlEn is shown in Figure 5 below:
As can be seen from Figure 5, the degree of mixing varies with the fractional order: for some values of α the feature points of B-121, B-225, and OR-237 are obviously mixed; for others, the feature points of N-100, B-121, IR-212, B-225, and OR-237 are mixed with each other; for still others, all feature points except those of OR-200 are mixed to varying degrees; and for one value of α, no kind of feature point is isolated. According to the distribution and degree of confusion of these feature points, we can judge whether each entropy under a given α is a notable feature of the signals. In order to show intuitively whether the features are distinguishable, we also undertake classification experiments.
4.3. Classification Effect Verification
KNN is selected as the classifier for the features; after training, it assigns each feature to the corresponding signal class. After setting the number of nearest neighbors, 15 of the 30 samples of each signal type are taken as training samples and 15 as test samples. Again taking PSO-FrSlEn as an example, the final classification results and distribution of PSO-FrSlEn are shown in Figure 6 below:
It can be seen from the distribution of sample points in Figure 6 that for N-100, IR-108, OR-133, OR-200, IR-212, and B-225, at most five sample points are misclassified; for B-121, more than half of the sample points are misclassified under one fractional order, at most four are misclassified under the other values of α, and under one value the classification is completely correct; for IR-172, most sample points are misclassified under two fractional orders, but the classification is basically correct under the others; for B-188, the classification is very poor for all but one value of α; for OR-237, all sample points are classified correctly under one value of α. It can be concluded that the classification ability of the same entropy differs under different values of α.
The classification accuracies of each entropy under different fractional orders are obtained after calculation. All the accuracies are recorded in Table 1 below, and a line graph is drawn in Figure 7 for comparison.
The following information can be obtained from Table 1 and Figure 7: all classification accuracies are below 90%; the classification accuracies of PSO-FrSlEn under every fractional order are greater than those of all other entropies, and all exceed 80%; and the classification accuracies of DE and SlEn with fractional orders are higher than those without, which indicates that the fractional order can give an entropy higher classification accuracy. In order to further improve the classification accuracy and demonstrate the superiority of PSO-FrSlEn, we add a double feature extraction experiment.
5. Double Feature Extraction
5.1. Feature Distribution
There are 7 values of α, and the classification accuracies of the sample points vary greatly under different α. Therefore, any two different α of the same entropy are combined as a fractional order combination, so each entropy yields 21 groups of fractional order combinations, defined as 21 double feature combinations. There are also 30 samples for each signal in the double feature extraction experiment. Each entropy has 21 double feature combinations, so there are 105 double feature combinations in total. The double feature distributions with the nine highest classification accuracies are shown in Figure 8; there is only one highest classification accuracy each for FrPE, FrWPE, FrDE, and FrFDE, while there are five for PSO-FrSlEn.
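The count of 21 follows from choosing 2 of the 7 fractional orders, and with five entropies this gives the 105 combinations mentioned above:

```python
from itertools import combinations

alphas = [-0.3, -0.2, -0.1, 0.0, 0.1, 0.2, 0.3]
# Unordered pairs of distinct fractional orders -> the double feature combinations
pairs = list(combinations(alphas, 2))   # C(7, 2) = 21 per entropy
n_total = 5 * len(pairs)                # five entropies share the same 21 pairs
```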
We can obtain the following information from Figure 8: for FrPE, most feature points of IR-108, OR-133, IR-172, B-188, B-225, and OR-237 are mixed together; for FrWPE, the feature points of each signal are mixed with each other except those of N-100, OR-200, and IR-212; for FrDE, only a few feature points of IR-108, IR-172, and B-188 are mixed, and most feature points of the other signals form parallel lines; for FrFDE, only the feature points of B-121, IR-172, and B-188 are mixed, but the degree of mixing is great; for PSO-FrSlEn, only one or two feature points of IR-172 fall into the range of B-188, and the feature points of the other signals form their own clusters without mixing into others' ranges.
The degree of mixing and the number of kinds of mixed feature points determine whether features are significant: the greater the mixing, or the more kinds of feature points mixed, the less significant the features. It can therefore be concluded from the above that the features of FrWPE are the least significant and those of PSO-FrSlEn are the most significant. To confirm this conclusion, we carried out a feature classification experiment.
5.2. Classification Effect Verification
KNN is used as the classifier for these double features, with the same parameter settings as in the single feature experiment. The highest classification accuracy of each entropy is shown in Table 2 below, and a line graph is drawn in Figure 9 for comparison, where 1 to 21 on the abscissa represent the 21 double feature combinations.
As can be seen from Table 2, five double feature combinations of PSO-FrSlEn reach the highest classification accuracy of 100%; among the other four entropies, the highest classification accuracy is 96% (FrDE) and the lowest is 72.67% (FrWPE). The line graph in Figure 9 shows that most double feature combinations of PSO-FrSlEn have higher classification accuracy than those of the other entropies. These results are sufficient to show that the features of PSO-FrSlEn are the most significant for distinguishing the 10 kinds of bearing signals.
6. Conclusions
In this paper, the fractional order is combined with five kinds of entropies, namely PE, WPE, DE, FDE, and SlEn, and the features of these five entropies are extracted from the 10 kinds of bearing signals. Single feature and double feature experiments are carried out, and KNN is used to classify the features to verify their degree of significance. The main innovations and experimental results are as follows:
(1). Although SlEn was proposed in 2019, no improved algorithm of SlEn had previously been proposed. Combining the concept of fractional information with SlEn is proposed for the first time, yielding an improved algorithm of SlEn named FrSlEn.
(2). In order to eliminate the influence of the two threshold parameters of SlEn on feature significance, PSO is selected to optimize the two threshold parameters, which helps FrSlEn extract more significant features.
(3). In the single feature extraction experiment, the classification accuracies of PSO-FrSlEn are the highest under every value of α and are higher than those of PSO-SlEn; the highest classification accuracy of PSO-FrSlEn is 88%, obtained under α = −0.3. The highest classification accuracy of PSO-FrSlEn is at least 5.33% higher than those of FrPE, FrWPE, FrDE, and FrFDE.
(4). In the double feature extraction experiment, the classification accuracies of PSO-FrSlEn under five double feature combinations are 100%. The highest classification accuracies of FrPE, FrWPE, FrDE, and FrFDE are at least 4% lower than that of PSO-FrSlEn; the highest classification accuracy of FrWPE is 27.33% lower.
Conceptualization, Y.L.; Data curation, L.M.; Formal analysis, Y.L.; Methodology, Y.L.; Project administration, L.M.; Resources, Y.L.; Supervision, P.G.; Validation, Y.L.; Writing—review & editing, P.G. All authors have read and agreed to the published version of the manuscript.
Not applicable.
Not applicable.
The data used to support the findings of this study are available from the corresponding author upon request.
The authors declare no conflicts of interest.
PE | Permutation entropy |
WPE | Weighted permutation entropy |
DE | Dispersion entropy |
FDE | Fluctuation dispersion entropy |
SlEn | Slope entropy |
PSO-SlEn | Particle swarm optimization slope entropy |
FrPE | Fractional permutation entropy |
FrWPE | Fractional weighted permutation entropy |
FrDE | Fractional dispersion entropy |
FrFDE | Fractional fluctuation dispersion entropy |
FrSlEn | Fractional slope entropy |
PSO-FrSlEn | Particle swarm optimization fractional slope entropy |
α | Fractional order |
m | Embedding dimension |
τ | Time lag |
c | Number of classes |
NCDF | Normal cumulative distribution function |
γ | Large threshold |
δ | Small threshold |
N-100 | Normal signals |
IR-108 | Inner race fault signals (fault diameter size: 0.007 inch) |
B-121 | Ball fault signals (fault diameter size: 0.007 inch) |
OR-133 | Outer race fault signals (fault diameter size: 0.007 inch) |
IR-172 | Inner race fault signals (fault diameter size: 0.014 inch) |
B-188 | Ball fault signals (fault diameter size: 0.014 inch) |
OR-200 | Outer race fault signals (fault diameter size: 0.014 inch) |
IR-212 | Inner race fault signals (fault diameter size: 0.021 inch) |
B-225 | Ball fault signals (fault diameter size: 0.021 inch) |
OR-237 | Outer race fault signals (fault diameter size: 0.021 inch) |
KNN | K-Nearest Neighbor |
Figure 2. Algorithm flowchart: (a) SlEn; (b) PSO-SlEn; (c) FrSlEn; (d) PSO-FrSlEn.
Figure 4. The normalized 10 bearing signals: (a) N-100; (b) IR-108; (c) B-121; (d) OR-133; (e) IR-172; (f) B-188; (g) OR-200; (h) IR-212; (i) B-225; (j) OR-237.
Figure 5. Single feature distribution of PSO-FrSlEn under each of the seven fractional orders (panels (a)–(g)).
Figure 6. Classification results and distribution of PSO-FrSlEn under each of the seven fractional orders (panels (a)–(g)).
Figure 8. Double feature distribution of the nine highest classification accuracies: (a) FrPE; (b) FrWPE; (c) FrDE; (d) FrFDE; (e–i) PSO-FrSlEn under its five best fractional order combinations.
Figure 9. The classification accuracies under different double feature combinations.
The classification accuracy of each entropy under different fractional orders.
α | FrPE Accuracy (%) | FrWPE Accuracy (%) | FrDE Accuracy (%) | FrFDE Accuracy (%) | PSO-FrSlEn Accuracy (%)
---|---|---|---|---|---
−0.3 | 64.67 | 46.67 | 82.67 | 77.33 | 88
−0.2 | 78.67 | 60.67 | 81.33 | 73.33 | 84
−0.1 | 76.67 | 66.67 | 80.67 | 79.33 | 83.33
0 | 76.67 | 69.33 | 69.33 | 79.33 | 81.33
0.1 | 75.33 | 69.33 | 80 | 79.33 | 86
0.2 | 75.33 | 69.33 | 82 | 80.67 | 85.33
0.3 | 66 | 72.67 | 82.67 | 80 | 83.33
The highest accuracy of each entropy under different double feature combinations.
Entropy | Fractional Order Combination | Accuracy (%)
---|---|---
FrPE | | 78.67
FrWPE | | 72.67
FrDE | | 96
FrFDE | | 89.33
PSO-FrSlEn | | 100
PSO-FrSlEn | | 100
PSO-FrSlEn | | 100
PSO-FrSlEn | | 100
PSO-FrSlEn | | 100
References
1. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys.; 1988; 52, pp. 479-487. [DOI: https://dx.doi.org/10.1007/BF01016429]
2. Rényi, A. On measures of entropy and information. Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability; 1961; pp. 547-561.
3. Yin, Y.; Sun, K.; He, S. Multiscale permutation Rényi entropy and its application for EEG signals. PLoS ONE; 2018; 13, 0202558. [DOI: https://dx.doi.org/10.1371/journal.pone.0202558]
4. Richman, J.S.; Moorman, J.R. Physiological time-series analysis using approximate entropy and sample entropy. Am. J. Physiol.-Heart Circ. Physiol.; 2000; 6, pp. 2039-2049. [DOI: https://dx.doi.org/10.1152/ajpheart.2000.278.6.H2039]
5. Zair, M.; Rahmoune, C.; Benazzouz, D. Multi-fault diagnosis of rolling bearing using fuzzy entropy of empirical mode decomposition, principal component analysis, and SOM neural network. Proc. Inst. Mech. Eng. Part C; 2019; 233, pp. 3317-3328. [DOI: https://dx.doi.org/10.1177/0954406218805510]
6. Lin, J.; Lin, J. Divergence measures based on the Shannon entropy. IEEE Trans. Inf. Theory; 1991; 37, pp. 145-151. [DOI: https://dx.doi.org/10.1109/18.61115]
7. Bandt, C.; Pompe, B. Permutation entropy: A natural complexity measure for time series. Phys. Rev. Lett.; 2002; 88, 174102. [DOI: https://dx.doi.org/10.1103/PhysRevLett.88.174102]
8. Rostaghi, M.; Azami, H. Dispersion Entropy: A Measure for Time Series Analysis. IEEE Signal Process. Lett.; 2016; 23, pp. 610-614. [DOI: https://dx.doi.org/10.1109/LSP.2016.2542881]
9. Tylová, L.; Kukal, J.; Hubata-Vacek, V.; Vyšata, O. Unbiased estimation of permutation entropy in EEG analysis for Alzheimer’s disease classification. Biomed. Signal Process. Control.; 2018; 39, pp. 424-430. [DOI: https://dx.doi.org/10.1016/j.bspc.2017.08.012]
10. Rostaghi, M.; Ashory, M.R.; Azami, H. Application of dispersion entropy to status characterization of rotary machines. J. Sound Vib.; 2019; 438, pp. 291-308. [DOI: https://dx.doi.org/10.1016/j.jsv.2018.08.025]
11. Qu, J.; Shi, C.; Ding, F.; Wang, W. A novel aging state recognition method of a viscoelastic sandwich structure based on permutation entropy of dual-tree complex wavelet packet transform and generalized Chebyshev support vector machine. Struct. Health Monit.; 2020; 19, pp. 156-172. [DOI: https://dx.doi.org/10.1177/1475921719838342]
12. Azami, H.; Escudero, J. Improved multiscale permutation entropy for biomedical signal analysis: Interpretation and application to electroencephalogram recordings. Biomed. Signal Process. Control.; 2016; 23, pp. 28-41. [DOI: https://dx.doi.org/10.1016/j.bspc.2015.08.004]
13. Zhang, W.; Zhou, J. A Comprehensive Fault Diagnosis Method for Rolling Bearings Based on Refined Composite Multiscale Dispersion Entropy and Fast Ensemble Empirical Mode Decomposition. Entropy; 2019; 21, 680. [DOI: https://dx.doi.org/10.3390/e21070680] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33267394]
14. Feng, F.; Rao, G.; Jiang, P.; Si, A. Research on early fault diagnosis for rolling bearing based on permutation entropy algorithm. Proceedings of the IEEE Prognostics and System Health Management Conference; Beijing, China, 23–25 May 2012; Volume 10, pp. 1-5.
15. Fadlallah, B.; Chen, B.; Keil, A. Weighted-permutation entropy: A complexity measure for time series incorporating amplitude information. Phys. Rev. E; 2013; 87, 022911. [DOI: https://dx.doi.org/10.1103/PhysRevE.87.022911] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/23496595]
16. Xie, D.; Hong, S.; Yao, C. Optimized Variational Mode Decomposition and Permutation Entropy with Their Application in Feature Extraction of Ship-Radiated Noise. Entropy; 2021; 23, 503. [DOI: https://dx.doi.org/10.3390/e23050503]
17. Li, D.; Li, X.; Liang, Z.; Voss, L.J.; Sleigh, J.W. Multiscale permutation entropy analysis of EEG recordings during sevoflurane anesthesia. J. Neural Eng.; 2010; 7, 046010. [DOI: https://dx.doi.org/10.1088/1741-2560/7/4/046010]
18. Deng, B.; Cai, L.; Li, S.; Wang, R.; Yu, H.; Chen, Y. Multivariate multi-scale weighted permutation entropy analysis of EEG complexity for Alzheimer’s disease. Cogn. Neurodyn.; 2017; 11, pp. 217-231. [DOI: https://dx.doi.org/10.1007/s11571-016-9418-9]
19. Zhenya, W.; Ligang, Y.; Gang, C.; Jiaxin, D. Modified multiscale weighted permutation entropy and optimized support vector machine method for rolling bearing fault diagnosis with complex signals. ISA Trans.; 2021; 114, pp. 470-480.
20. Li, R.; Ran, C.; Luo, J.; Feng, S.; Zhang, B. Rolling bearing fault diagnosis method based on dispersion entropy and SVM. Proceedings of the International Conference on Sensing, Diagnostics, Prognostics, and Control (SDPC); Beijing, China, 15–17 August 2019; Volume 10, pp. 596-600.
21. Azami, H.; Escudero, J. Amplitude- and Fluctuation-Based Dispersion Entropy. Entropy; 2018; 20, 210. [DOI: https://dx.doi.org/10.3390/e20030210]
22. Azami, H.; Rostaghi, M.; Abásolo, D.; Escudero, J. Refined Composite Multiscale Dispersion Entropy and its Application to Biomedical Signals. IEEE Trans. Biomed. Eng.; 2017; 64, pp. 2872-2879.
23. Li, Z.; Li, Y.; Zhang, K. A Feature Extraction Method of Ship-Radiated Noise Based on Fluctuation-Based Dispersion Entropy and Intrinsic Time-Scale Decomposition. Entropy; 2019; 21, 693. [DOI: https://dx.doi.org/10.3390/e21070693] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33267407]
24. Zheng, J.; Pan, H. Use of generalized refined composite multiscale fractional dispersion entropy to diagnose the faults of rolling bearing. Nonlinear Dyn.; 2021; 101, pp. 1417-1440. [DOI: https://dx.doi.org/10.1007/s11071-020-05821-1]
25. Ali, K. Fractional order entropy: New perspectives. Opt.-Int. J. Light Electron Opt.; 2016; 127, pp. 9172-9177.
26. He, S.; Sun, K. Fractional fuzzy entropy algorithm and the complexity analysis for nonlinear time series. Eur. Phys. J. Spec. Top.; 2018; 227, pp. 943-957. [DOI: https://dx.doi.org/10.1140/epjst/e2018-700098-x]
27. Cuesta-Frau, D. Slope Entropy: A New Time Series Complexity Estimator Based on Both Symbolic Patterns and Amplitude Information. Entropy; 2019; 21, 1167. [DOI: https://dx.doi.org/10.3390/e21121167]
28. Cuesta-Frau, D.; Dakappa, P.H.; Mahabala, C.; Gupta, A.R. Fever Time Series Analysis Using Slope Entropy. Application to Early Unobtrusive Differential Diagnosis. Entropy; 2020; 22, 1034. [DOI: https://dx.doi.org/10.3390/e22091034]
29. Cuesta-Frau, D.; Schneider, J.; Bakštein, E.; Vostatek, P.; Spaniel, F.; Novák, D. Classification of Actigraphy Records from Bipolar Disorder Patients Using Slope Entropy: A Feasibility Study. Entropy; 2020; 22, 1243. [DOI: https://dx.doi.org/10.3390/e22111243]
30. Li, Y.; Gao, P.; Tang, B. Double Feature Extraction Method of Ship-Radiated Noise Signal Based on Slope Entropy and Permutation Entropy. Entropy; 2022; 24, 22. [DOI: https://dx.doi.org/10.3390/e24010022]
31. Shi, E. Single Feature Extraction Method of Bearing Fault Signals Based on Slope Entropy. Shock. Vib.; 2022; 2022, 6808641. [DOI: https://dx.doi.org/10.1155/2022/6808641]
32. Case Western Reserve University. Available online: https://engineering.case.edu/bearingdatacenter/pages/welcome-case-western-reserve-university-bearing-data-center-website (accessed on 17 October 2021).
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Abstract
Slope entropy (SlEn) is a time series complexity indicator proposed in recent years, which has shown excellent performance in the fields of medicine and hydroacoustics. In order to improve the ability of SlEn to distinguish different types of signals and to solve the problem of selecting its two threshold parameters, a new time series complexity indicator based on SlEn is proposed by introducing fractional calculus and combining particle swarm optimization (PSO), named PSO fractional SlEn (PSO-FrSlEn). We then apply PSO-FrSlEn to fault diagnosis and propose single and double feature extraction methods for rolling bearing faults based on PSO-FrSlEn. The experimental results illustrate that only PSO-FrSlEn can classify the 10 kinds of bearing signals with 100% classification accuracy using double features, at least 4% higher than the classification accuracies of the other four fractional entropies.
1 School of Automation and Information Engineering, Xi’an University of Technology, Xi’an 710048, China;
2 School of Automation and Information Engineering, Xi’an University of Technology, Xi’an 710048, China;