Zongxi Qu,1 Kequan Zhang,1 Jianzhou Wang,2 Wenyu Zhang,1 and Wennan Leng1
Academic Editor: Ferhat Bingol
1 Key Laboratory of Arid Climatic Change and Reducing Disaster of Gansu Province, College of Atmospheric Sciences, Lanzhou University, Lanzhou 730000, China
2 School of Statistics, Dongbei University of Finance and Economics, Dalian 116025, China
Received 26 February 2016; Revised 10 July 2016; Accepted 4 August 2016
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
The world's current sources of fossil fuels will eventually be depleted, mainly due to high demand and, in some situations, extravagant consumption [1]. British Petroleum's recently released Energy Outlook 2035 predicts that primary energy consumption will increase by 37% between 2013 and 2035, with growth averaging 1.4% per year. Approximately 96% of the expected growth will occur in countries that are not members of the Organization for Economic Cooperation and Development (OECD), where energy consumption is growing at 2.2% per year [2]. According to some statistics, worldwide energy demand will grow rapidly, by one-third, from 2010 to 2035, with China and India as the largest contributors, accounting for 50 percent of the growth during that period. Moreover, China is expected to be the largest oil importer by 2020 [2, 3]. To cope with the growing demand for energy, countries such as China can look to renewable energy sources as an opportunity for sustainable development. The significance of renewable sources has recently been underscored by a plethora of reports, most of which focus on wind energy and were issued by the related institutions and energy commissions of several countries [2, 4-7]. According to reports from the China National Renewable Energy Center (CNREC), wind resources in China are rich and have promising prospects, with a potential of more than 3.0 TW, mostly in the Three North Areas, and an onshore potential of more than 2.6 TW. Before 2020, land-based wind power will dominate, with offshore wind power remaining at the demonstration stage. Furthermore, the annual discharge of carbon dioxide is projected to be reduced by 1.5 billion tons and 3.0 billion tons in 2050 under the conservative and aggressive scenarios, creating an estimated 720,000 and 1,440,000 jobs, respectively [4, 5]. Based on these figures, wind energy should be regarded as an appealing energy option because it is both abundant and environmentally friendly; as such, wind energy will be able to satisfy the growing demand for electricity.
Wind energy has a great influence on power grid security, power system operation, and market economics due to its intermittent nature, especially in areas with high wind power penetration. Thus, the analysis and assessment of wind energy is a meaningful but markedly difficult research task. Because wind power generation hinges on wind speed, obtaining accurate wind speeds is important. To improve the precision of wind speed predictions, numerous methods have been proposed and developed in recent decades. These methods can be divided into three general types: physical models, conventional statistical models, and artificial intelligence models [8-11]. Physical models use weather prediction data, such as temperature, pressure, orography, obstacles, and surface roughness, to achieve high forecasting accuracy but perform poorly for short-term wind speed simulation. Conventional statistical models, in contrast, draw on vast historical data using mathematical models, usually conventional time series analysis such as ARMA, ARIMA, or seasonal ARIMA models [12, 13], and achieve more accurate short-term wind speed predictions than physical models. However, conventional statistical models are imperfect: the fluctuating and intermittent characteristics of wind speed sequences require more complicated functions to capture the nonlinear relationships, rather than the assumption of a linear correlation structure [14]. With the development of statistical models and the advent of artificial intelligence techniques, artificial intelligence models, including artificial neural networks (ANNs) and other hybrid methods, have been proposed and applied in the field of wind speed forecasting [15-20]. For instance, because of the chaotic nature of wind time series, Alanis et al. [15] proposed a higher order neural network (HONN) trained with an extended Kalman filter, which provides accurate one-step-ahead predictions. Guo et al. [20] proposed a hybrid wind speed forecasting method employing a backpropagation (BP) neural network and seasonal exponential adjustment to remove seasonal effects from actual wind speed datasets. Wang et al. [21] exploited a radial basis function (RBF) neural network for wind speed prediction, and the effectiveness of this method was demonstrated in a practical case. Zhou et al. [17] proposed a support vector machine (SVM) based method for short-term wind speed prediction. De Giorgi et al. [19] adopted ANNs to forecast wind speeds and compared them with linear time-series-based models, finding that ANNs provide a robust approach for wind prediction. All of these methods have improved the precision of wind speed predictions to some extent.
However, wind speed time series are highly noisy and unstable; therefore, using the raw wind speed series directly to establish prediction models leads to large errors [22-24]. To build an effective prediction model, the features of the original wind speed datasets must be fully analyzed and considered. Ensemble empirical mode decomposition (EEMD) [25] is an advanced, effective technique that makes up for the deficiencies of EMD [26] and has certain advantages over other typical decomposition approaches such as wavelet decomposition and Fourier decomposition [27]. With direct, intuitive, empirical, and adaptive data processing, EEMD was devised specifically for nonlinear and complicated signal sequences, such as wind speed series. For example, Hu et al. [22] proposed a hybrid method based on EEMD that decomposes the original wind speed datasets into a series of independent Intrinsic Mode Functions (IMFs) and then uses SVM to predict the values of the IMFs at different frequencies. Jiang et al. [28] also proposed a hybrid model for high-speed rail demand forecasting based on EEMD, in which the original series is decomposed into signals with different frequencies and a grey support vector machine (GSVM) is then employed for forecasting. Zhou et al. [29] additionally proposed a hybrid method based on EEMD and the generalized regression neural network (GRNN), in which the original data are decomposed by EEMD into different IMFs with corresponding frequencies plus a residue component, and each component is then taken as an input to establish a GRNN forecasting model.
Each of the aforementioned models only employs a single ANN model to predict all of the signal sequences decomposed by EEMD; nevertheless, different signals have different characteristics, meaning that a simple individual model can no longer adapt to all properties of the data. Moreover, previous literature has not addressed which features are best suited for choosing the most appropriate approach. Thus, in our study, we propose a hybrid model based on a model selector that combines RBF, GRNN, and SVR to address signal data series with different characteristics to further improve forecasting accuracy.
In existing neural network training structures, model parameters are vital factors affecting prediction precision, and different types of data require different parameters. The genetic algorithm (GA) and particle swarm optimization (PSO) are the most common approaches to optimizing the parameters of neural network structures. Liu et al. [30] used the genetic algorithm to determine the weight coefficients of a combined model for wind speed forecasting. Zhao et al. [31] developed a combined model for energy consumption prediction based on model parameter optimization with the genetic algorithm. Ren et al. [32] applied particle swarm optimization to set the weight coefficients of a forecasting model for 6-hour wind speed forecasting. However, these metaheuristic algorithms are relatively complex and converge slowly to the global optimum. The fruit fly optimization algorithm (FOA) [33] is a newer optimization and evolutionary computation technique; it has the distinct advantages of a simple computational process, fewer parameters to fine-tune, and a stronger ability to search for global optima, and it has been reported to outperform other metaheuristic algorithms [34, 35]. In our study, we introduce the FOA to automatically determine the necessary parameters of the RBF, GRNN, and SVR models to achieve better performance.
The rest of the paper is organized as follows. Section 2 briefly introduces the related methods, and Section 3 describes the proposed hybrid approach in detail. Section 4 describes the dataset used for this study and discusses the forecasting results of the proposed model compared with other prediction models. Section 5 concludes the work.
2. Related Methodology
This section briefly introduces EEMD, FOA, and three classical forecasting models: RBF, GRNN, and SVR, which will be used in our research.
2.1. RBF
The radial basis function (RBF) neural network is a type of feedforward network developed by Broomhead and Lowe [36]. This type of neural network is based on a supervised algorithm and has been widely applied to interpolation, regression, prediction, and classification [37-39]. It has a three-layer architecture in which there are no weights between the input and hidden layers, and each hidden unit implements a radial activation function. A Gaussian activation function is used in each neuron of the hidden layer, which can be formulated as
$$\phi_j(x_i) = \exp\left(-\frac{\lVert x_i - \mu_j\rVert^2}{2\theta_j^2}\right), \quad i = 1,2,\ldots,M,$$
where $x_i$ is the $i$th input sample, $\mu_j$ is the mean value of the $j$th hidden unit representing the center vector, $\theta_j$ is the covariance of the $j$th hidden unit denoting the width of the RBF kernel function, and $M$ is the number of training samples.
The network output layer is linear, so the $k$th output is an affine function of the hidden activations:
$$y_k(x) = \sum_{j=1}^{L} w_{jk}\,\phi_j(x) + \rho_k,$$
where $w_{jk}$ is the weight between the $k$th output and the $j$th hidden unit, $\rho_k$ is the biased weight of the $k$th output, and $L$ is the number of hidden nodes.
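To make the two expressions above concrete, the following Python sketch (illustrative only; the function and variable names are ours, and the parameters are random stand-ins rather than values from this study) implements one forward pass of an RBF network:

```python
import numpy as np

def rbf_forward(x, centers, widths, weights, bias):
    """Forward pass of an RBF network.

    x       : (d,)   input sample
    centers : (L, d) hidden-unit centers (mu_j)
    widths  : (L,)   hidden-unit widths (theta_j)
    weights : (L, K) output weights (w_jk)
    bias    : (K,)   output biases (rho_k)
    """
    # Gaussian hidden activations: phi_j = exp(-||x - mu_j||^2 / (2 theta_j^2))
    dist2 = np.sum((centers - x) ** 2, axis=1)
    phi = np.exp(-dist2 / (2.0 * widths ** 2))
    # Linear output layer: y_k = sum_j w_jk * phi_j + rho_k
    return phi @ weights + bias

# Illustrative call with random parameters (4 inputs, 9 hidden units, 1 output,
# mirroring the 4-9-1 topology used later in Section 4.4)
rng = np.random.default_rng(0)
y = rbf_forward(rng.normal(size=4), rng.normal(size=(9, 4)),
                np.full(9, 1.0), rng.normal(size=(9, 1)), np.zeros(1))
print(y.shape)  # (1,)
```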
2.2. GRNN
The general regression neural network (GRNN), first proposed by Specht [40], is a powerful computational technique used to solve nonlinear approximation problems based on nonlinear regression theory. The advantages of the GRNN include good feasibility, a simple structure, and a fast convergence rate. It consists of four layers, and its basic principles are presented in Figure 1.
Figure 1: A structural schematic of the GRNN (where $j = 1,2,\ldots,n$, $X$ is the input variable of the network, $X_j$ is the training vector of the $j$th neuron in the pattern layer, $\sigma$ denotes the smoothing parameter (also called the spread parameter), $y_j$ is the measured value of the output variable, $P_j$ is the pattern Gaussian function, $w_{S1}$ and $w_{S2}$ are the network weights, $S_1$ and $S_2$ are the signals from the summation neurons, and $Y$ is the network output).
[figure omitted; refer to PDF]
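As a minimal sketch of the computation summarized in Figure 1, assuming the Gaussian pattern function of Specht's formulation, the GRNN output is a kernel-weighted average of the training targets with the smoothing parameter $\sigma$ as its only free parameter (function and variable names below are ours):

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma):
    """GRNN prediction for a single query x (Specht, 1991).

    The training targets y_j are weighted by a Gaussian kernel of the
    distances between x and the training vectors X_j; sigma is the
    smoothing (spread) parameter.
    """
    d2 = np.sum((X_train - x) ** 2, axis=1)           # squared distances D_j^2
    p = np.exp(-d2 / (2.0 * sigma ** 2))              # pattern-layer outputs P_j
    return np.sum(p * y_train) / (np.sum(p) + 1e-12)  # summation / division layers

# Toy usage with made-up data
X_train = np.array([[1.0], [2.0], [3.0]])
y_train = np.array([1.0, 4.0, 9.0])
print(grnn_predict(X_train, y_train, np.array([2.5]), sigma=0.5))
```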
2.3. Support Vector Regression (SVR)
SVR is a version of the SVM for regression [41]. In the model, a regression function $y = f(x)$ is fitted to forecast outputs from an input set, and attempts are made to minimize the generalization error, which governs generalization performance. Figure 2 illustrates the basic principles of SVR; more detailed information can be found in [42].
Figure 2: A schematic diagram of SVR architecture.
[figure omitted; refer to PDF]
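As a brief, hedged example of SVR applied to one-step-ahead forecasting from lagged values, the sketch below uses scikit-learn's SVR as the implementation (a tooling assumption; the paper does not name a library, and the series and parameter values here are illustrative):

```python
import numpy as np
from sklearn.svm import SVR

# Toy lag-based regression: predict the next value from the previous 4 values.
rng = np.random.default_rng(1)
series = np.sin(np.linspace(0, 20, 400)) + 0.1 * rng.normal(size=400)
X = np.array([series[i:i + 4] for i in range(len(series) - 4)])
y = series[4:]

# Penalty C and epsilon are illustrative fixed values here; in Section 4.3
# they are the parameters tuned by the FOA.
model = SVR(kernel="rbf", C=10.0, epsilon=0.01)
model.fit(X[:350], y[:350])
pred = model.predict(X[350:])
print(np.sqrt(np.mean((pred - y[350:]) ** 2)))  # RMSE on the hold-out part
```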
2.4. EEMD
The empirical mode decomposition (EMD) method, as an adaptive data analysis technique, has proven to be effective in analyzing nonlinear and nonstationary time series, such as wind speed series. It decomposes complex signals into IMFs that satisfy the following conditions.
(1) Over the entire data sequence, the number of extrema and the number of zero crossings must either be equal or differ by at most one.
(2) At any point, the mean value of the envelope defined by the local maxima and the envelope defined by the local minima is zero. With the above hypothesis of decomposition and the definition of the IMF, the EMD process of a raw data series $x(t)$ $(t = 1,2,\ldots,T)$ can be formulated as
$$x(t) = \sum_{m=1}^{M} imf_m(t) + r_M(t),$$
where $x(t)$ denotes any nonlinear and nonstationary signal, $imf_m(t)$ is the $m$th IMF of the signal, and $r_M(t)$ is the residual term, which can be a constant or the mean trend of the signal.
However, the EMD method is imperfect, and the mode-mixing problem [43] is encountered frequently in practical applications. Owing to this drawback of EMD, the EEMD method was proposed by Wu and Huang [25]; the procedure of EEMD is as follows.
Step (a) . Add a white noise series to the original data.
Step (b) . Decompose the data with added white noise to IMFs through the EMD algorithm.
Step (c) . Repeat the abovementioned two steps, but add white noise series at different scales each time.
Step (d) . Calculate the means of each IMF of the decomposition to constitute the final IMFs.
As a result, the white noise series incorporated into the original signal provides a uniform reference scale that facilitates the EMD process and, consequently, helps extract the true IMFs. The relationship between the ensemble number, the error tolerance, and the added noise level can be described by the well-established statistical rule proved by Wu and Huang:
$$\varepsilon_n = \frac{\varepsilon}{\sqrt{N_\varepsilon}},$$
where $\varepsilon$ is the amplitude of the added noise, $\varepsilon_n$ is the final standard deviation of the error, and $N_\varepsilon$ is the number of ensemble members. It is generally suggested that an amplitude fixed at 0.2 gives accurate results. In this study, we set the number of ensemble members to 100 and select the optimal standard deviation of the white noise series from 0.1 to 0.2 with a $k$-fold cross-validation method.
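Steps (a)-(d) can be reproduced with an off-the-shelf EEMD implementation; the sketch below uses the PyEMD package (an assumed tool, not one named by the authors) with an ensemble of 100 members and a noise amplitude of 0.2, applied to a synthetic stand-in series:

```python
import numpy as np
from PyEMD import EEMD  # assumed third-party package providing EEMD

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 384)
wind = 8 + 2 * np.sin(25 * t) + 0.5 * rng.normal(size=t.size)  # synthetic stand-in

eemd = EEMD(trials=100, noise_width=0.2)  # ensemble size N_eps and noise amplitude eps
imfs = eemd.eemd(wind)                    # each row is one extracted IMF (ensemble mean)
print(imfs.shape)
```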
2.5. Fruit Fly Optimization Algorithm (FOA)
The fruit fly optimization algorithm (FOA), inspired by the food-finding behavior of the fruit fly, is a new swarm intelligence algorithm that was put forward by Pan in 2012 [33]. It is an interactive evolutionary computation method for finding global optima and has been shown to perform better than traditional metaheuristic algorithms. The FOA succeeds in solving optimization challenges and has received significant attention in multiple scientific and academic fields.
The fruit fly, a type of insect, is superior to other species in its visual and olfactory sensory abilities. It can make the most of these instinctive advantages to find food and is even capable of smelling a food source from 40 km away. The fruit fly's food search starts by using the olfactory organ to smell food odors in the air and flying towards that location. Upon getting closer to the food, it uses its keen eyesight to find both the food and the location where its companions flock, and then it flies towards that position. Figure 3 shows the iterative food-searching process of a fruit fly swarm.
Figure 3: The process of food-seeking of a fruit fly swarm.
[figure omitted; refer to PDF]
A rudimentary FOA algorithm is outlined as shown in Algorithm 1.
Algorithm 1: FOA.
Objective:
Maximize the smell concentration
Output:
The best smell concentration (Smellbest)
Parameters:
Iteration number (Maxgen), population size (sizepop), location range (LR), and random fly direction and distance zone of the fruit fly (FR)
(1) /* Initialization */
(2) /* Set Maxgen, sizepop */
(3) /* Initialize the swarm location range LR and fly range FR */
(4) Iter = 0
(5) X_axis = rand(LR), Y_axis = rand(LR)
(6) /* Calculate the initial smell concentration */
(7) Smellbest = Function(X_axis, Y_axis)
(8) Repeat
(9)   For i = 1, 2, ..., sizepop
(10)    /* Osphresis searching process */
(11)    /* Give a random direction and distance for the food search of fruit fly i */
(12)    X_i = X_axis + rand(FR), Y_i = Y_axis + rand(FR)
(13)    /* Calculate the distance of the food source to the origin */
(14)    Dist_i = sqrt(X_i^2 + Y_i^2)
(15)    /* Calculate the smell concentration judgment value */
(16)    S_i = 1 / Dist_i
(17)    /* Calculate the smell concentration */
(18)    Smell_i = Function(S_i)
(19)  /* Find the fruit fly with the maximal smell concentration among the swarm */
(20)  (bestSmell, bestIndex) = max(Smell)
(21)  /* Vision searching process */
(22)  If bestSmell > Smellbest then Smellbest = bestSmell;
(23)    X_axis = X(bestIndex), Y_axis = Y(bestIndex)
(24)  Iter = Iter + 1
(25) Until Iter = Maxgen
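A minimal Python sketch of Algorithm 1, maximizing a caller-supplied smell concentration function over a two-dimensional search space (the function and parameter names are ours, and the ranges are illustrative):

```python
import numpy as np

def foa(fitness, maxgen=100, sizepop=20, lr=10.0, fr=1.0, seed=0):
    """Basic fruit fly optimization (Pan, 2012): maximize fitness(S).

    lr : location range for the initial swarm position
    fr : fly range for the random search step
    """
    rng = np.random.default_rng(seed)
    x_axis, y_axis = rng.uniform(-lr, lr, 2)   # initial swarm location
    best_smell = -np.inf
    for _ in range(maxgen):
        # Osphresis (smell) search: random steps around the swarm location
        xi = x_axis + rng.uniform(-fr, fr, sizepop)
        yi = y_axis + rng.uniform(-fr, fr, sizepop)
        dist = np.sqrt(xi ** 2 + yi ** 2)      # distance to the origin
        s = 1.0 / dist                         # smell concentration judgment value S_i
        smell = np.array([fitness(si) for si in s])
        best_index = np.argmax(smell)
        # Vision search: the swarm flies to the best-smelling position found so far
        if smell[best_index] > best_smell:
            best_smell = smell[best_index]
            x_axis, y_axis = xi[best_index], yi[best_index]
    return best_smell, 1.0 / np.sqrt(x_axis ** 2 + y_axis ** 2)

# Example: the maximum of f(S) = -(S - 0.5)^2 is at S = 0.5
print(foa(lambda s: -(s - 0.5) ** 2))
```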
3. Combined Model
The combined model first applies the EEMD technique to decompose the original time series into a collection of relatively stationary subseries; a model selector then chooses, from the FOA-optimized artificial neural networks described above, the optimal model for predicting each subseries. The prediction results are then aggregated to obtain the final predicted wind speed series.
3.1. Model Selection
Through the process of EEMD, the distinct information scales in the original wind speed series can be identified and decomposed into a set of IMFs. Different IMFs exhibit different frequency characteristics, and the instantaneous frequency of each IMF is meaningful at any point. Moreover, no clear theory exists to determine which characteristic is best suited for choosing the most appropriate approach. Thus, performance metrics are needed to comprehensively measure the strengths of different models. To evaluate the forecasting capacity of the proposed models, three evaluation criteria are applied in the model selection: the mean absolute error (MAE), the root mean-square error (RMSE), and the index of agreement (IA), as shown in Table 1.
Table 1: Three evaluation rules.
Metric | Equation | Definition
MAE | $\mathrm{MAE} = \frac{1}{N}\sum_{n=1}^{N}\left|y_n - \hat{y}_n\right|$ | The average absolute error over $N$ forecasts
RMSE | $\mathrm{RMSE} = \left(\frac{1}{N}\sum_{n=1}^{N}\left(y_n - \hat{y}_n\right)^2\right)^{1/2}$ | The root mean-square forecast error
IA | $\mathrm{IA} = 1 - \dfrac{\sum_{n=1}^{N}\left(y_n - \hat{y}_n\right)^2}{\sum_{n=1}^{N}\left(\left|\hat{y}_n - \bar{y}\right| + \left|y_n - \bar{y}\right|\right)^2}$ | The index of agreement
Here, $y_n$ and $\hat{y}_n$ denote the actual and predicted values at time $n$, respectively, $N$ is the sample size, and $\bar{y}$ is the mean of the actual values. The IA is a dimensionless indicator that describes the similarity between the observed and forecasted tendencies. The IA ranges from 0 to 1; for a "perfect" model, the IA is close to 1 while the MAE and RMSE are close to 0.
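For reference, the three criteria in Table 1 can be computed directly, as in the following sketch (the function names are ours):

```python
import numpy as np

def mae(y, y_hat):
    """Mean absolute error."""
    return np.mean(np.abs(y - y_hat))

def rmse(y, y_hat):
    """Root mean-square error."""
    return np.sqrt(np.mean((y - y_hat) ** 2))

def ia(y, y_hat):
    """Index of agreement: 1 means perfect agreement, 0 means none."""
    y_bar = np.mean(y)
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum(
        (np.abs(y_hat - y_bar) + np.abs(y - y_bar)) ** 2)
```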
The main processes of the proposed hybrid model are demonstrated in Figure 4. The detailed steps of the hybrid model are as follows.
Figure 4: The procedures of wind speed forecasting using the hybrid model.
[figure omitted; refer to PDF]
Step 1 (EEMD process).
The raw data series are decomposed into 7 different IMFs and a residue $R$. Because the first IMF has the highest frequency and is dominated by noise, it is removed directly, and the remaining components are used for forecasting.
Step 2 (model selection and optimization of model parameters).
First, appropriate parameters for the RBF, GRNN, and SVR models are selected by the FOA. The model selector then chooses among these optimized models to forecast each IMF and the residual $R$.
Step 3 (ensemble forecast).
Combine the forecasting results of each signal component to obtain the final result.
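The three steps can be sketched end to end as follows. This is only an illustrative skeleton: two synthetic components stand in for the EEMD IMFs, scikit-learn's SVR and a small MLP stand in for the FOA-tuned candidate models, and the selection criterion is validation RMSE rather than the full MAE/RMSE/IA comparison used in the paper.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor

def lag_matrix(s, p=4):
    """Build a lag matrix: predict s[t] from the previous p values."""
    X = np.array([s[i:i + p] for i in range(len(s) - p)])
    return X, s[p:]

def select_and_forecast(component, horizon=48):
    """Step 2: fit each candidate on the training part of one component,
    keep the one with the lowest validation RMSE, and forecast with it."""
    train, valid = component[:-horizon], component[-horizon:]
    Xt, yt = lag_matrix(train)
    Xv, yv = lag_matrix(component)[0][-horizon:], valid
    best, best_rmse = None, np.inf
    for model in (SVR(C=10.0),
                  MLPRegressor(hidden_layer_sizes=(9,), max_iter=2000, random_state=0)):
        model.fit(Xt, yt)
        r = np.sqrt(np.mean((model.predict(Xv) - yv) ** 2))
        if r < best_rmse:
            best, best_rmse = model, r
    return best.predict(Xv)

# Step 1 stand-in: pretend these two series are IMFs obtained from EEMD.
t = np.linspace(0, 8 * np.pi, 432)
imfs = [np.sin(5 * t), 0.3 * t]
# Step 3: aggregate the component forecasts into the final forecast.
final = sum(select_and_forecast(c) for c in imfs)
print(final.shape)  # (48,)
```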
4. Results and Analysis
In this section, the parameter optimization of the RBF, GRNN, and SVR models by the FOA is described first, followed by the model selection process. The section concludes with the final forecasting results of the hybrid model compared with other forecasting models.
4.1. Data Selection
Shandong Province, located in eastern China, has abundant wind energy resources. In our study, wind speed series from a wind farm in Weihai were used to examine the performance of the combined model. Figures 5(a) and 5(b) present the statistical measures and visual graphs of four wind speed datasets, which show apparent differences between the four seasons. Thus, original wind speed data, picked randomly to correspond to the four seasons of the year, are used to test whether the proposed models can be applied on different occasions. The wind speed data were sampled at an interval of 15 min, so there are 96 records per day. Data from 4 days, providing a total of 384 15-min observations, were selected for model training, and the next 48 15-min values were used to test the effectiveness of the developed hybrid model (as shown in Figure 5(b)).
Figure 5: Specific location of the study sites and the statistical measures of original wind speed datasets in Weihai.
(a) [figure omitted; refer to PDF]
(b) [figure omitted; refer to PDF]
4.2. The Performance Metric
Forecasting accuracy is an important criterion for evaluating forecasting models. In this paper, three metrics were applied to evaluate the accuracy of the forecasting models, as shown in Table 1. In addition, two benchmark models and the bias-variance framework are used to test the hybrid model.
4.2.1. Persistence Model
The persistence model is a simple statistical model that is computationally trivial and provides accurate predictions over very short horizons; it has therefore been widely used as a benchmark to evaluate more advanced forecasting models. The persistence model is given by
$$\hat{p}(t + k \mid t) = p(t),$$
where $\hat{p}$ is the forecast value, $t$ is a time index, and $k$ is the look-ahead time.
4.2.2. Autoregressive Integrated Moving Average (ARIMA)
The ARIMA model is widely used because it can characterize nonstationary time series data. A general ARIMA model is denoted ARIMA($p, d, q$), where $p$ is the order of the autoregressive part, $d$ is the number of differences applied to the original time series to make it stationary, and $q$ is the order of the moving average part. The general equation for ARIMA models, applied to the $d$-times differenced series, is
$$y_k = \sum_{m=1}^{p} f_m\, y_{k-m} + \varepsilon_k - \sum_{n=1}^{q} \sigma_n\, \varepsilon_{k-n},$$
where $y_k$ is the observed value at time $k$, $f_m$ is the $m$th autoregressive parameter, $\sigma_n$ is the $n$th moving average parameter, and $\varepsilon_k$ is the error at time $k$.
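The two benchmarks can be set up as below; the ARIMA fit uses statsmodels (a tooling assumption), and the series, the order (2, 1, 1), and the split sizes are illustrative rather than taken from the study:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA  # assumed tooling; any ARIMA fit would do

rng = np.random.default_rng(3)
series = 8 + np.cumsum(0.1 * rng.normal(size=432))  # synthetic stand-in for wind speed
train, test = series[:384], series[384:]

# Persistence benchmark: the k-step-ahead forecast is the last observed value.
persistence = np.full(test.size, train[-1])

# ARIMA(p, d, q) benchmark fitted on the training part.
arima = ARIMA(train, order=(2, 1, 1)).fit()
arima_fc = arima.forecast(steps=test.size)

for name, fc in (("persistence", persistence), ("ARIMA", arima_fc)):
    print(name, np.sqrt(np.mean((fc - test) ** 2)))  # RMSE of each benchmark
```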
4.2.3. Bias-Variance Framework
To estimate the usefulness of the wind speed forecasting models, the bias-variance framework [44] was employed to evaluate the accuracy and stability of the proposed hybrid model and the single models. Let $x_t - \hat{x}_t$ be the difference between the observed value $x_t$ and the predicted value $\hat{x}_t$, where $t$ indexes the $T$ data points used for performance evaluation. The expectation of the forecast values is $E(\hat{x}) = (1/T)\sum_{t=1}^{T}\hat{x}_t$, and the mean of the actual values is $\bar{x} = (1/T)\sum_{t=1}^{T}x_t$. The bias-variance framework decomposes the error as
$$E\left[(\hat{x} - \bar{x})^2\right] = \underbrace{\left(E(\hat{x}) - \bar{x}\right)^2}_{\mathrm{Bias}^2(\hat{x})} + \underbrace{E\left[\left(\hat{x} - E(\hat{x})\right)^2\right]}_{\mathrm{Var}(\hat{x})},$$
where $\mathrm{Bias}^2(\hat{x})$ indicates the prediction accuracy of the forecasting model and $\mathrm{Var}(\hat{x})$ reflects its stability.
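A small sketch of the bias and variance computation reported later in Table 9 (the function name is ours):

```python
import numpy as np

def bias_variance(actual, forecast):
    """Bias and variance of a forecast series (Section 4.2.3).

    Bias^2 in the decomposition above is simply the square of the
    returned bias; the table reports the (signed) bias and the variance.
    """
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    bias = np.mean(forecast) - np.mean(actual)             # E(x_hat) - x_bar
    var = np.mean((forecast - np.mean(forecast)) ** 2)     # E[(x_hat - E(x_hat))^2]
    return bias, var

print(bias_variance([5.0, 6.0, 7.0], [5.5, 6.0, 6.5]))  # (0.0, 0.1666...)
```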
4.3. Process of Parameter Optimization
Selecting appropriate parameters is critical to improving the accuracy of model prediction; thus, the abovementioned FOA is used to optimize the parameters of the RBF, GRNN, and SVR models (as shown in Figure 6(a)). First, in the RBF model, the centers and widths $[\mu, \theta]$ of the basis functions are represented by the smell concentration judgment value ($S_i$) of the FOA; the other experimental parameters of the RBF are listed in Table 2. The smoothing parameter ($\sigma$) of the GRNN and the penalty parameter ($C$) and loss function parameter ($\varepsilon$) of the SVR are likewise represented by $S_i$. The offspring are then entered into the three models, and the smell concentration is recalculated: the smell concentration ($Smell_i$) is obtained by substituting $S_i$ into the smell concentration judgment function (also called the fitness function), and the smaller the fitness value, the better the result. Through the fruit fly's random food search using its sensitive sense of smell and its flight towards the location with the highest smell concentration using its vision, the optimal parameters of the three models are obtained.
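To illustrate how the smell concentration judgment value $S_i$ can be mapped onto a model parameter, the sketch below builds a fitness function that tunes the SVR penalty parameter $C$, reusing the foa() routine sketched in Section 2.5 (assumed to be in scope); the rescaling of $S_i$, the data, and the split are our assumptions, and the fitness is the negative validation RMSE so that maximizing the smell concentration minimizes the forecast error:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)
series = np.sin(np.linspace(0, 20, 300)) + 0.1 * rng.normal(size=300)
X = np.array([series[i:i + 4] for i in range(len(series) - 4)])
y = series[4:]
X_tr, X_va, y_tr, y_va = X[:250], X[250:], y[:250], y[250:]

def fitness(s):
    # Map the smell concentration judgment value S_i onto a parameter range
    # (the 1 + 100*s scaling is our illustrative choice).
    model = SVR(C=1.0 + 100.0 * s, epsilon=0.01).fit(X_tr, y_tr)
    return -np.sqrt(np.mean((model.predict(X_va) - y_va) ** 2))

# best_smell, best_s = foa(fitness, maxgen=30, sizepop=10)
```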
Table 2: Experiment parameters of RBF.
Experimental parameters | Default value |
Learning rate | 0.05
Training precision requirement | 0.0001
Figure 6: The procedures of RBF, GRNN, and SVR optimized by FOA.
(a) [figure omitted; refer to PDF]
(b) [figure omitted; refer to PDF]
To test the effect of the model parameters optimized by the FOA, wind speed data from the four seasons were selected, and the three criteria were employed to evaluate the performance of the three models optimized by the FOA. The results of the comparison are shown in Table 3 and Figure 6(b). It can be clearly observed that FOARBF, FOAGRNN, and FOASVR consistently achieve the lowest MAE and RMSE and the highest IA. One can conclude that FOA optimization effectively improves the prediction performance of the traditional neural network models.
Table 3: Comparison between RBF, GRNN, and SVR and FOARBF, FOAGRNN, and FOASVR forecast for wind speed in four seasons.
| Error criteria | Spring | Summer | Fall | Winter |
RBF | MAE | 1.2798 | 0.9270 | 1.1633 | 0.9849 |
RMSE | 1.4989 | 1.1825 | 1.6560 | 1.4428 | |
IA | 0.78923 | 0.6460 | 0.7761 | 0.8151 | |
| |||||
FOARBF | MAE | 0.7584 | 0.6693 | 0.7583 | 0.7340 |
RMSE | 0.9144 | 0.8072 | 1.0817 | 1.0174 | |
IA | 0.8653 | 0.8837 | 0.9211 | 0.9016 | |
| |||||
GRNN | MAE | 0.8321 | 0.9842 | 1.3096 | 1.3101 |
RMSE | 1.0964 | 1.2857 | 1.5960 | 1.7048 | |
IA | 0.7684 | 0.6164 | 0.6470 | 0.5339 | |
| |||||
FOAGRNN | MAE | 0.7371 | 0.6912 | 0.7296 | 0.7186 |
RMSE | 0.8881 | 0.8404 | 1.0394 | 0.9933 | |
IA | 0.8738 | 0.8669 | 0.9245 | 0.9016 | |
| |||||
SVR | MAE | 1.0776 | 1.0346 | 1.3319 | 2.6280 |
RMSE | 1.2551 | 1.3142 | 1.8932 | 4.2264 | |
IA | 0.8033 | 0.7448 | 0.7526 | 0.5128 | |
| |||||
FOASVR | MAE | 0.7440 | 0.6319 | 0.6941 | 0.6798 |
RMSE | 0.8755 | 0.7812 | 0.9697 | 0.9799 | |
IA | 0.8740 | 0.8914 | 0.9346 | 0.9097 |
4.4. The Process of Model Selection
Given the complexity and chaotic behavior of the original wind speed series, the tendency of wind speed is very difficult to predict directly using the abovementioned individual models. As such, the original wind speed datasets are decomposed into several IMFs and a residue $R(n)$ by EEMD, which makes the raw datasets easier to simulate. The FOARBF, FOAGRNN, and FOASVR models are used to forecast each IMF and the residue $R(n)$, with the input nodes, hidden nodes, and output nodes of the three neural networks set to 4, 9, and 1, respectively. A rolling operation method was used in this paper, and the wind speed data of the four seasons were selected to test the proposed models.
The selection process of the hybrid model is shown in Figure 7, and its results are shown in Tables 4-7. It can be clearly observed that each individual model performs best on specific IMFs; nevertheless, no single model performs best in all situations. For example, Table 4 shows the forecasting results for spring and reveals that FOARBF provides the best results at IMF5 and IMF7. FOASVR exhibits the lowest MAE and RMSE values among all individual models at IMF2, IMF3, and IMF6, while the lowest values at IMF4 and $R(n)$ are achieved by FOAGRNN. The analysis of the three other seasons is given in the Appendix.
Table 4: The forecasting results of model selection among the FOARBF, FOAGRNN, and FOASVR in spring.
Components | Error criteria | FOARBF | FOAGRNN | FOASVR |
IMF2 | MAE | 0.1679 | 0.1330 | 0.0769 |
RMSE | 0.1935 | 0.1653 | 0.0945 | |
IA | 0.9013 | 0.9307 | 0.9808 | |
| ||||
IMF3 | MAE | 0.0879 | 0.0762 | 0.0452 |
RMSE | 0.1089 | 0.0947 | 0.0599 | |
IA | 0.9872 | 0.9900 | 0.9963 | |
| ||||
IMF4 | MAE | 0.1297 | 0.0603 | 0.0766 |
RMSE | 0.1604 | 0.0717 | 0.0878 | |
IA | 0.9321 | 0.9867 | 0.9751 | |
| ||||
IMF5 | MAE | 0.0422 | 0.1298 | 0.1514 |
RMSE | 0.0595 | 0.1602 | 0.1727 | |
IA | 0.9992 | 0.9949 | 0.9932 | |
| ||||
IMF6 | MAE | 0.4546 | 0.2836 | 0.0052 |
RMSE | 0.6196 | 0.3994 | 0.0103 | |
IA | 0.7801 | 0.9034 | 1.0000 | |
| ||||
IMF7 | MAE | 0.0429 | 0.1394 | 0.1276 |
RMSE | 0.0433 | 0.1399 | 0.1354 | |
IA | 0.9976 | 0.9754 | 0.9794 | |
| ||||
R ( n ) | MAE | 0.2081 | 0.0025 | 0.0178 |
RMSE | 0.2081 | 0.0026 | 0.0304 | |
IA | 0.4322 | 0.9998 | 0.9614 |
Table 5: The forecasting results of model selection among the FOARBF, FOAGRNN, and FOASVR in summer.
Components | Error criteria | FOARBF | FOAGRNN | FOASVR |
IMF2 | MAE | 0.0617 | 0.1521 | 0.0807 |
RMSE | 0.0756 | 0.1857 | 0.1161 | |
IA | 0.9883 | 0.9206 | 0.9718 | |
| ||||
IMF3 | MAE | 0.1470 | 0.0874 | 0.0670 |
RMSE | 0.1919 | 0.1021 | 0.0772 | |
IA | 0.9296 | 0.9825 | 0.9904 | |
| ||||
IMF4 | MAE | 0.2023 | 0.0419 | 0.0681 |
RMSE | 0.2355 | 0.0513 | 0.0759 | |
IA | 0.9387 | 0.9978 | 0.9952 | |
| ||||
IMF5 | MAE | 0.0571 | 0.0397 | 0.0228 |
RMSE | 0.0656 | 0.0491 | 0.0256 | |
IA | 0.9670 | 0.9824 | 0.9949 | |
| ||||
IMF6 | MAE | 0.0136 | 0.4352 | 0.0904 |
RMSE | 0.0148 | 0.4580 | 0.1027 | |
IA | 0.9977 | 0.3439 | 0.8650 | |
| ||||
IMF7 | MAE | 0.0024 | 0.0022 | 0.0024 |
RMSE | 0.0025 | 0.0026 | 0.0027 | |
IA | 0.9871 | 0.9864 | 0.9849 | |
| ||||
R ( n ) | MAE | 0.0501 | 0.0366 | 0.0672 |
RMSE | 0.0595 | 0.0376 | 0.0701 | |
IA | 0.9026 | 0.9682 | 0.8874 |
Table 6: The forecasting results of model selection among the FOARBF, FOAGRNN, and FOASVR in autumn.
Components | Error criteria | FOARBF | FOAGRNN | FOASVR |
IMF2 | MAE | 0.1206 | 0.2141 | 0.0884 |
RMSE | 0.1647 | 0.2888 | 0.1049 | |
IA | 0.9640 | 0.8839 | 0.9874 | |
| ||||
IMF3 | MAE | 0.0755 | 0.0662 | 0.0435 |
RMSE | 0.0984 | 0.0838 | 0.0535 | |
IA | 0.9798 | 0.9849 | 0.9940 | |
| ||||
IMF4 | MAE | 0.2501 | 0.0549 | 0.0247 |
RMSE | 0.2873 | 0.0639 | 0.0305 | |
IA | 0.9396 | 0.9974 | 0.9994 | |
| ||||
IMF5 | MAE | 0.0488 | 0.1090 | 0.0722 |
RMSE | 0.0553 | 0.1252 | 0.0777 | |
IA | 0.9996 | 0.9977 | 0.9991 | |
| ||||
IMF6 | MAE | 0.0745 | 0.0677 | 0.0275 |
RMSE | 0.0999 | 0.0685 | 0.0279 | |
IA | 0.9761 | 0.9909 | 0.9985 | |
| ||||
IMF7 | MAE | 0.0217 | 0.0194 | 0.0273 |
RMSE | 0.0244 | 0.0196 | 0.0273 | |
IA | 0.9852 | 0.9889 | 0.9773 | |
| ||||
R ( n ) | MAE | 0.1185 | 0.0756 | 0.0055 |
RMSE | 0.1281 | 0.0803 | 0.0068 | |
IA | 0.2589 | 0.4183 | 0.9875 |
Table 7: The forecasting results of model selection among the FOARBF, FOAGRNN, and FOASVR in winter.
Components | Error criteria | FOARBF | FOAGRNN | FOASVR |
IMF2 | MAE | 0.1980 | 0.1564 | 0.0736 |
RMSE | 0.2516 | 0.1936 | 0.0954 | |
IA | 0.8183 | 0.8868 | 0.9802 | |
| ||||
IMF3 | MAE | 0.1191 | 0.0475 | 0.0286 |
RMSE | 0.1494 | 0.0617 | 0.0351 | |
IA | 0.9481 | 0.9907 | 0.9972 | |
| ||||
IMF4 | MAE | 0.1802 | 0.0631 | 0.0173 |
RMSE | 0.2120 | 0.0775 | 0.0212 | |
IA | 0.9224 | 0.9921 | 0.9994 | |
| ||||
IMF5 | MAE | 0.0399 | 0.0661 | 0.0928 |
RMSE | 0.0491 | 0.0722 | 0.1013 | |
IA | 0.9982 | 0.9958 | 0.9921 | |
| ||||
IMF6 | MAE | 0.1175 | 0.0144 | 0.1348 |
RMSE | 0.1207 | 0.0162 | 0.1424 | |
IA | 0.9902 | 0.9998 | 0.9853 | |
| ||||
IMF7 | MAE | 0.3543 | 0.0066 | 0.0571 |
RMSE | 0.4067 | 0.0066 | 0.0889 | |
IA | 0.4432 | 0.9998 | 0.9394 | |
| ||||
R ( n ) | MAE | 0.0775 | 0.0024 | 0.0086 |
RMSE | 0.0810 | 0.0025 | 0.0101 | |
IA | 0.3960 | 0.9982 | 0.9655 |
Figure 7: The process of the hybrid model.
[figure omitted; refer to PDF]
4.5. Forecasting Results and Comparative Analysis
In the abovementioned process, the six independent IMFs and one residual obtained by EEMD are predicted by three different models: FOARBF, FOAGRNN, and FOASVR. The optimal model for each IMF and $R(n)$ is then selected through model selection. In Step 3, each component is predicted by the selected optimal method, and the final results are obtained by assembling the forecasting results of all components.
4.5.1. Forecasting Comparison Results
To evaluate the accuracy of the proposed hybrid model based on the model selector, three single models and two benchmark models are compared with the hybrid model. The single models are EEMD-FOARBF, EEMD-FOAGRNN, and EEMD-FOASVR, in each of which one model is used to forecast all of the signals decomposed by EEMD. The two benchmark models are the persistence model and the ARIMA model. The comparison results for forecasting ability are shown in Table 8. Detailed analyses are elaborated as follows:
(1) Comparing the hybrid model with the other five models, the lowest MAE and RMSE values are achieved by the hybrid model. In particular, averaged over the four seasons, the IA values of the hybrid model improved by 10.84%, 11.40%, 5.82%, 7.93%, and 3.04% compared with the persistence model, the ARIMA model, EEMD-FOARBF, EEMD-FOAGRNN, and EEMD-FOASVR, respectively.
(2) Compared with the benchmark models, EEMD-FOARBF, EEMD-FOAGRNN, EEMD-FOASVR, and the hybrid model show better forecasting results according to MAE, RMSE, and IA, likely because EEMD technology is effective in improving forecasting accuracy as a data preprocessing step.
(3) Compared with EEMD-FOARBF, EEMD-FOAGRNN, and EEMD-FOASVR, the hybrid method also shows better prediction results, indicating that the hybrid method can take advantage of each individual model to capture more complete information.
Table 8: The typical results of the hybrid model and the results of the other models for the four seasons.
Case | Errors | Persistence model | ARIMA model | EEMD-FOARBF | EEMD-FOAGRNN | EEMD-FOASVR | Hybrid model |
Spring | MAE | 0.7741 | 0.7285 | 0.3675 | 0.5690 | 0.3692 | 0.0976 |
RMSE | 0.9023 | 0.8769 | 0.4714 | 0.7505 | 0.4783 | 0.1308 | |
IA | 0.8638 | 0.8684 | 0.9647 | 0.9019 | 0.9617 | 0.9973 | |
| |||||||
Summer | MAE | 0.7208 | 0.7111 | 0.4312 | 0.5280 | 0.3940 | 0.1032 |
RMSE | 0.8589 | 0.8615 | 0.5287 | 0.6472 | 0.4920 | 0.1280 | |
IA | 0.8716 | 0.8682 | 0.9374 | 0.8965 | 0.9496 | 0.9964 | |
| |||||||
Fall | MAE | 0.6708 | 0.7879 | 0.6917 | 0.4197 | 0.3169 | 0.1113 |
RMSE | 0.8585 | 1.0181 | 1.0098 | 0.6322 | 0.4604 | 0.1453 | |
IA | 0.9554 | 0.9326 | 0.9294 | 0.9732 | 0.9874 | 0.9987 | |
| |||||||
Winter | MAE | 0.7833 | 0.7017 | 0.6117 | 0.6211 | 0.4171 | 0.0875 |
RMSE | 1.0450 | 0.9779 | 0.7548 | 0.7955 | 0.5301 | 0.1164 | |
IA | 0.9098 | 0.9133 | 0.9399 | 0.9264 | 0.9749 | 0.9988 | |
| |||||||
Average | MAE | 0.7373 | 0.7323 | 0.5255 | 0.5345 | 0.3743 | 0.0999 |
RMSE | 0.9162 | 0.9336 | 0.6912 | 0.7064 | 0.4902 | 0.1301 | |
IA | 0.9002 | 0.8956 | 0.9429 | 0.9245 | 0.9684 | 0.9978 |
Overall, the analysis of the prediction results verifies that the proposed hybrid model is an effective approach for improving forecasting performance.
4.5.2. Tested with Bias-Variance Framework
Table 9 shows the results of the bias-variance test: the bias values indicate the prediction accuracy of the forecasting models, and the variance values demonstrate their stability. The results reveal the following:
(1) The absolute bias values of the hybrid model are smaller than those of the other models, which indicates that the hybrid model has higher accuracy in wind speed forecasting. The variance results also show that the hybrid model is more stable.
(2) The bias and variance values of EEMD-FOARBF, EEMD-FOAGRNN, EEMD-FOASVR, and the hybrid model are smaller than those of the persistence model and ARIMA; this reveals that EEMD and FOA are effective approaches for improving the accuracy and stability of forecasting models.
Table 9: Bias-variance test of the six models for the mean values over the four seasons.
Model | Bias | Var.
Hybrid model | 0.016168 | 0.000178 |
EEMD-FOASVR | 0.057193 | 0.051961 |
EEMD-FOAGRNN | 0.099827 | 0.192708 |
EEMD-FOARBF | 0.063177 | 0.143495 |
ARIMA | 0.117167 | 0.244263 |
Persistence model | 0.165100 | 0.216753 |
Thus, it is clear that the hybrid model offers higher accuracy and stability in wind speed forecasting and performs much better than the individual models.
5. Conclusions
Reliable and precise wind speed forecasting is vital for wind power generation systems. However, wind speed exhibits nonlinearity and nonstationarity, which pose great challenges to the task of predicting wind speed precisely. Among the currently available forecasting models, a single model applied to forecast wind speed has limited capacity and is not suitable for all situations. An appropriate selection approach within a hybrid model can give full play to the strengths of each individual model and let each perform where it is strongest. For these reasons, we proposed a hybrid model based on EEMD that combines three commonly used neural networks optimized by the FOA. The main contributions of this model are summarized as follows. (1) Due to the instability of wind series, the EEMD technique is utilized as a preprocessing approach to decompose the original time series into a collection of relatively stationary subseries for forecasting. (2) To overcome the drawback of unstable forecasting results from the RBF, GRNN, and SVR, FOA optimization is applied to improve the prediction performance of these traditional forecasting models. (3) Because IMF signals with different characteristics are hard to forecast with a single model, a model selector combining FOARBF, FOAGRNN, and FOASVR is proposed to further improve forecasting accuracy. The experimental results indicate that the proposed hybrid model has the minimum statistical error in terms of MAE, RMSE, IA, and bias-variance, proving that the proposed hybrid method performs better than the single models and is also superior to other hybrid models, such as EEMD-FOARBF, EEMD-FOAGRNN, and EEMD-FOASVR. Based on the abovementioned analysis, we conclude that the proposed hybrid model can not only take full advantage of several single ANNs to improve prediction accuracy but can also be readily implemented in wind farms.
Acknowledgments
This research was supported by the National Natural Science Foundation Project (41225018) and Arid Meteorology Research Fund (IAM201305).
[1] A. Kumar, K. Kumar, N. Kaushik, S. Sharma, S. Mishra, "Renewable energy in India: current status and future potentials," Renewable and Sustainable Energy Reviews , vol. 14, no. 8, pp. 2434-2442, 2010.
[2] "Energy Outlook 2035," 2015, http://www.bp.com/content/dam/bp/pdf/energy-economics/energy-outlook-2016/bp-energy-outlook-2016.pdf
[3] S. Ahmed, M. T. Islam, M. A. Karim, N. M. Karim, "Exploitation of renewable energy for sustainable development and overcoming power crisis in Bangladesh," Renewable Energy , vol. 72, pp. 223-235, 2014.
[4] CNREC, China Wind, Solar and Bioenergy Roadmap 2050 (Short Version), 2014, http://www.cnrec.org.cn/english/publication/2014-12-25-457.html
[5] China Renewable Energy Technology Catalogue 2014, http://www.cnrec.org.cn/english/publication/2014-12-29-461.html
[6] A. B. Awan, Z. A. Khan, "Recent progress in renewable energy--remedy of energy crisis in Pakistan," Renewable and Sustainable Energy Reviews , vol. 33, pp. 236-253, 2014.
[7] S. Salcedo-Sanz, A. Pastor-Sánchez, J. Del Ser, L. Prieto, Z. W. Geem, "A Coral Reefs Optimization algorithm with Harmony Search operators for accurate wind speed prediction," Renewable Energy , vol. 75, pp. 93-101, 2015.
[8] G. Giebel, R. Brownsword, G. Kariniotakis, M. Denhard, C. Draxl, "The state-of-the-art in short-term prediction of wind power: a literature overview," ANEMOS.plus, 2011.
[9] G. Giebel, L. Landberg, "State-of-the-Art on Methods and Software Tools for Short-Term Prediction of Wind Energy Production," Energy, 2010, https://www.researchgate.net/publication/47549887_State-of-the-art_Methods_and_software_tools_for_short-term_prediction_of_wind_energy_production
[10] G. Kariniotakis, P. Pinson, N. Siebert, G. Giebel, R. Barthelmie, "The state of the art in short-term prediction of wind power-from an offshore perspective," in Proceedings of the French Sea Tech Week Conference, pp. 20-21, Brest, France, 2004.
[11] The State-of-the-Art in Short-Term Prediction of Wind Power, 2011.
[12] S. Qin, F. Liu, J. Wang, Y. Song, "Interval forecasts of a novelty hybrid model for wind speeds," Energy Reports , vol. 1, pp. 8-16, 2015.
[13] J. L. Torres, A. García, M. De Blas, A. De Francisco, "Forecast of hourly average wind speed with ARMA models in Navarre (Spain)," Solar Energy , vol. 79, no. 1, pp. 65-77, 2005.
[14] J. Wang, S. Qin, Q. Zhou, H. Jiang, "Medium-term wind speeds forecasting utilizing hybrid models for three different sites in Xinjiang, China," Renewable Energy , vol. 76, pp. 91-101, 2015.
[15] A. Y. Alanis, L. J. Ricalde, E. N. Sanchez, "High Order Neural Networks for wind speed time series prediction," in Proceedings of the International Joint Conference on Neural Networks (IJCNN '09), pp. 76-80, IEEE, Atlanta, Ga, USA, June 2009.
[16] S. A. Pourmousavi Kani, M. M. Ardehali, "Very short-term wind speed prediction: a new artificial neural network-Markov chain model," Energy Conversion and Management , vol. 52, no. 1, pp. 738-745, 2011.
[17] J. Zhou, J. Shi, G. Li, "Fine tuning support vector machines for short-term wind speed forecasting," Energy Conversion and Management , vol. 52, no. 4, pp. 1990-1998, 2011.
[18] G. Li, J. Shi, "On comparing three artificial neural networks for wind speed forecasting," Applied Energy , vol. 87, no. 7, pp. 2313-2320, 2010.
[19] M. G. De Giorgi, A. Ficarella, M. G. Russo, "Short-term wind forecasting using artificial neural networks (ANNs)," Energy Sustain , pp. 197-208, 2009.
[20] Z.-H. Guo, J. Wu, H.-Y. Lu, J.-Z. Wang, "A case study on a hybrid wind speed forecasting method using BP neural network," Knowledge-Based Systems , vol. 24, no. 7, pp. 1048-1056, 2011.
[21] J. Wang, W. Zhang, J. Wang, T. Han, L. Kong, "A novel hybrid approach for wind speed prediction," Information Sciences , vol. 273, pp. 304-318, 2014.
[22] J. Hu, J. Wang, G. Zeng, "A hybrid forecasting approach applied to wind speed time series," Renewable Energy , vol. 60, pp. 185-194, 2013.
[23] J. Wang, W. Zhang, Y. Li, J. Wang, Z. Dang, "Forecasting wind speed using empirical mode decomposition and Elman neural network," Applied Soft Computing , vol. 23, pp. 452-459, 2014.
[24] W. Zhang, J. Wang, J. Wang, Z. Zhao, M. Tian, "Short-term wind speed forecasting based on a hybrid model," Applied Soft Computing Journal , vol. 13, no. 7, pp. 3225-3233, 2013.
[25] Z. Wu, N. E. Huang, "Ensemble empirical mode decomposition: a noise-assisted data analysis method," Advances in Adaptive Data Analysis , vol. 1, no. 1, pp. 6281-6284, 2009.
[26] N. E. Huang, Z. Shen, S. R. Long, M. C. Wu, H. H. Shih, Q. Zheng, N.-C. Yen, C. C. Tung, H. H. Liu, "The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis," Proceedings of the Royal Society of London, Series A: Mathematical and Physical Sciences , vol. 454, no. 1971, pp. 903-995, 1998.
[27] E. Haven, X. Liu, L. Shen, "De-noising option prices with the wavelet method," European Journal of Operational Research , vol. 222, no. 1, pp. 104-112, 2012.
[28] X. Jiang, L. Zhang, M. X. Chen, "Short-term forecasting of high-speed rail demand: a hybrid approach combining ensemble empirical mode decomposition and gray support vector machine with real-world applications in China," Transportation Research Part C: Emerging Technologies , vol. 44, pp. 110-127, 2014.
[29] Q. Zhou, H. Jiang, J. Wang, J. Zhou, "A hybrid model for PM2.5 forecasting based on ensemble empirical mode decomposition and a general regression neural network," Science of the Total Environment , vol. 496, pp. 264-274, 2014.
[30] D. Liu, D. Niu, H. Wang, L. Fan, "Short-term wind speed forecasting using wavelet transform and support vector machines optimized by genetic algorithm," Renewable Energy , vol. 62, pp. 592-597, 2014.
[31] H. Zhao, R. Liu, Z. Zhao, C. Fan, "Analysis of energy consumption prediction model based on genetic algorithm and wavelet neural network," in Proceedings of the 3rd International Workshop on Intelligent Systems and Applications (ISA '11), pp. 1-4, IEEE, Wuhan, China, 2011.
[32] C. Ren, N. An, J. Wang, L. Li, B. Hu, D. Shang, "Optimal parameters selection for BP neural network based on particle swarm optimization: A Case Study of Wind Speed Forecasting," Knowledge-Based Systems , vol. 56, pp. 226-239, 2014.
[33] W. Pan, "A new fruit fly optimization algorithm: taking the financial distress model as an example," Knowledge-Based Systems , vol. 26, pp. 69-74, 2012.
[34] H.-Z. Li, S. Guo, C.-J. Li, J.-Q. Sun, "A hybrid annual power load forecasting model based on generalized regression neural network with fruit fly optimization algorithm," Knowledge-Based Systems , vol. 37, pp. 378-387, 2013.
[35] Y. Cong, J. Wang, X. Li, "Traffic flow forecasting by a least squares support vector machine with a fruit fly optimization algorithm," Procedia Engineering , vol. 137, pp. 59-68, 2016.
[36] D. S. Broomhead, D. Lowe, "Radial basis functions, multi-variable functional interpolation and adaptive networks," 1988, https://www.researchgate.net/publication/233783084_Radial_basis_functions_multi-variable_functional_interpolation_and_adaptive_networks
[37] H. B. Celikoglu, "Application of radial basis function and generalized regression neural networks in non-linear utility function specification for travel mode choice modelling," Mathematical and Computer Modelling , vol. 44, no. 7-8, pp. 640-658, 2006.
[38] S. Chen, X. Hong, C. J. Harris, L. Hanzo, "Fully complex-valued radial basis function networks: orthogonal least squares regression and classification," Neurocomputing , vol. 71, no. 16-18, pp. 3421-3433, 2008.
[39] Z. J. Tamboli, S. R. Khot, "Estimated analysis of radial basis function neural network for induction motor fault detection," International Journal of Engineering and Advanced Technology , vol. 2, pp. 41-43, 2013.
[40] D. F. Specht, "A general regression neural network," IEEE Transactions on Neural Networks , vol. 2, no. 6, pp. 568-576, 1991.
[41] J. M. Lasala, R. Mehran, J. W. Moses, J. J. Popma, J. S. Reiner, S. K. Sharma, G. W. Vetrovec, "Evidence based management of patients undergoing PCI. Conclusion," Catheterization and Cardiovascular Interventions, vol. 75, supplement 1, pp. S43-S45, 2010.
[42] W.-C. Hong, Y. Dong, W. Y. Zhang, L.-Y. Chen, B. K. Panigrahi, "Cyclic electric load forecasting by seasonal SVR with chaotic genetic algorithm," International Journal of Electrical Power and Energy Systems , vol. 44, no. 1, pp. 604-614, 2013.
[43] T. Wang, M. Zhang, Q. Yu, H. Zhang, "Comparing the applications of EMD and EEMD on time-frequency analysis of seismic signal," Journal of Applied Geophysics , vol. 83, pp. 29-34, 2012.
[44] L. Xiao, W. Shao, T. Liang, C. Wang, "A combined model based on multiple seasonal patterns and modified firefly algorithm for electrical load forecasting," Applied Energy , vol. 167, pp. 135-153, 2016.
Appendix
To further prove that the proposed hybrid model can select the best model for different cases, the forecasting results for the other seasons are shown in Tables 5-7. For example, Table 5 shows the experimental results from the three single models in summer. Among all the single models, when FOARBF was applied, the value of IA was higher than those of the other methods at IMF2 and IMF6. At IMF4, IMF7, and $R(n)$, FOAGRNN provides the optimal results; at the other signals, the results from FOASVR are the best. Table 6 shows the results in autumn. Among all the models, FOASVR performs best at IMF2, IMF3, IMF4, and IMF6, while FOAGRNN performs better than the others at IMF7 and $R(n)$; meanwhile, FOARBF provides the optimal results at the other signals. The forecasting results of the three single models in winter are presented in Table 7. At IMF6, IMF7, and $R(n)$, the most accurate results belong to FOAGRNN. When FOASVR is used, the results are more accurate from IMF2 to IMF4, and FOARBF performs desirably only at IMF5. From Tables 4-7, we find that FOASVR usually performs well on high frequency signals, FOAGRNN works well on low frequency signals, and FOARBF usually provides optimal results on middle frequency signals. Consequently, no single model provides the best results for all of the signals, but each model has its strengths at particular IMFs. Therefore, the best-suited model is chosen according to the conditions.
Copyright © 2016 Zongxi Qu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
As a type of clean and renewable energy, wind power has increasingly captured the world's attention. Reliable and precise wind speed prediction is vital for wind power generation systems; thus, a more effective and precise prediction model is needed in the field of wind speed forecasting. Most previous forecasting models can adapt to various wind speed series; however, they ignore the importance of data preprocessing and model parameter optimization. In view of their importance, a novel hybrid ensemble learning paradigm is proposed. In this model, the original wind speed data are first divided into a finite set of signal components by ensemble empirical mode decomposition; each signal is then predicted by several artificial intelligence models whose parameters are optimized by the fruit fly optimization algorithm, and the final prediction values are obtained by reconstructing the refined series. To estimate the forecasting ability of the proposed model, 15 min wind speed data from a wind farm in a coastal area of China were forecast as a case study. The empirical results show that the proposed hybrid model is superior to some existing traditional forecasting models in terms of forecast performance.