Zhaolu Guo,1 Haixia Huang,2 Changshou Deng,3 Xuezhi Yue,1 and Zhijian Wu4
Academic Editor: Rafik Aliyev
1 Institute of Medical Informatics and Engineering, School of Science, Jiangxi University of Science and Technology, Ganzhou 341000, China
2 School of Literature and Law, Jiangxi University of Science and Technology, Ganzhou 341000, China
3 School of Information Science and Technology, Jiujiang University, Jiujiang 332005, China
4 State Key Laboratory of Software Engineering, Wuhan University, Wuhan 430072, China
Received 8 October 2014; Accepted 27 April 2015; Published 24 August 2015
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
Numerous problems in science and engineering can be converted into optimization problems. It is therefore of significance, both in theory and in engineering applications, to develop effective and efficient optimization algorithms for solving complex scientific and engineering problems. Differential evolution (DE), proposed by Storn and Price in 1997 [1], is a simple yet effective global optimization algorithm. According to frequently reported theoretical and experimental studies, DE has exhibited more competitive performance than many other evolutionary algorithms in terms of both convergence speed and solution precision on several benchmark functions and real-life problems [2-4]. Due to its simplicity, easy implementation, and efficiency, DE has stimulated many researchers' interest since its development and has become a hot research topic in evolutionary computation over the past decades [5-7].
However, the search ability of DE should be further enhanced to obtain better solutions when DE is used to solve various real-life optimization problems [2, 8, 9]. In particular, DE may suffer from premature convergence and/or slow convergence when solving complex multimodal optimization problems. In order to improve the performance of the conventional DE, a number of DE variants have been proposed in recent decades [2, 6, 10]. Recognizing that the performance of DE depends on its control parameters, Brest et al. [11] presented a self-adaptive DE (jDE), in which both F and CR are created independently for each individual by an adaptive mechanism. Specifically, during the search process the new F is created as a random value from 0.1 to 0.9 with probability 0.1, while the new CR obtains a random value from 0.0 to 1.0 with probability 0.1. Unlike jDE, JADE, proposed by Zhang and Sanderson [12], utilizes a distinct parameter adaptation mechanism, in which the new CR and F are created for each individual by a normal distribution and a Cauchy distribution, respectively. In addition, JADE learns from the recent successful F and CR values and applies the learned knowledge when creating new F and CR. Identifying that both the mutation strategies and their associated control parameters can directly influence the performance of DE, Qin et al. [7] proposed a novel self-adaptive DE, SaDE, which adaptively tunes the trial vector generation strategies and their associated control parameter values by extracting knowledge from the previous search process in generating promising solutions. Mallipeddi et al.
[13] introduced an improved DE with an ensemble of parameters and mutation strategies (EPSDE), which employs a pool of diverse trial vector generation strategies and a pool of values for the control parameters F and CR. By incorporating an opposition-based learning strategy into the traditional DE for population initialization and for generating new solutions, Rahnamayan et al. [14] proposed an opposition-based DE (ODE). The experimental results confirmed that the opposition-based learning strategy can improve the convergence speed and the solution accuracy of DE. Further, Wang et al. [15] improved the opposition-based learning strategy, proposed a generalized opposition-based learning strategy, and presented an enhanced DE with generalized opposition-based learning (GODE). Jia et al. [16] presented an effective memetic DE algorithm, DECLS, which utilizes a chaotic local search with a shrinking strategy to improve the search ability. Experimental results indicated that the performance of the canonical DE is significantly improved by the chaotic local search. Recently, Wang et al. [17] proposed a composite DE, called CoDE, the main idea of which is to randomly combine, at each generation, several well-studied trial vector generation strategies with a number of control parameter settings highly recommended by other researchers to create new trial vectors. Experimental results on all the CEC2005 contest test instances show that CoDE is very competitive.
Although many DE variants already exist for solving complex optimization problems, according to the no free lunch (NFL) theorem [18], the performance of DE on some benchmark functions and real-life problems should be further enhanced to obtain better solutions. Moreover, many studies have revealed that embedding a local search strategy can greatly enhance the search ability of DE [14, 16, 19]. Motivated by these considerations, in order to promote the performance of DE on complex optimization problems, this study proposes an enhanced differential evolution with elite chaotic local search, called DEECL. In DEECL, we utilize a chaotic search strategy based on heuristic information from the elite individuals to promote the exploitation power. Further, we design a simple and effective parameter adaptation mechanism to enhance the robustness.
The rest of the paper is organized as follows. The conventional DE is introduced in Section 2. Section 3 presents the enhanced DE. Numerical experiments are presented in Section 4 for the comparison and analysis. Finally, the paper is concluded in Section 5.
2. Differential Evolution
Without loss of generality, only minimization problems are considered in this study. We suppose that the objective function to be minimized is min f(X), X = (x_1, x_2, ..., x_D), and the search space is S = ∏_{j=1}^{D} [L_j, U_j], where D is the number of dimensions of the problem, and L_j and U_j denote the lower and upper boundaries of the j-th dimension of the search space, respectively.
Similar to other evolutionary algorithms, DE has a simple structure, including only three simple operators, namely, the mutation, crossover, and selection operators [2]. In the initial phase, DE creates an initial population P = {X_1, X_2, ..., X_NP}, randomly generated from the search space, where X_i = (x_{i,1}, x_{i,2}, ..., x_{i,D}), i = 1, 2, ..., NP; NP is the population size and G is the generation counter. After initialization, the mutation and crossover operators are performed to create the trial vectors, and then the selection operator is utilized to select the better one between the offspring individual and the parent individual for the next generation. DE performs these steps repeatedly to converge toward the global optima until the terminating criterion is reached [20]. In the following subsections, the evolutionary operators of DE are introduced in detail.
2.1. Mutation Operator
In the mutation operator, a mutant vector V_i is created by a predetermined mutation strategy for each individual X_i, namely, the target vector, in the current population [17]. DE has many mutation strategies used in its implementations, such as DE/rand/1, DE/best/1, DE/rand-to-best/1, DE/best/2, and DE/rand/2 [2]. Among these mutation strategies, DE/rand/1 is the most frequently used, which is expressed as follows [1]:

V_i = X_{r1} + F · (X_{r2} − X_{r3}),

where r1, r2, and r3 are randomly selected from the set {1, 2, ..., NP}\{i} and are mutually different from each other. F is called the scaling factor, amplifying the difference vector (X_{r2} − X_{r3}).
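For illustration, the DE/rand/1 mutation can be sketched in Python as follows; the function and variable names are ours, not part of the original specification:

```python
import random

def mutate_rand_1(pop, i, F):
    """DE/rand/1: V = X_r1 + F * (X_r2 - X_r3), where r1, r2, r3 are
    mutually different indices, all different from the target index i."""
    r1, r2, r3 = random.sample([r for r in range(len(pop)) if r != i], 3)
    return [pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
            for j in range(len(pop[i]))]
```

Note that with F = 0 the mutant collapses onto the base vector X_r1, which makes the role of the scaled difference vector easy to see.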
2.2. Crossover Operator
Following mutation, a trial vector U_i is generated by executing the crossover operator on each pair of target vector X_i and corresponding mutant vector V_i [2]. Binomial crossover is the most commonly used crossover operator in current popular DE variants and is described as follows [1]:

u_{i,j} = v_{i,j} if rand_j(0, 1) ≤ CR or j = j_rand; otherwise u_{i,j} = x_{i,j},

where rand_j(0, 1) is generated anew for each dimension j and takes a value from 0.0 to 1.0 in a uniformly random manner, and CR is the crossover probability, which limits the number of parameters inherited from the mutant vector V_i. The integer j_rand is randomly chosen from the range [1, D], which guarantees that at least one parameter of the trial vector U_i is inherited from the mutant vector V_i [7].
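A minimal Python sketch of binomial crossover (again with our own names) makes the j_rand guarantee explicit:

```python
import random

def binomial_crossover(x, v, CR):
    """Inherit v[j] with probability CR; dimension j_rand always comes
    from the mutant vector v, so at least one parameter is inherited."""
    D = len(x)
    j_rand = random.randrange(D)
    return [v[j] if (random.random() <= CR or j == j_rand) else x[j]
            for j in range(D)]
```

With CR = 1 the trial vector equals the mutant vector; with CR = 0 exactly one parameter (the j_rand-th) is still taken from the mutant.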
2.3. Selection Operator
Like the genetic algorithm, the selection process of DE is based on the Darwinian law of survival of the fittest: the more excellent individuals are chosen for the next generation. For minimization problems, the selection operator can be defined in the following form [1]:

X_i^{G+1} = U_i if f(U_i) ≤ f(X_i); otherwise X_i^{G+1} = X_i,

where f(X_i) and f(U_i) indicate the fitness values of the target vector X_i and its corresponding trial vector U_i, respectively.
2.4. Algorithmic Framework of DE
Based on the above introduction of DE's operators, we present the framework of DE with the DE/rand/1/bin strategy in Algorithm 1, where FES is the number of fitness evaluations, MAX_FES is the maximum number of fitness evaluations, rand(0, 1) indicates a uniformly random real number in the range [0, 1], randint(1, D) represents a random integer in the range [1, D], and X_best is the global best individual found so far.
Algorithm 1: DE algorithm.
G = 0;
FES = 0;
/* Initialize the population P */
for i = 1 to NP do
    for j = 1 to D do
        x_{i,j} = L_j + rand(0, 1) · (U_j − L_j);
    end for
    Evaluate individual X_i;
    FES = FES + 1;
end for
while FES < MAX_FES do
    for i = 1 to NP do
        Choose three mutually different integers r1, r2, r3 from
        the set {1, 2, ..., NP}\{i} in a random manner;
        j_rand = randint(1, D);
        for j = 1 to D do
            if rand(0, 1) ≤ CR or j = j_rand then
                u_{i,j} = x_{r1,j} + F · (x_{r2,j} − x_{r3,j});
            else
                u_{i,j} = x_{i,j};
            end if
        end for
        /* Selection step */
        if f(U_i) ≤ f(X_i) then
            X_i = U_i;
            if f(U_i) < f(X_best) then
                X_best = U_i;
            end if
        else
            X_i = X_i;
        end if
        FES = FES + 1;
    end for
    G = G + 1;
end while
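To make Algorithm 1 concrete, the following is a compact, self-contained Python version of DE/rand/1/bin (a sketch with our own names, defaults, and per-sweep budget check, not the authors' code):

```python
import random

def de_rand_1_bin(f, bounds, NP=30, F=0.5, CR=0.9, max_fes=20000):
    """Canonical DE/rand/1/bin: initialize, then mutate, cross over,
    and select until the evaluation budget max_fes is exhausted."""
    D = len(bounds)
    pop = [[lo + random.random() * (hi - lo) for lo, hi in bounds]
           for _ in range(NP)]
    fit = [f(x) for x in pop]
    fes = NP
    best = min(range(NP), key=fit.__getitem__)
    while fes < max_fes:
        for i in range(NP):
            r1, r2, r3 = random.sample(
                [r for r in range(NP) if r != i], 3)
            j_rand = random.randrange(D)
            u = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
                 if (random.random() <= CR or j == j_rand) else pop[i][j]
                 for j in range(D)]
            fu = f(u)
            fes += 1
            if fu <= fit[i]:                 # selection step
                pop[i], fit[i] = u, fu
                if fu < fit[best]:
                    best = i
            if fes >= max_fes:
                break
    return pop[best], fit[best]
```

On a low-dimensional sphere function, this sketch converges to near-zero error well within the default budget, which matches the behavior the paper attributes to the canonical DE on unimodal problems.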
3. Proposed Approach
3.1. Motivations
DE has been demonstrated to yield superior performance in solving various real-world optimization problems [21-23]. However, it tends to suffer from premature convergence and/or slow convergence when solving complex optimization problems [6, 24]. To enhance the performance of DE, many researchers have proposed improved DE algorithms during the past decade [25-27]. Among the DE variations, the memetic method is a promising approach to improve the traditional DE: it utilizes various local search strategies, such as the chaotic search strategy [16], the simplex crossover search strategy [19], and the orthogonal search strategy [28], to strengthen the exploitation ability of the traditional DE and consequently accelerate the convergence speed. Among the local search strategies commonly used in memetic DE, the chaotic search strategy is inspired by the chaos phenomenon in nature. Chaos is a classic nonlinear dynamical behavior, widely known for its ergodicity, randomicity, and sensitivity to initial conditions [16, 29, 30]. Due to its ergodicity and randomicity, a chaotic system can generate a long-time sequence that traverses every state of the system exactly once, given a long enough time period [16, 31]. Taking advantage of these well-known characteristics of chaotic systems, researchers have proposed many chaotic search strategies for optimizing various problems [16, 32-34]. However, to the best of our knowledge, most existing chaotic search strategies attend mainly to the ergodicity and randomicity of the chaotic system. The exploration capacity can thereby indeed be improved; however, in order to maintain a balance between exploration and exploitation, the exploitation ability of the chaotic search strategy should be further enhanced.
Thus, when designing a more comprehensive chaotic search strategy, we should integrate more heuristic information into the chaotic search process to promote its exploitation power. Generally, the elite individuals in the current population, which indicate a promising search direction toward the optimum, are a favorable source of such information that can be employed to enhance the exploitation ability. Based on these considerations, we present an elite chaotic search strategy, which not only exploits the ergodicity and randomicity of the chaotic system but also merges the superior information of the current population into the chaotic search process.
3.2. Elite Chaotic Search
In many chaotic search strategies, the Logistic chaotic function is utilized to generate a chaotic sequence, which is formulated as follows [16]:

λ_{k+1} = 4 · λ_k · (1 − λ_k), k = 1, 2, ...,

where λ_1 is the initial value of the chaotic system, randomly generated from the range (0, 1) but not equal to 0.25, 0.5, or 0.75, and λ_k is the k-th state of the chaotic system. Since the initial state λ_1 is randomly produced and the system is ergodic and sensitive to λ_1, {λ_k} is a random long-time sequence that can traverse every state of the system exactly once if k is large enough.
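The Logistic map is straightforward to sample; a small Python sketch (our naming) illustrates how the sequence spreads over (0, 1), which is the ergodic behavior the search relies on:

```python
def logistic_sequence(lam1, n):
    """States lam_1..lam_n of the Logistic map lam_{k+1} = 4*lam_k*(1-lam_k);
    lam_1 must lie in (0, 1) and avoid 0.25, 0.5, and 0.75."""
    seq, lam = [], lam1
    for _ in range(n):
        seq.append(lam)
        lam = 4.0 * lam * (1.0 - lam)
    return seq
```

Starting from, say, λ_1 = 0.31, a few hundred iterations already visit both ends of the unit interval.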
In order to enhance the exploitation ability of the traditional chaotic search strategy, we integrate heuristic information learned from the elite individuals into the chaotic search strategy. The proposed elite chaotic search strategy is defined by

v_j = x_j + λ_k · (e_j − x_j), j = 1, 2, ..., D,

where X = (x_1, ..., x_D) is an individual on which the elite chaotic search is performed, randomly chosen from the current population, {λ_k} is the chaotic sequence generated as above, and E = (e_1, ..., e_D) is an elite individual, randomly chosen from the top p · NP individuals of the current population, with p ∈ (0, 1].
In the proposed elite chaotic search operator, an individual X is randomly selected from the current population to undergo the elite chaotic search strategy. After that, the initial value λ_1 of the chaotic system takes a value from the range (0, 1) in a uniformly random manner. Then, the elite chaotic search procedure for individual X is repeatedly performed until a solution better than X is found or the number of iterations k reaches the maximum K. The framework of the elite chaotic search operator is described in Algorithm 2.
Algorithm 2: Elite chaotic search operator.
k = 1;
λ_1 = rand(0, 1), with λ_1 ∉ {0.25, 0.5, 0.75};
Randomly choose an individual X from the current population;
V = X;
/* Elite chaotic search loop */
while k ≤ K do
    Randomly choose an elite individual E from the top p · NP
    individuals in the current population;
    for j = 1 to D do
        v_j = x_j + λ_k · (e_j − x_j);
        if v_j < L_j or v_j > U_j then
            v_j = rand(L_j, U_j);
        end if
    end for
    if f(V) < f(X) then
        X = V;
        break;
    end if
    λ_{k+1} = 4 · λ_k · (1 − λ_k);
    FES = FES + 1;
    k = k + 1;
end while
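As a sketch of the operator in Python: the candidate rule v_j = x_j + λ_k · (e_j − x_j) follows our reading of the strategy above, and the elite fraction p, the budget k_max, and the uniform re-drawing of out-of-bounds components are assumed settings, not values taken from the paper:

```python
import random

def elite_chaotic_search(pop, fit, f, bounds, p=0.1, k_max=10):
    """One invocation of the elite chaotic search operator (a sketch).
    Mutates pop/fit in place on success; returns the evaluations spent."""
    NP, D = len(pop), len(bounds)
    i = random.randrange(NP)                 # random target individual X
    lam = random.random()
    while lam in (0.0, 0.25, 0.5, 0.75):     # avoid fixed points of the map
        lam = random.random()
    order = sorted(range(NP), key=fit.__getitem__)
    n_elite = max(1, int(p * NP))
    fes = 0
    for _ in range(k_max):
        e = pop[random.choice(order[:n_elite])]       # random elite E
        v = [pop[i][j] + lam * (e[j] - pop[i][j]) for j in range(D)]
        # out-of-bounds components are re-drawn uniformly at random
        v = [vj if bounds[j][0] <= vj <= bounds[j][1]
             else random.uniform(*bounds[j]) for j, vj in enumerate(v)]
        fv = f(v)
        fes += 1
        if fv < fit[i]:                      # success: replace X and stop
            pop[i], fit[i] = v, fv
            break
        lam = 4.0 * lam * (1.0 - lam)        # next state of the Logistic map
    return fes
```

Because the candidate moves the target toward a randomly chosen elite by a chaotic step size, the operator can only improve (never worsen) the target individual.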
3.3. Parameter Adaptation
Since the setting of the control parameters can significantly influence the performance of DE, a parameter adaptation mechanism is essential for an efficient DE [7, 11, 12]. To this end, we design a simple and effective parameter adaptation mechanism, inspired by [11], for DEECL. In DEECL, each individual is independently associated with its own mutation factor F_i and crossover probability CR_i. For individual X_i, the control parameters F_i and CR_i are initialized to 0.5 and 0.9, respectively. Generally, a normal distribution with mean 0.5 and standard deviation 0.3 is a promising adaptive approach for the mutation factor of DE [7], whereas a Cauchy distribution is more favorable for diversifying the mutation factors and thus avoiding premature convergence [12]. Based on these considerations, at each generation, the new mutation factor F_i' associated with individual X_i is generated, with probability 0.1, as a Cauchy-distributed random real number with location parameter 0.5 and scale parameter 0.3. Additionally, following the suggestions in [11], the new crossover probability CR_i' associated with individual X_i acquires, with probability 0.1, a random value from 0.0 to 1.0. Mathematically, the new control parameters F_i' and CR_i' used to generate the corresponding trial vector U_i are obtained by

F_i' = Cauchy(0.5, 0.3) if rand(0, 1) < 0.1, otherwise F_i,
CR_i' = rand(0, 1) if rand(0, 1) < 0.1, otherwise CR_i,

where Cauchy(0.5, 0.3) is a Cauchy-distributed random real number with location parameter 0.5 and scale parameter 0.3, and rand(0, 1) is a uniformly random number within the range [0, 1].
After obtaining the new control parameters F_i' and CR_i', the corresponding trial vector U_i is created using them. It is widely acknowledged that better control parameter values tend to produce better individuals that have a greater chance to survive, and thus these values should be propagated to the next generations [12]. Therefore, in the selection step, the control parameters F_i and CR_i associated with individual X_i for the next generation are updated by

(F_i, CR_i) = (F_i', CR_i') if f(U_i) ≤ f(X_i); otherwise they remain unchanged.

From this parameter adaptation mechanism, we can infer that the better control parameters of DEECL are propagated to the next generations. Therefore, the control parameters of DEECL are adaptively tuned according to feedback from the search process.
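A Python sketch of the proposal step for one individual follows; the Cauchy variate is drawn by inverse-CDF sampling, and clamping out-of-range F values to (0, 1] is our assumed repair rule, not one stated above:

```python
import math
import random

def adapt_parameters(F_i, CR_i, tau=0.1):
    """Propose (F', CR') for one individual: with probability tau, F' is
    redrawn from Cauchy(location 0.5, scale 0.3); with probability tau,
    CR' is redrawn uniformly from [0, 1]; otherwise the old values are
    inherited.  Clamping F' to (0, 1] is an assumed repair rule."""
    F_new, CR_new = F_i, CR_i
    if random.random() < tau:
        # inverse-CDF sampling of a Cauchy variate
        F_new = 0.5 + 0.3 * math.tan(math.pi * (random.random() - 0.5))
        F_new = min(max(F_new, 0.05), 1.0)
    if random.random() < tau:
        CR_new = random.random()
    return F_new, CR_new
```

On average, roughly one individual in ten receives a fresh F (and, independently, a fresh CR) per generation, while the rest inherit the values that produced surviving offspring.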
4. Numerical Experiments
4.1. Experimental Setup
In order to assess the performance of the proposed DEECL, we use 13 classical test functions (f1-f13) that are widely used in the evolutionary computation community [8, 12, 35]. We describe these test functions in Table 1. Among them [35], f1-f4 are continuous unimodal functions. f5 is the Rosenbrock function, which is unimodal for D = 2 and 3 but may have multiple minima in high-dimensional cases [36]. f6 is a discontinuous step function, and f7 is a noisy quartic function. f8-f13 are multimodal functions with many local minima [35].
Table 1: The 13 classical test functions.
Function | Name | Initial range | f_min |
f1 | Sphere Problem | [−100, 100]^D | 0 |
f2 | Schwefel's Problem 2.22 | [−10, 10]^D | 0 |
f3 | Schwefel's Problem 1.2 | [−100, 100]^D | 0 |
f4 | Schwefel's Problem 2.21 | [−100, 100]^D | 0 |
f5 | Rosenbrock's Function | [−30, 30]^D | 0 |
f6 | Step Function | [−100, 100]^D | 0 |
f7 | Quartic Function with Noise | [−1.28, 1.28]^D | 0 |
f8 | Schwefel's Problem 2.26 | [−500, 500]^D | 0 |
f9 | Rastrigin's Function | [−5.12, 5.12]^D | 0 |
f10 | Ackley's Function | [−32, 32]^D | 0 |
f11 | Griewank Function | [−600, 600]^D | 0 |
f12 | Penalized Function 1 | [−50, 50]^D | 0 |
f13 | Penalized Function 2 | [−50, 50]^D | 0 |
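For reference, three of the listed functions are shown in Python below, using the conventional f1/f9/f10 numbering; these are the standard definitions from the benchmark literature, not code from the paper:

```python
import math

def sphere(x):          # f1: Sphere Problem
    return sum(v * v for v in x)

def rastrigin(x):       # f9: Rastrigin's Function, many regular local minima
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0
               for v in x)

def ackley(x):          # f10: Ackley's Function
    D = len(x)
    s1 = sum(v * v for v in x) / D
    s2 = sum(math.cos(2.0 * math.pi * v) for v in x) / D
    return (-20.0 * math.exp(-0.2 * math.sqrt(s1))
            - math.exp(s2) + 20.0 + math.e)
```

All three attain their global minimum value 0 at the origin, consistent with the f_min column of Table 1.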
In all experiments, we set the number of dimensions D to 30 for all the test functions. We carry out 30 independent runs for each algorithm on each test function, with 150,000 function evaluations (FES) as the termination criterion. Moreover, as recommended by [17], we record the average and standard deviation of the function error value f(X_best) − f(X*), where X_best is the best solution gained by the algorithm in a run and X* is the global optimum of the test function.
4.2. Benefit of the Two Components
There are two important components in the proposed DEECL: the elite chaotic search strategy and the parameter adaptation mechanism. Accordingly, it is interesting to quantify the benefit of each component. For this purpose, we conduct experiments to compare DEECL with the traditional DE with the DE/rand/1 strategy and with two variants of DEECL, namely, DE with the proposed elite chaotic search strategy (DEwEC) and DE with the designed parameter adaptation mechanism (DEwPA). In the experiments, we set the population size of all the algorithms to 100. For the other parameters of DE and DEwEC, we set F = 0.5 and CR = 0.9, following the suggestions in [11].
We present the experimental results of the above-mentioned algorithms in Table 2. The best results among the four algorithms are highlighted in boldface. "Mean Error" and "Std Dev" indicate the mean and standard deviation of the function error values achieved over 30 independent runs, respectively. Comparing DE and DEwEC, DEwEC performs better than DE on all test functions except one, on which the two algorithms exhibit similar performance. In total, DEwEC is better than DE on twelve test functions. This comparison indicates that the introduced elite chaotic search strategy is effective in enhancing the performance of the traditional DE.
Table 2: Experimental results of DE, DEwEC, DEwPA, and DEECL over 30 independent runs for the 13 test functions.
Function | DE | DEwEC | DEwPA | DEECL |
| Mean ± Std Dev | Mean ± Std Dev | Mean ± Std Dev | Mean ± Std Dev |
f1-f13 | [Numerical entries omitted; refer to PDF.] |
Comparing DE with DEwPA, DEwPA surpasses DE on all test functions except two: on one of them, DE and DEwPA demonstrate similar performance, whereas on the other, DE is better than DEwPA. In summary, DEwPA outperforms DE on eleven test functions. This comparison reveals that the designed parameter adaptation mechanism is capable of improving the efficiency of the traditional DE.
By incorporating both the elite chaotic search strategy and the parameter adaptation mechanism, DEECL achieves promising performance, better than the other three DE algorithms on the majority of the test functions. Specifically, DEECL is better than DE, DEwEC, and DEwPA on eleven, nine, and ten test functions, respectively, while each of DE, DEwEC, and DEwPA outperforms DEECL on only one test function. These results suggest that both components have a positive effect on the performance of DEECL. In addition, they confirm that DE equipped with both components outperforms DE with only one of them, or with neither, on the majority of the test functions; the elite chaotic search strategy and the parameter adaptation mechanism thus work together to improve the traditional DE rather than contradicting each other. The evolution of the average function error values derived from DE, DEwEC, DEwPA, and DEECL versus the number of FES is plotted in Figure 1 for some typical test functions. As can be seen from Figure 1, DEECL converges faster than DE, DEwEC, and DEwPA.
Figure 1: Evolution of the average function error values derived from DE, DEwEC, DEwPA, and DEECL versus the number of FES.
[figure omitted; refer to PDF]
4.3. Comparison with Other DE Variants
In order to further verify the effectiveness of the proposed DEECL algorithm, we compare DEECL with the traditional DE and three other DE variants, namely, jDE [11], ODE [14], and DECLS [16]. jDE is a self-adaptive DE in which both parameters F and CR are generated independently for each individual by an adaptive mechanism [11]. ODE, proposed by Rahnamayan et al. [14], incorporates the opposition-based learning strategy into the traditional DE for population initialization and for creating new solutions. DECLS is an effective memetic DE algorithm [16] that utilizes a chaotic local search strategy and an adaptive parameter control approach similar to that of jDE [11] to improve the search ability. In the experiments, in order to ensure a fair comparison, we set the population size of all the algorithms to 100. The other parameter settings of the three DE variants are the same as in their original papers.
The mean and standard deviation of the function error values achieved by each algorithm on the 13 classical test functions are presented in Table 3. For convenience of analysis, the best results among the five DE algorithms are highlighted in boldface. In order to draw statistically sound conclusions, we conduct two-tailed t-tests at the significance level of 0.05 [28, 35] on the experimental results. The summary comparison results are described in the last three rows of Table 3. "+," "−," and "≈" indicate that DEECL is better than, worse than, and similar to the corresponding algorithm, respectively, according to the two-tailed t-tests at the 0.05 significance level.
Table 3: Experimental results of DE, jDE, ODE, DECLS, and DEECL over 30 independent runs for the 13 test functions. (The per-function mean ± standard deviation entries could not be recovered from this version; refer to PDF.)

Summary of the two-tailed t-test comparisons (DEECL versus each algorithm):
 | DE | jDE | ODE | DECLS |
- | 1 | 1 | 2 | 2 |
+ | 11 | 7 | 9 | 6 |
≈ | 1 | 5 | 2 | 5 |
From Table 3, we can see that DEECL achieves better results than the other four algorithms on the majority of the 13 classical test functions. Specifically, according to the two-tailed t-test, DEECL is significantly better than DE, jDE, ODE, and DECLS on eleven, seven, nine, and six test functions, respectively, and statistically similar to them on one, five, two, and five test functions, respectively. DE and jDE each surpass DEECL on only one test function, while ODE and DECLS perform better than DEECL on only two test functions each.
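The per-function "+"/"-"/"≈" verdicts above come from pairwise two-tailed t-tests over 30 runs. The sketch below illustrates this procedure with a Welch t-statistic and an approximate 0.05 critical value of 2.0 (appropriate for these sample sizes); the error samples are synthetic placeholders, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical function error values over 30 independent runs per algorithm.
deecl_errors = rng.lognormal(mean=-8.0, sigma=0.5, size=30)
de_errors = rng.lognormal(mean=-6.0, sigma=0.5, size=30)

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

t = welch_t(deecl_errors, de_errors)
# For 30 runs per sample, the two-tailed 0.05 critical value is roughly 2.0.
if abs(t) < 2.0:
    verdict = "~"                              # no significant difference
elif deecl_errors.mean() < de_errors.mean():
    verdict = "+"                              # first algorithm significantly better
else:
    verdict = "-"                              # first algorithm significantly worse
print(verdict)
```

In a full comparison, this verdict is computed for every (function, competitor) pair and the three symbols are tallied into summary rows like those in Table 3.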
Overall, DEECL performs better than the traditional DE, jDE, ODE, and DECLS on the majority of the test functions. A likely explanation is that the proposed elite chaotic search strategy, by learning heuristic information from the elite individuals, promotes the exploitation power, while the designed parameter adaptation mechanism enhances the robustness. The evolution of the average function error values derived from DE, jDE, ODE, DECLS, and DEECL versus the number of FES is plotted in Figure 2 for some typical test functions. As Figure 2 shows, DEECL converges faster than DE, jDE, ODE, and DECLS.
Figure 2: Evolution of the average function error values derived from DE, jDE, ODE, DECLS, and DEECL versus the number of FES.
To compare the overall performance of the five DE algorithms on all 13 classical test functions, we carry out the average ranking of the Friedman test on the experimental results, following the suggestions in [37-39]. Table 4 presents the average ranking of the five DE algorithms on all 13 classical test functions. Sorted by average ranking, the algorithms order as follows: DEECL, DECLS, jDE, ODE, and DE. DEECL thus obtains the best average ranking, and its overall performance is better than that of the other four algorithms on all 13 test instances.
Table 4: Average rankings of the five algorithms for the 13 test functions achieved by Friedman test.
Algorithm | Ranking |
DEECL | 2.04 |
DECLS | 2.27 |
jDE | 2.65 |
ODE | 3.50 |
DE | 4.54 |
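The average rankings in Table 4 follow the standard Friedman procedure: rank the algorithms within each test function (rank 1 = smallest error, ties averaged), then average the ranks over all functions. A minimal sketch, using a made-up error matrix rather than the paper's results:

```python
import numpy as np

algorithms = ["DE", "jDE", "ODE", "DECLS", "DEECL"]
# Hypothetical mean error values: rows = test functions, columns = algorithms.
errors = np.array([
    [1e-2, 1e-4, 1e-3, 1e-5, 1e-6],
    [5e-1, 2e-2, 8e-2, 3e-3, 1e-3],
    [3e+0, 4e-1, 9e-1, 2e-1, 2e-1],   # note the tie between the last two
])

def average_ranks(mat):
    """Friedman-style average ranks: rank within each row (1 = best),
    average exact ties, then average the ranks over all rows."""
    ranks = np.empty_like(mat)
    for i, row in enumerate(mat):
        r = np.empty(len(row))
        r[row.argsort()] = np.arange(1, len(row) + 1)
        for v in np.unique(row):       # assign tied values their mean rank
            mask = row == v
            r[mask] = r[mask].mean()
        ranks[i] = r
    return ranks.mean(axis=0)

for name, rank in zip(algorithms, average_ranks(errors)):
    print(f"{name}: {rank:.2f}")
```

The algorithm with the lowest average rank is the overall best, which is how the ordering DEECL, DECLS, jDE, ODE, DE in Table 4 is obtained.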
5. Conclusions
DE is a popular evolutionary algorithm for continuous global optimization problems; it has a simple structure yet exhibits efficient performance on various real-world engineering problems. However, according to the no free lunch (NFL) theorem [18], the performance of DE should be further enhanced to obtain better solutions in some cases. In this paper, we propose an enhanced differential evolution with elite chaotic local search, called DEECL, which uses a chaotic search strategy based on heuristic information from the elite individuals to promote the exploitation power, and employs a simple and effective parameter adaptation mechanism to enhance the robustness. In the experiments, we use 13 classical test functions that are widely used in the evolutionary computation community to evaluate the performance of DEECL. The experimental results show that DEECL outperforms the conventional DE, jDE, ODE, and DECLS on the majority of the test functions.
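To make the idea of an elite chaotic local search concrete, the following is an illustrative reconstruction under common conventions in the chaotic-search literature (logistic-map chaos, perturbation of the best individual toward randomly chosen elites, greedy acceptance); the exact update rule, elite fraction, and step scale here are assumptions, not the paper's formulation.

```python
import numpy as np

def logistic_map(z, mu=4.0):
    """One iteration of the logistic map, a common chaotic sequence generator."""
    return mu * z * (1.0 - z)

def elite_chaotic_local_search(pop, fitness, func, bounds, steps=10, rng=None):
    """Refine the best individual using chaotic moves toward random elites.
    Illustrative sketch only: elite fraction (top 20%) and the small
    bound-scaled perturbation are hypothetical choices."""
    rng = rng if rng is not None else np.random.default_rng()
    low, high = bounds
    elite_idx = np.argsort(fitness)[: max(1, len(pop) // 5)]  # top 20% as elites
    best = pop[np.argmin(fitness)].copy()
    best_fit = func(best)
    z = rng.uniform(0.01, 0.99)        # chaotic variable; avoid fixed points 0, 1
    for _ in range(steps):
        z = logistic_map(z)
        elite = pop[rng.choice(elite_idx)]
        trial = np.clip(best + z * (elite - best)
                        + (z - 0.5) * 0.01 * (high - low), low, high)
        f = func(trial)
        if f < best_fit:               # greedy acceptance: keep only improvements
            best, best_fit = trial, f
    return best, best_fit

# Usage on the 10-dimensional sphere function
rng = np.random.default_rng(1)
pop = rng.uniform(-5.0, 5.0, size=(20, 10))
sphere = lambda x: float(np.sum(x * x))
fit = np.array([sphere(x) for x in pop])
best, best_fit = elite_chaotic_local_search(pop, fit, sphere, (-5.0, 5.0),
                                            steps=50, rng=rng)
print(best_fit <= fit.min())           # prints True: greedy search never worsens
```

Because the search starts from the current best and accepts only improvements, it can only tighten the best solution, which matches the exploitation role the paper assigns to the elite chaotic local search.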
In the future, we will apply DEECL to handle more complex optimization problems, such as high-dimensional optimization problems and multiobjective optimization problems.
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China (nos. 61364025, 61462036, and 41261093), by the Natural Science Foundation of Jiangxi, China (nos. 20151BAB217010 and 20151BAB201015), and by the Education Department Youth Scientific Research Foundation of Jiangxi Province, China (nos. GJJ14456 and GJJ13378).
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
[1] R. Storn, K. Price, "Differential evolution-a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization , vol. 11, no. 4, pp. 341-359, 1997.
[2] S. Das, P. N. Suganthan, "Differential evolution: a survey of the state-of-the-art," IEEE Transactions on Evolutionary Computation , vol. 15, no. 1, pp. 4-31, 2011.
[3] J. Vesterstrøm, R. Thomsen, "A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems," in Proceedings of the Congress on Evolutionary Computation (CEC '04), vol. 2, pp. 1980-1987, IEEE, June 2004.
[4] L. Wang, L.-P. Li, "Fixed-structure H∞ controller synthesis based on differential evolution with level comparison," IEEE Transactions on Evolutionary Computation , vol. 15, no. 1, pp. 120-129, 2011.
[5] F. Martin, L. Moreno, M. L. Muñoz, D. Blanco, "Initial population size estimation for a differential-evolution-based global localization filter," International Journal of Robotics and Automation , vol. 29, no. 3, 2014.
[6] F. Neri, V. Tirronen, "Recent advances in differential evolution: a survey and experimental analysis," Artificial Intelligence Review , vol. 33, no. 1-2, pp. 61-106, 2010.
[7] A. K. Qin, V. L. Huang, P. N. Suganthan, "Differential evolution algorithm with strategy adaptation for global numerical optimization," IEEE Transactions on Evolutionary Computation , vol. 13, no. 2, pp. 398-417, 2009.
[8] W. Gong, Z. Cai, C. X. Ling, C. Li, "Enhanced differential evolution with adaptive strategies for numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics , vol. 41, no. 2, pp. 397-413, 2011.
[9] H. Wang, S. Rahnamayan, H. Sun, M. G. H. Omran, "Gaussian bare-bones differential evolution," IEEE Transactions on Cybernetics , vol. 43, no. 2, pp. 634-647, 2013.
[10] A. Deb, J. S. Roy, B. Gupta, "Performance comparison of differential evolution, particle swarm optimization and genetic algorithm in the design of circularly polarized microstrip antennas," IEEE Transactions on Antennas and Propagation , vol. 62, no. 8, pp. 3920-3928, 2014.
[11] J. Brest, S. Greiner, B. Boskovic, M. Mernik, V. Zumer, "Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems," IEEE Transactions on Evolutionary Computation , vol. 10, no. 6, pp. 646-657, 2006.
[12] J. Zhang, A. C. Sanderson, "JADE: adaptive differential evolution with optional external archive," IEEE Transactions on Evolutionary Computation , vol. 13, no. 5, pp. 945-958, 2009.
[13] R. Mallipeddi, P. N. Suganthan, Q. K. Pan, M. F. Tasgetiren, "Differential evolution algorithm with ensemble of parameters and mutation strategies," Applied Soft Computing Journal , vol. 11, no. 2, pp. 1679-1696, 2011.
[14] R. S. Rahnamayan, H. R. Tizhoosh, M. M. A. Salama, "Opposition-based differential evolution," IEEE Transactions on Evolutionary Computation , vol. 12, no. 1, pp. 64-79, 2008.
[15] H. Wang, Z. Wu, S. Rahnamayan, "Enhanced opposition-based differential evolution for solving high-dimensional continuous optimization problems," Soft Computing , vol. 15, no. 11, pp. 2127-2140, 2011.
[16] D. Jia, G. Zheng, M. Khurram Khan, "An effective memetic differential evolution algorithm based on chaotic local search," Information Sciences , vol. 181, no. 15, pp. 3175-3187, 2011.
[17] Y. Wang, Z. Cai, Q. Zhang, "Differential evolution with composite trial vector generation strategies and control parameters," IEEE Transactions on Evolutionary Computation , vol. 15, no. 1, pp. 55-66, 2011.
[18] D. H. Wolpert, W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation , vol. 1, no. 1, pp. 67-82, 1997.
[19] N. Noman, H. Iba, "Accelerating differential evolution using an adaptive local search," IEEE Transactions on Evolutionary Computation , vol. 12, no. 1, pp. 107-125, 2008.
[20] Q.-K. Pan, P. N. Suganthan, L. Wang, L. Gao, R. Mallipeddi, "A differential evolution algorithm with self-adapting strategy and control parameters," Computers and Operations Research , vol. 38, no. 1, pp. 394-408, 2011.
[21] D. Kranjcic, G. Stumberger, "Differential evolution-based identification of the nonlinear kaplan turbine model," IEEE Transactions on Energy Conversion , vol. 29, no. 1, pp. 178-187, 2014.
[22] Q.-K. Pan, L. Wang, L. Gao, W. D. Li, "An effective hybrid discrete differential evolution algorithm for the flow shop scheduling with intermediate buffers," Information Sciences , vol. 181, no. 3, pp. 668-685, 2011.
[23] L. X. Tang, Y. Zhao, J. Y. Liu, "An improved differential evolution algorithm for practical dynamic scheduling in steelmaking-continuous casting production," IEEE Transactions on Evolutionary Computation , vol. 18, no. 2, pp. 209-225, 2014.
[24] S. Das, A. Abraham, U. K. Chakraborty, A. Konar, "Differential evolution using a neighborhood-based mutation operator," IEEE Transactions on Evolutionary Computation , vol. 13, no. 3, pp. 526-553, 2009.
[25] M. G. Epitropakis, D. K. Tasoulis, N. G. Pavlidis, V. P. Plagianakos, M. N. Vrahatis, "Enhancing differential evolution utilizing proximity-based mutation operators," IEEE Transactions on Evolutionary Computation , vol. 15, no. 1, pp. 99-119, 2011.
[26] S. Kundu, S. Das, A. V. Vasilakos, S. Biswas, "A modified differential evolution-based combined routing and sleep scheduling scheme for lifetime maximization of wireless sensor networks," Soft Computing , vol. 19, no. 3, pp. 637-659, 2014.
[27] T. Bhadra, S. Bandyopadhyay, "Unsupervised feature selection using an improved version of differential evolution," Expert Systems with Applications , vol. 42, no. 8, pp. 4042-4053, 2015.
[28] Y. Wang, Z. Cai, Q. Zhang, "Enhancing the search ability of differential evolution through orthogonal crossover," Information Sciences , vol. 185, no. 1, pp. 153-177, 2012.
[29] B. Alatas, "Chaotic bee colony algorithms for global numerical optimization," Expert Systems with Applications , vol. 37, no. 8, pp. 5682-5687, 2010.
[30] B. Li, W. S. Jiang, "Optimizing complex functions by chaos search," Cybernetics & Systems , vol. 29, no. 4, pp. 409-419, 1998.
[31] Y. He, Q. Xu, S. Yang, L. Liao, "Reservoir flood control operation based on chaotic particle swarm optimization algorithm," Applied Mathematical Modelling , vol. 38, no. 17, pp. 4480-4492, 2014.
[32] B. Liu, L. Wang, Y.-H. Jin, F. Tang, D.-X. Huang, "Improved particle swarm optimization combined with chaos," Chaos, Solitons & Fractals , vol. 25, no. 5, pp. 1261-1271, 2005.
[33] T. Xiang, X. Liao, K.-W. Wong, "An improved particle swarm optimization algorithm combined with piecewise linear chaotic map," Applied Mathematics and Computation , vol. 190, no. 2, pp. 1637-1645, 2007.
[34] X. F. Yan, D. Z. Chen, S. X. Hu, "Chaos-genetic algorithms for optimizing the operating conditions based on RBF-PLS model," Computers & Chemical Engineering , vol. 27, no. 10, pp. 1393-1404, 2003.
[35] X. Yao, Y. Liu, G. Lin, "Evolutionary programming made faster," IEEE Transactions on Evolutionary Computation , vol. 3, no. 2, pp. 82-102, 1999.
[36] Y.-W. Shang, Y.-H. Qiu, "A note on the extended Rosenbrock function," Evolutionary Computation , vol. 14, no. 1, pp. 119-126, 2006.
[37] S. Garcia, F. Herrera, "An extension on statistical comparisons of classifiers over multiple data sets for all pairwise comparisons," Journal of Machine Learning Research , vol. 9, pp. 2677-2694, 2008.
[38] S. Garcia, D. Molina, M. Lozano, F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization," Journal of Heuristics , vol. 15, no. 6, pp. 617-644, 2009.
[39] H. Wang, H. Sun, C. Li, S. Rahnamayan, J.-S. Pan, "Diversity enhanced particle swarm optimization with neighborhood search," Information Sciences , vol. 223, pp. 119-135, 2013.
Copyright © 2015 Zhaolu Guo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
Differential evolution (DE) is a simple yet efficient evolutionary algorithm for real-world engineering problems. However, its search ability should be further enhanced to obtain better solutions when DE is applied to complex optimization problems. This paper presents an enhanced differential evolution with elite chaotic local search (DEECL). DEECL utilizes a chaotic search strategy based on heuristic information from the elite individuals to promote the exploitation power. Moreover, it employs a simple and effective parameter adaptation mechanism to enhance the robustness. Experiments are conducted on a set of classical test functions. The experimental results show that DEECL is very competitive on the majority of the test functions.