Chun-Feng Wang 1,2 and Kui Liu 1
Academic Editor: Massimo Panella
1 Department of Mathematics, Henan Normal University, Xinxiang 453007, China
2 Henan Engineering Laboratory for Big Data Statistical Analysis and Optimal Control, School of Mathematics and Information Sciences, Henan Normal University, Xinxiang 453007, China
Received 17 August 2015; Revised 1 December 2015; Accepted 8 December 2015
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
This paper considers the following global optimization problem: $$\min f(x), \quad x \in X, \tag{1}$$ where $x = (x_1, x_2, \ldots, x_n)$ is a continuous variable vector with domain $X \subset \mathbb{R}^n$ defined by the bound constraints $l_i \le x_i \le u_i$, $i = 1, 2, \ldots, n$. The function $f \colon X \to \mathbb{R}$ is a continuous real-valued function.
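As a concrete instance of formulation (1), the sketch below defines the classic Rastrigin function, a bound-constrained multimodal test function of the kind used in the later benchmarks. The function and its bounds are standard choices for illustration, not taken from the paper's Table 1, and Python is used here although the paper's own experiments are in MATLAB.

```python
import numpy as np

def rastrigin(x):
    """Rastrigin function, a classic multimodal instance of problem (1):
    many local optima, single global minimum f(0) = 0."""
    x = np.asarray(x, dtype=float)
    return 10.0 * x.size + float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x)))

# Bound constraints l_i <= x_i <= u_i defining the feasible domain X.
lower, upper = -5.12, 5.12

print(rastrigin(np.zeros(2)))  # 0.0 at the global optimum
```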
Many real-world problems, in engineering and related areas, can be reduced to formulation (1). Such problems usually have many local optima, so it is difficult to find the global optimum. Over the past years, researchers have proposed many methods for solving them, which can be divided into two groups: deterministic and stochastic algorithms. Most deterministic algorithms are effective only for unimodal functions, which have a single global optimum, and require gradient information. Stochastic algorithms, in contrast, do not require any special properties of the objective function. Therefore, more attention has been paid to stochastic algorithms recently, and many effective algorithms have been presented, including Simulated Annealing (SA) [1], Genetic Algorithms (GA) [2, 3], Differential Evolution (DE) [4], Particle Swarm Optimization (PSO) [5], Ant Colony Optimization (ACO) [6], Artificial Bee Colony (ABC) [7], and Harmony Search (HS) [8].
Among these stochastic algorithms, PSO is a population-based intelligent method inspired by the emergent motion of a flock of birds searching for food [5]. In PSO, a population of potential solutions is evolved through successive iterations. Since the PSO algorithm has a number of desirable properties, including simplicity of implementation, scalability in dimension, and good empirical performance, it has been applied to many real-world problems, such as the capacitor placement problem [9], short-term load forecasting [10], soft sensors [11], the voltage stability of electric power distribution systems [12, 13], directing the orbits of discrete chaotic dynamical systems towards a desired target region [14], and the permutation flow-shop sequencing problem [15].
Although the PSO algorithm has been applied successfully to many difficult optimization problems, it has trouble keeping the balance between exploration and exploitation when solving complex multimodal problems. To obtain better performance, many variants of PSO have been developed. For example, by using a random value for the inertia weight, Eberhart and Shi proposed a modified PSO which can track the optima in a dynamic environment [16]. By utilizing the success rate of the particles, a new adaptive inertia weight strategy was presented [17]. Using Cauchy mutation, a hybrid PSO (HPSO) was developed [18]. In [19], a hybrid PSO with wavelet mutation (HWPSO) was given, in which the mutation incorporates a wavelet function. To avoid premature convergence, a novel parameter automation strategy was proposed [20]. Valdez et al. introduced an improved FPSO + FGA hybrid method combining the advantages of PSO and GA [21]. Based on a mutation operator and different local search techniques, a superior solution guided PSO (SSG-PSO) was developed [22]. By using the second personal best and the second global best particle, two modified PSO algorithms were presented [23]. To avoid being trapped in local optima during convergence, other improved PSO variants have been proposed, based on crossover [24], an orthogonal learning strategy [25], chaos [26], and an elitist learning strategy [27].
In this paper, a new Particle Swarm Optimization algorithm, named NPSO, is proposed, which utilizes the information of the best neighbor of each particle and of the best particle of the entire population in the current iteration. To avoid premature convergence, an abandonment mechanism is introduced. Furthermore, to improve the global convergence speed, a chaotic search is applied to the best solution of each iteration.
The remainder of this paper is organized as follows. Section 2 describes the original PSO. Then our proposed modifications to PSO are described in Section 3. Numerical results and discussions are presented in Section 4. Finally, some concluding remarks are provided in Section 5.
2. Particle Swarm Optimization (PSO)
Assume that the search space is $D$-dimensional and let $N$ denote the size of the swarm population. In PSO, each particle $i$ has a position $x_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$ in the search space and a velocity $v_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$ to indicate its current state. A position $x_i$ denotes a feasible solution. The position $x_i$ and the velocity $v_i$ are updated using the best position $pbest_i$ encountered by the particle so far and the best position $gbest$ found by the entire population of particles, according to the following equations: $$v_{id}^{t+1} = w v_{id}^{t} + c_1 r_1 \left(pbest_{id} - x_{id}^{t}\right) + c_2 r_2 \left(gbest_{d} - x_{id}^{t}\right), \qquad x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}, \tag{2}$$ where $c_1$ and $c_2$ are two learning factors which control the influence of the social and cognitive components, $r_1, r_2$ are random numbers in the range $[0,1]$, and $w$ is the inertia weight, which ensures the convergence of the PSO algorithm and is decreased linearly.
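The update (2) can be sketched in a few lines. The following Python snippet (illustrative, not the authors' MATLAB implementation) applies the standard velocity and position updates with a linearly decreasing inertia weight; the demo parameters and the sphere objective are assumptions for the example, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0):
    """One standard PSO iteration, eq. (2): velocities are pulled toward each
    particle's personal best and the swarm's global best, then positions move."""
    N, D = x.shape
    r1, r2 = rng.random((N, D)), rng.random((N, D))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

def sphere(x):
    """Simple unimodal test objective, evaluated row-wise."""
    return np.sum(x**2, axis=1)

# Assumed demo parameters: 20 particles, 5 dimensions, 200 iterations.
N, D, T = 20, 5, 200
x = rng.uniform(-10, 10, (N, D))
v = np.zeros((N, D))
pbest, pval = x.copy(), sphere(x)
gbest = pbest[np.argmin(pval)]

for t in range(T):
    w = 0.9 - 0.5 * t / T             # linearly decreasing inertia weight
    x, v = pso_step(x, v, pbest, gbest, w)
    f = sphere(x)
    improved = f < pval               # update personal bests on improvement
    pbest[improved], pval[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pval)]    # best position of the whole swarm

print(float(np.min(pval)))            # best sphere value found
```

Because personal bests are only replaced on improvement, the best value found is monotonically non-increasing over the iterations.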
3. The Novel PSO Algorithm (NPSO)
In the original PSO, each particle moves through the search space guided only by its historical best solution $pbest_i$ and the global best solution $gbest$. Consequently, the swarm may get trapped when the current global best solution lies in a local optimum, since it is not easy for a particle to escape from it. To address this problem, three improvement strategies are proposed in this section.
3.1. The First Improvement Strategy
From (2), we observe that only the information of the historical best position $pbest_i$ of each particle and the global best position $gbest$ of all particles is utilized. As a matter of fact, the information of the best neighbor $nbest_i$ of particle $i$ may provide better guidance. The details are given as follows.
Firstly, we explain how to define the neighbors and determine the best neighbor $nbest_i$ of particle $i$. Different approaches could be used to define appropriate neighbors. In this paper, the neighbors of $x_i$ are defined by using the mean Euclidean distance between $x_i$ and the rest of the solutions. Let $d_{ij}$ be the Euclidean distance between $x_i$ and $x_j$ and let $md_i$ be the mean Euclidean distance for $x_i$. Then $md_i$ can be computed as follows: $$md_i = \frac{1}{N-1} \sum_{j=1,\, j \neq i}^{N} d_{ij}. \tag{3}$$ By (3), if $d_{ij} < md_i$, then $x_j$ can be accepted as a neighbor of $x_i$.
In addition, we can also use a more general and flexible definition to determine a neighbor of $x_i$: $$d_{ij} < r \cdot md_i. \tag{4}$$ If (4) is used, then a new parameter $r$, which refers to the "neighbourhood radius," is added to the parameters of PSO. For a particular value of $r$, the method reduces to the standard PSO. As the value of $r$ increases the neighborhood of $x_i$ enlarges, and as $r$ decreases the neighborhood shrinks. Once the neighbors are determined, we select the best position among the neighbors of $x_i$ as the best neighbor $nbest_i$.
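The mean-distance neighborhood of (3)-(4) can be sketched as follows. This Python snippet is an illustration under the interpretation that $x_j$ is a neighbor of $x_i$ whenever $d_{ij}$ is below $r$ times the mean distance $md_i$, with $r = 1$ recovering definition (3); the fallback when the neighborhood is empty is an assumption.

```python
import numpy as np

def best_neighbor(X, f_vals, i, r=1.0):
    """Return nbest_i: among the particles within r times the mean Euclidean
    distance md_i of particle i (definition (4); r = 1 recovers (3)),
    pick the position with the lowest objective value."""
    d = np.linalg.norm(X - X[i], axis=1)   # distances d_ij to every particle
    d[i] = np.inf                          # exclude the particle itself
    md = np.mean(d[np.isfinite(d)])        # mean distance md_i over j != i
    neighbors = np.where(d < r * md)[0]    # indices inside the neighborhood
    if neighbors.size == 0:                # empty neighborhood: fall back to x_i
        return X[i]
    return X[neighbors[np.argmin(f_vals[neighbors])]]

# Small example: particle 0 has two nearby neighbors; the fitter one is chosen.
X = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0], [0.9, 0.1]])
f_vals = np.array([3.0, 1.0, 0.0, 2.0])
print(best_neighbor(X, f_vals, 0))
```

In the example, the distant particle at (5, 5) lies beyond the mean distance and is excluded, so the best neighbor is chosen between the two nearby particles.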
After determining the best neighbor $nbest_i$, we give a new way of moving each particle. If [figure omitted; refer to PDF], then set [figure omitted; refer to PDF]. If [figure omitted; refer to PDF], then set [figure omitted; refer to PDF]. After that, $x_i$ moves to a new position according to the following equation: [figure omitted; refer to PDF]
In our algorithm, it is clear that before each particle moves, it first examines the region centered at itself, selects its best neighbour, and then uses (5)-(7) to generate its next position.
3.2. The Second Improvement Strategy
To avoid premature convergence, an abandonment mechanism is proposed in our algorithm.
Assume that "limit" is a predetermined number, which is used to determine whether the position $x_i$ of particle $i$ should be abandoned, and let $trial_i$ count the consecutive iterations in which $x_i$ is not improved. At the beginning, set $trial_i = 0$. As the algorithm runs, if the position $x_i$ is not improved, then set $trial_i = trial_i + 1$; otherwise set $trial_i = 0$. If the position $x_i$ cannot be improved anymore by the time the limit is reached, that is, $trial_i = limit$, then the position is abandoned and replaced by a new position $x_i^{new}$, generated by the following equation: [figure omitted; refer to PDF] where [figure omitted; refer to PDF] is a random number in the range [figure omitted; refer to PDF] and [figure omitted; refer to PDF] is the inertia weight that controls the impact of the optimal solution at the current iteration, which is increased linearly.
From (8), it can be seen that in the early stage of the algorithm, the new position is generated with large randomness, while in the late stage it is generated near the global best position.
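A sketch of the abandonment mechanism follows (Python, illustrative). The counter logic matches the description above; the re-initialization function is a hypothetical stand-in for (8), whose exact form is given in the paper, so the linear blend between a random point and the global best is an assumption that merely mimics the stated behavior (large randomness early, closeness to the global best late).

```python
import numpy as np

rng = np.random.default_rng(1)

def update_trial(trial, improved, limit=5):
    """Abandonment counters: reset trial_i on improvement, increment otherwise;
    a particle whose counter reaches `limit` is flagged for abandonment."""
    trial = np.where(improved, 0, trial + 1)
    abandon = trial >= limit
    trial = np.where(abandon, 0, trial)
    return trial, abandon

def regenerate(gbest, lower, upper, t, t_max):
    """Hypothetical stand-in for eq. (8): a linear blend between a uniformly
    random point and gbest, weighted so that early iterations (small t) are
    mostly random and late iterations land near gbest.  This is an assumed
    form, not the authors' formula."""
    w = t / t_max                                    # linearly increasing weight
    random_pos = rng.uniform(lower, upper, size=gbest.shape)
    return w * gbest + (1.0 - w) * random_pos
```

For instance, with `limit=5`, a particle that fails to improve for five consecutive iterations is replaced, and at `t = t_max` the blend returns `gbest` exactly.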
3.3. The Third Improvement Strategy
To improve the global convergence of NPSO, a chaotic search operator is adopted. The details are given next.
Let $gbest$ be the best solution of the current iteration. Firstly, the following equation (9) is used to generate the chaotic variables $z_k$: [figure omitted; refer to PDF] where $K$ is the length of the chaotic sequence and $z_0$ is a random number. Then $z_k$ is mapped to a chaotic vector in the interval $[l_j, u_j]$: [figure omitted; refer to PDF] where $l_j$ and $u_j$ are the lower and upper bounds of the variable $x_j$, respectively. Finally, a new candidate solution is obtained by the following equation: [figure omitted; refer to PDF] where [figure omitted; refer to PDF] is a shrinking factor, defined as follows: [figure omitted; refer to PDF] where $T_{\max}$ is the maximum number of iterations and $t$ is the current iteration number.
By (11) and (12), it can be seen that the shrinking factor becomes smaller as the evolutionary generation increases; that is, the local search range shrinks as the evolution proceeds.
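The chaotic search can be sketched as below (Python, illustrative). The logistic map is a common choice for generating the chaotic sequence in (9), and a linearly shrinking factor stands in for (12); both are assumptions here, since the paper's exact formulas are omitted from this text. Only candidates that improve on the current best are accepted.

```python
import numpy as np

rng = np.random.default_rng(2)

def chaotic_search(f, gbest, lower, upper, t, t_max, K=5):
    """Chaotic local search around gbest.  The logistic map
    z_{k+1} = 4 z_k (1 - z_k) is assumed for the chaotic sequence of (9),
    and a linearly shrinking factor stands in for (12); the paper's exact
    formulas may differ.  Only candidates that improve on gbest are kept."""
    lam = 1.0 - t / t_max                     # shrinking factor: smaller later on
    best, f_best = gbest.copy(), f(gbest)
    z = rng.random(gbest.shape)               # random initial chaotic variables
    for _ in range(K):
        z = 4.0 * z * (1.0 - z)               # logistic map iteration
        chaos = lower + z * (upper - lower)   # map z into [l_j, u_j], as in (10)
        cand = (1.0 - lam) * gbest + lam * chaos  # candidate near gbest
        cand = np.clip(cand, lower, upper)
        if f(cand) < f_best:                  # accept only improvements
            best, f_best = cand.copy(), f(cand)
    return best, f_best
```

Because early iterations have a large shrinking factor, the candidates cover most of the box, while late iterations sample only a small region around the best solution.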
Based on the abovementioned explanation, the pseudocode of the NPSO algorithm is given in Algorithm 1.
Algorithm 1: Framework of NPSO. Remark: in our algorithm, for simplicity, in (8) we can set [figure omitted; refer to PDF], which meets our needs.
(1) Initialize a population of [figure omitted; refer to PDF] particles with random positions [figure omitted; refer to PDF] in a given search space, and random
velocities [figure omitted; refer to PDF] ; the maximum iteration [figure omitted; refer to PDF] ; [figure omitted; refer to PDF] ; [figure omitted; refer to PDF] ; the length of chaotic sequence [figure omitted; refer to PDF] .
(2) Set [figure omitted; refer to PDF] and find [figure omitted; refer to PDF] .
(3) while [figure omitted; refer to PDF] do
(4) [figure omitted; refer to PDF]
(5) for [figure omitted; refer to PDF] to [figure omitted; refer to PDF] do
(6) for [figure omitted; refer to PDF] to [figure omitted; refer to PDF] do
(7) By (3) and (4), update the velocity of each particle.
(8) By (5), update the position of each particle.
(9) end for
(10) if [figure omitted; refer to PDF]
(11) [figure omitted; refer to PDF] , set [figure omitted; refer to PDF] ;
(12) else
(13) set [figure omitted; refer to PDF] .
(14) end if
(15) if [figure omitted; refer to PDF]
(16) set [figure omitted; refer to PDF] ,
(17) end if
(18) end for
(19) for [figure omitted; refer to PDF] to [figure omitted; refer to PDF] do
(20) if [figure omitted; refer to PDF]
(21) By (8), generate a new position and replace [figure omitted; refer to PDF] with it.
(22) end if
(23) end for
(24) By (9)-(12), perform a chaotic search around [figure omitted; refer to PDF], and update [figure omitted; refer to PDF] (if necessary).
(25) [figure omitted; refer to PDF]
(26) end while
4. Experimental Results and Discussion
4.1. Experiment 1
In this subsection, the performance of the NPSO algorithm is compared with that of the PSO algorithm, in terms of convergence and the best solution found, on 14 benchmark functions, where [figure omitted; refer to PDF] are shifted functions and [figure omitted; refer to PDF] are rotated functions. The characteristics, dimensions, initial ranges, and formulations of these functions are listed in Table 1. [figure omitted; refer to PDF] is [figure omitted; refer to PDF] for [figure omitted; refer to PDF] and [figure omitted; refer to PDF] for [figure omitted; refer to PDF]; [figure omitted; refer to PDF] is the shifted global optimum; [figure omitted; refer to PDF] is the [figure omitted; refer to PDF] orthogonal matrix.
Table 1: Benchmark functions used in experiments.
Functions | Dimension | C | Range | Optimal value |
[figure omitted; refer to PDF] | 30 | US | [figure omitted; refer to PDF] | 0 |
[figure omitted; refer to PDF] | 30 | US | [figure omitted; refer to PDF] | 0 |
[figure omitted; refer to PDF] | 30 | UN | [figure omitted; refer to PDF] | 0 |
[figure omitted; refer to PDF] | 30 | MS | [figure omitted; refer to PDF] | 0 |
[figure omitted; refer to PDF] | 30 | MS | [figure omitted; refer to PDF] | 0 |
[figure omitted; refer to PDF] | 30 | UN | [figure omitted; refer to PDF] | 0 |
[figure omitted; refer to PDF] | 30 | MN | [figure omitted; refer to PDF] | 0 |
[figure omitted; refer to PDF] | ||||
[figure omitted; refer to PDF] | 30 | MS | [figure omitted; refer to PDF] | 0 |
[figure omitted; refer to PDF] | 30 | US | [figure omitted; refer to PDF] | 0 |
[figure omitted; refer to PDF] | 30 | UN | [figure omitted; refer to PDF] | 0 |
[figure omitted; refer to PDF] | 30 | MS | [figure omitted; refer to PDF] | 0 |
[figure omitted; refer to PDF] | 30 | UN | [figure omitted; refer to PDF] | -450 |
[figure omitted; refer to PDF] | 30 | MS | [figure omitted; refer to PDF] | -330 |
[figure omitted; refer to PDF] | 30 | US | [figure omitted; refer to PDF] | 90 |
[figure omitted; refer to PDF] |
C: characteristic, U: unimodal, M: multimodal, N: nonseparable, and S: separable.
The proposed algorithm NPSO and PSO are coded in Matlab 7.0, and the experimental platform is a personal computer with a Pentium 4 3.06 GHz CPU, 512 MB of memory, and Windows XP.
The parameters of the algorithms are given as follows. The common parameters are the dimension [figure omitted; refer to PDF], the population size [figure omitted; refer to PDF], the maximum number of iterations [figure omitted; refer to PDF], [figure omitted; refer to PDF], [figure omitted; refer to PDF], [figure omitted; refer to PDF], and [figure omitted; refer to PDF]. The NPSO settings are as follows: the length [figure omitted; refer to PDF] of the chaotic sequence and the limit [figure omitted; refer to PDF] for abandoning a position are both set to 5; [figure omitted; refer to PDF]. Each experiment is repeated 30 times independently. The global minimums found (Min), the maximum number of iterations (Max iteration), the mean best values (Mean), and the standard deviations (SD) are given in Table 2. To show the convergence speed of PSO and NPSO more clearly, their convergence graphs are shown in Figure 1.
Table 2: NPSO performance comparison with PSO.
Function | Max iteration | Algorithm | Mean | SD | Min |
[figure omitted; refer to PDF] | 1000 | PSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
NPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | ||
[figure omitted; refer to PDF] | 1000 | PSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
NPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | ||
[figure omitted; refer to PDF] | 1000 | PSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
NPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | ||
[figure omitted; refer to PDF] | 1000 | PSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
NPSO | 0 | 0 | 0 | ||
[figure omitted; refer to PDF] | 1000 | PSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
NPSO | 0 | 0 | 0 | ||
[figure omitted; refer to PDF] | 1000 | PSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
NPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | ||
[figure omitted; refer to PDF] | 1000 | PSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
NPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | ||
[figure omitted; refer to PDF] | 1000 | PSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
NPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | ||
[figure omitted; refer to PDF] | 1000 | PSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
NPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | ||
[figure omitted; refer to PDF] | 1000 | PSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
NPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | ||
[figure omitted; refer to PDF] | 1000 | PSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
NPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | ||
[figure omitted; refer to PDF] | 1000 | PSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
NPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | ||
[figure omitted; refer to PDF] | 1000 | PSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
NPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | ||
[figure omitted; refer to PDF] | 1000 | PSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
NPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
Figure 1: Convergence rates on test functions.
[figure omitted; refer to PDF]
From Table 2, it can be seen that NPSO performs better than PSO on all test functions. From Figure 1, it can be seen that NPSO converges faster than PSO.
4.2. Experiment 2
In this subsection, to further test the efficiency of NPSO, it is compared with five other algorithms, namely, CPSO [28], CLPSO [29], FIPS [30], Frankenstein [31], and AIWPSO [17].
Twelve benchmark functions are used for the comparison. The characteristics, dimensions, initial range, and formulations of these functions are listed in Table 3.
Table 3: Benchmark functions used in experiments.
Functions | Dimension ( [figure omitted; refer to PDF] ) | C | Range | Optimal value |
[figure omitted; refer to PDF] | 30 | US | [figure omitted; refer to PDF] | 0 |
[figure omitted; refer to PDF] | 30 | UN | [figure omitted; refer to PDF] | 0 |
[figure omitted; refer to PDF] | 30 | UN | [figure omitted; refer to PDF] | 0 |
[figure omitted; refer to PDF] | 30 | MS | [figure omitted; refer to PDF] | 0 |
[figure omitted; refer to PDF] | 30 | UN | [figure omitted; refer to PDF] | 0 |
[figure omitted; refer to PDF] | 30 | US | [figure omitted; refer to PDF] | 0 |
[figure omitted; refer to PDF] | 30 | MS | [figure omitted; refer to PDF] | 0 |
[figure omitted; refer to PDF] , | 30 | US | [figure omitted; refer to PDF] | 0 |
[figure omitted; refer to PDF] if [figure omitted; refer to PDF] else [figure omitted; refer to PDF] | ||||
[figure omitted; refer to PDF] | 30 | MN | [figure omitted; refer to PDF] | 0 |
[figure omitted; refer to PDF] | ||||
[figure omitted; refer to PDF] | 30 | MS | [figure omitted; refer to PDF] | 0 |
[figure omitted; refer to PDF] | 30 | MS | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
[figure omitted; refer to PDF] | 30 | US | [figure omitted; refer to PDF] | 0 |
C: characteristic, U: unimodal, M: multimodal, N: nonseparable, and S: separable.
In order to make a fair comparison, the maximum number of function evaluations (maxFEs) is set to 2 × 10^5 for all algorithms, and the population size is 20. The other parameters of NPSO are set as in Experiment 1. The comparison results are presented in Table 4. For convenience and reliability, except for the NPSO results, the results reported here are taken directly from [17].
Table 4: The mean and standard deviation of the best solutions of six PSO variants on 12 test problems in 200,000 function evaluations.
| [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
CPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
CLPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
FIPS | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
Frankenstein | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
AIWPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
NPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
| |||
| [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
| |||
CPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
CLPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
FIPS | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
Frankenstein | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
AIWPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
NPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
| |||
| [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
| |||
CPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
CLPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
FIPS | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
Frankenstein | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
AIWPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
NPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
| |||
| [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
| |||
CPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
CLPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
FIPS | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
Frankenstein | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
AIWPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
NPSO | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] | [figure omitted; refer to PDF] |
From Table 4, it can be seen that NPSO is significantly better than the other five algorithms on almost all test functions, the exception being the function [figure omitted; refer to PDF].
5. Conclusion
In this paper, by utilizing the information of the best neighbor of each particle and the best solution of the current iteration, we presented a new move equation. Based on this and two further improvement strategies, a novel Particle Swarm Optimization algorithm, NPSO, was proposed. The performance of NPSO was compared with the standard PSO and five other PSO variants. The results show that NPSO is promising on the considered problems.
In the future, adaptive tuning of the parameters of NPSO can be studied to further improve its performance.
Acknowledgments
The research was supported by NSFC (U1404105, 11171094); the Key Scientific and Technological Project of Henan Province (142102210058); the Doctoral Scientific Research Foundation of Henan Normal University (qd12103); the Youth Science Foundation of Henan Normal University (2013qk02); Henan Normal University National Research Project to Cultivate the Funded Projects (01016400105); the Henan Normal University Youth Backbone Teacher Training.
Conflict of Interests
The authors declare that there is no conflict of interest regarding the publication of this paper.
[1] N. Gabere, Simulated annealing driven pattern search algorithms for global optimization [M.S. thesis], University of the Witwatersrand, Johannesburg, South Africa, 2007.
[2] K. Deep, M. Thakur, "A new mutation operator for real coded genetic algorithms," Applied Mathematics and Computation , vol. 193, no. 1, pp. 211-230, 2007.
[3] P. Kaelo, M. M. Ali, "Integrated crossover rules in real coded genetic algorithms," European Journal of Operational Research , vol. 176, no. 1, pp. 60-76, 2007.
[4] R. Storn, K. Price, "Differential evolution-a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization , vol. 11, no. 4, pp. 341-359, 1997.
[5] J. Kennedy, R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942-1948, IEEE, Perth, Australia, November-December 1995.
[6] M. Dorigo, T. Stützle, Ant Colony Optimization, MIT Press, Cambridge, Mass, USA, 2004.
[7] D. Karaboga, "An idea based on honey bee swarm for numerical optimization," Tech. Rep. TR06, Erciyes University, Kayseri, Turkey, 2005.
[8] Z. W. Geem, J. H. Kim, G. V. Loganathan, "A new heuristic optimization algorithm: harmony search," Simulation , vol. 76, no. 2, pp. 60-68, 2001.
[9] N. W. Oo, "A comparison study on particle swarm and evolutionary particle swarm optimization using capacitor placement problem," in Proceedings of the 2nd IEEE International Conference on Power and Energy Conference, pp. 1208-1211, Johor Bahru, Malaysia, December 2008.
[10] B. Wang, N.-L. Tai, H.-Q. Zhai, J. Ye, J.-D. Zhu, L.-B. Qi, "A new ARMAX model based on evolutionary algorithm and particle swarm optimization for short-term load forecasting," Electric Power Systems Research , vol. 78, no. 10, pp. 1679-1685, 2008.
[11] H. Wang, F. Qian, "An improved particle swarm optimizer with behavior-distance models and its application in soft-sensor," in Proceedings of the 7th World Congress on Intelligent Control and Automation (WCICA '08), pp. 4473-4478, IEEE, Chongqing, China, June 2008.
[12] S. Naka, T. Genji, T. Yura, Y. Fukuyama, "A hybrid particle swarm optimization for distribution state estimation," IEEE Transactions on Power Systems , vol. 18, no. 1, pp. 60-68, 2003.
[13] A. A. El-Dib, H. K. M. Youssef, M. M. El-Metwally, Z. Osman, "Maximum loadability of power systems using hybrid particle swarm optimization," Electric Power Systems Research , vol. 76, no. 6-7, pp. 485-492, 2006.
[14] B. Liu, L. Wang, Y.-H. Jin, F. Tang, D.-X. Huang, "Directing orbits of chaotic systems by particle swarm optimization," Chaos, Solitons and Fractals , vol. 29, no. 2, pp. 454-461, 2006.
[15] M. F. Tasgetiren, Y.-C. Liang, M. Sevkli, G. Gencyilmaz, "A particle swarm optimization algorithm for makespan and total flowtime minimization in the permutation flowshop sequencing problem," European Journal of Operational Research , vol. 177, no. 3, pp. 1930-1947, 2007.
[16] R. C. Eberhart, Y. H. Shi, "Tracking and optimizing dynamic systems with particle swarms," in Proceedings of the IEEE Congress on Evolutionary Computation, vol. 1, pp. 94-100, Seoul, South Korea, May 2001.
[17] A. Nickabadi, M. M. Ebadzadeh, R. Safabakhsh, "A novel particle swarm optimization algorithm with adaptive inertia weight," Applied Soft Computing , vol. 11, no. 4, pp. 3658-3670, 2011.
[18] H. Wang, C. Li, Y. Liu, S. Zeng, "A hybrid particle swarm algorithm with cauchy mutation," in Proceedings of the IEEE Swarm Intelligence Symposium (SIS '07), pp. 356-360, IEEE, Honolulu, Hawaii, USA, April 2007.
[19] S. H. Ling, H. H. C. Iu, K. Y. Chan, H. K. Lam, B. C. W. Yeung, F. H. Leung, "Hybrid particle swarm optimization with wavelet mutation and its industrial applications," IEEE Transactions on Systems, Man, and Cybernetics Part B: Cybernetics , vol. 38, no. 3, pp. 743-763, 2008.
[20] A. Ratnaweera, S. K. Halgamuge, H. C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Transactions on Evolutionary Computation , vol. 8, no. 3, pp. 240-255, 2004.
[21] F. Valdez, P. Melin, O. Castillo, "Modular Neural Networks architecture optimization with a new nature inspired method using a fuzzy combination of Particle Swarm Optimization and Genetic Algorithms," Information Sciences , vol. 270, pp. 143-153, 2014.
[22] G. H. Wu, D. S. Qiu, Y. Yu, W. Pedrycz, M. Ma, H. Li, "Superior solution guided particle swarm optimization combined with local search techniques," Expert Systems with Applications , vol. 41, no. 16, pp. 7536-7548, 2014.
[23] Y.-B. Shin, E. Kita, "Search performance improvement of particle swarm optimization by second best particle information," Applied Mathematics and Computation , vol. 246, pp. 346-354, 2014.
[24] Y.-P. Chen, W.-C. Peng, M.-C. Jian, "Particle swarm optimization with recombination and dynamic linkage discovery," IEEE Transactions on Systems, Man, and Cybernetics Part B: Cybernetics , vol. 37, no. 6, pp. 1460-1470, 2007.
[25] Z.-H. Zhan, J. Zhang, Y. Li, Y.-H. Shi, "Orthogonal learning particle swarm optimization," IEEE Transactions on Evolutionary Computation , vol. 15, no. 6, pp. 832-847, 2011.
[26] B. Alatas, E. Akin, A. B. Ozer, "Chaos embedded particle swarm optimization algorithms," Chaos, Solitons and Fractals , vol. 40, no. 4, pp. 1715-1734, 2009.
[27] Z.-H. Zhan, J. Zhang, Y. Li, H. S.-H. Chung, "Adaptive particle swarm optimization," IEEE Transactions on Systems, Man, and Cybernetics Part B: Cybernetics , vol. 39, no. 6, pp. 1362-1381, 2009.
[28] F. van den Bergh, A. P. Engelbrecht, "A cooperative approach to particle swarm optimization," IEEE Transactions on Evolutionary Computation , vol. 8, no. 3, pp. 225-239, 2004.
[29] J. J. Liang, A. K. Qin, P. N. Suganthan, S. Baskar, "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions," IEEE Transactions on Evolutionary Computation , vol. 10, no. 3, pp. 281-295, 2006.
[30] R. Mendes, J. Kennedy, J. Neves, "The fully informed particle swarm: simpler, maybe better," IEEE Transactions on Evolutionary Computation , vol. 8, no. 3, pp. 204-210, 2004.
[31] M. A. M. de Oca, T. Stützle, M. Birattari, M. Dorigo, "Frankenstein's PSO: a composite particle swarm optimization algorithm," IEEE Transactions on Evolutionary Computation , vol. 13, no. 5, pp. 1120-1132, 2009.
Copyright © 2016 Chun-Feng Wang and Kui Liu.
Abstract
Particle Swarm Optimization (PSO) is a population-based stochastic optimization method that has attracted the interest of researchers in various areas due to its simplicity and effectiveness, and many variants have been proposed. In this paper, a novel Particle Swarm Optimization algorithm is presented, in which each particle is guided by both its best neighbor and the best particle of the entire population in the current iteration. Meanwhile, to avoid premature convergence, an abandonment mechanism is used. Furthermore, to improve the global convergence speed of the algorithm, a chaotic search is performed around the best solution of the current iteration. To verify the performance of the algorithm, standard test functions are employed. The experimental results show that the algorithm is more robust and efficient than several existing Particle Swarm Optimization algorithms.
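The three ingredients described in the abstract — a velocity update that draws on the best neighbor as well as the global best, an abandonment counter that reinitializes stagnant particles, and a chaotic (logistic-map) search around the current best solution — can be illustrated with a minimal sketch. This is not the authors' implementation: the ring neighborhood, the coefficient values, the abandonment limit, and the chaotic step size are all assumptions made for illustration.

```python
import random

def sphere(x):
    # Standard benchmark objective: global minimum 0 at the origin.
    return sum(xi * xi for xi in x)

def pso_sketch(f, dim=5, n_particles=20, iters=200, lb=-5.0, ub=5.0,
               w=0.7, c1=1.5, c2=1.5, c3=1.0, limit=15, seed=0):
    rng = random.Random(seed)
    X = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [f(x) for x in X]
    trials = [0] * n_particles  # stagnation counters for the abandonment mechanism
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            # Best neighbor on an assumed ring topology.
            nb = min((i - 1) % n_particles, (i + 1) % n_particles,
                     key=lambda j: pbest_f[j])
            for d in range(dim):
                r1, r2, r3 = rng.random(), rng.random(), rng.random()
                # Velocity pulled toward personal best, global best,
                # and the best neighbor's personal best.
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d])
                           + c3 * r3 * (pbest[nb][d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], lb), ub)
            fx = f(X[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i], trials[i] = X[i][:], fx, 0
            else:
                trials[i] += 1
            if trials[i] > limit:
                # Abandon a stagnant particle and reinitialize it randomly.
                X[i] = [rng.uniform(lb, ub) for _ in range(dim)]
                trials[i] = 0
            if pbest_f[i] < gbest_f:
                gbest, gbest_f = pbest[i][:], pbest_f[i]
        # Chaotic local search around gbest driven by the logistic map
        # z <- 4z(1-z); the 0.1*(ub-lb) step scale is an assumption.
        z = rng.uniform(0.01, 0.99)
        for _ in range(10):
            z = 4.0 * z * (1.0 - z)
            cand = [min(max(gd + 0.1 * (ub - lb) * (z - 0.5), lb), ub)
                    for gd in gbest]
            fc = f(cand)
            if fc < gbest_f:
                gbest, gbest_f = cand[:], fc
    return gbest, gbest_f

best_x, best_f = pso_sketch(sphere)
```

On the 5-dimensional sphere function the sketch converges close to the origin, which is enough to show how the neighbor term, the abandonment counter, and the chaotic refinement interact; it makes no claim about matching the reported experimental results.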