1. Introduction
Large-scale optimization problems, also called high-dimensional problems, are ubiquitous in daily life and industrial engineering in the era of big data and the Internet of Things (IoT); examples include water distribution optimization problems [1], cyber-physical systems design problems [2], control of pollutant spreading on social networks [3], and offshore wind farm collector system planning problems [4]. As the dimensionality of optimization problems increases, most existing optimization methods suffer from degraded optimization effectiveness due to the “curse of dimensionality” [5,6].
Specifically, the increase in dimensionality poses the following challenges for existing optimization algorithms: (1) With the growth of dimensionality, the properties of optimization problems become much more complicated. In particular, in the high-dimensional environment, optimization problems are usually non-convex, non-differentiable, or even non-continuous [7,8,9]. This renders traditional gradient-based optimization algorithms infeasible. (2) The solution space grows exponentially as the dimensionality increases [10,11,12,13]. This greatly challenges the optimization efficiency of most existing algorithms. (3) The landscape of optimization problems becomes more complex in a high-dimensional space. On the one hand, some unimodal problems may become multimodal as the dimensionality increases; on the other hand, in some multimodal problems, not only does the number of local optimal regions increase rapidly, but the local regions also become much wider and flatter [11,12,14]. This likely leads to premature convergence and stagnation of existing optimization techniques.
As a kind of metaheuristic algorithm, particle swarm optimization (PSO) maintains a population of particles, each of which represents a feasible solution to the optimization problem, to search the solution space for the global optimum [15,16,17]. Owing to its merits, such as strong global search ability, independence from the mathematical properties of optimization problems, and inherent parallelism [17], PSO has undergone rapid development and achieved remarkable success in solving complex optimization problems [18,19,20,21,22] since it was proposed in 1995 [15]. As a result, PSO has been widely employed to solve real-world optimization problems in daily life and industrial engineering [1,23].
However, most existing PSOs were initially designed for low-dimensional optimization problems. Confronted with large-scale optimization problems, their effectiveness usually deteriorates due to the previously mentioned challenges [24,25,26]. To improve the optimization effectiveness of PSO in tackling high-dimensional problems, researchers have devoted themselves to designing novel and effective evolution mechanisms for PSO. Broadly speaking, existing large-scale PSOs can be divided into two categories [27], namely cooperative coevolutionary large-scale PSOs [6,28,29] and holistic large-scale PSOs [24,26,30,31,32].
Cooperative coevolutionary PSOs (CCPSOs) [6,28,29,33] adopt the divide-and-conquer technique to decompose one large-scale optimization problem into several exclusive smaller sub-problems, and then optimize these sub-problems individually by traditional PSOs designed for low-dimensional problems to find the optimal solution to the large-scale optimization problem. Since the decomposed sub-problems are optimized separately, the key component of CCPSOs is the decomposition strategy [6,28]. Ideally, a good decomposition strategy should place interacting variables into the same sub-problem, so that they can be optimized together. However, without prior knowledge, it is considerably difficult to decompose a large-scale problem accurately. As a result, current research on CCPSOs focuses on developing novel decomposition strategies to divide the large-scale optimization problem as accurately as possible. Hence, many effective decomposition strategies [6,34,35,36,37,38] have been put forward.
However, CCPSOs heavily rely on the quality of the decomposition strategies. According to the no free lunch theorem, there is no decomposition strategy suitable for all large-scale problems. Therefore, some researchers attempt to design large-scale PSOs from another perspective, namely the holistic large-scale PSOs [5,26,30,39].
In contrast to CCPSOs, holistic large-scale PSOs [5,26,30,39,40] still optimize all variables simultaneously, as traditional PSOs do. Since the learning strategy used to update the velocity of particles plays the most important role in PSO [15,16,18], the key to improving the effectiveness of PSO in coping with large-scale optimization is to devise effective learning strategies for particles, which should not only help particles explore the solution space efficiently to locate promising areas quickly, but also aid particles in exploiting the promising areas effectively to obtain high-quality solutions. Along this line, researchers have developed many remarkable learning strategies for PSO to solve high-dimensional problems, such as the competitive learning scheme [26], the social learning strategy [30], the two-phase learning method [1], and the level-based learning approach [25]. Recently, some researchers have even attempted to develop novel coding schemes for PSO to improve its optimization performance in solving large-scale optimization problems [41].
Although the above-mentioned large-scale PSOs have presented excellent optimization performance on some large-scale optimization problems, they still encounter limitations, such as premature convergence and stagnation in local areas, in solving complicated high-dimensional problems, especially those with overlapping correlated variables or fully non-separable variables. Therefore, the optimization performance of PSOs in tackling large-scale optimization still deserves improvement, and it remains an open and hot topic in the evolutionary computation community.
In nature, individuals with better fitness usually preserve more valuable evolutionary information than those with worse fitness, to guide the evolution of one species [42]. Moreover, in general, different individuals usually preserve different useful genes. Inspired by these observations, in this paper, we propose a dimension group-based comprehensive elite learning swarm optimizer (DGCELSO) by integrating useful genes embedded in different elite individuals to guide the update of particles to search the large-scale solution space effectively and efficiently. Specifically, the main components of the proposed DGCELSO are summarized as follows:
(1). A dimension group-based comprehensive elite learning scheme is proposed to guide the update of inferior particles by learning from multiple superior ones. Instead of learning from at most two exemplars, as in existing holistic large-scale PSOs [24,25,26,30], the devised learning strategy first randomly divides the dimensions of each inferior particle into several equally sized groups and then employs different superior particles to guide the update of different dimension groups. Moreover, unlike existing elite strategies that only use one elite to direct the evolution of an individual [43,44], it employs a random dimension group-based recombination technique to integrate valuable evolutionary information from multiple elites to guide the update of each non-elite particle. In this way, the learning diversity of particles could be largely promoted, which is beneficial for particles to avoid falling into local traps. Moreover, it is also possible that useful evolutionary information embedded in different superior particles could be integrated to direct the learning of inferior particles, which may be profitable for particles to approach promising areas quickly.
(2). Dynamic adjustment strategies for the control parameters involved in the proposed learning strategy are further designed to cooperate with the learning strategy to help PSO search the large-scale solution space properly. With these dynamic strategies, the developed DGCELSO could appropriately compromise the intensification and diversification of the search process at the swarm level and the particle level.
To verify the effectiveness of the proposed DGCELSO, extensive experiments are conducted to compare DGCELSO with several state-of-the-art large-scale optimizers on the widely used CEC’2010 [7] and CEC’2013 [8] large-scale benchmark optimization problem sets. Meanwhile, deep investigations on DGCELSO are also conducted to discover what contributes to its good performance.
The rest of this paper is organized as follows. Section 2 introduces the classical PSO and large-scale PSO variants. Then, the proposed DGCELSO is elucidated in detail in Section 3. Section 4 conducts extensive experiments to verify the effectiveness of the proposed DGCELSO. Finally, Section 5 concludes this paper.
2. Related Work
In this paper, a D-dimensional single-objective minimization problem is considered, which is defined as follows:

$$\min f(\mathbf{x}), \quad \mathbf{x} = [x_1, x_2, \ldots, x_D] \tag{1}$$
where x, consisting of D variables, is a feasible solution to the optimization problem, and D is the dimension size. In this paper, we directly use the function value as the fitness value of one particle.

2.1. Canonical PSO
In the canonical PSO [15,16], each particle is represented by two vectors, namely the position vector x and the velocity vector v. During the evolution, each particle is guided by its historically personal best position and the historically best position of the whole swarm. Specifically, each particle is updated as follows:
$$v_i^d = w \cdot v_i^d + c_1 \cdot r_1 \cdot (pbest_i^d - x_i^d) + c_2 \cdot r_2 \cdot (gbest^d - x_i^d) \tag{2}$$

$$x_i^d = x_i^d + v_i^d \tag{3}$$
where v_i^d is the dth dimension of the velocity of the ith particle, x_i^d is the dth dimension of the position of the ith particle, pbest_i^d is the dth dimension of the historically personal best position found by the ith particle, and gbest^d is the dth dimension of the historically global best position found by the whole swarm. As for the parameters, c_1 and c_2 are two acceleration coefficients, while r_1 and r_2 are two real random numbers uniformly generated within [0, 1]. w represents the inertia weight.

As shown in Equation (2), in the canonical PSO, each particle is cognitively directed by its pbest (the second term on the right-hand side of Equation (2)) and socially guided by the gbest of the whole swarm (the third term on the right-hand side of Equation (2)). Due to the greedy attraction of gbest, the swarm in the canonical PSO usually becomes trapped in local areas when tackling multimodal problems [18,45]. Therefore, to improve the effectiveness of PSO in searching multimodal spaces with many local areas, researchers have developed many novel learning strategies to guide the learning of particles, such as the comprehensive learning strategy [46], the genetic learning strategy [47], the scatter learning strategy [18], and the orthogonal learning strategy [48].
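To make the canonical update concrete, the following minimal sketch implements Equations (2) and (3) for a whole swarm. The parameter values are common defaults from the PSO literature, not settings prescribed by this paper, and the random numbers are drawn per dimension, a common implementation choice.

```python
import numpy as np

def canonical_pso_step(x, v, pbest, gbest, w=0.7298, c1=1.49618, c2=1.49618):
    """One generation of the canonical PSO update, Equations (2) and (3).

    x, v, pbest: arrays of shape (NP, D); gbest: array of shape (D,).
    """
    NP, D = x.shape
    r1 = np.random.rand(NP, D)  # uniform random numbers in [0, 1]
    r2 = np.random.rand(NP, D)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (2)
    x = x + v                                                  # Eq. (3)
    return x, v
```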
Though many novel learning strategies have helped PSO achieve very promising performance in solving multimodal problems, most of them are particularly designed for low-dimensional optimization problems. Confronted with large-scale optimization problems, most existing PSOs lose their effectiveness due to the “curse of dimensionality” and the aforementioned challenges of high-dimensional problems.
2.2. Large-Scale PSO
To solve the previously mentioned challenges of large-scale optimization, researchers devoted extensive attention to designing novel PSOs. As a result, numerous large-scale PSO variants have sprung up [1,26]. In a broad sense, existing large-scale PSOs can be classified into the following two categories.
2.2.1. Cooperative Coevolutionary Large-Scale PSO (CCPSO)
Cooperative coevolutionary PSOs (CCPSOs) [6,29,49] mainly use the divide-and-conquer technique to separate all variables of one high-dimensional problem into several exclusive groups, and then optimize each group of variables independently to obtain the optimal solution to the high-dimensional problem. Van den Bergh and Engelbrecht put forward the earliest CCPSO [49]. In this algorithm, all variables of a large-scale optimization problem are randomly divided into K groups, each containing D/K variables (where D is the dimension size). Then the canonical PSO described in Section 2.1 is employed to optimize each group of variables. Nevertheless, the performance of this algorithm heavily relies on the setting of the number of groups (namely K). To alleviate this issue, in [29], an improved CCPSO, named CCPSO2, was proposed by first predefining a set of group numbers and then randomly selecting a group number in each iteration to separate variables into groups. In the above two algorithms, the correlations between variables are not taken into account explicitly. Hence, their optimization effectiveness degrades dramatically in solving problems with many interacting variables [11,12].
To alleviate the above issue, researchers have attempted to design effective variable grouping strategies that separate variables into groups by detecting the correlations between variables [6,35,36,37]. In the literature, the most representative grouping strategy is the differential grouping (DG) method [6], which uses differential function values to detect the correlation between any two variables by exerting the same disturbance on them. Based on the detected correlations between variables, DG can separate variables into groups satisfactorily. However, this method has two drawbacks: (1) it cannot detect indirect interactions between variables [36], and (2) it consumes a lot of fitness evaluations (O(D^2), where D is the number of variables) in the variable decomposition stage [35,37].
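As an illustration of how such a pairwise interaction check can work, the sketch below applies the same disturbance to variable i with and without variable j shifted; if the induced fitness changes differ, the two variables are treated as interacting. The concrete values of `delta` and `eps` are illustrative assumptions; the actual DG method [6] derives them from the search bounds and problem scale.

```python
import numpy as np

def interacts(f, x, i, j, delta=1.0, eps=1e-3):
    """Pairwise interaction test in the spirit of differential grouping.

    f: fitness function taking a (D,) array; x: base point; i, j: variable
    indices. Each call costs a handful of fitness evaluations, so testing
    all pairs consumes O(D^2) evaluations, as noted above.
    """
    x_i = x.copy(); x_i[i] += delta
    delta1 = f(x_i) - f(x)             # effect of disturbing x_i at the base point
    x_j = x.copy(); x_j[j] += delta
    x_ij = x_j.copy(); x_ij[i] += delta
    delta2 = f(x_ij) - f(x_j)          # the same disturbance after shifting x_j
    return abs(delta1 - delta2) > eps  # differing effects imply interaction
```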
To fill the first gap, Sun et al. devised an extended DG (XDG) [36], and Mei et al. proposed a global DG (GDG) [50] to detect both the direct and indirect interactions between variables. To alleviate the second drawback, a fast DG, named DG2 [35], and a recursive DG (RDG) [37] were put forward to reduce the consumption of fitness evaluations in the variable grouping stage. To further improve the detection efficiency of RDG, an efficient recursive differential grouping (ERDG) [51] was devised to reduce the fitness evaluations used in the decomposition stage; to alleviate the sensitivity of RDG to parameters, an improved version, named RDG2 [52], was developed by adaptively adjusting the parameter settings. In [53], Ma et al. proposed a merged differential grouping method based on subset-subset interaction and binary search, which first identifies separable and non-separable variables, then puts all separable variables into the same subset while dividing the non-separable variables into multiple subsets via a binary-tree-based iterative merging method. To further promote variable grouping accuracy, Liu et al. proposed a deep grouping method that considers both the variable interaction and the essentialness of each variable when decomposing a high-dimensional problem [54]. Instead of decomposing a large-scale optimization problem into fixed variable groups, Zhang et al. developed a dynamic grouping strategy that separates variables into groups dynamically during the evolution [55]. Specifically, the proposed algorithm first evaluates the contribution of variables based on historical information and then constructs dynamic variable groups for the next generation based on the evaluated contributions and the detected interaction information.
Owing to their promising performance in solving large-scale optimization problems, cooperative coevolutionary algorithms have been widely applied to various industrial engineering problems. For instance, Neshat et al. [56] proposed a novel multi-swarm cooperative co-evolution algorithm combining the multi-verse optimizer, the equilibrium optimization method, and the moth flame optimization approach, to optimize the layout of offshore wave energy converters. To tackle distributed flowshop group scheduling problems, Pan et al. [57] proposed a cooperative co-evolutionary algorithm with a collaboration model and a re-initialization scheme. In [58], a hybrid cooperative co-evolution algorithm with a symmetric local search plus Nelder–Mead was devised to optimize the positions and the power-take-off settings of wave energy converters. In [59], Liang et al. developed a cooperative coevolutionary multi-objective evolutionary algorithm to tackle the transit network design and frequency setting problem.
Although the above-mentioned cooperative coevolutionary algorithms, including CCPSOs, have achieved good performance on certain kinds of high-dimensional problems and have been applied to solve real-world problems, they are still confronted with limitations in tackling complicated high-dimensional problems. On the one hand, according to the No Free Lunch theorem, there is no universal grouping method that can accurately separate variables into groups for all types of large-scale optimization problems; on the other hand, faced with high-dimensional problems with overlapping variable correlations, most existing variable grouping strategies would place all these variables into the same group, leading to a very large variable group. In this situation, the traditional PSOs designed for low-dimensional problems used in CCPSOs still cannot effectively optimize such a large group of variables. As a result, some researchers have attempted to design large-scale PSOs from another perspective, elucidated next.
2.2.2. Holistic Large-Scale PSO
Unlike CCPSOs, holistic large-scale PSOs [18,26] still consider all variables as a whole and optimize them simultaneously like in traditional low-dimensional PSOs [16]. To solve the previously mentioned challenges of large-scale optimization, the key to holistic large-scale PSOs is to devise effective and efficient learning strategies for particles to largely promote the swarm diversity so that particles could explore the exponentially increased solution space efficiently and exploit the promising areas extensively to obtain high-quality solutions.
In [60], a dynamic multi-swarm PSO along with the Quasi-Newton local search method (DMS-L-PSO) was proposed to optimize large-scale optimization problems by dynamically separating particles into smaller sub-swarms in each generation. Taking inspiration from the competitive learning scheme in human society, Cheng and Jin proposed a competitive swarm optimizer (CSO) [26]. Specifically, this optimizer first separates particles into exclusive pairs and then lets each pair of particles compete with each other. After the competition, the winner is not updated and thus directly enters the next generation, while the loser is updated by learning from the winner. Likewise, inspired by the social learning strategy in animals, a social learning PSO (SLPSO) [61] was devised to let each particle probabilistically learn from those which are better than itself. By extending the pairwise competition mechanism in CSO to a tri-competitive strategy, Mohapatra et al. [62] developed a modified CSO (MCSO) to accelerate the convergence speed of the swarm to tackle high-dimensional problems. Taking inspiration from the comprehensive learning strategy designed for low-dimensional problems [46] and the competitive learning approach in CSO [26], Yang et al. designed a segment-based predominant learning swarm optimizer (SPLSO) [30] to cope with large-scale optimization. Specifically, this optimizer first uses the pairwise competition mechanism in CSO to divide particles into two groups, namely the relatively good particles and the relatively poor particles. Then, it further randomly separates the dimensions of each relatively poor particle into a certain number of exclusive segments, and subsequently randomly selects a relatively good particle to direct the update of each segment of the inferior particle.
Unlike the above large-scale PSOs [26,30,62], which let the updated particle learn from only one superior, Yang et al. devised a level-based learning swarm optimizer (LLSO) [25] by taking inspiration from the teaching theory in pedagogy. Specifically, this optimizer first separates particles into different levels and then lets each particle in lower levels learn from two random superior exemplars selected from higher levels. Inspired by the cooperative learning behavior in human society, Lan et al. put forward a two-phase learning swarm optimizer (TPLSO) [24]. This optimizer separates the learning of each particle into the mass learning phase and the elite learning phase. In the former learning phase, the tri-competitive mechanism is employed to update particles, while in the elite learning phase, the elite particles are picked out to learn from each other to further exploit promising areas to refine the found solutions. Similarly, Wang et al. proposed a multiple strategy learning particle swarm optimization (MSL-PSO) [40], in which different learning strategies are used to update particles in different evolution stages. In the first stage, each particle learns from those with better fitness and the mean position of the swarm to probe promising positions. Then, all the best probed positions are sorted based on their fitness and the top best ones are used to update particles in the second stage. In [41], Jian et al. developed a novel region encoding scheme to extend the solution representation from a single point to a region, and a novel adaptive region search strategy to keep the search diversity. These two schemes are then embedded into SLPSO to tackle large-scale optimization problems.
To find a good compromise between exploration and exploitation, Li et al. devised a learning structure to decouple exploration and exploitation for PSO in [63] to solve large-scale optimization. In particular, an exploration learning strategy was devised to direct particles to sparse areas based on a local sparseness degree measurement, and then an adaptive exploitation learning strategy was developed to let particles exploit the found promising areas. Deng et al. [39] devised a ranking-based biased learning swarm optimizer (RBLSO) based on the principle that the fitness difference between learners and exemplars should be maximized. In particular, in this algorithm, a ranking paired learning (RPL) scheme was designed to let the worse particles learn peer-to-peer from the better ones, and at the same time, a biased center learning (BCL) strategy was devised to let each particle learn from the weighted mean position of the whole swarm. Lan et al. [64] proposed a hierarchical sorting swarm optimizer (HSSO) to tackle large-scale optimization. Specifically, this optimizer first divides particles into a good swarm and a bad swarm with equal sizes based on their fitness. Then, particles in the bad group are updated by learning from those in the good one. Subsequently, the good swarm is taken as a new swarm to execute the above swarm division and particle updating operations until there is only one particle in the good swarm. Kong et al. [65] devised an adaptive multi-swarm particle swarm optimizer to cope with high-dimensional problems. Specifically, it first adaptively divides particles into several sub-swarms and then employs the competition mechanism to select exemplars for particle updating. Huang et al. [66] put forward a convergence speed controller to cooperate with PSO to deal with large-scale optimization. Specifically, this controller is triggered periodically to produce an early warning to PSO before it falls into premature convergence.
Though most existing large-scale PSOs have presented their success in solving certain kinds of high-dimensional problems, their effectiveness still degrades in solving complicated high-dimensional problems [11,12,27,67], especially on those with many wide and flat local areas. Therefore, promoting the effectiveness and efficiency of PSO in solving large-scale optimization still deserves extensive attention and thus this research direction is still an active and hot topic in the evolutionary computation community.
3. Dimension Group-Based Comprehensive Elite Learning Swarm Optimizer
In nature, during the evolution of one species, those elite individuals with better adaptability to the environment usually preserve more valuable evolutionary information, such as genes, to direct the evolution of the species [42]. Moreover, different individuals may preserve different useful genes. Likewise, during the evolution of the swarm in PSO, different particles may contain useful variable values that may be close to the true global optimal solutions. Therefore, a natural idea is to integrate those useful values embedded in different particles to guide the evolution of the swarm. To this end, this paper proposes a dimension group-based comprehensive elite learning swarm optimizer (DGCELSO) to tackle large-scale optimization. The detailed components of this optimizer are elucidated as follows.
3.1. Dimension Group-Based Comprehensive Elite Learning
Given that NP particles are maintained in the swarm, the proposed DGCEL strategy first partitions the swarm into two exclusive sets, namely the elite set, denoted by ES, and the non-elite set, denoted by NES. Specifically, ES contains the best es particles in the swarm, while NES consists of the remaining nes = (NP − es) particles. Since the size of ES, namely es, is related to NP, we set es = ⌈tp × NP⌉ (where tp is the ratio of the elite particles in ES to the whole swarm) for the convenience of parameter fine-tuning.
Since elite particles usually preserve more valuable evolutionary information than the non-elite ones, in this paper, we first develop an elite learning strategy (EL). Specifically, we let the elite particles in ES directly enter the next generation, while only updating the non-elite particles in NES. Moreover, the elite particles in ES are employed to guide the learning of non-elite particles in NES.
With respect to the elite particles, during the evolution, though they may be far from the global optimal area, they usually contain valuable genes that are very close to the true global optimal solution. To integrate the useful evolutionary information embedded in different elites, we propose a dimension group-based comprehensive learning strategy (DGCL). Specifically, during the update of each non-elite particle, the whole set of dimensions of this particle is first randomly shuffled and then partitioned into NDG dimension groups (where NDG denotes the number of dimension groups), with each group containing D/NDG dimensions. In this way, the dimensions of each non-elite particle are randomly divided into NDG groups, namely DG = [DG_1, DG_2, …, DG_NDG].
Here, it should be mentioned that for each non-elite particle, the dimensions are randomly shuffled, and thus the division of dimension groups is likely different for different non-elite particles. In addition, if D % NDG is not zero, the remaining dimensions are equally allocated to the first (D % NDG) groups, i.e., each of the first (D % NDG) groups contains (⌊D/NDG⌋ + 1) dimensions. A minimal sketch of this grouping is given below.
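In the sketch, `np.array_split` reproduces the allocation rule just described, giving the first D % NDG groups one extra dimension each; the concrete values of D and NDG are only illustrative.

```python
import numpy as np

def make_dimension_groups(D, NDG, rng):
    """Shuffle the D dimension indices and split them into NDG groups."""
    perm = rng.permutation(D)         # a fresh shuffle for each non-elite particle
    return np.array_split(perm, NDG)  # first D % NDG groups get one extra index

rng = np.random.default_rng(42)
groups = make_dimension_groups(1000, 60, rng)  # e.g., D = 1000, NDG = 60
sizes = sorted(len(g) for g in groups)
assert sizes[-1] - sizes[0] <= 1      # group sizes differ by at most one
```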
Subsequently, unlike most existing large-scale PSOs [25,26,30] which use the same exemplars to update all dimensions of one inferior particle, the proposed DGCL uses one exemplar to update each dimension group of each non-elite particle, and thus one non-elite particle could learn from different exemplars.
Incorporating the proposed EL into the DGCL, the DGCEL is developed by using the elite particles in ES to direct the update of each dimension group of a non-elite particle. Specifically, each non-elite particle is updated as follows:
$$v_{j,DG_i} = r_1 \cdot v_{j,DG_i} + r_2 \cdot (ES_{r1,DG_i} - x_{j,DG_i}) + \varphi \cdot r_3 \cdot (ES_{r2,DG_i} - x_{j,DG_i}) \tag{4}$$

$$x_{j,DG_i} = x_{j,DG_i} + v_{j,DG_i} \tag{5}$$
where x_j represents the jth non-elite particle in NES; DG_i denotes the ith dimension group of the jth non-elite particle; x_{j,DG_i} and v_{j,DG_i} are the ith dimension group of the position and velocity of the jth particle in NES, respectively; ES_{r1} and ES_{r2} are two different elite particles randomly selected from ES, with ES_{r1} being the better of the two; r_1, r_2, and r_3 are three random real parameters uniformly sampled within [0, 1]; and φ is a control parameter in charge of the influence of the second elite particle.

As for the update of each non-elite particle in NES, as shown in Equation (4), the following details deserve careful attention (a code sketch follows this list):
(1). As previously mentioned, for each non-elite particle, the dimensions are randomly shuffled. As a result, the partition of dimension groups is different for different non-elite particles.
(2). For each dimension group DG_i, two different elite particles are first randomly selected from ES. Then, the better one between these two elites (denoted ES_{r1}) acts as the first exemplar in Equation (4), while the worse one (denoted ES_{r2}) acts as the second exemplar to guide the update of the dimension group of the non-elite particle.
(3). The two elite particles guiding the update of each dimension group are both randomly selected. Therefore, they are likely to be different for different dimension groups.
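Putting Equations (4) and (5) together with the exemplar selection described above, the update of one non-elite particle can be sketched as follows. Whether r_1–r_3 are drawn per dimension group or per dimension is an implementation detail; per-group scalars are assumed here for brevity.

```python
import numpy as np

def dgcel_update(x_j, v_j, elite_x, elite_fit, groups, phi, rng):
    """Update one non-elite particle via Equations (4) and (5) (sketch).

    x_j, v_j:  position and velocity of the particle, shape (D,).
    elite_x:   positions of the es elite particles, shape (es, D).
    elite_fit: fitness of the elites (minimization), shape (es,).
    groups:    list of dimension-index arrays from the random grouping.
    """
    for dg in groups:
        r1_idx, r2_idx = rng.choice(len(elite_x), size=2, replace=False)
        if elite_fit[r2_idx] < elite_fit[r1_idx]:   # keep the better elite first
            r1_idx, r2_idx = r2_idx, r1_idx
        r1, r2, r3 = rng.random(3)
        v_j[dg] = (r1 * v_j[dg]
                   + r2 * (elite_x[r1_idx, dg] - x_j[dg])
                   + phi * r3 * (elite_x[r2_idx, dg] - x_j[dg]))  # Eq. (4)
        x_j[dg] = x_j[dg] + v_j[dg]                               # Eq. (5)
    return x_j, v_j
```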
As a whole, a complete flowchart of the proposed DGCEL is shown in Figure 1. Analyzing Equation (4) and Figure 1 in depth, we find that the proposed DGCEL strategy brings the following advantages to PSO:
(1). Instead of using historical evolutionary information, such as the historically global best position (gbest), the personal best positions (pbest), and the neighborhood best position (nbest), in traditional PSOs [18,47], the devised DGCEL employs the elite particles in the current swarm to direct the learning of the non-elite particles. In contrast to the historical information, which may remain unchanged for many generations, particles in the swarm are usually updated generation by generation. Therefore, in the proposed DGCEL, the selected two guiding exemplars are not only likely different for different particles but also probably different for the same particle in different generations. This is very beneficial for the promotion of swarm diversity.
(2). Instead of updating each particle with the same exemplars for all dimensions in most existing large-scale PSOs [5,24,25,26,30], the proposed DGCEL updates non-elite particles at the dimension group level. Therefore, for different dimension groups, the two guiding exemplars are likely different. In this way, not only could one non-elite particle learn from multiple different elite ones, but also the useful genes hidden in different elites could be incorporated to direct the evolution of the swarm. As a result, not only the learning diversity of particles could be improved, but also the learning efficiency of particles could be promoted.
(3). In DGCEL, each dimension group of a non-elite particle is guided by two randomly selected elite particles in ES. With the guidance of multiple elites, each non-elite particle is expected to approach promising areas quickly. In addition, since the elite particles in ES are not updated and directly enter the next generation, the useful evolutionary information in the current swarm is protected from being destroyed by uncertain updates. Therefore, the elites in ES become better and better as the evolution iterates, and at last, it is expected that these elites converge to the optimal areas.
Remark
To the best of our knowledge, there are four existing PSOs that are very similar to the proposed DGCELSO. They are CLPSO [46], OLPSO [48], GLPSO [47], and SPLSO [30]. The first three were originally designed for low-dimensional problems, while the last one was initially devised for large-scale optimization. Compared with these existing PSOs, the developed DGCELSO differs from them in the following ways:
(1). In contrast to the three low-dimensional PSOs [46,47,48], the proposed DGCELSO uses the elite particles in the swarm to comprehensively guide the learning of the non-elite particles at the dimension group level. First, the three low-dimensional PSOs all use the personal best positions (pbests) of particles to construct only one guiding exemplar for each updated particle, whereas DGCELSO leverages the elite particles in the current swarm to construct two different guiding exemplars for each non-elite particle. Second, the three low-dimensional PSOs construct the guiding exemplar dimension by dimension. Nevertheless, DGCELSO constructs the two guiding exemplars group by group. With these two differences, DGCELSO is expected to construct more promising guiding exemplars for the updated particles, and thus the learning effectiveness and efficiency of particles could be largely promoted to explore the large-scale solution space.
(2). In contrast to the large-scale PSO, namely SPLSO [30], DGCELSO uses two different elite particles to direct the update of each dimension group of each non-elite particle. First, the partition of the swarm in DGCELSO is very different from the one in SPLSO. In DGCELSO, the swarm is divided into two exclusive sets according to the fitness of particles, with the best es particles entering ES and the rest entering NES. However, in SPLSO, particles in the swarm are paired together and each paired two particles compete with each other, with the winner entering the relatively good set and the loser entering the relatively poor set. Second, for each non-elite particle, DGCELSO adopts two random elites in ES to guide the update of each dimension group, whereas in SPLSO, each dimension group of a loser is updated by only one random relatively good particle with the other exemplar being the mean position of the relatively good set, which is shared by all updated particles. Therefore, it is expected that the learning effectiveness and efficiency of particles in DGCELSO are higher than in SPLSO. Hence, DGCELSO is expected to explore and exploit the large-scale solution space more appropriately than SPLSO.
3.2. Adaptive Strategies for Control Parameters
Taking a deep look at the proposed DGCELSO, we find that apart from the swarm size NP, it has three control parameters, namely the ratio of elite particles to the whole swarm tp, the number of dimension groups NDG, and the control parameter φ in Equation (4). The swarm size NP is a common parameter for all evolutionary algorithms, which is usually problem-dependent and thus is left to be fine-tuned. As for φ, it subtly controls the influence of the second guiding exemplar in the velocity update; we also leave it to be fine-tuned in the experiments, like NP. For the other two control parameters, we devise the following dynamic adjustment schemes to alleviate the sensitivity of DGCELSO to them.
3.2.1. Dynamic Adjustment for tp
With respect to tp, the ratio of elite particles to the whole swarm, it determines the size of the elite set ES. When tp is large, on the one hand, a large number of particles are preserved and enter the next generation directly; on the other hand, the learning of non-elite particles is diversified owing to the large number of candidate exemplars, namely the elite particles. In this situation, the swarm is biased toward exploring the solution space. In contrast, when tp is small, only a small number of elites are preserved. In this case, the learning of non-elite particles is concentrated on exploiting the promising areas where the elites are located, and the swarm is biased toward exploiting the solution space. However, it should be mentioned that such a bias does not seriously sacrifice swarm diversity, because both guiding exemplars are randomly selected for each dimension group of each non-elite particle.
Based on the above consideration, it seems rational not to keep tp fixed during the evolution. To this end, we devise a dynamic adjustment strategy for tp as follows:
$$tp = 0.4 - 0.2 \times \frac{fes}{FES_{max}} \tag{6}$$
where fes represents the number of fitness evaluations used so far, and FES_max is the maximum number of fitness evaluations.

From Equation (6), it can be seen that tp decreases linearly from 0.4 to 0.2. Therefore, tp is high at the early stage and small at the late stage. As a result, as the evolution proceeds, the swarm gradually tends to exploit the solution space. This matches the expectation that the swarm should fully explore the solution space in the early stage to find promising areas, while exploiting the found promising areas in the late stage to obtain high-quality solutions. The effectiveness of this dynamic adjustment scheme will be verified in the experiments in Section 4.3.
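In code, the schedule of Equation (6) is a one-liner; the endpoints 0.4 and 0.2 come directly from the text.

```python
def elite_ratio(fes, fes_max):
    """Equation (6): tp decreases linearly from 0.4 to 0.2 over the run."""
    return 0.4 - 0.2 * fes / fes_max
```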
3.2.2. Dynamic Adjustment for NDG
In terms of the number of dimension groups NDG, it directly affects the learning of non-elite particles. A large NDG leads to a large number of elite particles that might participate in the learning of non-elite particles. This might be useful when the useful genes are scattered in very diversified dimensions. In this situation, with a large NDG, the chance of integrating the useful genes together to direct the learning of non-elite particles could be promoted. By contrast, when the useful genes are scattered in centered dimensions, a small NDG is preferred. However, without prior knowledge of the positions of useful genes embedded in the elite particles, it is difficult to give a proper setting of NDG.
To alleviate the above concern, we devise the following dynamic adjustment of NDG for each non-elite particle based on the Cauchy distribution:
$$NDG_j = \operatorname{Cauchy}(60, 10) \tag{7}$$

$$NDG_j = \operatorname{floor}(NDG_j) - \operatorname{mod}(\operatorname{floor}(NDG_j), 10) \tag{8}$$
where NDG_j denotes the setting of NDG for the jth particle in NES; Cauchy(60, 10) is a Cauchy distribution with position parameter 60 and scaling parameter 10; floor(x) is a function that returns the largest integer not greater than x; and mod(x, y) returns the remainder when x is divided by y.

In Equations (7) and (8), two details deserve careful attention. First, the Cauchy distribution is used here because it generates values around the position parameter with a long fat tail. With this distribution, the NDGs generated for different non-elite particles are likely diversified. Second, with Equation (8), we keep the setting of NDG for each non-elite particle at a multiple of 10. This setting is adopted to promote the difference between two different values of NDG, which improves the learning diversity of non-elite particles, and for the convenience of computation.
From Equations (7) and (8), it is found that different non-elite particles likely preserve different NDGs. On the one hand, the learning diversity of non-elite particles could be further improved. On the other hand, the chance of integrating useful genes embedded in different elite particles is likely promoted with different settings of NDG. The effectiveness of this dynamic adjustment scheme for NDG will be verified in the experiments in Section 4.3.
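A sketch of the per-particle sampling in Equations (7) and (8) follows. The clamping of the sampled value to [10, 100] is an assumption added here to guard against the heavy tail of the Cauchy distribution; it is not specified in the text.

```python
import numpy as np

def sample_ndg(rng):
    """Sample NDG for one non-elite particle via Equations (7) and (8)."""
    ndg = 60.0 + 10.0 * rng.standard_cauchy()  # Eq. (7): Cauchy, location 60, scale 10
    ndg = int(np.floor(ndg))
    ndg = ndg - ndg % 10                       # Eq. (8): round down to a multiple of 10
    return int(np.clip(ndg, 10, 100))          # assumed safeguard against the fat tail
```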
3.3. Overall Procedure of DGCELSO
By integrating the above components, DGCELSO is developed with the overall procedure outlined in Algorithm 1 and the complete flowchart shown in Figure 2. Specifically, after the swarm is initialized and evaluated (Line 1), the algorithm goes to the main iteration loop (Lines 2~17). First, the swarm is partitioned into the elite set (ES) and the non-elite set (NES) as shown in Lines 3 and 4. Then, each particle in NES is updated as shown in Lines 5~16. During the update of one non-elite particle, the dimensions of this particle are first separated into several dimension groups (Lines 6 and 7). Then, for each dimension group of the non-elite particle, two different elite particles are randomly selected from ES (Line 9), and then the dimension group is updated by learning from these two elites (Line 13). The above process iterates until the termination condition is met. At the end of the algorithm, the best solution in the swarm is output (Line 18).
With respect to the computational complexity in time, from Algorithm 1, it is found that in each generation, it takes O(NP·log2NP) to sort the swarm and O(NP) to partition the swarm into two sets in Line 4; then, it takes O(NP∗D) to shuffle the dimensions and O(NP∗D) to partition the shuffled dimensions into groups for all non-elite particles (Line 7); at last, it takes O(NP∗D) to update all non-elite particles (Lines 8~14). To sum up, the time complexity of DGCELSO is O(NP∗D), given that the swarm size is usually much smaller than the dimension size in large-scale optimization.
| Algorithm 1: The Pseudocode of DGCELSO. | |
| Input: | Population size NP, Maximum number of fitness evaluations FESmax, Control parameter φ; |
| 1: | Initialize NP particles randomly and calculate their fitness; fes = NP; |
| 2: | While (fes ≤ FESmax) do |
| 3: | Calculate tp according to Equation (6) and obtain the elite set size es = ⌈tp × NP⌉; |
| 4: | Sort particles based on their fitness and divide them into two sets, namely ES and NES; |
| 5: | For each non-elite particle in NES do |
| 6: | Generate NDG_j based on Equations (7) and (8); |
| 7: | Randomly shuffle the dimensions and then split them into NDG_j groups; |
| 8: | For each dimension group do |
| 9: | Randomly select two different elite particles from ES: ES_r1 and ES_r2; |
| 10: | If (f(ES_r2) < f(ES_r1)) then |
| 11: | Swap ES_r1 and ES_r2; |
| 12: | End If |
| 13: | Update the dimension group DG_i of NES_j according to Equations (4) and (5); |
| 14: | End For |
| 15: | Calculate the fitness of the updated NES_j, and fes++; |
| 16: | End For |
| 17: | End While |
| 18: | Obtain the best solution in the swarm gbest and its fitness f(gbest) |
| Output: f(gbest) and gbest | |
Regarding the computational complexity in space occupation, in Algorithm 1, we can see that except for O(NP∗D) to store the positions of all particles and O(NP∗D) to store the velocities of all particles, it only takes extra O(NP) to store the index of particles in the two sets, and O(D) to store the dimension groups. Comprehensively, DGCELSO only takes O(NP∗D) space.
Based on the above time and space complexity analysis, it is found that the proposed DGCELSO remains as efficient as the classical PSO, which also takes O(NP∗D) time in each generation and O(NP∗D) space.
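For completeness, a compact, self-contained Python sketch of Algorithm 1 is given below, assembling the pieces sketched in Section 3. Boundary handling is omitted for brevity, the clamping of NDG is again an assumption, and the default NP and φ anticipate the values tuned in Section 4.1.

```python
import numpy as np

def dgcelso(f, D, lb, ub, NP=300, phi=0.4, fes_max=3_000_000, seed=0):
    """A compact sketch of Algorithm 1. f maps a (D,) array to a scalar
    fitness (minimization); lb and ub are the search bounds."""
    rng = np.random.default_rng(seed)
    X = lb + (ub - lb) * rng.random((NP, D))        # Line 1: random initialization
    V = np.zeros((NP, D))
    fit = np.array([f(x) for x in X])
    fes = NP
    while fes <= fes_max:                           # Line 2: main loop
        tp = 0.4 - 0.2 * fes / fes_max              # Line 3: Eq. (6)
        order = np.argsort(fit)                     # Line 4: sort, best first
        es = max(2, int(np.ceil(tp * NP)))
        elites, non_elites = order[:es], order[es:]
        for j in non_elites:                        # Lines 5-16
            ndg = int(np.floor(60 + 10 * rng.standard_cauchy()))
            ndg = int(np.clip(ndg - ndg % 10, 10, 100))  # Eqs. (7)-(8) + safeguard
            for dg in np.array_split(rng.permutation(D), ndg):  # Line 7
                a, b = rng.choice(es, size=2, replace=False)    # Line 9
                if fit[elites[b]] < fit[elites[a]]:             # Lines 10-12
                    a, b = b, a
                r1, r2, r3 = rng.random(3)
                V[j, dg] = (r1 * V[j, dg]
                            + r2 * (X[elites[a], dg] - X[j, dg])
                            + phi * r3 * (X[elites[b], dg] - X[j, dg]))  # Eq. (4)
                X[j, dg] += V[j, dg]                                     # Eq. (5)
            fit[j] = f(X[j])                        # Line 15
            fes += 1
    best = np.argmin(fit)
    return X[best], fit[best]                       # Line 18
```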
4. Experimental Section
To verify the effectiveness of the proposed DGCELSO, extensive experiments are conducted in this section on two sets of large-scale optimization problems, namely the CEC’2010 [7] and the CEC’2013 [8] large-scale benchmark sets. The CEC’2010 set contains 20 high-dimensional problems with 1000 dimensions, while the CEC’2013 set consists of 15 problems, also with 1000 dimensions. In particular, the CEC’2013 set extends the CEC’2010 set by introducing more complicated features, such as overlapping interactions among variables and imbalanced contributions of variables. Therefore, compared with the CEC’2010 problems, the CEC’2013 problems are more complicated and more difficult to optimize. For more detailed information on the two benchmark sets, readers are referred to [7,8].
In this section, we first investigate the settings of two key parameters (namely the swarm size NP and the control parameter φ) for DGCELSO in Section 4.1. Then, extensive experiments are conducted on the two benchmark sets to compare DGCELSO with several state-of-the-art large-scale optimizers in Section 4.2. At last, a deep investigation into the proposed DGCELSO is performed in Section 4.3 to observe what contributes to its good performance.
In the experiments, unless otherwise stated, the maximum number of fitness evaluations is set as 3000 × D, where D is the dimension size. In this paper, the dimension size of all optimization problems is 1000, and thus the total number of fitness evaluations is 3 × 10^6. To make fair and comprehensive comparisons, the median, the mean, and the standard deviation (Std) values over 30 independent runs are used to evaluate the performance of all algorithms. Moreover, to assess statistical significance, the Wilcoxon rank-sum test at the significance level of α = 0.05 is conducted to compare each pair of algorithms. Furthermore, to obtain the overall ranks of different algorithms on one whole benchmark set, the Friedman test at the significance level of α = 0.05 is conducted on each benchmark set.
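The statistical protocol above can be reproduced with standard SciPy routines; the arrays below are placeholders standing in for the recorded per-run and per-problem results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
runs_a = rng.lognormal(size=30)            # placeholder: 30 runs of algorithm A, one problem
runs_b = rng.lognormal(mean=0.5, size=30)  # placeholder: 30 runs of algorithm B

# Wilcoxon rank-sum test at alpha = 0.05 for one problem (minimization)
stat, p = stats.ranksums(runs_a, runs_b)
print("A significantly better:", p < 0.05 and np.median(runs_a) < np.median(runs_b))

# Friedman test across a whole benchmark set: rows = problems, columns = algorithms
table = rng.lognormal(size=(20, 10))       # placeholder: 20 problems, 10 algorithms
chi2, p_f = stats.friedmanchisquare(*table.T)
print("overall differences significant:", p_f < 0.05)
```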
Lastly, it is worth noting that we used the C programming language and the Code::Blocks IDE to implement the proposed DGCELSO. Moreover, all experiments were run on a PC with an Intel Core i7-10700 2.90-GHz CPU (8 cores), 8-GB memory, and the 64-bit Ubuntu 12.04 LTS operating system.
4.1. Parameter Setting
Due to the two proposed dynamic adjustment strategies for the associated parameters in DGCELSO, only two parameters, namely the swarm size NP and the control parameter φ, need fine-tuning. Therefore, to investigate the optimal settings of these two parameters for DGCELSO in solving 1000-D large-scale optimization problems, we conduct experiments by varying NP from 100 to 600 and φ from 0.1 to 0.9 on the CEC’2010 benchmark set. Table 1 shows the mean fitness values obtained by DGCELSO with different settings of NP and φ on the CEC’2010 set. In this table, the best results are highlighted in bold, and the average rank of each configuration, obtained using the Friedman test at the significance level of α = 0.05, is also presented.
From this table, we obtain the following findings. (1) From the perspective of the Friedman test, when NP is fixed, the setting of φ should be neither too small nor too large, and the optimal setting usually lies within [0.3, 0.6]. Specifically, when NP is 100 and 200, the optimal φ is 0.6 and 0.5, respectively. When NP is within [300, 500], the optimal φ is consistently 0.4. When NP is 600, the optimal φ is 0.3. (2) More specifically, we find that when NP is small, such as 100, the optimal φ is usually large. This is because a small NP cannot afford enough diversity for DGCELSO to explore the solution space. Therefore, to improve the diversity, φ should be large to enhance the influence of the second guiding exemplar in Equation (4), which is in charge of preventing the updated particle from being greedily attracted by the first guiding exemplar. On the contrary, when NP is large, such as 600, a small φ is preferred. This is because a large NP already offers DGCELSO high diversity, which slows down its convergence. Consequently, to let particles fully exploit the found promising areas, φ should be small to decrease the influence of the second guiding exemplar in Equation (4). (3) Taking comprehensive comparisons among all settings of NP along with the associated optimal settings of φ, we find that DGCELSO with NP = 300 and φ = 0.4 achieves the best overall performance.
Based on the above observations, NP = 300 and φ = 0.4 are adopted for DGCELSO in the experiments related to 1000-D optimization problems.
4.2. Comparisons with State-of-the-Art Methods
To comprehensively verify the effectiveness of the devised DGCELSO, this section conducts extensive comparison experiments between DGCELSO and several state-of-the-art large-scale algorithms. Specifically, nine popular and recent large-scale methods are selected, namely the five holistic large-scale PSO variants TPLSO [24], SPLSO [30], LLSO [25], CSO [26], and SLPSO [61], and the four cooperative coevolutionary algorithms DECC-DG [6], DECC-DG2 [35], DECC-RDG [37], and DECC-RDG2 [52].
Table 2 and Table 3 display the comparison results between DGCELSO and the nine compared algorithms on the 1000-D CEC’2010 and the 1000-D CEC’2013 large-scale benchmark sets, respectively. In these two tables, the symbols “+”, “−”, and “=” above the p-values obtained from the Wilcoxon rank-sum test denote that the proposed DGCELSO is significantly superior to, significantly inferior to, and equivalent to the associated compared algorithm on the related function, respectively. “w/t/l” in the second to last rows of the two tables counts the numbers of functions where DGCELSO performs significantly better than, equivalently to, and significantly worse than the associated compared method; that is, the numbers of “+”, “=”, and “−”, respectively. In the last rows of the two tables, the average ranks of all algorithms obtained from the Friedman test are presented as well.
In Table 2, the comparison results on the CEC’2010 set are summarized as follows. (1) From the perspective of the Friedman test, as shown in the last row, it is found that the proposed DGCELSO has the lowest rank value, which is much smaller than those of the compared algorithms. This means that DGCELSO achieves the best overall performance and shows great superiority over the compared algorithms. (2) With respect to the Wilcoxon rank-sum test, as shown in the second to last row, it is observed that DGCELSO performs significantly better than the compared algorithms on at least 14 problems. In particular, compared with the four cooperative coevolutionary algorithms, DGCELSO presents significant superiority on at least 16 problems and shows inferiority on at most four problems. In comparison with the five holistic large-scale PSO variants, DGCELSO is significantly superior to SLPSO on 18 problems, achieves much better performance than TPLSO on 16 problems, outperforms both LLSO and CSO on 15 problems, and outperforms SPLSO on 14 problems. The superiority of DGCELSO over the five holistic large-scale PSOs demonstrates the effectiveness of the proposed DGCEL strategy.
In Table 3, we summarize the comparison results on the CEC’2013 set as follows. (1) From the perspective of the Friedman test, as shown in the last row, it is found that the rank value of the proposed DGCELSO is still the lowest among the ten algorithms, and it is still much smaller than those of the nine compared algorithms. This demonstrates that DGCELSO still achieves the best overall performance on the complicated CEC’2013 benchmark set and shows great dominance over the compared algorithms. (2) With respect to the Wilcoxon rank-sum test, as shown in the second to last row, it is observed that, except against SPLSO, DGCELSO shows significantly better performance than the other eight compared algorithms on at least 10 problems and shows inferiority on at most three problems. Compared with SPLSO, DGCELSO wins on eight problems and loses on only three. The superiority of DGCELSO over the compared algorithms on the CEC’2013 benchmark set demonstrates that it is promising for complicated large-scale optimization problems.
The above experiments demonstrate the effectiveness of the proposed DGCELSO. To further demonstrate its efficiency in solving large-scale optimization problems, we conduct experiments on the two large-scale benchmark sets to investigate the convergence speed of the proposed DGCELSO in comparison with the nine compared methods. In this experiment, the maximum number of fitness evaluations is set as 5 × 10^6. Figure 3 and Figure 4 show the convergence comparison results on the CEC’2010 and the CEC’2013 benchmark sets, respectively.
In Figure 3, on the CEC’2010 benchmark set, the following findings can be obtained. (1) At first glance, it is found that the proposed DGCELSO obviously obtains faster convergence along with better solutions than all nine compared algorithms on nine problems (F1, F4, F7, F9, F11, F12, F14, F16, and F17). On F3, F13, F18, and F20, DGCELSO achieves very similar performance to some compared algorithms in terms of solution quality but obtains much faster convergence than the associated compared algorithms. (2) More specifically, we find that DGCELSO shows much better performance in both convergence speed and solution quality than the five holistic large-scale PSO variants, namely TPLSO, SPLSO, LLSO, CSO, and SLPSO, on 17, 16, 15, 16, and 17 problems, respectively. Compared with the four cooperative coevolutionary algorithms, namely DECC-DG, DECC-DG2, DECC-RDG, and DECC-RDG2, DGCELSO shows clear superiority in both convergence speed and solution quality on 17, 17, 17, and 15 problems, respectively.
From Figure 4, similar observations on the CEC’2013 benchmark set can be obtained. (1) At first glance, it is found that the proposed DGCELSO obtains faster convergence along with better solutions than all nine compared algorithms on six problems (F1, F4, F7, F11, F13, and F14). On F8, F9, and F12, DGCELSO shows superiority in both convergence speed and solution quality over eight compared algorithms and is inferior to only one. (2) More specifically, we find that DGCELSO performs better, with faster convergence and higher solution quality, than TPLSO, SPLSO, LLSO, CSO, and SLPSO on 11, 11, 9, 12, and 10 problems, respectively. Compared with DECC-DG, DECC-DG2, DECC-RDG, and DECC-RDG2, DGCELSO presents great dominance over them on 11, 9, 11, and 12 problems, respectively.
To sum up, compared with these state-of-the-art large-scale algorithms, DGCELSO performs much better in both convergence speed and solution quality. The superiority of DGCELSO mainly benefits from the proposed DGCEL strategy, which implicitly assembles useful information embedded in elite particles to guide the evolution of the swarm. In particular, the superiority of DGCELSO over the five holistic large-scale PSOs, which also adopt elite particles in the current swarm to direct the evolution of the swarm, demonstrates that the assembly of evolutionary information in elites is effective. Such assembly not only improves the learning diversity of particles, owing to the random selection of guiding exemplars from the elites, but also promotes the learning effectiveness of particles, because each updated particle can learn from multiple different elites with the help of the dimension group-based learning. As a result, DGCELSO can balance search intensification and diversification well to explore and exploit the large-scale solution space appropriately and locate satisfactory solutions.
4.3. Deep Investigation on DGCELSO
In this section, we conduct extensive experiments on the 1000-D CEC’2010 benchmark set to verify the effectiveness of the main components in the proposed DGCELSO.
4.3.1. Effectiveness of the Proposed DGCEL
First, we conduct experiments to investigate the effectiveness of the proposed DGCEL strategy. To this end, we first incorporate the segment-based predominant learning strategy (SPL) from SPLSO, which is the work most similar to the proposed DGCELSO, to replace the DGCEL strategy, leading to a new variant of DGCELSO, which we denote as “DGCELSO-SPL”. In addition, we also develop two extreme cases of DGCELSO, where the number of dimension groups (NDG) is set as 1 and 1000, respectively. The former, which we denote as “DGCELSO-1”, considers all dimensions as one group, and thus can be regarded as a DGCELSO without the dimension group-based comprehensive learning, while the latter, which we denote as “DGCELSO-1000”, considers each dimension as a group. This can be regarded as a DGCELSO in which the comprehensive learning strategy of CLPSO [46] replaces the dimension group-based comprehensive learning of DGCELSO. Then, we conduct experiments on the CEC’2010 benchmark set to compare the above four versions of DGCELSO. Table 4 shows the comparison results among the four versions of DGCELSO. In this table, the best results are highlighted in bold.
From Table 4, the following observations can be obtained. (1) From the perspective of the Friedman test, it is found that the rank value of DGCELSO is the smallest among the four versions. This demonstrates that DGCELSO achieves the best overall performance. (2) Compared with DGCELSO-SPL, DGCELSO shows great superiority. This demonstrates that the proposed DGCEL strategy is much better than SPL. It should be mentioned that, like DGCEL, SPL also lets each particle learn from multiple elites in the swarm based on dimension groups. The differences between DGCEL and SPL lie in two aspects. On the one hand, SPL lets particles learn from relatively better elites, determined by competition between randomly paired particles, while DGCEL lets particles learn from absolutely better elites, namely the top tp∗NP best particles in the swarm. On the other hand, the second exemplar in the velocity update of SPL is the mean position of the whole swarm, which is shared by all updated particles, while the second exemplar in DGCEL is also randomly selected from the elite particles. The observed superiority of DGCEL over SPL demonstrates that the exemplar selection in DGCEL is better than that in SPL. (3) Compared with DGCELSO-1 and DGCELSO-1000, DGCELSO presents great superiority. This superiority demonstrates the effectiveness of the proposed dimension group-based comprehensive learning strategy. Instead of learning from only two exemplars, as in DGCELSO-1, which considers all dimensions as one group, or learning from multiple exemplars dimension by dimension, as in DGCELSO-1000, which considers each dimension as a group, DGCELSO lets each updated particle learn from multiple exemplars at the dimension group level. In this way, the potentially useful information embedded in different exemplars is more likely to be assembled in DGCELSO than in DGCELSO-1 and DGCELSO-1000.
Based on the above observations, it is found that the proposed DGCEL strategy is effective and plays a crucial role in helping DGCELSO achieve promising performance.
4.3.2. Effectiveness of the Proposed Dynamic Adjustment Schemes for Parameters
In this subsection, we conduct experiments to verify the effectiveness of the proposed dynamic adjustment schemes for the two control parameters, namely the elite ratio tp and the number of dimension groups NDG.
First, we conduct experiments to investigate the effectiveness of the proposed dynamic scheme for tp. To this end, we set tp to different fixed values from 0.1 to 0.9 and then compare the DGCELSO with the dynamic scheme against these fixed-tp variants. Table 5 shows the comparison results on the CEC’2010 benchmark set, with the best results highlighted in bold.
From Table 5, the following findings can be obtained. (1) From the perspective of the Friedman test, DGCELSO with the dynamic tp ranks first among all versions with different settings of tp, demonstrating that it achieves the best overall performance. (2) More specifically, DGCELSO with the dynamic strategy obtains the best results on four problems, and its results on the other problems are very close to the best ones obtained by DGCELSO with the associated optimal settings of tp. These two observations demonstrate that the dynamic strategy for tp is helpful for DGCELSO to achieve good performance.
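The exact adjustment rule for tp is defined in the algorithm description earlier in the paper and is not restated here. Purely for illustration, a minimal evaluation-based schedule could look like the following sketch; the linear decay and the endpoint values 0.4 and 0.2 are assumed placeholders, not the authors’ formula.

```python
def dynamic_tp(fes, max_fes, tp_start=0.4, tp_end=0.2):
    """Placeholder schedule: shrink the elite ratio as fitness
    evaluations are consumed, gradually shifting the swarm from
    exploration (many elites) to exploitation (few elites)."""
    frac = min(fes / max_fes, 1.0)
    return tp_start + (tp_end - tp_start) * frac
```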
Then, we conduct experiments to verify the effectiveness of the dynamic scheme for the number of dimension groups (NDG). To this end, we set NDG to different fixed values from 20 to 100 and then compare, on the CEC’2010 benchmark set, the DGCELSO with the dynamic scheme for NDG against these fixed-NDG variants. Table 6 shows the comparison results among the above versions of DGCELSO, with the best results highlighted in bold.
From Table 6, we can obtain the following findings. (1) From the perspective of the Friedman test, the rank value of DGCELSO with the dynamic scheme for NDG is the smallest among all versions with different settings of NDG, demonstrating that it achieves the best overall performance. (2) More specifically, DGCELSO with the dynamic strategy obtains the best results on nine problems, while each DGCELSO with a fixed NDG obtains the best results on at most four problems. On the other 11 problems, where DGCELSO with the dynamic strategy does not achieve the best results, its optimization results are very close to the best ones obtained by DGCELSO with the associated optimal NDG. These two observations verify the effectiveness of the dynamic strategy for NDG.
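The “Rank” rows in Tables 4–6 come from the Friedman test over all 20 problems. As a check on how such average ranks are produced, the short sketch below recomputes them for a three-problem excerpt of Table 4 using SciPy; the full comparison in the paper uses all 20 problems.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# errors[i, j]: mean error of variant j on problem i
# (columns: DGCELSO, DGCELSO-1, DGCELSO-1000, DGCELSO-SPL;
#  rows F1-F3 taken from Table 4 for illustration)
errors = np.array([
    [0.00e+00, 3.85e-26, 0.00e+00, 1.81e+03],  # F1
    [8.88e+02, 1.98e+03, 8.70e+02, 1.54e+03],  # F2
    [3.18e-14, 1.08e+00, 3.16e-14, 1.97e-02],  # F3
])

avg_rank = rankdata(errors, axis=1).mean(axis=0)  # lower is better
stat, p = friedmanchisquare(*errors.T)            # 4 variants, 3 blocks
print(avg_rank, p)
```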
To sum up, the above comparative experiments demonstrate the effectiveness and efficiency of DGCELSO in solving large-scale optimization problems. In particular, the in-depth investigation experiments validate that the proposed DGCEL strategy, together with the two dynamic parameter-adjustment strategies, plays a crucial role in helping DGCELSO achieve promising performance.
5. Conclusions
This paper proposed a dimension group-based comprehensive elite learning swarm optimizer (DGCELSO) to effectively solve large-scale optimization problems. Specifically, this optimizer first partitions the swarm into two exclusive sets, namely the elite set and the non-elite set. The non-elite particles are then updated by learning from the elite ones, while the elite particles directly enter the next generation. During the update of each non-elite particle, the dimensions are separated into several dimension groups. For each dimension group, two elites are randomly selected from the elite set to act as the guiding exemplars directing the update of that group. In this way, each non-elite particle can comprehensively learn from multiple elites. Moreover, not only are the guiding exemplars for different non-elite particles different, but the guiding exemplars for different dimension groups of the same non-elite particle are also likely to be different. As a result, both the learning diversity and the learning efficiency of particles can be improved. To further aid the optimizer in exploring and exploiting the solution space properly, we designed two dynamic adjustment strategies for the associated control parameters, as sketched below.
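To summarize the workflow concretely, the following self-contained sketch implements one DGCELSO generation consistent with the description above. The overall control flow (elites survive; each dimension group of a non-elite is steered by two randomly chosen elites) follows the paper, but the specific velocity rule (a randomly weighted inertia term plus two elite-guided terms, with the weaker elite’s term scaled by a coefficient named phi after the ϕ parameter tuned in Table 1, on the assumption that it plays this role) is modeled on related elite-guided swarm optimizers rather than being a verbatim reproduction of the paper’s update equations.

```python
import numpy as np

def dgcelso_generation(X, V, fitness, f, tp, ndg, phi, rng):
    """One illustrative generation of DGCELSO (minimization)."""
    NP, D = X.shape
    n_elite = max(2, int(tp * NP))
    order = np.argsort(fitness)                 # best first
    elites, non_elites = order[:n_elite], order[n_elite:]
    for i in non_elites:
        # Partition this particle's dimensions into ndg groups.
        for grp in np.array_split(rng.permutation(D), ndg):
            # Two distinct elites guide this dimension group.
            e1, e2 = rng.choice(elites, size=2, replace=False)
            b, w = (e1, e2) if fitness[e1] < fitness[e2] else (e2, e1)
            r1, r2, r3 = rng.random(3)
            V[i, grp] = (r1 * V[i, grp]
                         + r2 * (X[b, grp] - X[i, grp])
                         + phi * r3 * (X[w, grp] - X[i, grp]))
            X[i, grp] += V[i, grp]
        fitness[i] = f(X[i])                    # re-evaluate non-elites only
    return X, V, fitness
```

Calling this repeatedly, with tp and NDG refreshed each generation by the two dynamic schemes, reproduces the overall optimization loop described above.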
Experiments conducted on the 1000-D CEC’2010 and CEC’2013 large-scale benchmark sets verified the effectiveness of the proposed DGCELSO by comparing it with nine state-of-the-art large-scale methods. Experimental results demonstrate that DGCELSO achieves highly competitive or even much better performance than the compared methods in terms of both the solution quality and the convergence speed.
Author Contributions: Q.Y.: Conceptualization, supervision, methodology, formal analysis, and writing—original draft preparation. K.-X.Z.: Implementation, formal analysis, and writing—original draft preparation. X.-D.G.: Methodology, and writing—review and editing. D.-D.X.: Writing—review and editing. Z.-Y.L.: Writing—review and editing, and funding acquisition. S.-W.J.: Writing—review and editing. J.Z.: Conceptualization and writing—review and editing. All authors have read and agreed to the published version of the manuscript.
Funding: This work was supported in part by the National Natural Science Foundation of China under Grants 62006124 and U20B2061, in part by the Natural Science Foundation of Jiangsu Province under Project BK20200811, in part by the Natural Science Foundation of the Jiangsu Higher Education Institutions of China under Grant 20KJB520006, in part by the National Research Foundation of Korea (NRF-2021H1D3A2A01082705), and in part by the Startup Foundation for Introducing Talent of NUIST.
Conflicts of Interest: The authors declare no conflict of interest.
Figure 3. Convergence behavior comparison between DGCELSO and the compared algorithms on each 1000-D CEC’2010 benchmark problem.
Figure 4. Convergence behavior comparison between DGCELSO and the compared algorithms on each 1000-D CEC’2013 benchmark problem.
Table 1. Comparison results among DGCELSO with different settings of NP and ϕ on the 1000-D CEC’2010 problems.
| F | NP = 100 | NP = 200 | ||||||||||||||||
| ϕ = 0.1 | ϕ = 0.2 | ϕ = 0.3 | ϕ = 0.4 | ϕ = 0.5 | ϕ = 0.6 | ϕ = 0.7 | ϕ = 0.8 | ϕ = 0.9 | ϕ = 0.1 | ϕ = 0.2 | ϕ = 0.3 | ϕ = 0.4 | ϕ = 0.5 | ϕ = 0.6 | ϕ = 0.7 | ϕ = 0.8 | ϕ = 0.9 | |
| F 1 | 3.31 × 102 | 5.18 × 107 | 8.23 × 107 | 1.34 × 107 | 1.12 × 103 | 2.92 × 10−23 | 9.51 × 10−20 | 5.27 × 105 | 1.01 × 108 | 5.12 × 10−26 | 6.22 × 10−29 | 5.73 × 10−27 | 0.00 × 100 | 1.10 × 10−26 | 2.11 × 10−22 | 9.04 × 102 | 5.08 × 107 | 1.22 × 109 |
| F 2 | 2.93 × 103 | 3.58 × 103 | 3.64 × 103 | 3.22 × 103 | 2.50 × 103 | 1.62 × 103 | 1.12 × 103 | 9.12 × 103 | 1.13 × 104 | 1.16 × 103 | 1.61 × 103 | 1.69 × 103 | 1.40 × 103 | 8.84 × 102 | 2.95 × 103 | 1.07 × 104 | 1.14 × 104 | 1.19 × 104 |
| F 3 | 5.74 × 100 | 1.13 × 101 | 1.14 × 101 | 8.23 × 100 | 3.24 × 100 | 2.18 × 10−1 | 6.43 × 10−14 | 3.89 × 10−1 | 1.36 × 101 | 3.47 × 10−14 | 2.90 × 10−2 | 1.19 × 10−1 | 3.42 × 10−14 | 3.81 × 10−14 | 4.88 × 10−14 | 3.63 × 10−1 | 1.27 × 101 | 1.71 × 101 |
| F 4 | 4.77 × 1011 | 5.57 × 1012 | 5.93 × 1012 | 2.88 × 1012 | 1.66 × 1011 | 1.14 × 1011 | 1.53 × 1011 | 2.90 × 1011 | 7.08 × 1011 | 1.74 × 1011 | 1.96 × 1011 | 4.21 × 1011 | 1.28 × 1011 | 1.25 × 1011 | 1.67 × 1011 | 2.48 × 1011 | 6.04 × 1011 | 2.14 × 1013 |
| F 5 | 2.96 × 107 | 3.16 × 107 | 3.01 × 107 | 3.54 × 107 | 1.30 × 108 | 2.75 × 108 | 2.86 × 108 | 2.96 × 108 | 3.05 × 108 | 2.81 × 108 | 2.36 × 108 | 2.24 × 108 | 2.55 × 108 | 2.77 × 108 | 2.84 × 108 | 2.91 × 108 | 3.04 × 108 | 3.09 × 108 |
| F 6 | 1.99 × 101 | 2.02 × 101 | 2.02 × 101 | 2.01 × 101 | 1.99 × 101 | 2.01 × 101 | 2.15 × 101 | 2.15 × 101 | 2.03 × 101 | 1.94 × 101 | 1.97 × 101 | 1.97 × 101 | 1.98 × 101 | 1.96 × 101 | 4.00 × 10−9 | 3.82 × 10−1 | 1.34 × 101 | 1.78 × 101 |
| F 7 | 2.94 × 106 | 9.82 × 108 | 1.28 × 109 | 3.37 × 108 | 7.90 × 105 | 7.40 × 105 | 1.17 × 105 | 8.40 × 104 | 7.80 × 105 | 3.34 × 10−6 | 3.70 × 104 | 1.75 × 106 | 1.40 × 103 | 8.73 × 10−6 | 3.14 × 10−1 | 2.51 × 104 | 5.02 × 105 | 1.76 × 107 |
| F 8 | 3.39 × 107 | 4.89 × 107 | 4.72 × 107 | 4.42 × 107 | 1.67 × 105 | 6.68 × 104 | 1.58 × 107 | 4.17 × 107 | 4.89 × 107 | 3.33 × 105 | 5.47 × 106 | 2.47 × 107 | 1.86 × 103 | 3.95 × 103 | 1.73 × 107 | 3.97 × 107 | 4.52 × 107 | 4.62 × 107 |
| F 9 | 8.72 × 107 | 1.03 × 109 | 1.17 × 109 | 6.34 × 108 | 2.80 × 107 | 1.75 × 107 | 4.08 × 107 | 4.70 × 108 | 1.36 × 1010 | 1.97 × 107 | 3.44 × 107 | 6.48 × 107 | 1.98 × 107 | 1.47 × 107 | 4.03 × 107 | 3.65 × 109 | 2.29 × 1010 | 4.11 × 1010 |
| F 10 | 3.14 × 103 | 3.85 × 103 | 3.97 × 103 | 3.43 × 103 | 2.65 × 103 | 1.69 × 103 | 2.66 × 103 | 1.09 × 104 | 1.16 × 104 | 1.20 × 103 | 1.76 × 103 | 1.82 × 103 | 1.48 × 103 | 9.59 × 102 | 1.01 × 104 | 1.08 × 104 | 1.14 × 104 | 1.20 × 104 |
| F 11 | 7.08 × 101 | 9.71 × 101 | 9.38 × 101 | 8.60 × 101 | 5.22 × 101 | 3.03 × 101 | 2.47 × 101 | 2.53 × 101 | 6.32 × 101 | 1.57 × 101 | 2.00 × 101 | 2.03 × 101 | 2.00 × 101 | 1.09 × 101 | 1.85 × 10−13 | 1.38 × 100 | 5.04 × 101 | 1.41 × 102 |
| F 12 | 9.54 × 104 | 1.03 × 106 | 1.13 × 106 | 6.89 × 105 | 4.82 × 103 | 8.18 × 102 | 6.14 × 104 | 5.07 × 106 | 6.64 × 106 | 2.35 × 103 | 1.71 × 104 | 7.25 × 104 | 1.72 × 103 | 1.92 × 103 | 2.36 × 106 | 5.14 × 106 | 6.61 × 106 | 7.97 × 106 |
| F 13 | 5.89 × 103 | 3.83 × 106 | 4.40 × 106 | 9.77 × 105 | 5.29 × 103 | 3.06 × 103 | 2.35 × 103 | 6.58 × 104 | 1.36 × 108 | 6.55 × 102 | 8.02 × 102 | 1.02 × 103 | 5.58 × 102 | 4.97 × 102 | 5.48 × 102 | 2.96 × 103 | 3.95 × 107 | 9.38 × 109 |
| F 14 | 2.68 × 108 | 2.11 × 109 | 2.30 × 109 | 1.45 × 109 | 8.31 × 107 | 4.56 × 107 | 1.39 × 108 | 3.47 × 109 | 3.23 × 1010 | 5.82 × 107 | 1.12 × 108 | 2.20 × 108 | 6.07 × 107 | 4.61 × 107 | 2.11 × 108 | 2.02 × 1010 | 5.18 × 1010 | 7.58 × 1010 |
| F 15 | 3.33 × 103 | 4.07 × 103 | 4.13 × 103 | 3.53 × 103 | 2.81 × 103 | 1.11 × 104 | 1.09 × 104 | 1.12 × 104 | 1.17 × 104 | 1.07 × 104 | 3.71 × 103 | 3.33 × 103 | 1.07 × 104 | 1.05 × 104 | 1.05 × 104 | 1.08 × 104 | 1.14 × 104 | 1.21 × 104 |
| F 16 | 1.89 × 102 | 2.56 × 102 | 2.56 × 102 | 2.22 × 102 | 1.47 × 102 | 8.35 × 101 | 5.10 × 101 | 6.61 × 101 | 2.58 × 102 | 6.10 × 10−1 | 1.60 × 101 | 2.71 × 101 | 6.75 × 100 | 3.42 × 10−2 | 2.93 × 10−13 | 1.35 × 101 | 2.52 × 102 | 3.39 × 102 |
| F 17 | 3.02 × 105 | 1.61 × 106 | 1.71 × 106 | 1.27 × 106 | 3.52 × 104 | 1.06 × 104 | 2.08 × 106 | 9.92 × 106 | 1.40 × 107 | 4.93 × 104 | 9.98 × 104 | 2.77 × 105 | 2.24 × 104 | 1.18 × 105 | 6.85 × 106 | 1.08 × 107 | 1.47 × 107 | 1.80 × 107 |
| F 18 | 1.70 × 104 | 7.87 × 108 | 1.16 × 109 | 3.75 × 107 | 2.67 × 103 | 1.71 × 103 | 2.72 × 103 | 6.17 × 106 | 4.49 × 1010 | 1.95 × 103 | 2.54 × 103 | 3.90 × 103 | 1.66 × 103 | 1.30 × 103 | 1.59 × 103 | 1.71 × 107 | 2.95 × 1010 | 1.40 × 1011 |
| F 19 | 2.34 × 106 | 4.66 × 106 | 4.74 × 106 | 3.92 × 106 | 1.72 × 106 | 6.52 × 106 | 1.48 × 107 | 2.01 × 107 | 2.49 × 107 | 9.10 × 106 | 2.46 × 106 | 2.41 × 106 | 5.98 × 106 | 1.09 × 107 | 1.60 × 107 | 2.09 × 107 | 2.58 × 107 | 3.04 × 107 |
| F 20 | 8.41 × 103 | 1.01 × 109 | 1.39 × 109 | 4.32 × 107 | 2.93 × 103 | 1.29 × 103 | 1.25 × 103 | 7.61 × 106 | 4.93 × 1010 | 1.41 × 103 | 2.13 × 103 | 2.72 × 103 | 1.51 × 103 | 1.10 × 103 | 1.02 × 103 | 2.27 × 107 | 3.25 × 1010 | 1.47 × 1011 |
| Rank | 3.75 | 6.35 | 7.05 | 5.30 | 2.80 | 2.45 | 3.25 | 5.75 | 8.30 | 3.25 | 4.25 | 5.25 | 3.15 | 2.45 | 3.95 | 6.25 | 7.70 | 8.75 |
| F | NP = 300 | NP = 400 | ||||||||||||||||
| ϕ = 0.1 | ϕ = 0.2 | ϕ = 0.3 | ϕ = 0.4 | ϕ = 0.5 | ϕ = 0.6 | ϕ = 0.7 | ϕ = 0.8 | ϕ = 0.9 | ϕ = 0.1 | ϕ = 0.2 | ϕ = 0.3 | ϕ = 0.4 | ϕ = 0.5 | ϕ = 0.6 | ϕ = 0.7 | ϕ = 0.8 | ϕ = 0.9 | |
| F 1 | 9.78 × 10−27 | 0.00 × 100 | 0.00 × 100 | 0.00 × 100 | 0.00 × 100 | 7.97 × 10−10 | 5.83 × 105 | 1.86 × 108 | 2.23 × 109 | 6.50 × 10−24 | 0.00 × 100 | 0.00 × 100 | 0.00 × 100 | 8.36 × 10−24 | 4.18 × 10−3 | 3.39 × 106 | 3.30 × 108 | 3.01 × 109 |
| F 2 | 6.93 × 102 | 1.06 × 103 | 1.15 × 103 | 8.88 × 102 | 5.90 × 102 | 1.04 × 104 | 1.09 × 104 | 1.15 × 104 | 1.21 × 104 | 5.75 × 102 | 8.12 × 102 | 8.78 × 102 | 6.57 × 102 | 9.82 × 103 | 1.05 × 104 | 1.10 × 104 | 1.16 × 104 | 1.22 × 104 |
| F 3 | 3.36 × 10−14 | 3.05 × 10−14 | 3.15 × 10−14 | 3.18 × 10−14 | 3.88 × 10−14 | 1.15 × 10−7 | 5.14 × 100 | 1.47 × 101 | 1.77 × 101 | 3.51 × 10−14 | 2.99 × 10−14 | 2.98 × 10−14 | 3.15 × 10−14 | 3.98 × 10−14 | 3.29 × 10−4 | 7.64 × 100 | 1.54 × 101 | 1.79 × 101 |
| F 4 | 2.22 × 1011 | 2.13 × 1011 | 2.00 × 1011 | 1.60 × 1011 | 1.57 × 1011 | 2.06 × 1011 | 3.80 × 1011 | 1.31 × 1012 | 6.16 × 1013 | 2.88 × 1011 | 2.60 × 1011 | 2.27 × 1011 | 1.96 × 1011 | 1.82 × 1011 | 2.48 × 1011 | 5.19 × 1011 | 3.12 × 1012 | 1.09 × 1014 |
| F 5 | 2.83 × 108 | 2.81 × 108 | 2.76 × 108 | 2.80 × 108 | 2.82 × 108 | 2.86 × 108 | 2.93 × 108 | 3.02 × 108 | 3.18 × 108 | 2.82 × 108 | 2.82 × 108 | 2.81 × 108 | 2.78 × 108 | 2.83 × 108 | 2.89 × 108 | 2.94 × 108 | 3.06 × 108 | 3.16 × 108 |
| F 6 | 4.00 × 10−9 | 6.18 × 100 | 1.86 × 101 | 4.00 × 10−9 | 4.00 × 10−9 | 2.08 × 10−7 | 5.36 × 100 | 1.54 × 101 | 1.84 × 101 | 4.00 × 10−9 | 3.88 × 10−9 | 4.00 × 10−9 | 4.00 × 10−9 | 4.00 × 10−9 | 4.72 × 10−4 | 8.08 × 100 | 1.61 × 101 | 1.86 × 101 |
| F 7 | 3.43 × 10−3 | 1.20 × 10−3 | 3.63 × 10−2 | 2.15 × 10−5 | 2.54 × 10−3 | 3.60 × 102 | 1.83 × 105 | 9.66 × 105 | 4.68 × 108 | 8.32 × 10−1 | 3.50 × 10−2 | 3.16 × 10−2 | 7.06 × 10−3 | 1.03 × 100 | 7.18 × 103 | 3.40 × 105 | 5.23 × 106 | 1.55 × 109 |
| F 8 | 1.37 × 107 | 3.51 × 105 | 3.72 × 104 | 4.36 × 103 | 9.82 × 105 | 3.01 × 107 | 4.30 × 107 | 4.58 × 107 | 4.65 × 107 | 2.23 × 107 | 1.00 × 107 | 3.36 × 106 | 6.67 × 105 | 1.33 × 107 | 3.52 × 107 | 4.41 × 107 | 4.61 × 107 | 4.67 × 107 |
| F 9 | 2.47 × 107 | 2.28 × 107 | 2.49 × 107 | 1.77 × 107 | 2.14 × 107 | 1.69 × 108 | 1.40 × 1010 | 3.19 × 1010 | 5.08 × 1010 | 3.12 × 107 | 2.42 × 107 | 2.41 × 107 | 2.01 × 107 | 3.01 × 107 | 1.36 × 109 | 1.92 × 1010 | 3.70 × 1010 | 5.55 × 1010 |
| F 10 | 8.49 × 102 | 1.13 × 103 | 1.22 × 103 | 9.23 × 102 | 9.75 × 103 | 1.05 × 104 | 1.09 × 104 | 1.15 × 104 | 1.22 × 104 | 9.74 × 103 | 8.59 × 102 | 9.25 × 102 | 1.11 × 103 | 1.02 × 104 | 1.05 × 104 | 1.10 × 104 | 1.16 × 104 | 1.22 × 104 |
| F 11 | 1.25 × 10−13 | 2.23 × 10−1 | 6.68 × 100 | 1.10 × 10−13 | 1.17 × 10−13 | 3.45 × 10−7 | 8.82 × 100 | 7.59 × 101 | 1.59 × 102 | 1.30 × 10−13 | 1.11 × 10−13 | 1.05 × 10−13 | 1.06 × 10−13 | 1.31 × 10−13 | 7.60 × 10−4 | 1.51 × 101 | 9.53 × 101 | 1.67 × 102 |
| F 12 | 2.50 × 104 | 4.39 × 103 | 5.55 × 103 | 2.55 × 103 | 8.74 × 104 | 4.13 × 106 | 5.75 × 106 | 7.14 × 106 | 8.45 × 106 | 1.71 × 105 | 9.83 × 103 | 7.41 × 103 | 1.18 × 104 | 2.03 × 106 | 4.62 × 106 | 6.16 × 106 | 7.49 × 106 | 8.77 × 106 |
| F 13 | 5.69 × 102 | 5.35 × 102 | 5.91 × 102 | 5.15 × 102 | 4.69 × 102 | 4.85 × 102 | 1.08 × 105 | 5.63 × 108 | 1.67 × 1010 | 5.31 × 102 | 5.36 × 102 | 5.46 × 102 | 4.93 × 102 | 4.50 × 102 | 4.80 × 102 | 3.72 × 105 | 1.46 × 109 | 2.23 × 1010 |
| F 14 | 7.79 × 107 | 6.96 × 107 | 7.62 × 107 | 5.17 × 107 | 7.69 × 107 | 5.73 × 109 | 3.85 × 1010 | 6.50 × 1010 | 8.96 × 1010 | 1.14 × 108 | 7.11 × 107 | 7.20 × 107 | 6.01 × 107 | 1.43 × 108 | 1.68 × 1010 | 4.63 × 1010 | 7.22 × 1010 | 9.64 × 1010 |
| F 15 | 1.04 × 104 | 1.05 × 104 | 1.05 × 104 | 1.04 × 104 | 1.03 × 104 | 1.06 × 104 | 1.10 × 104 | 1.16 × 104 | 1.22 × 104 | 1.04 × 104 | 1.03 × 104 | 1.04 × 104 | 1.03 × 104 | 1.03 × 104 | 1.06 × 104 | 1.11 × 104 | 1.17 × 104 | 1.23 × 104 |
| F 16 | 2.15 × 10−13 | 5.86 × 10−2 | 9.78 × 10−2 | 1.55 × 10−13 | 2.01 × 10−13 | 2.36 × 10−6 | 1.02 × 102 | 2.92 × 102 | 3.52 × 102 | 2.39 × 10−13 | 1.61 × 10−13 | 1.53 × 10−13 | 1.60 × 10−13 | 2.44 × 10−13 | 7.14 × 10−3 | 1.52 × 102 | 3.07 × 102 | 3.57 × 102 |
| F 17 | 1.53 × 106 | 5.71 × 104 | 5.65 × 104 | 6.57 × 104 | 4.44 × 106 | 8.75 × 106 | 1.28 × 107 | 1.65 × 107 | 1.98 × 107 | 4.81 × 106 | 1.72 × 105 | 1.03 × 105 | 7.07 × 105 | 6.20 × 106 | 1.02 × 107 | 1.37 × 107 | 1.72 × 107 | 2.06 × 107 |
| F 18 | 1.56 × 103 | 1.65 × 103 | 1.56 × 103 | 1.31 × 103 | 1.13 × 103 | 1.26 × 103 | 1.57 × 109 | 5.42 × 1010 | 1.76 × 1011 | 1.31 × 103 | 1.31 × 103 | 1.33 × 103 | 1.25 × 103 | 1.12 × 103 | 3.88 × 103 | 4.92 × 109 | 6.87 × 1010 | 1.94 × 1011 |
| F 19 | 1.33 × 107 | 8.92 × 106 | 8.14 × 106 | 1.02 × 107 | 1.43 × 107 | 1.88 × 107 | 2.25 × 107 | 2.75 × 107 | 3.21 × 107 | 1.45 × 107 | 1.09 × 107 | 1.02 × 107 | 1.21 × 107 | 1.53 × 107 | 1.94 × 107 | 2.47 × 107 | 2.87 × 107 | 3.36 × 107 |
| F 20 | 1.12 × 103 | 1.32 × 103 | 1.40 × 103 | 1.08 × 103 | 9.79 × 102 | 1.00 × 103 | 1.88 × 109 | 5.74 × 1010 | 1.82 × 1011 | 9.89 × 102 | 1.15 × 103 | 1.10 × 103 | 9.85 × 102 | 9.82 × 102 | 1.72 × 103 | 5.69 × 109 | 7.40 × 1010 | 2.03 × 1011 |
| Rank | 3.80 | 3.48 | 4.03 | 2.03 | 2.88 | 5.00 | 6.90 | 7.95 | 8.95 | 3.90 | 2.90 | 2.50 | 2.05 | 3.95 | 5.70 | 7.00 | 8.00 | 9.00 |
| F | NP = 500 | NP = 600 | ||||||||||||||||
| ϕ = 0.1 | ϕ = 0.2 | ϕ = 0.3 | ϕ = 0.4 | ϕ = 0.5 | ϕ = 0.6 | ϕ = 0.7 | ϕ = 0.8 | ϕ = 0.9 | ϕ = 0.1 | ϕ = 0.2 | ϕ = 0.3 | ϕ = 0.4 | ϕ = 0.5 | ϕ = 0.6 | ϕ = 0.7 | ϕ = 0.8 | ϕ = 0.9 | |
| F 1 | 8.02 × 10−22 | 0.00 × 100 | 0.00 × 100 | 0.00 × 100 | 6.91 × 10−21 | 4.51 × 100 | 8.52 × 106 | 4.72 × 108 | 3.65 × 109 | 2.72 × 10−19 | 2.86 × 10−26 | 0.00 × 100 | 5.33 × 10−26 | 9.43 × 10−16 | 1.90 × 102 | 1.57 × 107 | 6.02 × 108 | 4.18 × 109 |
| F 2 | 8.73 × 103 | 6.81 × 102 | 7.34 × 102 | 5.74 × 102 | 1.01 × 104 | 1.06 × 104 | 1.11 × 104 | 1.16 × 104 | 1.23 × 104 | 9.87 × 103 | 6.03 × 102 | 6.59 × 102 | 4.15 × 103 | 1.02 × 104 | 1.06 × 104 | 1.11 × 104 | 1.17 × 104 | 1.23 × 104 |
| F 3 | 3.92 × 10−14 | 2.96 × 10−14 | 2.90 × 10−14 | 3.12 × 10−14 | 8.55 × 10−14 | 1.24 × 10−2 | 9.15 × 100 | 1.58 × 101 | 1.80 × 101 | 8.00 × 10−13 | 2.97 × 10−14 | 2.90 × 10−14 | 3.16 × 10−14 | 6.07 × 10−11 | 1.06 × 10−1 | 1.02 × 101 | 1.61 × 101 | 1.82 × 101 |
| F 4 | 3.35 × 1011 | 3.11 × 1011 | 2.86 × 1011 | 2.49 × 1011 | 2.16 × 1011 | 3.21 × 1011 | 6.31 × 1011 | 1.05 × 1013 | 1.22 × 1014 | 4.16 × 1011 | 3.83 × 1011 | 3.30 × 1011 | 2.84 × 1011 | 2.67 × 1011 | 3.79 × 1011 | 7.59 × 1011 | 1.73 × 1013 | 1.39 × 1014 |
| F 5 | 2.80 × 108 | 2.76 × 108 | 2.78 × 108 | 2.77 × 108 | 2.77 × 108 | 2.90 × 108 | 2.95 × 108 | 3.06 × 108 | 3.20 × 108 | 2.80 × 108 | 2.79 × 108 | 2.76 × 108 | 2.79 × 108 | 2.84 × 108 | 2.87 × 108 | 2.99 × 108 | 3.03 × 108 | 3.18 × 108 |
| F 6 | 4.00 × 10−9 | 4.00 × 10−9 | 4.00 × 10−9 | 4.00 × 10−9 | 4.00 × 10−9 | 1.69 × 10−2 | 9.72 × 100 | 1.66 × 101 | 1.88 × 101 | 4.09 × 10−9 | 3.88 × 10−9 | 3.88 × 10−9 | 4.00 × 10−9 | 6.07 × 10−9 | 1.45 × 10−1 | 1.08 × 101 | 1.68 × 101 | 1.89 × 101 |
| F 7 | 2.61 × 101 | 1.21 × 100 | 7.81 × 10−1 | 4.75 × 10−1 | 3.43 × 101 | 3.08 × 104 | 4.80 × 105 | 3.01 × 107 | 2.65 × 109 | 2.63 × 102 | 1.79 × 101 | 1.21 × 101 | 8.88 × 100 | 3.52 × 102 | 7.34 × 104 | 6.46 × 105 | 1.15 × 108 | 3.95 × 109 |
| F 8 | 2.74 × 107 | 1.73 × 107 | 1.16 × 107 | 9.50 × 106 | 2.05 × 107 | 3.79 × 107 | 4.47 × 107 | 4.63 × 107 | 4.68 × 107 | 3.08 × 107 | 2.23 × 107 | 1.74 × 107 | 1.59 × 107 | 2.52 × 107 | 3.96 × 107 | 4.50 × 107 | 4.64 × 107 | 4.69 × 107 |
| F 9 | 3.94 × 107 | 2.71 × 107 | 2.62 × 107 | 2.33 × 107 | 4.11 × 107 | 4.17 × 109 | 2.23 × 1010 | 4.01 × 1010 | 5.87 × 1010 | 4.84 × 107 | 3.01 × 107 | 2.85 × 107 | 2.62 × 107 | 5.78 × 107 | 6.49 × 109 | 2.49 × 1010 | 4.26 × 1010 | 6.14 × 1010 |
| F 10 | 1.00 × 104 | 1.35 × 103 | 7.94 × 102 | 9.73 × 103 | 1.02 × 104 | 1.06 × 104 | 1.11 × 104 | 1.17 × 104 | 1.23 × 104 | 1.02 × 104 | 9.45 × 103 | 6.43 × 103 | 9.99 × 103 | 1.03 × 104 | 1.06 × 104 | 1.11 × 104 | 1.17 × 104 | 1.24 × 104 |
| F 11 | 1.67 × 10−13 | 1.11 × 10−13 | 1.04 × 10−13 | 1.10 × 10−13 | 5.51 × 10−13 | 2.59 × 10−2 | 1.94 × 101 | 1.08 × 102 | 1.72 × 102 | 6.69 × 10−12 | 1.12 × 10−13 | 1.05 × 10−13 | 1.13 × 10−13 | 4.00 × 10−10 | 1.90 × 10−1 | 2.66 × 101 | 1.17 × 102 | 1.75 × 102 |
| F 12 | 1.54 × 106 | 2.34 × 104 | 1.47 × 104 | 4.12 × 104 | 3.02 × 106 | 4.87 × 106 | 6.37 × 106 | 7.75 × 106 | 9.01 × 106 | 2.65 × 106 | 4.99 × 104 | 2.82 × 104 | 1.18 × 105 | 3.37 × 106 | 5.07 × 106 | 6.50 × 106 | 7.83 × 106 | 9.11 × 106 |
| F 13 | 5.27 × 102 | 4.92 × 102 | 4.67 × 102 | 4.65 × 102 | 4.69 × 102 | 4.80 × 102 | 1.08 × 106 | 2.45 × 109 | 2.62 × 1010 | 4.82 × 102 | 5.44 × 102 | 5.20 × 102 | 4.56 × 102 | 4.42 × 102 | 5.63 × 102 | 3.15 × 106 | 3.37 × 109 | 2.90 × 1010 |
| F 14 | 1.66 × 108 | 8.14 × 107 | 7.67 × 107 | 7.21 × 107 | 3.28 × 108 | 2.36 × 1010 | 5.14 × 1010 | 7.78 × 1010 | 1.30 × 1011 | 2.61 × 108 | 9.02 × 107 | 8.54 × 107 | 8.71 × 107 | 8.67 × 108 | 2.77 × 1010 | 5.44 × 1010 | 7.81 × 1010 | 1.03 × 1011 |
| F 15 | 1.03 × 104 | 1.03 × 104 | 1.03 × 104 | 1.03 × 104 | 1.03 × 104 | 1.07 × 104 | 1.11 × 104 | 1.17 × 104 | 1.24 × 104 | 1.03 × 104 | 1.03 × 104 | 1.03 × 104 | 1.03 × 104 | 1.03 × 104 | 1.07 × 104 | 1.12 × 104 | 1.17 × 104 | 1.24 × 104 |
| F 16 | 2.83 × 10−13 | 1.65 × 10−13 | 1.55 × 10−13 | 1.66 × 10−13 | 1.34 × 10−12 | 2.74 × 10−1 | 1.83 × 102 | 3.16 × 102 | 3.61 × 102 | 1.54 × 10−11 | 1.74 × 10−13 | 1.58 × 10−13 | 1.80 × 10−13 | 1.18 × 10−9 | 2.41 × 100 | 2.05 × 102 | 3.22 × 102 | 3.63 × 102 |
| F 17 | 5.98 × 106 | 7.91 × 105 | 2.80 × 105 | 3.07 × 106 | 7.05 × 106 | 1.07 × 107 | 1.43 × 107 | 1.76 × 107 | 2.10 × 107 | 6.68 × 106 | 2.56 × 106 | 1.14 × 106 | 4.24 × 106 | 7.51 × 106 | 1.11 × 107 | 1.44 × 107 | 1.81 × 107 | 2.12 × 107 |
| F 18 | 1.18 × 103 | 1.26 × 103 | 1.18 × 103 | 1.13 × 103 | 1.00 × 103 | 1.10 × 105 | 8.30 × 109 | 7.98 × 1010 | 2.08 × 1011 | 1.08 × 103 | 1.22 × 103 | 1.22 × 103 | 1.05 × 103 | 9.59 × 102 | 1.41 × 106 | 1.12 × 1010 | 8.67 × 1010 | 2.19 × 1011 |
| F 19 | 1.55 × 107 | 1.23 × 107 | 1.15 × 107 | 1.31 × 107 | 1.64 × 107 | 2.02 × 107 | 2.50 × 107 | 2.92 × 107 | 3.32 × 107 | 1.62 × 107 | 1.31 × 107 | 1.23 × 107 | 1.40 × 107 | 1.72 × 107 | 2.06 × 107 | 2.48 × 107 | 2.97 × 107 | 3.49 × 107 |
| F 20 | 9.94 × 102 | 1.02 × 103 | 1.06 × 103 | 9.70 × 102 | 9.78 × 102 | 1.10 × 10 | 9.17 × 109 | 8.52 × 1010 | 2.20 × 1011 | 9.91 × 102 | 9.85 × 102 | 9.94 × 102 | 9.77 × 102 | 9.86 × 102 | 1.53 × 106 | 1.28 × 1010 | 9.33 × 1010 | 2.30 × 1011 |
| Rank | 4.25 | 2.75 | 2.00 | 2.00 | 4.15 | 5.85 | 7.00 | 8.00 | 9.00 | 4.10 | 2.65 | 1.85 | 2.30 | 4.20 | 5.90 | 7.00 | 8.00 | 9.00 |
Table 2. Fitness comparison between DGCELSO and the compared algorithms on the 1000-D CEC’2010 problems with 3 × 106 fitness evaluations.
| F | Quality | DGCELSO | TPLSO | SPLSO | LLSO | CSO | SLPSO | DECC-GDG | DECC-DG2 | DECC-RDG | DECC-RDG2 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| F 1 | Median | 0.00 × 100 | 1.98 × 10−18 | 7.70 × 10−20 | 2.97 × 10−22 | 4.64 × 10−12 | 7.65 × 10−18 | 6.53 × 100 | 1.95 × 10−1 | 2.60 × 10−3 | 1.05 × 10−3 |
| Mean | 0.00 × 100 | 1.93 × 10−18 | 7.73 × 10−20 | 3.13 × 10−22 | 4.75 × 10−12 | 7.73 × 10−18 | 6.54 × 100 | 7.34 × 10−1 | 6.42 × 100 | 8.08 × 10−3 | |
| Std | 0.00 × 100 | 3.04 × 10−19 | 6.95 × 10−21 | 6.93 × 10−23 | 7.77 × 10−13 | 8.84 × 10−19 | 9.35 × 10−1 | 1.61 × 100 | 3.41 × 101 | 3.28 × 10−2 | |
| p-value | - | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | |
| F 2 | Median | 8.85 × 102 | 1.13 × 103 | 4.45 × 102 | 9.71 × 102 | 7.52 × 103 | 1.94 × 103 | 1.40 × 103 | 3.00 × 103 | 2.98 × 103 | 2.99 × 103 |
| Mean | 8.88 × 102 | 1.11 × 103 | 4.45 × 102 | 9.78 × 102 | 7.48 × 103 | 1.93 × 103 | 1.40 × 103 | 3.00 × 103 | 2.98 × 103 | 3.00 × 103 | |
| Std | 4.13 × 101 | 8.28 × 101 | 1.63 × 101 | 5.17 × 101 | 2.60 × 102 | 8.05 × 101 | 2.67 × 101 | 1.34 × 102 | 1.16 × 102 | 1.35 × 102 | |
| p-value | - | 4.32 × 10−8+ | 4.32 × 10−8− | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | |
| F 3 | Median | 3.24 × 10−14 | 1.44 × 100 | 2.56 × 10−13 | 2.89 × 10−14 | 2.56 × 10−9 | 1.88 × 100 | 1.12 × 101 | 1.08 × 101 | 1.12 × 101 | 1.11 × 101 |
| Mean | 3.18 × 10−14 | 1.45 × 100 | 2.52 × 10−13 | 2.76 × 10−14 | 2.57 × 10−9 | 1.84 × 100 | 1.11 × 101 | 1.09 × 101 | 1.11 × 101 | 1.10 × 101 | |
| Std | 1.32 × 10−15 | 1.34 × 10−1 | 1.86 × 10−14 | 2.16 × 10−15 | 1.82 × 10−10 | 2.62 × 10−1 | 5.69 × 10−1 | 6.40 × 10−1 | 6.46 × 10−1 | 6.88 × 10−1 | |
| p-value | - | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8− | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | |
| F 4 | Median | 1.58 × 1011 | 2.77 × 1011 | 4.36 × 1011 | 4.48 × 1011 | 6.92 × 1011 | 2.68 × 1011 | 1.37 × 1014 | 1.44 × 1012 | 1.39 × 1012 | 1.37 × 1012 |
| Mean | 1.60 × 1011 | 2.89 × 1011 | 4.30 × 1011 | 4.54 × 1011 | 6.87 × 1011 | 2.83 × 1011 | 1.38 × 1014 | 1.69 × 1012 | 1.49 × 1012 | 1.44 × 1012 | |
| Std | 3.72 × 1010 | 9.22 × 1010 | 8.17 × 1010 | 1.29 × 1011 | 1.76 × 1011 | 8.77 × 1010 | 2.68 × 1013 | 6.16 × 1011 | 6.33 × 1011 | 5.35 × 1011 | |
| p-value | - | 1.00 × 100= | 3.49 × 10−3+ | 4.32 × 10−8+ | 1.02 × 10−3+ | 3.19 × 10−7+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 3.19 × 10−7+ | |
| F 5 | Median | 2.82 × 108 | 1.63 × 107 | 5.97 × 106 | 1.09 × 107 | 2.00 × 106 | 2.89 × 107 | 3.84 × 108 | 1.72 × 108 | 1.75 × 108 | 1.72 × 108 |
| Mean | 2.80 × 108 | 1.59 × 107 | 6.30 × 106 | 1.16 × 107 | 2.46 × 106 | 3.04 × 107 | 3.82 × 108 | 1.75 × 108 | 1.71 × 108 | 1.73 × 108 | |
| Std | 9.11 × 106 | 4.51 × 106 | 1.73 × 106 | 2.93 × 106 | 1.33 × 106 | 8.42 × 106 | 1.54 × 107 | 1.84 × 107 | 1.84 × 107 | 1.50 × 107 | |
| p-value | - | 4.32 × 10−8− | 4.32 × 10−8− | 4.32 × 10−8− | 4.32 × 10−8− | 4.32 × 10−8− | 4.32 × 10−8+ | 4.32 × 10−8− | 4.32 × 10−8− | 4.32 × 10−8− | |
| F 6 | Median | 4.00 × 10−9 | 2.08 × 100 | 1.00 × 10−8 | 4.00 × 10−9 | 8.18 × 10−7 | 2.14 × 101 | 3.51 × 105 | 8.81 × 100 | 1.07 × 101 | 1.06 × 101 |
| Mean | 4.00 × 10−9 | 2.20 × 100 | 9.44 × 10−9 | 4.00 × 10−9 | 8.16 × 10−7 | 1.95 × 101 | 3.58 × 105 | 8.90 × 100 | 1.05 × 101 | 1.05 × 101 | |
| Std | 3.73 × 10−15 | 3.74 × 10−1 | 1.18 × 10−9 | 8.27 × 10−25 | 2.57 × 10−8 | 4.13 × 100 | 4.27 × 104 | 6.50 × 10−1 | 7.02 × 10−1 | 6.84 × 10−1 | |
| p-value | - | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8− | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | |
| F 7 | Median | 1.89 × 10−5 | 9.21 × 102 | 4.51 × 102 | 6.58 × 100 | 2.13 × 104 | 6.26 × 104 | 2.98 × 1010 | 1.80 × 103 | 4.86 × 101 | 5.18 × 101 |
| Mean | 2.15 × 10−5 | 5.86 × 103 | 4.76 × 102 | 2.31 × 101 | 2.13 × 104 | 6.49 × 104 | 3.10 × 1010 | 1.98 × 103 | 6.40 × 101 | 5.87 × 101 | |
| Std | 1.55 × 10−5 | 1.03 × 104 | 1.29 × 102 | 7.45 × 101 | 4.53 × 103 | 3.81 × 104 | 4.19 × 109 | 9.49 × 102 | 4.67 × 101 | 3.71 × 101 | |
| p-value | - | 2.07 × 10−6+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | |
| F 8 | Median | 4.28 × 103 | 4.78 × 105 | 3.11 × 107 | 2.33 × 107 | 3.86 × 107 | 7.51 × 106 | 6.78 × 108 | 6.05 × 102 | 6.57 × 10−1 | 3.68 × 10−1 |
| Mean | 4.36 × 103 | 4.98 × 105 | 3.11 × 107 | 2.33 × 107 | 3.87 × 107 | 7.57 × 106 | 8.05 × 108 | 2.71 × 105 | 6.65 × 105 | 7.43 × 10−1 | |
| Std | 4.17 × 102 | 1.43 × 105 | 9.43 × 104 | 2.96 × 105 | 8.47 × 104 | 2.44 × 106 | 4.70 × 108 | 9.94 × 105 | 1.49 × 106 | 1.24 × 100 | |
| p-value | - | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | |
| F 9 | Median | 1.76 × 107 | 4.25 × 107 | 4.57 × 107 | 4.64 × 107 | 6.65 × 107 | 3.31 × 107 | 7.45 × 108 | 2.15 × 108 | 1.76 × 108 | 1.77 × 108 |
| Mean | 1.77 × 107 | 4.32 × 107 | 4.59 × 107 | 4.48 × 107 | 6.68 × 107 | 3.35 × 107 | 7.43 × 108 | 2.18 × 108 | 1.73 × 108 | 1.77 × 108 | |
| Std | 1.69 × 106 | 4.10 × 106 | 2.99 × 106 | 4.16 × 106 | 4.38 × 106 | 3.63 × 106 | 3.71 × 107 | 1.73 × 107 | 1.22 × 107 | 1.66 × 107 | |
| p-value | - | 4.32 × 10−8+ | 1.00 × 100= | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | |
| F 10 | Median | 9.18 × 102 | 9.67 × 102 | 7.99 × 103 | 8.87 × 102 | 9.58 × 103 | 2.59 × 103 | 4.16 × 103 | 6.73 × 103 | 6.32 × 103 | 6.27 × 103 |
| Mean | 9.23 × 102 | 9.84 × 102 | 7.99 × 103 | 8.88 × 102 | 9.58 × 103 | 2.79 × 103 | 4.15 × 103 | 6.72 × 103 | 6.32 × 103 | 6.27 × 103 | |
| Std | 3.82 × 101 | 8.52 × 101 | 1.25 × 102 | 3.50 × 101 | 6.49 × 101 | 1.28 × 103 | 5.70 × 101 | 9.30 × 101 | 1.12 × 102 | 1.09 × 102 | |
| p-value | - | 1.06 × 10−2+ | 4.32 × 10−8+ | 4.32 × 10−8− | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | |
| F 11 | Median | 1.11 × 10−13 | 3.48 × 100 | 3.02 × 10−12 | 2.90 × 100 | 3.98 × 10−8 | 2.37 × 101 | 5.58 × 100 | 5.39 × 100 | 4.76 × 100 | 4.86 × 100 |
| Mean | 1.10 × 10−13 | 3.50 × 100 | 3.05 × 10−12 | 5.51 × 100 | 3.98 × 10−8 | 2.42 × 101 | 5.53 × 100 | 5.59 × 100 | 4.75 × 100 | 4.86 × 100 | |
| Std | 2.36 × 10−15 | 1.30 × 100 | 2.84 × 10−13 | 5.43 × 100 | 3.19 × 10−9 | 3.03 × 100 | 5.49 × 10−1 | 6.12 × 10−1 | 4.79 × 10−1 | 3.88 × 10−1 | |
| p-value | - | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | |
| F 12 | Median | 2.55 × 103 | 1.23 × 104 | 9.39 × 104 | 1.24 × 104 | 4.25 × 105 | 1.30 × 104 | 2.87 × 105 | 3.99 × 104 | 2.22 × 104 | 2.21 × 104 |
| Mean | 2.55 × 103 | 1.23 × 104 | 9.53 × 104 | 1.23 × 104 | 4.37 × 105 | 1.54 × 104 | 2.87 × 105 | 3.94 × 104 | 2.21 × 104 | 2.19 × 104 | |
| Std | 2.13 × 102 | 1.30 × 103 | 6.64 × 103 | 1.32 × 103 | 6.49 × 104 | 7.06 × 103 | 1.10 × 104 | 2.17 × 103 | 1.28 × 103 | 1.45 × 103 | |
| p-value | - | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 3.19 × 10−7+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | |
| F 13 | Median | 4.64 × 102 | 7.29 × 102 | 4.50 × 102 | 7.82 × 102 | 4.68 × 102 | 8.87 × 102 | 1.39 × 103 | 1.65 × 103 | 8.25 × 102 | 8.17 × 102 |
| Mean | 5.15 × 102 | 7.54 × 102 | 5.48 × 102 | 7.91 × 102 | 5.53 × 102 | 9.81 × 102 | 1.42 × 103 | 1.77 × 103 | 8.24 × 102 | 8.40 × 102 | |
| Std | 1.49 × 102 | 1.07 × 102 | 1.66 × 102 | 2.37 × 102 | 1.75 × 102 | 3.86 × 102 | 3.40 × 102 | 5.06 × 102 | 1.35 × 102 | 1.98 × 102 | |
| p-value | - | 5.90 × 10−5+ | 3.49 × 10−3+ | 4.32 × 10−8+ | 2.73 × 10−1= | 2.85 × 10−2+ | 3.49 × 10−3+ | 5.90 × 10−5+ | 1.18 × 10−5+ | 2.07 × 10−6+ | |
| F 14 | Median | 5.10 × 107 | 1.29 × 108 | 1.61 × 108 | 1.23 × 108 | 2.46 × 108 | 8.61 × 107 | 8.59 × 108 | 8.71 × 108 | 7.19 × 108 | 7.18 × 108 |
| Mean | 5.17 × 107 | 1.32 × 108 | 1.60 × 108 | 1.22 × 108 | 2.46 × 108 | 8.55 × 107 | 8.64 × 108 | 8.60 × 108 | 7.23 × 108 | 7.25 × 108 | |
| Std | 2.76 × 106 | 9.33 × 106 | 8.42 × 106 | 6.41 × 106 | 1.29 × 107 | 7.57 × 106 | 3.30 × 107 | 4.17 × 107 | 3.65 × 107 | 3.44 × 107 | |
| p-value | - | 4.32 × 10−8+ | 2.07 × 10−6+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | |
| F 15 | Median | 1.04 × 104 | 1.04 × 104 | 9.92 × 103 | 8.30 × 102 | 1.01 × 104 | 1.12 × 104 | 6.75 × 103 | 6.73 × 103 | 6.55 × 103 | 6.56 × 103 |
| Mean | 1.04 × 104 | 8.88 × 103 | 9.91 × 103 | 8.97 × 102 | 1.01 × 104 | 1.12 × 104 | 6.76 × 103 | 6.73 × 103 | 6.55 × 103 | 6.55 × 103 | |
| Std | 6.65 × 101 | 3.41 × 103 | 6.31 × 101 | 3.47 × 102 | 6.48 × 101 | 1.19 × 102 | 8.82 × 101 | 7.27 × 101 | 8.86 × 101 | 8.39 × 101 | |
| p-value | - | 1.44 × 10−1= | 4.32 × 10−8− | 4.32 × 10−8− | 1.06 × 10−2− | 4.32 × 10−8+ | 4.32 × 10−8− | 4.32 × 10−8− | 4.32 × 10−8− | 4.32 × 10−8− | |
| F 16 | Median | 1.55 × 10−13 | 1.78 × 101 | 4.66 × 10−12 | 4.40 × 100 | 5.64 × 10−8 | 2.12 × 101 | 3.98 × 10−4 | 3.89 × 10−4 | 1.92 × 10−5 | 1.88 × 10−5 |
| Mean | 1.55 × 10−13 | 1.89 × 101 | 4.68 × 10−12 | 4.33 × 100 | 5.68 × 10−8 | 2.36 × 101 | 3.97 × 10−4 | 3.90 × 10−4 | 1.93 × 10−5 | 1.89 × 10−5 | |
| Std | 2.66 × 10−15 | 7.46 × 100 | 4.41 × 10−13 | 2.50 × 100 | 6.21 × 10−9 | 1.11 × 101 | 1.44 × 10−5 | 1.33 × 10−5 | 8.87 × 10−7 | 8.30 × 10−7 | |
| p-value | - | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | |
| F 17 | Median | 6.70 × 104 | 9.65 × 104 | 6.90 × 105 | 9.17 × 104 | 2.19 × 106 | 8.64 × 104 | 2.64 × 105 | 2.64 × 105 | 1.99 × 105 | 1.97 × 105 |
| Mean | 6.57 × 104 | 9.83 × 104 | 6.84 × 105 | 9.12 × 104 | 2.21 × 106 | 8.74 × 104 | 2.65 × 105 | 2.63 × 105 | 1.98 × 105 | 1.98 × 105 | |
| Std | 7.55 × 103 | 9.90 × 103 | 3.57 × 104 | 5.43 × 103 | 2.07 × 105 | 1.39 × 104 | 7.79 × 103 | 7.33 × 103 | 8.75 × 103 | 9.45 × 103 | |
| p-value | - | 4.32 × 10−8+ | 7.15 × 10−2+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | |
| F 18 | Median | 1.25 × 103 | 2.29 × 103 | 1.25 × 103 | 2.49 × 103 | 1.38 × 103 | 2.95 × 103 | 1.15 × 103 | 1.14 × 103 | 1.08 × 103 | 1.11 × 103 |
| Mean | 1.31 × 103 | 2.36 × 103 | 1.35 × 103 | 2.51 × 103 | 1.64 × 103 | 2.92 × 103 | 1.16 × 103 | 1.13 × 103 | 1.07 × 103 | 1.10 × 103 | |
| Std | 2.94 × 102 | 4.19 × 102 | 3.81 × 102 | 7.42 × 102 | 8.13 × 102 | 8.08 × 102 | 1.31 × 102 | 1.29 × 102 | 1.08 × 102 | 1.02 × 102 | |
| p-value | - | 5.90 × 10−5+ | 3.49 × 10−3+ | 2.61 × 10−4+ | 2.73 × 10−1= | 2.85 × 10−2+ | 3.19 × 10−7− | 3.19 × 10−7− | 3.19 × 10−7− | 4.32 × 10−8− | |
| F 19 | Median | 1.02 × 107 | 3.94 × 106 | 8.19 × 106 | 1.85 × 106 | 9.78 × 106 | 5.20 × 106 | 2.11 × 106 | 2.09 × 106 | 1.96 × 106 | 1.93 × 106 |
| Mean | 1.02 × 107 | 3.89 × 106 | 8.20 × 106 | 1.82 × 106 | 9.86 × 106 | 5.23 × 106 | 2.12 × 106 | 2.10 × 106 | 1.95 × 106 | 1.92 × 106 | |
| Std | 7.69 × 105 | 2.64 × 105 | 4.61 × 105 | 9.22 × 104 | 5.07 × 105 | 9.15 × 105 | 8.77 × 104 | 9.92 × 104 | 7.80 × 104 | 1.05 × 105 | |
| p-value | - | 4.32 × 10−8− | 4.32 × 10−8− | 4.32 × 10−8− | 4.32 × 10−8− | 4.32 × 10−8− | 4.32 × 10−8− | 4.32 × 10−8− | 4.32 × 10−8− | 4.32 × 10−8− | |
| F 20 | Median | 1.06 × 103 | 2.04 × 103 | 9.79 × 102 | 1.88 × 103 | 9.87 × 102 | 1.73 × 103 | 5.43 × 103 | 5.33 × 103 | 4.32 × 103 | 4.25 × 103 |
| Mean | 1.08 × 103 | 2.08 × 103 | 1.06 × 103 | 1.92 × 103 | 1.07 × 103 | 1.73 × 103 | 5.45 × 103 | 5.46 × 103 | 4.28 × 103 | 4.34 × 103 | |
| Std | 7.30 × 101 | 2.00 × 102 | 1.75 × 102 | 3.00 × 102 | 1.70 × 102 | 1.53 × 102 | 3.32 × 102 | 3.37 × 102 | 2.29 × 102 | 3.20 × 102 | |
| p-value | - | 4.32 × 10−8+ | 5.90 × 10−5− | 4.32 × 10−8+ | 5.90 × 10−5− | 1.18 × 10−5+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | |
| w/t/l | 16/2/2 | 14/1/5 | 15/0/5 | 15/2/3 | 18/0/2 | 17/0/3 | 16/0/4 | 16/0/4 | 16/0/4 | ||
| Rank | 2.75 | 4.80 | 4.65 | 3.70 | 6.25 | 6.05 | 8.20 | 7.10 | 5.85 | 5.65 | |
Table 3. Fitness comparison between DGCELSO and the compared algorithms on the 1000-D CEC’2013 problems with 3 × 106 fitness evaluations.
| F | Quality | DGCELSO | TPLSO | SPLSO | LLSO | CSO | SLPSO | DECC-GDG | DECC-DG2 | DECC-RDG | DECC-RDG2 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| F 1 | Median | 0.00 × 100 | 3.21 × 10−18 | 1.17 × 10−19 | 4.02 × 10−22 | 7.92 × 10−12 | 1.03 × 10−17 | 7.06 × 100 | 3.46 × 100 | 2.04 × 10−2 | 2.96 × 10−2 |
| Mean | 0.00 × 100 | 3.81 × 10−18 | 1.18 × 10−19 | 4.28 × 10−22 | 7.88 × 10−12 | 1.65 × 10−17 | 7.43 × 100 | 6.31 × 100 | 3.51 × 10−2 | 1.08 × 10−1 | |
| Std | 0.00 × 100 | 1.57 × 10−18 | 1.04 × 10−20 | 1.29 × 10−22 | 1.19 × 10−12 | 3.25 × 10−17 | 9.38 × 10−1 | 7.78 × 100 | 3.88 × 10−2 | 2.08 × 10−1 | |
| p-value | - | 4.32 × 10−8+ | 4.32 × 10−8+ | 9.63 × 10−7+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | |
| F 2 | Median | 8.61 × 102 | 1.30 × 103 | 9.64 × 102 | 1.14 × 103 | 8.58 × 103 | 2.09 × 103 | 1.43 × 103 | 7.81 × 103 | 7.81 × 103 | 7.69 × 103 |
| Mean | 8.77 × 102 | 1.34 × 103 | 1.06 × 103 | 1.14 × 103 | 8.58 × 103 | 2.10 × 103 | 1.43 × 103 | 7.88 × 103 | 7.74 × 103 | 7.74 × 103 | |
| Std | 4.28 × 101 | 1.75 × 102 | 4.38 × 102 | 5.00 × 101 | 1.76 × 102 | 1.61 × 102 | 2.43 × 101 | 4.07 × 102 | 3.47 × 102 | 3.56 × 102 | |
| p-value | - | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | |
| F 3 | Median | 2.16 × 101 | 2.22 × 101 | 2.16 × 101 | 2.16 × 101 | 2.16 × 101 | 2.16 × 101 | 2.15 × 101 | 2.15 × 101 | 2.14 × 101 | 2.15 × 101 |
| Mean | 2.16 × 101 | 2.31 × 101 | 2.16 × 101 | 2.16 × 101 | 2.16 × 101 | 2.16 × 101 | 2.15 × 101 | 2.15 × 101 | 2.14 × 101 | 2.15 × 101 | |
| Std | 6.26 × 10−3 | 1.72 × 100 | 7.11 × 10−15 | 7.11 × 10−15 | 7.11 × 10−15 | 2.37 × 10−1 | 3.00 × 10−2 | 4.23 × 10−2 | 4.82 × 10−2 | 4.90 × 10−2 | |
| p-value | - | 3.49 × 10−3+ | 2.61 × 10−4− | 2.61 × 10−4− | 2.07 × 10−6− | 7.15 × 10−1= | 4.65 × 10−1= | 4.65 × 10−1= | 7.15 × 10−1= | 7.15 × 10−1= | |
| F 4 | Median | 2.55 × 109 | 4.23 × 109 | 9.14 × 109 | 6.40 × 109 | 1.22 × 1010 | 4.28 × 109 | 4.15 × 1011 | 8.12 × 1010 | 7.45 × 1010 | 6.10 × 1010 |
| Mean | 2.52 × 109 | 4.27 × 109 | 9.41 × 109 | 6.55 × 109 | 1.35 × 1010 | 4.33 × 109 | 4.20 × 1011 | 7.79 × 1010 | 7.16 × 1010 | 6.78 × 1010 | |
| Std | 6.55 × 108 | 1.03 × 109 | 1.86 × 109 | 1.40 × 109 | 3.12 × 109 | 9.91 × 108 | 7.75 × 1010 | 2.19 × 1010 | 1.92 × 1010 | 2.32 × 1010 | |
| p-value | - | 1.44 × 10−1= | 1.02 × 10−3+ | 1.06 × 10−2+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | |
| F 5 | Median | 7.83 × 105 | 6.80 × 105 | 6.43 × 105 | 6.51 × 105 | 5.90 × 105 | 8.89 × 105 | 8.62 × 106 | 6.10 × 106 | 5.81 × 106 | 5.72 × 106 |
| Mean | 7.91 × 105 | 6.79 × 105 | 6.30 × 105 | 6.56 × 105 | 5.97 × 105 | 8.90 × 105 | 8.66 × 106 | 6.06 × 106 | 5.72 × 106 | 5.67 × 106 | |
| Std | 1.03 × 105 | 1.10 × 105 | 1.00 × 105 | 1.01 × 105 | 1.03 × 105 | 1.31 × 105 | 2.80 × 105 | 2.40 × 105 | 4.24 × 105 | 3.61 × 105 | |
| p-value | - | 1.06 × 10−2− | 1.18 × 10−5− | 3.49 × 10−3− | 2.61 × 10−4− | 2.85 × 10−2+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | |
| F 6 | Median | 1.06 × 106 | 1.17 × 106 | 1.06 × 106 | 1.06 × 106 | 1.06 × 106 | 1.06 × 106 | 1.06 × 106 | 1.06 × 106 | 1.06 × 106 | 1.06 × 106 |
| Mean | 1.06 × 106 | 1.22 × 106 | 1.06 × 106 | 1.06 × 106 | 1.06 × 106 | 1.06 × 106 | 1.06 × 106 | 1.06 × 106 | 1.06 × 106 | 1.06 × 106 | |
| Std | 1.27 × 103 | 1.61 × 105 | 0.00 × 100 | 0.00 × 100 | 0.00 × 100 | 0.00 × 100 | 3.00 × 103 | 0.00 × 100 | 0.00 × 100 | 0.00 × 100 | |
| p-value | - | 5.90 × 10−5+ | 4.65 × 10−1= | 1.18 × 10−5− | 2.73 × 10−1= | 2.61 × 10−4− | 4.65 × 10−1= | 1.44 × 10−1= | 2.73 × 10−1= | 2.85 × 10−2− | |
| F 7 | Median | 7.93 × 104 | 1.22 × 106 | 5.42 × 106 | 1.70 × 106 | 5.45 × 106 | 1.47 × 106 | 7.45 × 108 | 7.36 × 107 | 2.84 × 108 | 8.36 × 107 |
| Mean | 9.71 × 104 | 1.24 × 106 | 5.50 × 106 | 1.87 × 106 | 5.81 × 106 | 1.58 × 106 | 7.67 × 108 | 7.79 × 107 | 3.65 × 108 | 8.25 × 107 | |
| Std | 5.54 × 104 | 5.05 × 105 | 2.23 × 106 | 1.08 × 106 | 3.04 × 106 | 7.53 × 105 | 1.32 × 108 | 2.73 × 107 | 2.63 × 108 | 2.06 × 107 | |
| p-value | - | 1.18 × 10−5+ | 1.18 × 10−5+ | 4.32 × 10−8+ | 2.61 × 10−4+ | 3.19 × 10−7+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | |
| F 8 | Median | 5.58 × 1013 | 7.07 × 1013 | 1.56 × 1014 | 1.37 × 1014 | 2.43 × 1014 | 9.65 × 1013 | 1.70 × 1016 | 9.35 × 1015 | 6.96 × 1015 | 5.83 × 1015 |
| Mean | 6.15 × 1013 | 7.28 × 1013 | 1.55 × 1014 | 1.36 × 1014 | 2.46 × 1014 | 1.09 × 1014 | 1.65 × 1016 | 9.32 × 1015 | 6.95 × 1015 | 6.38 × 1015 | |
| Std | 2.08 × 1013 | 4.02 × 1013 | 2.92 × 1013 | 3.39 × 1013 | 8.71 × 1013 | 5.44 × 1013 | 4.49 × 1015 | 2.71 × 1015 | 1.64 × 1015 | 1.99 × 1015 | |
| p-value | - | 5.90 × 10−5+ | 1.44 × 10−1= | 2.85 × 10−2− | 1.18 × 10−5− | 3.49 × 10−3− | 4.32 × 10−8− | 4.32 × 10−8− | 4.32 × 10−8− | 4.32 × 10−8− | |
| F 9 | Median | 4.67 × 107 | 4.52 × 107 | 7.23 × 107 | 1.11 × 108 | 5.94 × 107 | 8.05 × 107 | 5.62 × 108 | 5.55 × 108 | 5.40 × 108 | 5.32 × 108 |
| Mean | 4.47 × 107 | 4.28 × 107 | 8.08 × 107 | 1.29 × 108 | 6.08 × 107 | 7.99 × 107 | 5.61 × 108 | 5.59 × 108 | 5.38 × 108 | 5.31 × 108 | |
| Std | 1.37 × 107 | 7.49 × 106 | 2.21 × 107 | 8.85 × 107 | 1.29 × 107 | 1.18 × 107 | 3.24 × 107 | 2.93 × 107 | 3.03 × 107 | 2.33 × 107 | |
| p-value | - | 4.65 × 10−1= | 3.19 × 10−7+ | 4.32 × 10−8+ | 2.61 × 10−4+ | 3.19 × 10−7+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | |
| F 10 | Median | 9.40 × 107 | 9.44 × 107 | 9.40 × 107 | 9.41 × 107 | 9.41 × 107 | 9.37 × 107 | 9.46 × 107 | 9.46 × 107 | 9.46 × 107 | 9.45 × 107 |
| Mean | 9.40 × 107 | 9.52 × 107 | 9.39 × 107 | 9.41 × 107 | 9.40 × 107 | 9.27 × 107 | 9.46 × 107 | 9.46 × 107 | 9.46 × 107 | 9.45 × 107 | |
| Std | 2.95 × 105 | 1.70 × 106 | 2.18 × 105 | 2.23 × 105 | 2.14 × 105 | 1.99 × 106 | 2.57 × 105 | 2.51 × 105 | 1.98 × 105 | 2.78 × 105 | |
| p-value | - | 1.02 × 10−3+ | 6.79 × 10−2= | 1.18 × 10−5+ | 2.07 × 10−6+ | 1.02 × 10−3− | 3.19 × 10−7+ | 3.19 × 10−7+ | 4.32 × 10−8+ | 2.07 × 10−6+ | |
| F 11 | Median | 6.44 × 107 | 1.88 × 108 | 9.22 × 1011 | 9.23 × 1011 | 9.26 × 1011 | 9.38 × 1011 | 6.80 × 108 | 1.99 × 1010 | 5.75 × 108 | 1.33 × 1010 |
| Mean | 7.14 × 107 | 1.83 × 108 | 9.27 × 1011 | 9.28 × 1011 | 9.29 × 1011 | 9.34 × 1011 | 6.84 × 108 | 2.52 × 1010 | 5.68 × 108 | 1.49 × 1010 | |
| Std | 2.45 × 107 | 5.62 × 107 | 9.35 × 109 | 9.68 × 109 | 9.63 × 109 | 8.96 × 109 | 1.09 × 108 | 1.38 × 1010 | 9.23 × 107 | 7.57 × 109 | |
| p-value | - | 3.49 × 10−3+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 2.61 × 10−4+ | 4.32 × 10−8+ | 3.49 × 10−3+ | 4.32 × 10−8+ | |
| F 12 | Median | 1.12 × 103 | 2.19 × 103 | 1.03 × 103 | 1.80 × 103 | 1.04 × 103 | 1.76 × 103 | 5.54 × 103 | 5.42 × 103 | 4.28 × 103 | 4.25 × 103 |
| Mean | 1.14 × 103 | 2.13 × 103 | 1.05 × 103 | 1.82 × 103 | 1.08 × 103 | 1.77 × 103 | 5.51 × 103 | 5.59 × 103 | 4.34 × 103 | 4.30 × 103 | |
| Std | 9.96 × 101 | 2.72 × 102 | 5.45 × 101 | 1.52 × 102 | 7.45 × 101 | 1.69 × 102 | 3.67 × 102 | 7.64 × 102 | 3.24 × 102 | 2.48 × 102 | |
| p-value | - | 4.32 × 10−8+ | 2.85 × 10−2− | 4.32 × 10−8+ | 1.00 × 100= | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | |
| F 13 | Median | 4.89 × 107 | 2.01 × 108 | 1.20 × 109 | 2.98 × 108 | 7.08 × 108 | 4.01 × 108 | 1.56 × 109 | 1.43 × 109 | 2.87 × 109 | 7.08 × 108 |
| Mean | 6.40 × 107 | 2.21 × 108 | 1.20 × 109 | 3.42 × 108 | 7.48 × 108 | 5.20 × 108 | 1.50 × 109 | 1.47 × 109 | 2.98 × 109 | 7.17 × 108 | |
| Std | 5.35 × 107 | 1.24 × 108 | 4.91 × 108 | 1.42 × 108 | 2.85 × 108 | 4.85 × 108 | 3.35 × 108 | 3.46 × 108 | 7.23 × 108 | 1.57 × 108 | |
| p-value | - | 3.19 × 10−7+ | 1.00 × 100= | 4.32 × 10−8+ | 1.02 × 10−3+ | 4.32 × 10−8+ | 1.44 × 10−1= | 2.73 × 10−1= | 4.32 × 10−8+ | 3.49 × 10−3+ | |
| F 14 | Median | 1.77 × 107 | 5.86 × 107 | 5.19 × 109 | 8.06 × 107 | 2.90 × 109 | 1.51 × 108 | 4.45 × 109 | 4.54 × 109 | 2.23 × 109 | 2.50 × 109 |
| Mean | 1.78 × 107 | 6.05 × 107 | 8.31 × 109 | 1.59 × 108 | 3.67 × 109 | 2.51 × 108 | 5.28 × 109 | 4.58 × 109 | 2.78 × 109 | 3.33 × 109 | |
| Std | 2.62 × 106 | 1.34 × 107 | 6.56 × 109 | 2.27 × 108 | 3.32 × 109 | 2.25 × 108 | 3.84 × 109 | 1.83 × 109 | 1.85 × 109 | 2.09 × 109 | |
| p-value | - | 4.32 × 10−8+ | 4.32 × 10−8+ | 4.32 × 10−8+ | 2.07 × 10−6+ | 3.19 × 10−7+ | 2.85 × 10−2+ | 6.79 × 10−2= | 1.00 × 100= | 6.79 × 10−2= | |
| F 15 | Median | 3.53 × 107 | 1.29 × 107 | 4.13 × 107 | 4.58 × 106 | 7.60 × 107 | 5.99 × 107 | 8.60 × 106 | 8.82 × 106 | 7.75 × 106 | 8.04 × 106 |
| Mean | 3.54 × 107 | 1.26 × 107 | 4.13 × 107 | 4.59 × 106 | 7.61 × 107 | 6.03 × 107 | 8.98 × 106 | 8.95 × 106 | 7.96 × 106 | 8.07 × 106 | |
| Std | 7.60 × 106 | 1.36 × 106 | 3.05 × 106 | 3.22 × 105 | 6.14 × 106 | 6.54 × 106 | 8.90 × 105 | 9.38 × 105 | 9.30 × 105 | 9.78 × 105 | |
| p-value | - | 4.32 × 10−8− | 4.32 × 10−8+ | 4.32 × 10−8+ | 1.18 × 10−5+ | 4.65 × 10−1= | 4.32 × 10−8− | 4.32 × 10−8− | 4.32 × 10−8− | 4.32 × 10−8− | |
| w/t/l | 11/2/2 | 8/4/3 | 12/0/3 | 11/2/2 | 11/2/2 | 11/3/1 | 10/4/1 | 11/3/1 | 11/2/2 | ||
| Rank | 2.73 | 4.47 | 4.87 | 4.13 | 5.80 | 5.00 | 8.13 | 7.40 | 6.47 | 6.00 | |
Table 4. Comparison results among different versions of DGCELSO on the 1000-D CEC’2010 problems.
| F | DGCELSO | DGCELSO-1 | DGCELSO-1000 | DGCELSO-SPL |
|---|---|---|---|---|
| F 1 | 0.00 × 100 | 3.85 × 10−26 | 0.00 × 100 | 1.81 × 103 |
| F 2 | 8.88 × 102 | 1.98 × 103 | 8.70 × 102 | 1.54 × 103 |
| F 3 | 3.18 × 10−14 | 1.08 × 100 | 3.16 × 10−14 | 1.97 × 10−2 |
| F 4 | 1.60 × 1011 | 2.15 × 1011 | 1.56 × 1011 | 9.44 × 1011 |
| F 5 | 2.80 × 108 | 6.93 × 107 | 2.79 × 108 | 1.07 × 107 |
| F 6 | 4.00 × 10−9 | 1.96 × 101 | 4.00 × 10−9 | 3.74 × 10−1 |
| F 7 | 2.15 × 10−5 | 4.01 × 103 | 2.17 × 10−5 | 6.15 × 106 |
| F 8 | 4.36 × 103 | 6.84 × 105 | 4.26 × 103 | 3.27 × 107 |
| F 9 | 1.77 × 107 | 3.28 × 107 | 1.77 × 107 | 1.05 × 108 |
| F 10 | 9.23 × 102 | 2.02 × 103 | 9.34 × 102 | 3.63 × 103 |
| F 11 | 1.10 × 10−13 | 2.08 × 101 | 1.10 × 10−13 | 6.66 × 10−1 |
| F 12 | 2.55 × 103 | 4.60 × 103 | 2.63 × 103 | 1.99 × 105 |
| F 13 | 5.15 × 102 | 7.69 × 102 | 4.87 × 102 | 1.42 × 103 |
| F 14 | 5.17 × 107 | 9.78 × 107 | 5.13 × 107 | 3.42 × 108 |
| F 15 | 1.04 × 104 | 2.04 × 103 | 1.05 × 104 | 1.00 × 104 |
| F 16 | 1.55 × 10−13 | 2.92 × 101 | 2.93 × 10−2 | 5.72 × 10−1 |
| F 17 | 6.57 × 104 | 4.30 × 104 | 7.12 × 104 | 7.10 × 105 |
| F 18 | 1.31 × 103 | 2.30 × 103 | 1.33 × 103 | 2.38 × 104 |
| F 19 | 1.02 × 107 | 1.33 × 106 | 1.06 × 107 | 6.52 × 106 |
| F 20 | 1.08 × 103 | 1.98 × 103 | 1.08 × 103 | 2.11 × 104 |
| Rank | 1.80 | 2.90 | 1.90 | 3.40 |
Table 5. Comparison results between DGCELSO with the dynamic strategy for tp and the ones with different fixed settings of tp on the 1000-D CEC’2010 problems.
| F | tp = 0.1 | tp = 0.2 | tp = 0.3 | tp = 0.4 | tp = 0.5 | tp = 0.6 | tp = 0.7 | tp = 0.8 | tp = 0.9 | Dynamic |
|---|---|---|---|---|---|---|---|---|---|---|
| F 1 | 9.55 × 10−3 | 0.00 × 100 | 0.00 × 100 | 0.00 × 100 | 0.00 × 100 | 0.00 × 100 | 0.00 × 100 | 0.00 × 100 | 3.31 × 10−26 | 0.00 × 100 |
| F 2 | 2.33 × 103 | 1.38 × 103 | 1.05 × 103 | 8.21 × 102 | 6.71 × 102 | 1.03 × 103 | 9.26 × 103 | 9.83 × 103 | 1.00 × 104 | 8.88 × 102 |
| F 3 | 1.41 × 100 | 3.30 × 10−14 | 3.17 × 10−14 | 3.14 × 10−14 | 2.98 × 10−14 | 2.96 × 10−14 | 2.99 × 10−14 | 2.93 × 10−14 | 2.98 × 10−14 | 3.18 × 10−14 |
| F 4 | 6.60 × 1011 | 1.64 × 1011 | 1.80 × 1011 | 1.89 × 1011 | 2.01 × 1011 | 2.24 × 1011 | 2.28 × 1011 | 2.52 × 1011 | 2.53 × 1011 | 1.60 × 1011 |
| F 5 | 5.90 × 107 | 2.64 × 108 | 2.75 × 108 | 2.76 × 108 | 2.83 × 108 | 2.79 × 108 | 2.81 × 108 | 2.82 × 108 | 2.83 × 108 | 2.80 × 108 |
| F 6 | 1.99 × 101 | 2.00 × 101 | 1.98 × 101 | 4.00 × 10−9 | 4.00 × 10−9 | 4.00 × 10−9 | 4.00 × 10−9 | 3.88 × 10−9 | 4.00 × 10−9 | 4.00 × 10−9 |
| F 7 | 1.15 × 106 | 9.66 × 10−8 | 5.36 × 10−5 | 8.46 × 10−3 | 3.32 × 10−1 | 2.98 × 100 | 1.56 × 101 | 8.23 × 101 | 3.67 × 102 | 2.15 × 10−5 |
| F 8 | 4.15 × 107 | 1.10 × 103 | 7.82 × 103 | 1.19 × 105 | 2.07 × 106 | 6.99 × 106 | 1.07 × 107 | 1.36 × 107 | 1.57 × 107 | 4.36 × 103 |
| F 9 | 1.21 × 108 | 1.93 × 107 | 1.85 × 107 | 1.78 × 107 | 2.05 × 107 | 2.03 × 107 | 2.21 × 107 | 2.17 × 107 | 2.34 × 107 | 1.77 × 107 |
| F 10 | 2.44 × 103 | 1.53 × 103 | 1.06 × 103 | 2.19 × 103 | 9.48 × 103 | 9.80 × 103 | 1.01 × 104 | 1.02 × 104 | 1.02 × 104 | 9.23 × 102 |
| F 11 | 2.86 × 101 | 2.04 × 101 | 1.05 × 101 | 1.11 × 10−13 | 1.09 × 10−13 | 1.11 × 10−13 | 1.11 × 10−13 | 1.13 × 10−13 | 1.15 × 10−13 | 1.10 × 10−13 |
| F 12 | 1.68 × 105 | 1.12 × 103 | 2.48 × 103 | 7.33 × 103 | 2.81 × 104 | 1.16 × 105 | 5.74 × 105 | 1.53 × 106 | 2.08 × 106 | 2.55 × 103 |
| F 13 | 1.68 × 103 | 4.27 × 102 | 5.13 × 102 | 4.12 × 102 | 6.18 × 102 | 4.30 × 102 | 4.50 × 102 | 4.88 × 102 | 5.19 × 102 | 5.15 × 102 |
| F 14 | 3.74 × 108 | 5.88 × 107 | 5.39 × 107 | 5.68 × 107 | 5.82 × 107 | 6.54 × 107 | 6.97 × 107 | 8.12 × 107 | 8.92 × 107 | 5.17 × 107 |
| F 15 | 2.66 × 103 | 1.08 × 104 | 1.05 × 104 | 1.04 × 104 | 1.04 × 104 | 1.04 × 104 | 1.04 × 104 | 1.04 × 104 | 1.04 × 104 | 1.04 × 104 |
| F 16 | 7.57 × 101 | 5.55 × 100 | 1.62 × 10−13 | 1.64 × 10−13 | 1.68 × 10−13 | 1.76 × 10−13 | 1.79 × 10−13 | 1.87 × 10−13 | 1.94 × 10−13 | 1.55 × 10−13 |
| F 17 | 5.02 × 105 | 2.01 × 104 | 5.70 × 104 | 1.86 × 106 | 3.51 × 106 | 4.29 × 106 | 4.92 × 106 | 5.16 × 106 | 5.46 × 106 | 6.57 × 104 |
| F 18 | 4.17 × 103 | 1.45 × 103 | 1.35 × 103 | 1.45 × 103 | 1.09 × 103 | 1.43 × 103 | 1.16 × 103 | 1.12 × 103 | 1.16 × 103 | 1.31 × 103 |
| F 19 | 2.18 × 106 | 6.26 × 106 | 1.05 × 107 | 1.20 × 107 | 1.32 × 107 | 1.39 × 107 | 1.42 × 107 | 1.50 × 107 | 1.53 × 107 | 1.02 × 107 |
| F 20 | 3.09 × 103 | 1.30 × 103 | 1.19 × 103 | 1.10 × 103 | 1.06 × 103 | 1.03 × 103 | 1.02 × 103 | 9.94 × 102 | 9.86 × 102 | 1.08 × 103 |
| Rank | 7.75 | 4.98 | 4.53 | 4.23 | 4.85 | 5.35 | 5.90 | 6.28 | 7.73 | 3.43 |
Table 6. Comparison results between DGCELSO with the dynamic strategy for NDG and the ones with different fixed settings of NDG on the 1000-D CEC’2010 problems.
| F | NDG = 20 | NDG = 30 | NDG = 40 | NDG = 50 | NDG = 60 | NDG = 70 | NDG = 80 | NDG = 90 | NDG = 100 | Dynamic |
|---|---|---|---|---|---|---|---|---|---|---|
| F 1 | 0.00 × 100 | 0.00 × 100 | 0.00 × 100 | 0.00 × 100 | 0.00 × 100 | 0.00 × 100 | 0.00 × 100 | 0.00 × 100 | 0.00 × 100 | 0.00 × 100 |
| F 2 | 8.46 × 102 | 8.56 × 102 | 8.49 × 102 | 8.41 × 102 | 8.53 × 102 | 8.48 × 102 | 8.52 × 102 | 8.51 × 102 | 8.44 × 102 | 8.88 × 102 |
| F 3 | 3.18 × 10−14 | 3.18 × 10−14 | 3.19 × 10−14 | 3.22 × 10−14 | 3.21 × 10−14 | 3.24 × 10−14 | 3.19 × 10−14 | 3.21 × 10−14 | 3.19 × 10−14 | 3.18 × 10−14 |
| F 4 | 1.69 × 1011 | 1.69 × 1011 | 1.68 × 1011 | 1.65 × 1011 | 1.54 × 1011 | 1.56 × 1011 | 1.65 × 1011 | 1.58 × 1011 | 1.65 × 1011 | 1.60 × 1011 |
| F 5 | 2.78 × 108 | 2.79 × 108 | 2.80 × 108 | 2.78 × 108 | 2.78 × 108 | 2.78 × 108 | 2.79 × 108 | 2.78 × 108 | 2.79 × 108 | 2.80 × 108 |
| F 6 | 4.00 × 10−9 | 4.00 × 10−9 | 4.00 × 10−9 | 4.00 × 10−9 | 4.00 × 10−9 | 4.00 × 10−9 | 4.00 × 10−9 | 4.00 × 10−9 | 3.88 × 10−9 | 4.00 × 10−9 |
| F 7 | 2.36 × 10−5 | 3.22 × 10−5 | 2.64 × 10−5 | 2.58 × 10−5 | 2.18 × 10−5 | 1.92 × 10−5 | 2.22 × 10−5 | 2.00 × 10−5 | 2.93 × 10−5 | 2.15 × 10−5 |
| F 8 | 5.49 × 103 | 5.20 × 103 | 5.15 × 103 | 5.12 × 103 | 5.14 × 103 | 4.98 × 103 | 5.07 × 103 | 5.01 × 103 | 5.07 × 103 | 4.36 × 103 |
| F 9 | 1.80 × 107 | 1.78 × 107 | 1.73 × 107 | 1.81 × 107 | 1.74 × 107 | 1.76 × 107 | 1.78 × 107 | 1.74 × 107 | 1.82 × 107 | 1.77 × 107 |
| F 10 | 8.97 × 102 | 8.94 × 102 | 8.94 × 102 | 8.94 × 102 | 8.92 × 102 | 8.92 × 102 | 9.02 × 102 | 9.14 × 102 | 8.89 × 102 | 9.23 × 102 |
| F 11 | 1.11 × 10−13 | 1.11 × 10−13 | 1.11 × 10−13 | 1.11 × 10−13 | 1.11 × 10−13 | 1.10 × 10−13 | 1.10 × 10−13 | 1.11 × 10−13 | 1.11 × 10−13 | 1.10 × 10−13 |
| F 12 | 3.13 × 103 | 3.13 × 103 | 3.24 × 103 | 3.18 × 103 | 3.24 × 103 | 3.21 × 103 | 3.11 × 103 | 3.14 × 103 | 3.28 × 103 | 2.55 × 103 |
| F 13 | 4.48 × 102 | 5.03 × 102 | 5.13 × 102 | 4.83 × 102 | 5.30 × 102 | 5.09 × 102 | 4.51 × 102 | 4.64 × 102 | 4.82 × 102 | 5.15 × 102 |
| F 14 | 5.26 × 107 | 5.26 × 107 | 5.24 × 107 | 5.16 × 107 | 5.16 × 107 | 5.07 × 107 | 5.23 × 107 | 5.18 × 107 | 5.24 × 107 | 5.17 × 107 |
| F 15 | 1.04 × 104 | 1.04 × 104 | 1.04 × 104 | 1.04 × 104 | 1.04 × 104 | 1.04 × 104 | 1.04 × 104 | 1.04 × 104 | 1.04 × 104 | 1.04 × 104 |
| F 16 | 1.59 × 10−13 | 1.57 × 10−13 | 1.58 × 10−13 | 1.58 × 10−13 | 3.85 × 10−2 | 1.58 × 10−13 | 2.93 × 10−2 | 1.59 × 10−13 | 1.59 × 10−13 | 1.55 × 10−13 |
| F 17 | 1.19 × 105 | 1.20 × 105 | 1.19 × 105 | 1.20 × 105 | 1.28 × 105 | 1.22 × 105 | 1.28 × 105 | 1.34 × 105 | 1.35 × 105 | 6.57 × 104 |
| F 18 | 1.19 × 103 | 1.21 × 103 | 1.28 × 103 | 1.31 × 103 | 1.24 × 103 | 1.18 × 103 | 1.31 × 103 | 1.31 × 103 | 1.28 × 103 | 1.31 × 103 |
| F 19 | 1.10 × 107 | 1.09 × 107 | 1.12 × 107 | 1.10 × 107 | 1.12 × 107 | 1.13 × 107 | 1.12 × 107 | 1.12 × 107 | 1.11 × 107 | 1.02 × 107 |
| F 20 | 1.12 × 103 | 1.11 × 103 | 1.09 × 103 | 1.11 × 103 | 1.08 × 103 | 1.11 × 103 | 1.09 × 103 | 1.08 × 103 | 1.10 × 103 | 1.08 × 103 |
| Rank | 6.05 | 6.05 | 6.18 | 5.33 | 5.78 | 4.70 | 5.50 | 5.18 | 6.00 | 4.25 |
References
1. Jia, Y.H.; Mei, Y.; Zhang, M. A Two-Stage Swarm Optimizer with Local Search for Water Distribution Network Optimization. IEEE Trans. Cybern.; 2021; [DOI: https://dx.doi.org/10.1109/TCYB.2021.3107900]
2. Cao, K.; Cui, Y.; Liu, Z.; Tan, W.; Weng, J. Edge Intelligent Joint Optimization for Lifetime and Latency in Large-Scale Cyber-Physical Systems. IEEE Internet Things J.; 2021; [DOI: https://dx.doi.org/10.1109/JIOT.2021.3102421]
3. Chen, W.N.; Tan, D.Z.; Yang, Q.; Gu, T.; Zhang, J. Ant Colony Optimization for the Control of Pollutant Spreading on Social Networks. IEEE Trans. Cybern.; 2020; 50, pp. 4053-4065. [DOI: https://dx.doi.org/10.1109/TCYB.2019.2922266]
4. Zuo, T.; Zhang, Y.; Meng, K.; Tong, Z.; Dong, Z.Y.; Fu, Y. A Two-Layer Hybrid Optimization Approach for Large-Scale Offshore Wind Farm Collector System Planning. IEEE Trans. Ind. Inform.; 2021; 17, pp. 7433-7444. [DOI: https://dx.doi.org/10.1109/TII.2021.3056428]
5. Yang, Q.; Chen, W.N.; Gu, T.; Jin, H.; Mao, W.; Zhang, J. An Adaptive Stochastic Dominant Learning Swarm Optimizer for High-Dimensional Optimization. IEEE Trans. Cybern.; 2020; 52, pp. 1960-1976. [DOI: https://dx.doi.org/10.1109/TCYB.2020.3034427]
6. Omidvar, M.N.; Li, X.; Mei, Y.; Yao, X. Cooperative Co-Evolution with Differential Grouping for Large Scale Optimization. IEEE Trans. Evol. Comput.; 2014; 18, pp. 378-393. [DOI: https://dx.doi.org/10.1109/TEVC.2013.2281543]
7. Tang, K.; Li, X.; Suganthan, P.; Yang, Z.; Weise, T. Benchmark Functions for the CEC 2010 Special Session and Competition on Large-Scale Global Optimization; Nature Inspired Computation and Applications Laboratory, University of Science and Technology of China: Hefei, China, 2009.
8. Li, X.; Tang, K.; Omidvar, M.N.; Yang, Z.; Qin, K. Benchmark Functions for the CEC 2013 Special Session and Competition on Large-Scale Global Optimization; Technical Report; Evolutionary Computation and Machine Learning Group, RMIT University: Melbourne, Australia, 2013.
9. Yang, Q.; Li, Y.; Gao, X.-D.; Ma, Y.-Y.; Lu, Z.-Y.; Jeon, S.-W.; Zhang, J. An Adaptive Covariance Scaling Estimation of Distribution Algorithm. Mathematics; 2021; 9, 3207. [DOI: https://dx.doi.org/10.3390/math9243207]
10. Yang, Q.; Chen, W.N.; Gu, T.; Zhang, H.; Yuan, H.; Kwong, S.; Zhang, J. A Distributed Swarm Optimizer with Adaptive Communication for Large-Scale Optimization. IEEE Trans. Cybern.; 2020; 50, pp. 3393-3408. [DOI: https://dx.doi.org/10.1109/TCYB.2019.2904543]
11. Omidvar, M.N.; Li, X.; Yao, X. A Review of Population-Based Metaheuristics for Large-Scale Black-Box Global Optimization: Part A. IEEE Trans. Evol. Comput.; 2021; in press. Available online: https://ieeexplore.ieee.org/document/9627116 (accessed on 1 January 2022).
12. Omidvar, M.N.; Li, X.; Yao, X. A Review of Population-Based Metaheuristics for Large-Scale Black-Box Global Optimization: Part B. IEEE Trans. Evol. Comput.; 2021; in press. Available online: https://ieeexplore.ieee.org/document/9627138 (accessed on 1 January 2022).
13. Yang, Q.; Xie, H.; Chen, W.; Zhang, J. Multiple Parents Guided Differential Evolution for Large Scale Optimization. Proceedings of the IEEE Congress on Evolutionary Computation; Vancouver, BC, Canada, 24–29 July 2016; pp. 3549-3556.
14. Yang, Q.; Chen, W.N.; Li, Y.; Chen, C.L.P.; Xu, X.M.; Zhang, J. Multimodal Estimation of Distribution Algorithms. IEEE Trans. Cybern.; 2017; 47, pp. 636-650. [DOI: https://dx.doi.org/10.1109/TCYB.2016.2523000]
15. Eberhart, R.; Kennedy, J. A New Optimizer Using Particle Swarm Theory. Proceedings of the International Symposium on Micro Machine and Human Science; Nagoya, Japan, 4–6 October 1995; pp. 39-43.
16. Shi, Y.; Eberhart, R. A Modified Particle Swarm Optimizer. Proceedings of the IEEE International Conference on Evolutionary Computation Proceedings: IEEE World Congress on Computational Intelligence; Anchorage, AK, USA, 4–9 May 1998; pp. 69-73.
17. Tang, J.; Liu, G.; Pan, Q. A Review on Representative Swarm Intelligence Algorithms for Solving Optimization Problems: Applications and Trends. IEEE/CAA J. Autom. Sin.; 2021; 8, pp. 1627-1643. [DOI: https://dx.doi.org/10.1109/JAS.2021.1004129]
18. Ren, Z.; Zhang, A.; Wen, C.; Feng, Z. A Scatter Learning Particle Swarm Optimization Algorithm for Multimodal Problems. IEEE Trans. Cybern.; 2014; 44, pp. 1127-1140. [DOI: https://dx.doi.org/10.1109/TCYB.2013.2279802]
19. Zhang, J.; Lu, Y.; Che, L.; Zhou, M. Moving-Distance-Minimized PSO for Mobile Robot Swarm. IEEE Trans. Cybern.; 2021; [DOI: https://dx.doi.org/10.1109/TCYB.2021.3079346]
20. Villalón, C.L.C.; Dorigo, M.; Stützle, T. PSO-X: A Component-Based Framework for the Automatic Design of Particle Swarm Optimization Algorithms. IEEE Trans. Evol. Comput.; 2021; [DOI: https://dx.doi.org/10.1109/TEVC.2021.3102863]
21. Ding, W.; Lin, C.T.; Cao, Z. Deep Neuro-Cognitive Co-Evolution for Fuzzy Attribute Reduction by Quantum Leaping PSO with Nearest-Neighbor Memeplexes. IEEE Trans. Cybern.; 2019; 49, pp. 2744-2757. [DOI: https://dx.doi.org/10.1109/TCYB.2018.2834390]
22. Yang, Q.; Hua, L.; Gao, X.; Xu, D.; Lu, Z.; Jeon, S.-W.; Zhang, J. Stochastic Cognitive Dominance Leading Particle Swarm Optimization for Multimodal Problems. Mathematics; 2022; 10, 761. [DOI: https://dx.doi.org/10.3390/math10050761]
23. Bonavolontà, F.; Noia, L.P.D.; Liccardo, A.; Tessitore, S.; Lauria, D. A PSO-MMA Method for the Parameters Estimation of Interarea Oscillations in Electrical Grids. IEEE Trans. Instrum. Meas.; 2020; 69, pp. 8853-8865. [DOI: https://dx.doi.org/10.1109/TIM.2020.2998909]
24. Lan, R.; Zhu, Y.; Lu, H.; Liu, Z.; Luo, X. A Two-Phase Learning-Based Swarm Optimizer for Large-Scale Optimization. IEEE Trans. Cybern.; 2020; 51, pp. 6284-6293. [DOI: https://dx.doi.org/10.1109/TCYB.2020.2968400]
25. Yang, Q.; Chen, W.; Deng, J.D.; Li, Y.; Gu, T.; Zhang, J. A Level-Based Learning Swarm Optimizer for Large-Scale Optimization. IEEE Trans. Evol. Comput.; 2018; 22, pp. 578-594. [DOI: https://dx.doi.org/10.1109/TEVC.2017.2743016]
26. Cheng, R.; Jin, Y. A Competitive Swarm Optimizer for Large Scale Optimization. IEEE Trans. Cybern.; 2015; 45, pp. 191-204. [DOI: https://dx.doi.org/10.1109/TCYB.2014.2322602]
27. Mahdavi, S.; Shiri, M.E.; Rahnamayan, S. Metaheuristics in Large-Scale Global Continues Optimization: A Survey. Inf. Sci.; 2015; 295, pp. 407-428. [DOI: https://dx.doi.org/10.1016/j.ins.2014.10.042]
28. Ma, X.; Li, X.; Zhang, Q.; Tang, K.; Liang, Z.; Xie, W.; Zhu, Z. A Survey on Cooperative Co-Evolutionary Algorithms. IEEE Trans. Evol. Comput.; 2018; 23, pp. 421-441. [DOI: https://dx.doi.org/10.1109/TEVC.2018.2868770]
29. Li, X.; Yao, X. Cooperatively Coevolving Particle Swarms for Large Scale Optimization. IEEE Trans. Evol. Comput.; 2011; 16, pp. 210-224.
30. Yang, Q.; Chen, W.; Gu, T.; Zhang, H.; Deng, J.D.; Li, Y.; Zhang, J. Segment-Based Predominant Learning Swarm Optimizer for Large-Scale Optimization. IEEE Trans. Cybern.; 2017; 47, pp. 2896-2910. [DOI: https://dx.doi.org/10.1109/TCYB.2016.2616170]
31. Xie, H.Y.; Yang, Q.; Hu, X.M.; Chen, W.N. Cross-Generation Elites Guided Particle Swarm Optimization for Large Scale Optimization. Proceedings of the IEEE Symposium Series on Computational Intelligence; Athens, Greece, 6–9 December 2016; pp. 1-8.
32. Song, G.W.; Yang, Q.; Gao, X.D.; Ma, Y.Y.; Lu, Z.Y.; Zhang, J. An Adaptive Level-Based Learning Swarm Optimizer for Large-Scale Optimization. Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics; Melbourne, Australia, 17–20 October 2021; pp. 152-159.
33. Potter, M.A.; De Jong, K.A. A Cooperative Co-Evolutionary Approach to Function Optimization. Proceedings of the International Conference on Parallel Problem Solving from Nature; Berlin, Germany, 22–26 September 1994; pp. 249-257.
34. Yang, Q.; Chen, W.N.; Zhang, J. Evolution Consistency Based Decomposition for Cooperative Coevolution. IEEE Access; 2018; 6, pp. 51084-51097. [DOI: https://dx.doi.org/10.1109/ACCESS.2018.2869334]
35. Omidvar, M.N.; Yang, M.; Mei, Y.; Li, X.; Yao, X. DG2: A Faster and More Accurate Differential Grouping for Large-Scale Black-Box Optimization. IEEE Trans. Evol. Comput.; 2017; 21, pp. 929-942. [DOI: https://dx.doi.org/10.1109/TEVC.2017.2694221]
36. Sun, Y.; Kirley, M.; Halgamuge, S.K. Extended Differential Grouping for Large Scale Global Optimization with Direct and Indirect Variable Interactions. Proceedings of the Annual Conference on Genetic and Evolutionary Computation; Madrid, Spain, 11–15 July 2015; pp. 313-320.
37. Sun, Y.; Kirley, M.; Halgamuge, S.K. A Recursive Decomposition Method for Large Scale Continuous Optimization. IEEE Trans. Evol. Comput.; 2017; 22, pp. 647-661. [DOI: https://dx.doi.org/10.1109/TEVC.2017.2778089]
38. Song, A.; Chen, W.N.; Gong, Y.J.; Luo, X.; Zhang, J. A Divide-and-Conquer Evolutionary Algorithm for Large-Scale Virtual Network Embedding. IEEE Trans. Evol. Comput.; 2020; 24, pp. 566-580. [DOI: https://dx.doi.org/10.1109/TEVC.2019.2941824]
39. Deng, H.; Peng, L.; Zhang, H.; Yang, B.; Chen, Z. Ranking-Based Biased Learning Swarm Optimizer for Large-Scale Optimization. Inf. Sci.; 2019; 493, pp. 120-137. [DOI: https://dx.doi.org/10.1016/j.ins.2019.04.037]
40. Wang, H.; Liang, M.; Sun, C.; Zhang, G.; Xie, L. Multiple-Strategy Learning Particle Swarm Optimization for Large-Scale Optimization Problems. Complex Intell. Syst.; 2021; 7, pp. 1-16. [DOI: https://dx.doi.org/10.1007/s40747-020-00148-1]
41. Jian, J.R.; Chen, Z.G.; Zhan, Z.H.; Zhang, J. Region Encoding Helps Evolutionary Computation Evolve Faster: A New Solution Encoding Scheme in Particle Swarm for Large-Scale Optimization. IEEE Trans. Evol. Comput.; 2021; 25, pp. 779-793. [DOI: https://dx.doi.org/10.1109/TEVC.2021.3065659]
42. Kampourakis, K. Understanding Evolution; Cambridge University Press: Cambridge, UK, 2014.
43. Ju, X.; Liu, F. Wind Farm Layout Optimization Using Self-Informed Genetic Algorithm with Information Guided Exploitation. Appl. Energy; 2019; 248, pp. 429-445. [DOI: https://dx.doi.org/10.1016/j.apenergy.2019.04.084]
44. Ju, X.; Liu, F.; Wang, L.; Lee, W.-J. Wind Farm Layout Optimization Based on Support Vector Regression Guided Genetic Algorithm with Consideration of Participation among Landowners. Energy Convers. Manag.; 2019; 196, pp. 1267-1281. [DOI: https://dx.doi.org/10.1016/j.enconman.2019.06.082]
45. Xia, X.; Gui, L.; Yu, F.; Wu, H.; Wei, B.; Zhang, Y.L.; Zhan, Z.H. Triple Archives Particle Swarm Optimization. IEEE Trans. Cybern.; 2020; 50, pp. 4862-4875. [DOI: https://dx.doi.org/10.1109/TCYB.2019.2943928]
46. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive Learning Particle Swarm Optimizer for Global Optimization of Multimodal Functions. IEEE Trans. Evol. Comput.; 2006; 10, pp. 281-295. [DOI: https://dx.doi.org/10.1109/TEVC.2005.857610]
47. Gong, Y.; Li, J.; Zhou, Y.; Li, Y.; Chung, H.S.; Shi, Y.; Zhang, J. Genetic Learning Particle Swarm Optimization. IEEE Trans. Cybern.; 2016; 46, pp. 2277-2290. [DOI: https://dx.doi.org/10.1109/TCYB.2015.2475174]
48. Zhan, Z.; Zhang, J.; Li, Y.; Shi, Y. Orthogonal Learning Particle Swarm Optimization. IEEE Trans. Evol. Comput.; 2011; 15, pp. 832-847. [DOI: https://dx.doi.org/10.1109/TEVC.2010.2052054]
49. Van den Bergh, F.; Engelbrecht, A.P. A Cooperative Approach to Particle Swarm Optimization. IEEE Trans. Evol. Comput.; 2004; 8, pp. 225-239. [DOI: https://dx.doi.org/10.1109/TEVC.2004.826069]
50. Mei, Y.; Omidvar, M.N.; Li, X.; Yao, X. A Competitive Divide-and-Conquer Algorithm for Unconstrained Large-Scale Black-Box Optimization. ACM Trans. Math. Softw.; 2016; 42, pp. 1-24. [DOI: https://dx.doi.org/10.1145/2791291]
51. Yang, M.; Zhou, A.; Li, C.; Yao, X. An Efficient Recursive Differential Grouping for Large-Scale Continuous Problems. IEEE Trans. Evol. Comput.; 2021; 25, pp. 159-171. [DOI: https://dx.doi.org/10.1109/TEVC.2020.3009390]
52. Sun, Y.; Omidvar, M.N.; Kirley, M.; Li, X. Adaptive Threshold Parameter Estimation with Recursive Differential Grouping for Problem Decomposition. Proceedings of the Genetic and Evolutionary Computation Conference; Kyoto, Japan, 15–19 July 2018.
53. Ma, X.; Huang, Z.; Li, X.; Wang, L.; Qi, Y.; Zhu, Z. Merged Differential Grouping for Large-scale Global Optimization. IEEE Trans. Evol. Comput.; 2022; in press. [DOI: https://dx.doi.org/10.1109/TEVC.2022.3144684]
54. Liu, H.; Wang, Y.; Fan, N. A Hybrid Deep Grouping Algorithm for Large Scale Global Optimization. IEEE Trans. Evol. Comput.; 2020; 24, pp. 1112-1124. [DOI: https://dx.doi.org/10.1109/TEVC.2020.2985672]
55. Zhang, X.Y.; Gong, Y.J.; Lin, Y.; Zhang, J.; Kwong, S.; Zhang, J. Dynamic Cooperative Coevolution for Large Scale Optimization. IEEE Trans. Evol. Comput.; 2019; 23, pp. 935-948. [DOI: https://dx.doi.org/10.1109/TEVC.2019.2895860]
56. Neshat, M.; Mirjalili, S.; Sergiienko, N.Y.; Esmaeilzadeh, S.; Amini, E.; Heydari, A.; Garcia, D.A. Layout Optimisation of Offshore Wave Energy Converters Using a Novel Multi-swarm Cooperative Algorithm with Backtracking Strategy: A Case Study from Coasts of Australia. Energy; 2022; 239, 122463. [DOI: https://dx.doi.org/10.1016/j.energy.2021.122463]
57. Pan, Q.K.; Gao, L.; Wang, L. An Effective Cooperative Co-Evolutionary Algorithm for Distributed Flowshop Group Scheduling Problems. IEEE Trans. Cybern.; 2020; [DOI: https://dx.doi.org/10.1109/TCYB.2020.3041494]
58. Neshat, M.; Alexander, B.; Wagner, M. A Hybrid Cooperative Co-Evolution Algorithm Framework for Optimising Power Take off and Placements of Wave Energy Converters. Inf. Sci.; 2020; 534, pp. 218-244. [DOI: https://dx.doi.org/10.1016/j.ins.2020.03.112]
59. Liang, M.; Wang, W.; Dong, C.; Zhao, D. A Cooperative Coevolutionary Optimization Design of Urban Transit Network and Operating Frequencies. Expert Syst. Appl.; 2020; 160, 113736. [DOI: https://dx.doi.org/10.1016/j.eswa.2020.113736]
60. Zhao, S.-Z.; Liang, J.J.; Suganthan, P.N.; Tasgetiren, M.F. Dynamic Multi-Swarm Particle Swarm Optimizer with Local Search for Large Scale Global Optimization. Proceedings of the IEEE Congress on Evolutionary Computation; Hong Kong, China, 1–6 June 2008; pp. 3845-3852.
61. Cheng, R. A Social Learning Particle Swarm Optimization Algorithm for Scalable Optimization. Inf. Sci.; 2015; 291, pp. 43-60. [DOI: https://dx.doi.org/10.1016/j.ins.2014.08.039]
62. Mohapatra, P.; Das, K.N.; Roy, S. A Modified Competitive Swarm Optimizer for Large Scale Optimization Problems. Appl. Soft Comput.; 2017; 59, pp. 340-362. [DOI: https://dx.doi.org/10.1016/j.asoc.2017.05.060]
63. Li, D.; Guo, W.; Lerch, A.; Li, Y.; Wang, L.; Wu, Q. An Adaptive Particle Swarm Optimizer with Decoupled Exploration and Exploitation for Large Scale Optimization. Swarm Evol. Comput.; 2021; 60, 100789. [DOI: https://dx.doi.org/10.1016/j.swevo.2020.100789]
64. Lan, R.; Zhang, L.; Tang, Z.; Liu, Z.; Luo, X. A Hierarchical Sorting Swarm Optimizer for Large-Scale Optimization. IEEE Access; 2019; 7, pp. 40625-40635. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2906082]
65. Kong, F.; Jiang, J.; Huang, Y. An Adaptive Multi-Swarm Competition Particle Swarm Optimizer for Large-Scale Optimization. Mathematics; 2019; 7, 521. [DOI: https://dx.doi.org/10.3390/math7060521]
66. Huang, H.; Lv, L.; Ye, S.; Hao, Z. Particle Swarm Optimization with Convergence Speed Controller for Large-Scale Numerical Optimization. Soft Comput.; 2019; 23, pp. 4421-4437. [DOI: https://dx.doi.org/10.1007/s00500-018-3098-9]
67. LaTorre, A.; Muelas, S.; Peña, J.-M. A Comprehensive Comparison of Large Scale Global Optimizers. Inf. Sci.; 2015; 316, pp. 517-549. [DOI: https://dx.doi.org/10.1016/j.ins.2014.09.031]
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
High-dimensional optimization problems are increasingly common in the era of big data and the Internet of Things (IoT), and they seriously challenge the performance of existing optimizers. To solve such problems effectively, this paper devises a dimension group-based comprehensive elite learning swarm optimizer (DGCELSO), which integrates the valuable evolutionary information of different elite particles in the swarm to guide the updating of inferior ones. Specifically, the swarm is first separated into two exclusive sets, namely the elite set (ES), containing the top-ranked individuals, and the non-elite set (NES), consisting of the remaining individuals. Then, the dimensions of each particle in NES are randomly divided into several groups of equal size. Subsequently, each dimension group of each non-elite particle is guided by two different elites randomly selected from ES. In this way, each non-elite particle is comprehensively guided by multiple elite particles, so that high diversity can be maintained while fast convergence remains likely. To alleviate the sensitivity of DGCELSO to its associated parameters, we further devise dynamic adjustment strategies that change the parameter settings during evolution. With the above mechanisms, DGCELSO is expected to explore and exploit the solution space properly and thus locate the optima of optimization problems. Extensive experiments conducted on two commonly used large-scale benchmark sets demonstrate that DGCELSO achieves highly competitive, or even much better, performance than several state-of-the-art large-scale optimizers.
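To make the guidance mechanism described in the abstract concrete, the following is a minimal sketch of one generation of the dimension group-based elite learning update: the swarm is split into ES and NES by fitness rank, the dimensions of each non-elite particle are randomly partitioned into roughly equal groups, and each group is pulled toward two distinct, randomly chosen elites. The function name dgcelso_update, the parameters es_ratio, n_groups, and phi, and the exact update formula are illustrative assumptions modeled on related elite-learning swarm optimizers, not the authors' published equations; in the full algorithm, such parameters would additionally be adjusted dynamically during evolution.

import numpy as np

def dgcelso_update(swarm, fitness, es_ratio=0.5, n_groups=4, phi=0.3, rng=None):
    # swarm: (n, d) array of particle positions; fitness: (n,) array (minimization).
    # Hypothetical parameters: es_ratio (fraction of elites), n_groups (dimension
    # groups per particle), phi (weight of the secondary elite's pull).
    rng = np.random.default_rng() if rng is None else rng
    n, d = swarm.shape
    order = np.argsort(fitness)                  # best particles first
    n_elite = max(2, int(es_ratio * n))          # ES must hold at least two elites
    es_idx, nes_idx = order[:n_elite], order[n_elite:]

    new_swarm = swarm.copy()                     # elites survive unchanged in this sketch
    group_size = max(1, d // n_groups)
    for i in nes_idx:
        dims = rng.permutation(d)                # random partition of the dimensions
        groups = [dims[g:g + group_size] for g in range(0, d, group_size)]
        for grp in groups:                       # each group learns from two distinct elites
            e1, e2 = rng.choice(es_idx, size=2, replace=False)
            if fitness[e2] < fitness[e1]:        # let the better elite lead
                e1, e2 = e2, e1
            r1 = rng.random(grp.size)
            r2 = rng.random(grp.size)
            new_swarm[i, grp] += (r1 * (swarm[e1, grp] - swarm[i, grp])
                                  + phi * r2 * (swarm[e2, grp] - swarm[i, grp]))
    return new_swarm

# Example: one update step for 100 particles on a 1000-D sphere function.
# rng = np.random.default_rng(0)
# X = rng.uniform(-100, 100, (100, 1000))
# X = dgcelso_update(X, (X ** 2).sum(axis=1), rng=rng)

Because every dimension group of a non-elite particle draws its own pair of elite exemplars, each inferior particle blends guidance from many elites at once, which is what the abstract refers to as comprehensive elite learning.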
Details
Qiang Yang 1; Kai-Xuan Zhang 1; Xu-Dong Gao 1; Dong-Dong Xu 1; Zhen-Yu Lu 1; Sang-Woon Jeon 2; Jun Zhang 3
1 School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing 210044, China;
2 Department of Electrical and Electronic Engineering, Hanyang University, Ansan 15588, Korea;
3 Department of Electrical and Electronic Engineering, Hanyang University, Ansan 15588, Korea;