1. Introduction
With the advent of the artificial intelligence era, swarm intelligence evolutionary algorithms have received increasing attention [1]. Swarm intelligence algorithms are designed by researchers inspired by natural phenomena or population activities [2]. These algorithms are often used to solve complex optimization problems [3].
Optimization problems can be divided into single-objective and multi-objective optimization problems [4]. Single-objective problems have a single, well-defined objective, whereas multi-objective problems must balance several objectives at once [5]; solving the latter is therefore more complicated. The outcome of solving a multi-objective optimization problem requires a decision-maker to choose the final result [6], so multiple different solutions must be supplied for the decision-maker to choose from. There are usually two ways to deal with multi-objective problems: a priori and a posteriori [7,8].
In the a priori approach, the decision-maker first assigns weights to the objectives of the problem to be optimized, transforming the multi-objective problem into a single-objective one [9]. The problem can then be solved with a single-objective optimization algorithm, but only one solution is obtained as the optimal solution; the optimizer must be run multiple times if several optimal solutions are required [10]. In contrast, the a posteriori method keeps the nature of the problem unchanged: the problem is not transformed into a single-objective one [11]. This enables the design parameters or conditions to vary within a certain range while a set of different solutions is obtained. The decision-maker then chooses a solution from the obtained set as the final one according to his or her own preference or the actual situation [12].
The multi-objective optimizer aims to find a set of non-dominated solutions, obtain more non-dominated solutions through iteration, and improve the quality of the solutions [13]. The resulting non-dominated solutions can trade off between multiple objectives [14]. Compared with the a priori method, the a posteriori method retains more of the original characteristics of the multi-objective problem, meaning it can obtain more optimal solutions in a short time [15].
There are many multi-objective optimization algorithms in the literature that solve multi-objective problems. Most of the algorithms are inspired by the single-objective algorithm, such as the grey wolf optimizer (GWO) [16], particle swarm optimization (PSO) [17], the ant lion optimization algorithm (ALO) [18], the grasshopper optimization algorithm (GOA) [5], the genetic algorithm (GA), and the evolutionary algorithm (EA). These algorithms are summarized in the following Table 1:
Many meta-heuristic multi-objective optimization algorithms (MOAs) have been proposed according to the literature review above, and different types of MOAs address different types of multi-objective optimization problems (MOPs). However, the existing MOAs suffer from slow convergence [31], are easily trapped by local optima [32,33], lack diversity in solutions when solving MOPs [34,35], etc. Based on the single-objective besiege and conquer algorithm (BCA), we propose the MOBCA to address these problems. The contributions can be summarized as follows:
(i) The single-objective BCA outperforms other algorithms in the literature and does not easily converge to local optima. Therefore, we attempt to demonstrate its effectiveness on MOPs.
(ii) The BCA has been equipped with a grid mechanism to guarantee the diversity in the convergence process.
The rest of the paper is organized as follows: Section 2 provides a literature review concerning MOPs. Section 3 proposes the multi-objective besiege and conquer algorithm. Section 4 presents, discusses, and analyzes the experiment results on the ZDT, IMOP, and real-world problems. Finally, Section 5 concludes the work and suggests future works.
2. Literature Review
2.1. Multi-Objective Optimization Problem
Multi-objective optimization is a classic optimization problem in which multiple objectives must be optimized at the same time. Therefore, the algorithm is required to find the maximum or minimum of multiple objective functions simultaneously [9]. In single-objective optimization problems, by contrast, only one objective function needs to be optimized, so the fitness values of candidate solutions can easily be compared.
A multi-objective optimization problem can be defined as follows:
Minimize: F(x) = [f_1(x), f_2(x), ..., f_o(x)], x = (x_1, x_2, ..., x_n),
subject to: g_i(x) ≥ 0, i = 1, 2, ..., m,
h_j(x) = 0, j = 1, 2, ..., p,
lb_k ≤ x_k ≤ ub_k, k = 1, 2, ..., n,

where n is the number of variables, o is the number of objective functions, m is the number of inequality constraints, p is the number of equality constraints, g_i(x) is the i-th inequality constraint, h_j(x) indicates the j-th equality constraint, and lb_k and ub_k are the boundaries of the k-th variable.

In Figure 1, we obtain two different solutions, x_1 and x_2, which correspond to different evaluation values, f(x_1) and f(x_2). We can clearly mark these two fitness values on a one-dimensional coordinate axis whose origin is the point o, and thus clearly see the relationship between the fitness values corresponding to the two solutions. That is, if we need to determine the smallest fitness value for the problem, the solution with the smaller fitness value in the figure is preferable.
For multi-objective problems, each evaluation of a solution produces multiple fitness values — at least two — as shown in Figure 2a,b. For example, for two solutions x and y, the two evaluation processes generate the following two sets of fitness values:

F(x) = (f_1(x), f_2(x)), F(y) = (f_1(y), f_2(y)),

where x is the first solution, f_1 is the first evaluation function, f_1(x) is the value obtained by evaluating x with the first evaluation function, and f_2 and f_2(x) are defined analogously. y is the second solution; its evaluation functions f_1 and f_2 are the same as for solution x, but the obtained evaluation values are different.

In Figure 2a, all the fitness values of solution x are smaller than those of solution y. From this, we can easily judge that x is a better solution than y.

In Figure 2b, however, f_1(x) < f_1(y) while f_2(x) > f_2(y), and it is difficult to judge whether x or y is better.
As mentioned above, the evaluation of the solution is more complicated than that of the single-objective problem. When we evaluate a multi-objective solution, it is difficult to determine an evaluation standard to judge the quality of the solution. So, the introduction of a series of definitions of Pareto is necessary [36].
Pareto Optimality:
A solution x ∈ X is deemed Pareto-optimal if and only if there is no other solution x' ∈ X whose objective vector F(x') dominates F(x).
Pareto Dominance:
It is assumed that there are two objective vectors, u = (u_1, ..., u_o) and v = (v_1, ..., v_o). Vector u dominates vector v (denoted as u ≺ v) if and only if, for minimization, u_i ≤ v_i for all i ∈ {1, ..., o} and u_i < v_i for at least one i.
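As an illustration (not part of the original paper), the dominance test above can be sketched in Python, assuming minimization:

```python
def dominates(x, y):
    """Return True if objective vector x Pareto-dominates y (minimization).

    x dominates y iff x is no worse than y in every objective and
    strictly better in at least one.
    """
    assert len(x) == len(y)
    return all(a <= b for a, b in zip(x, y)) and any(a < b for a, b in zip(x, y))
```

Note that two identical vectors do not dominate each other, since the strict-improvement condition fails.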
Pareto-optimal Set:
The set including all Pareto-optimal solutions is called the Pareto-optimal set: P_s = {x ∈ X | x is Pareto-optimal}.
Pareto-optimal Front:
The Pareto-optimal front is the set that contains the objective-function values of the solutions in the Pareto-optimal set: P_f = {F(x) | x ∈ P_s}.
In Figure 3a,b, every solution that lies on the Pareto front dominates the solutions that do not. The set of points located on the Pareto front constitutes the Pareto-optimal solution set, which is the optimal solution set of the multi-objective optimization problem.
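To make these definitions concrete, the non-dominated subset of a finite set of objective vectors can be filtered with a pairwise dominance scan (an illustrative sketch assuming minimization; the helper name `dominates` is ours, not from the paper):

```python
def dominates(x, y):
    """Pareto dominance for minimization."""
    return all(a <= b for a, b in zip(x, y)) and any(a < b for a, b in zip(x, y))

def non_dominated(points):
    """Keep only points not dominated by any other point in the list."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

front = non_dominated([(1, 5), (2, 3), (3, 4), (4, 1)])
# (3, 4) is dominated by (2, 3); the remaining three points are
# mutually non-dominated and form the front of this small set
```

This quadratic scan is only meant to illustrate the definition; practical MOAs use faster bookkeeping such as non-dominated sorting.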
2.2. Multi-Objective Optimization in Metaheuristics
The ultimate goal of a multi-objective optimization algorithm is to determine a Pareto optimal solution set and ensure that the solutions within this set are uniformly distributed across the Pareto front [37]. A solution set like this can provide decision-makers with more high-quality choices. Therefore, when solving multi-objective optimization problems, we mainly focus on two issues: the diversity of the solution set and the distance between the solution and the true Pareto front [38].
Metaheuristic optimization algorithms can perform well in solving multi-objective optimization problems [39]. A metaheuristic usually uses a random strategy to generate populations and adaptively adjusts the positions of randomly generated offspring, allowing the population to converge toward and cover the Pareto front. The parallelism and scalability of metaheuristics are also noteworthy, enabling them to solve large-scale and complex multi-objective optimization problems.
In recent decades, multi-objective optimization algorithms have received extensive attention due to their powerful ability to solve complex optimization problems [40]. Some novel and excellent multi-objective optimization algorithms have been proposed: the multi-objective grey wolf optimizer (MOGWO) [8], the multi-objective ant lion optimizer (MOALO) [11], the multi-objective whale optimization algorithm (MOWOA) [20], the vector-evaluated genetic algorithm (VEGA) [30], the niched Pareto genetic algorithm (NPGA) [28], the non-dominated sorting genetic algorithm (NSGA) [22], the non-dominated sorting genetic algorithm II (NSGA-II) [23], the non-dominated sorting genetic algorithm III (NSGA-III) [24], and the multi-objective evolutionary algorithm based on decomposition (MOEA/D) [26]. The above algorithms can approach the true Pareto-optimal front on some multi-objective problems. However, according to the no-free-lunch (NFL) theorem, there may be problems that these algorithms cannot solve [41], so new multi-objective algorithms should be proposed. In the next section, a new multi-objective optimization algorithm is proposed to solve the multi-objective optimization problem.
3. Multi-Objective Besiege and Conquer Algorithm (MOBCA)
3.1. Besiege and Conquer Algorithm (BCA)
The BCA was proposed by Jiang et al. in 2023. Since the MOBCA builds on it, the single-objective version of the BCA deserves a brief introduction here. The BCA optimization process and its position-update rule can be briefly summarized by the following mathematical formulas:
(1)
where the symbols denote, in order, the soldier of the given dimension at the current iteration, the current best army at that iteration, the army of the given dimension, a random army of the given dimension, and the disturbance coefficient. The coefficients can be calculated using the following two equations:
(2)
(3)
A parameter is set to switch between global and local search. It is dynamically changed according to the distance between the current army and the best army. If the current army is the best army in the population, the parameter is set to 0.2 in order to enhance the global search and avoid local stagnation.
The BCA starts the optimization process with a set of randomly generated populations. The initial population is 1/3 of the set population size, which means that each army has three soldiers. During the position-update process, each army generates three soldiers according to the position-update Equation (1). But the positions of these soldiers are stored only when the army is updated: a soldier's position is discarded if its fitness value is not better than that of the army once the evaluation process is completed. When all armies have been updated, the algorithm judges the distance between each current army and the global optimum and adjusts the parameter value accordingly, which controls global versus local search. This is an aggressive position-update strategy, and it gives the BCA strong global optimization capabilities.
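The control flow described above can be sketched as follows. This is a structural illustration only: the soldier-generation rule below is a generic placeholder, not the published BCA update Equation (1), and all function and parameter names are ours:

```python
import random

def bca_sketch(fitness, dim, n_armies=10, soldiers_per_army=3,
               max_fes=3000, lb=-5.0, ub=5.0):
    """Structural sketch of the BCA loop described in the text.

    Each army spawns three soldiers per iteration; a soldier replaces
    its army only if it has better fitness (greedy replacement).
    The perturbation rule is a placeholder, NOT the BCA equation.
    """
    armies = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n_armies)]
    fit = [fitness(a) for a in armies]
    fes = n_armies
    while fes < max_fes:
        best = armies[fit.index(min(fit))]  # current best army
        for i in range(n_armies):
            for _ in range(soldiers_per_army):
                # placeholder move toward the best army with random disturbance
                soldier = [a + random.uniform(-1, 1) * (b - a)
                           for a, b in zip(armies[i], best)]
                soldier = [min(max(s, lb), ub) for s in soldier]
                f = fitness(soldier)
                fes += 1
                # keep the soldier only if it beats its army
                if f < fit[i]:
                    armies[i], fit[i] = soldier, f
    return min(fit)
```

On a simple sphere function, this skeleton steadily improves the best fitness, which is all the sketch is meant to show.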
The number of function evaluations (FEs) is described by Equation (4).
(4)
where the two quantities are the number of soldiers in each army and the number of armies, respectively. The algorithm outputs the best solutions when the number of FEs reaches the maximum number of function evaluations.

3.2. Multi-Objective Besiege and Conquer Algorithm (MOBCA)
To apply the BCA to multi-objective optimization problems, we need to introduce related mechanisms. These mechanisms function roughly as in MOPSO [19] and include the grid, archive, and leader selection mechanisms.
The purpose of the grid mechanism is to determine the location of each solution in the objective space. During the initialization phase, the objective space is divided into several grids according to a scalar parameter. As the optimization process proceeds, more solutions are obtained by the MOA. If a solution falls outside the current grid at some iteration, the grid is updated to cover the new solution. Specifically, the grid is used to determine the relative positions of solutions in the objective space; this information is used to decide whether a solution is retained, which is associated with the archive. In the literature, MOGWO utilizes the grid mechanism to decide the alpha, beta, and delta wolves [8]: a solution located in the sparsest grid is chosen as the alpha wolf, while the solutions in the second- and third-sparsest grids are chosen as the beta and delta. This method aims to lead the algorithm to maintain diversity in solutions.
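A minimal sketch of such a grid-location computation (function and parameter names are illustrative, not from the MOBCA implementation):

```python
def grid_index(obj, mins, maxs, n_grid=10):
    """Map an objective vector to its grid cell in the objective space.

    obj: objective vector; mins/maxs: per-objective bounds of the
    current grid; n_grid: number of segments per objective.
    """
    cell = []
    for f, lo, hi in zip(obj, mins, maxs):
        width = (hi - lo) / n_grid or 1.0  # avoid zero width on flat ranges
        # clamp so a value on the upper boundary falls in the last cell
        cell.append(min(int((f - lo) / width), n_grid - 1))
    return tuple(cell)
```

When a new solution falls outside [mins, maxs], the bounds would be re-expanded and the indices recomputed, matching the grid update described in the text.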
The archive stores non-dominated solutions during the iterations. The maximum size of the archive is equal to the population size. In each iteration, the archive members are compared with the new offspring population, and the archive is updated so that the latest non-dominated solutions become the new archive members. During this replacement process, the archive size may exceed the maximum limit; the grid mechanism then comes in handy to maintain the diversity of the non-dominated population. The grid locations are rearranged if the archive size exceeds the limit, and surplus members are deleted based on the crowding degree: solutions occupying a crowded grid are more likely to be deleted. In GWO, the algorithm only stores the first three wolves of the population [16]. However, this is not suitable for an MOA, where we expect to obtain more diverse solutions to guide the subsequent optimization process. So, an archive equal to the population size is necessary to store the solutions.
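The archive update described above can be sketched as follows (illustrative code; the crowding measure is a simple per-cell count, and `cell_of` stands in for the grid-location function):

```python
from collections import Counter

def dominates(x, y):
    """Pareto dominance for minimization."""
    return all(a <= b for a, b in zip(x, y)) and any(a < b for a, b in zip(x, y))

def update_archive(archive, offspring, max_size, cell_of):
    """Merge offspring, keep non-dominated members, truncate by crowding.

    cell_of maps an objective vector to its grid cell.
    """
    merged = archive + offspring
    # retain only non-dominated members of the merged population
    nd = [p for p in merged
          if not any(dominates(q, p) for q in merged if q != p)]
    # delete members from the most crowded cells until the limit holds
    while len(nd) > max_size:
        counts = Counter(cell_of(p) for p in nd)
        worst_cell = max(counts, key=counts.get)
        victim = next(p for p in nd if cell_of(p) == worst_cell)
        nd.remove(victim)
    return nd
```

Recomputing the cell counts after every deletion is wasteful but keeps the sketch short; a real implementation would update the counts incrementally.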
In PSO, the iterative update of particles is based on the global best and the personal best [17]. In MOPs, the personal best is easily updated when a new solution dominates the old one; the global best, in contrast, is hard to update. In single-objective optimization, it is easy to select a leader that guides the population toward a promising area near the global optimum. In multi-objective optimization, however, it is hard to compare one solution with another due to the Pareto optimality concepts mentioned before. Therefore, a leader selection mechanism is needed. In multi-objective problems, promising solutions approach the Pareto front while maintaining good population diversity, and the archive is composed of many such promising solutions. So, the leader selection mechanism uses the roulette method to select a solution from the least crowded region of the archive as the leader.
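A sketch of roulette-based leader selection that favors sparse grid cells (the inverse-count weighting and the `pressure` parameter are illustrative choices, not the paper's exact formula):

```python
import random
from collections import Counter

def select_leader(archive, cell_of, pressure=2.0):
    """Roulette-wheel selection favoring members of sparse grid cells.

    Each member's weight is inversely proportional to the occupancy of
    its cell (raised to `pressure`), so solutions in sparse regions of
    the objective space are picked more often as leaders.
    """
    counts = Counter(cell_of(p) for p in archive)
    weights = [1.0 / counts[cell_of(p)] ** pressure for p in archive]
    return random.choices(archive, weights=weights, k=1)[0]
```

Because the choice remains stochastic, crowded cells can still supply a leader occasionally, which helps avoid over-focusing on a single sparse region.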
The archive mechanism is introduced into the BCA to store non-dominated solutions, and the global best solution is selected from the archive to lead the population. In the MOBCA, every soldier may be assigned a different global best, which guarantees diversity in the population: each soldier moves in a different direction toward the Pareto front. Once the armies exceed the maximum size, redundant armies are deleted on the basis of the grid mechanism. The convergence of the MOBCA is guaranteed because it employs the same mathematical model as the BCA. During the optimization process, the BCA converges by changing the positions of its search agents, a behavior that guarantees the convergence ability of an algorithm according to [42]. The MOBCA inherits all the characteristics of the BCA, which means that the search agents explore and exploit the search space in the same manner.
The main difference is that the MOBCA is based on a set of archive members, while the BCA only saves and improves three of the best solutions.
The workflow of the MOBCA is shown in Figure 4, and its pseudo code is presented in Algorithm 1.
Algorithm 1: MOBCA: Multi-objective besiege and conquer algorithm.
3.3. Computational Complexity
The complexity of the proposed MOBCA depends on the number of decision variables, the number of objectives, and the population size. Consider a multi-objective problem with D decision variables, M objectives, and N particles as an example. The MOBCA mainly comprises three steps: updating soldiers, updating the archive, and updating armies.
The complexity of update soldiers is decided by , and . This process will be executed times; thus, the computational complexity of this process is .
The updated archive involves deleting redundant solutions, and the complexity is .
In the MOBCA, after the update archive process, we must also update armies for the next generation, so this process is similar to the update archive. The complexity of this process is .
Since the , and are far lower than N, the final complexity of the proposed MOBCA is .
4. Experiments and Results
The experiment codes are executed in a MATLAB R2022b environment under the Windows 10 operating system. All simulations are carried out on a computer with an Intel(R) Core(TM) i7-8750H CPU @ 2.20 GHz and 16 GB of memory.
4.1. Experimental Settings
The proposed algorithm is compared with three well-known algorithms, including the multi-objective particle swarm optimizer (MOPSO) [19], the non-dominated sorting genetic algorithm III (NSGAIII) [24], and the multi-objective evolutionary algorithm based on decomposition (MOEA/D) [26]. The default parameters in PlatEMO v4.3 [43] are used.
Different from single-objective optimization algorithms, the performance of a multi-objective optimization algorithm must be evaluated with dedicated metrics. The specific performance evaluation indices are calculated as follows:
For the inverted generational distance (IGD) for measuring convergence [44], the mathematical formula can be expressed as Equation (5):

IGD = (1/N*) Σ_{i=1}^{N*} d_i, (5)

where N* is the size of the true Pareto-optimal solution set, and d_i indicates the Euclidean distance (ED) between the i-th member of the true Pareto-optimal solution set and the nearest member of the obtained solution set. IGD evaluates the distance between the Pareto-optimal solutions and the actually obtained solutions. However, evaluating the sparsity of the Pareto solution set requires another evaluation indicator: hypervolume (HV).
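As an aside, Equation (5) can be computed directly from the two point sets (a sketch using plain Euclidean distance; names are illustrative):

```python
import math

def igd(true_front, obtained):
    """Inverted generational distance (lower is better).

    Mean distance from each point of the true Pareto front to its
    nearest neighbor in the obtained solution set.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(v, s) for s in obtained)
               for v in true_front) / len(true_front)
```

An IGD of zero means every true Pareto point is matched exactly; values grow as the obtained set drifts away from, or fails to cover, the true front.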
Hypervolume is a widely employed performance metric in the domain of multi-objective optimization [45,46,47]. It quantifies the hypervolume enclosed by a set of solutions within the objective space, representing the volume of the space dominated by these solutions. HV serves as a crucial indicator for assessing the quality of solution sets generated by different multi-objective optimization algorithms, where larger HV values typically indicate superior performance.
The calculation of the HV involves determining the volume of the region of the objective space dominated by the set of solutions, computed as the integral over the dominated portion of the objective space. Formally, the hypervolume is calculated using Equation (6):

HV(X) = ∫_Λ α_X(z) dz, (6)

where X represents the set of solutions, Λ is the objective space, and α_X(z) is the hypervolume contribution given by Equation (7):

α_X(z) = 1 if there exists x ∈ X with F(x) ≼ z ≼ r, and α_X(z) = 0 otherwise. (7)

Here, r is a reference point in the objective space. The integral spans the entire objective space, and the computation amounts to evaluating the contribution of each solution in X to the overall hypervolume.
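For the common two-objective case, the integral in Equation (6) reduces to a sum of rectangle areas between consecutive front points and the reference point (a sketch assuming minimization and a mutually non-dominated input set that dominates the reference point):

```python
def hv_2d(points, ref):
    """Exact hypervolume for a 2-D minimization front.

    points: mutually non-dominated (f1, f2) pairs, each dominating ref.
    ref: reference point (r1, r2).
    """
    # sort by f1 ascending; f2 then descends along a non-dominated front
    pts = sorted(points)
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        # rectangle between this point, the previous f2 level, and ref
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv
```

For three or more objectives, exact HV computation requires more involved algorithms (e.g., dimension-sweep methods), which is why HV is usually obtained from a library such as PlatEMO in practice.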
Utilizing the above evaluation metrics allows us to quantitatively compare MOBCA with MOPSO, NSGAIII, and MOEA/D. In addition, we can illustrate the best set of Pareto optimal solutions obtained by each algorithm on the search space. This method allows us to compare the performance of the algorithms qualitatively. All algorithms are run 30 times on the test problems, and the statistical results of these 30 runs are provided in Table 2 and Table 3. Note that we use 10,000 function evaluations for each algorithm. The qualitative results are also provided in Figure 5, Figure 6 and Figure 7.
4.2. Experimental Results and Discussion
The results of algorithms on the test functions are presented in Table 2 and Table 3, and the best Pareto optimal fronts obtained by all algorithms are illustrated in Figure 5. At the same time, the tracking results of HV and IGD during the iteration process are shown in Figure 6 and Figure 7.
4.2.1. Results on ZDT Test Suite
Table 2 and Table 3 show that the MOBCA outperforms the other algorithms on three of the five ZDT test problems: the result columns show the higher accuracy and better robustness of the MOBCA compared with the others on ZDT1, ZDT3, and ZDT6.
The shape of the Pareto optimal fronts obtained by the four algorithms on ZDT1, ZDT2, ZDT3, and ZDT6 is illustrated in Figure 5a–d. Inspecting these figures, it may be observed that NSGAIII shows the poorest convergence despite its good coverage in ZDT6. However, the NSGAIII and MOBCA both provide very good convergence toward all true Pareto optimal fronts. The most interesting pattern is that the Pareto optimal solutions obtained by NSGAIII show higher coverage than MOBCA on ZDT2. However, the coverage of the MOBCA on ZDT3 is better than NSGAIII. This shows that the MOBCA has the potential to outperform NSGAIII in finding a Pareto optimal front with separate regions.
The convergence curves of HV and IGD are presented in Figure 6a–d and Figure 7a–d. Figure 6a,b and Figure 7a,b show that the MOBCA converges to the Pareto front faster than NSGAIII, indicating that the MOBCA can quickly converge to and cover the Pareto front in comparison with NSGAIII. However, the MOBCA demonstrates only a weak advantage over NSGAIII on ZDT3. Based on Table 2, we can conclude that although the MOBCA shows only a weak advantage on ZDT3, it is a stable and robust way to solve separated MOPs.
4.2.2. Results on IMOP Test Suite
The previous section investigated the performance of the proposed MOBCA on the ZDT test suite, most of whose test functions are not multi-modal. To benchmark the performance of the proposed algorithm on a more challenging test set, this subsection employs the IMOP benchmark functions. These are among the most difficult test functions in the multi-objective optimization literature and can confirm whether the superiority of the MOBCA is significant.
Inspecting the results in Table 2, it is evident that the MOBCA outperforms MOEA/D, MOPSO, and NSGAIII on all of the IMOP test functions. Since IGD is a good metric for benchmarking the convergence of an algorithm, these results indicate that the MOBCA converges better on these benchmark functions. To assess the coverage of the algorithms, the HV metric provides a quantitative analysis in Table 3. The MOBCA failed to achieve the best result on IMOP1, and we noticed that it obtained the worst standard deviation on IMOP1 regarding HV. This means that over the 30 experimental runs, the HV obtained by the MOBCA was unstable, which may be due to local optima causing the convergence of the MOBCA to stagnate.
Figure 5e–l shows the Pareto fronts obtained by the algorithms on the IMOPs. The figures show that the MOBCA is closer to the true Pareto front and its coverage is broader than that of the other algorithms. Especially in Figure 5f–i,k, the MOBCA demonstrates overwhelming advantages in the coverage of the Pareto front, meaning that it can provide decision-makers with more high-quality solutions when solving practical problems. Although the MOBCA provides results similar to NSGAIII on IMOP5, as shown in Figure 5i, it is significantly better than NSGAIII on IMOP7, as shown in Figure 5k.
The convergence speed of the MOBCA and the coverage of the obtained Pareto front are reflected by the HV and IGD. Figure 6 and Figure 7 demonstrate that the MOBCA converges to the Pareto-optimal front more quickly on most of the IMOPs. In particular, Figure 6f,k and Figure 7f,k show that the MOBCA avoids local optima effectively compared with the other algorithms. The MOBCA was developed from the single-objective BCA, so it inherits the excellent convergence speed of the BCA; Figure 6e,h,l and Figure 7e,h,l support this view.
4.2.3. Results on Real-World Problems
The last part of Table 2 and Table 3 presents real-world multi-objective optimization problems (RWMOPs). The MOBCA shows outstanding results in terms of the IGD metric, while it fails to achieve equally good results in terms of HV. Due to the complexity and dynamics of real-life problems, the MOBCA, which was initially designed for benchmark multi-objective problems, may have certain flaws. However, it is still undeniable that, at its current performance, the MOBCA can provide high-quality solutions compared with the other algorithms.
4.2.4. Discussion
The qualitative and quantitative results show that the MOBCA benefits from high convergence and coverage. The high convergence of the MOBCA is inherited from the BCA. The main mechanisms that guarantee convergence in the BCA and MOBCA are the besiege and conquer mechanisms, which emphasize exploitation, with convergence proportional to the number of iterations. Since we select one solution from the archive in every iteration for each army and require the soldiers to move around the armies in the MOBCA, being trapped by local optima might be a concern. However, the results show that the MOBCA does not suffer from local optima.
5. Conclusions and Future Work
This paper proposes a multi-objective version of the population optimization algorithm BCA. By introducing the grid mechanism, archive mechanism, and leader selection mechanism, and redesigning these mechanisms into the single-objective BCA algorithm, a new multi-objective algorithm, the MOBCA, is generated, which enables the BCA to deal with multi-objective optimization problems.
The proposed algorithm, the MOBCA, is compared with several excellent multi-objective optimization algorithms. These include those inspired by single-objective algorithms: MOPSO, MOEA/D, and NSGAIII. Our experimental results show that the MOBCA is superior to all compared algorithms in this paper in terms of convergence. Furthermore, the archive mechanism and grid mechanism ensure the diversity of the distribution of Pareto optimal solutions obtained by the MOBCA.
Based on the experimental results, the MOBCA shows a convergence speed similar to that of the BCA and can converge to the Pareto front effectively on some MOPs. The diversity of its solutions is also better than that of the compared algorithms, except on the real-world problems. This shows that the proposed MOBCA still has certain flaws in terms of diversity, and there is considerable room for improvement.
For future work, a new multi-objective optimization mechanism should be introduced to ensure the diversity of the distribution of Pareto optimal solutions obtained by the algorithm. At the same time, a new position update formula should be considered because the MOBCA failed to determine a sufficient number of Pareto optimal solutions in some test functions.
J.J. and J.W. conceived and designed the methodology and experiments; J.W. performed the experiments, analyzed the results and wrote the paper; J.W., J.L., X.Y. and Z.H. revised the paper. All authors have read and agreed to the published version of the manuscript.
Data is contained within the article.
The authors declare no conflicts of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 2. Different situations in a multi-objective problem (for example: two objectives). (a) Situation 1: solution x dominates y. (b) Situation 2: x and y do not dominate each other.
Figure 6. The convergence of hypervolume with the number of function evaluations.
Figure 7. The convergence of inverted generational distance with the number of function evaluations.
Classical and novel multi-objective optimization algorithms.
Abbreviations | Algorithms | Authors and Year |
---|---|---|
MOGWO | Multi-Objective Grey Wolf Optimizer | Mirjalili et al., 2016 [ |
MOPSO | Multi-Objective Particle Swarm Optimization | Coello and Lechuga, 2002 [ |
MOALO | Multi-Objective Ant Lion Optimizer | Mirjalili et al., 2017 [ |
MOWOA | Multi-Objective Whale Optimization Algorithm | Mirjalili and Lewis, 2016 [ |
MOGOA | Multi-Objective Grasshopper Optimization Algorithm | Mirjalili et al., 2018 [ |
MOGA | Multi-Objective Genetic Algorithm | Konak et al., 2006 [ |
MOSOA | Multi-Objective Seagull Optimization Algorithm | Dhiman et al., 2021 [ |
NSGA | Non-Dominated Sorting Genetic Algorithm | Srinivas and Deb, 1994 [ |
NSGAII | Non-Dominated Sorting Genetic Algorithm II | Deb et al., 2002 [ |
NSGAIII | Non-Dominated Sorting Genetic Algorithm III | Deb and Jain, 2013 [ |
DN_NSGAII | Decision Space-Based Niching NSGAII | Liang et al., 2016 [ |
MOEA/D | Multi-Objective Evolutionary Algorithm based on Decomposition | Zhang and Li, 2007 [ |
RVEA | Reference Vector-Guided Evolutionary Algorithm | Cheng et al., 2016 [ |
NPGA | Niched Pareto Genetic Algorithm | Horn et al., 1994 [ |
SPEA2 | Improving the Strength of the Pareto Evolutionary Algorithm | Zitzler et al., 2001 [ |
VEGA | Vector-Evaluated Genetic Algorithm | Schaffer, 2014 [ |
Result of inverted generational distance (IGD) after 30 experimental runs.
Problem | M | D | MOEAD | MOPSO | NSGAIII | MOBCA |
---|---|---|---|---|---|---|
IMOP1 | 2 | 10 | ||||
IMOP2 | 2 | 10 | ||||
IMOP3 | 2 | 10 | ||||
IMOP4 | 3 | 10 | ||||
IMOP5 | 3 | 10 | ||||
IMOP6 | 3 | 10 | ||||
IMOP7 | 3 | 10 | ||||
IMOP8 | 3 | 10 | ||||
ZDT1 | 2 | 30 | ||||
ZDT2 | 2 | 30 | ||||
ZDT3 | 2 | 30 | ||||
ZDT4 | 2 | 10 | ||||
ZDT6 | 2 | 10 | ||||
RWMOP5 | 2 | 4 | NaN (NaN) * | NaN (NaN) | ||
RWMOP9 | 2 | 4 | ||||
RWMOP11 | 5 | 3 | ||||
RWMOP14 | 2 | 5 | NaN (NaN) | |||
RWMOP16 | 2 | 2 | NaN (NaN) | |||
+/−/= | 1/14/0 | 1/15/1 | 2/12/4 |
* NaN indicates that no feasible solution was found. “+” and “−” indicate the number of test problems on which the compared algorithm shows significantly better or worse performance than the MOBCA, respectively. The symbol “=” indicates that there is no significant difference between the MOBCA and the compared algorithm.
Result of hypervolume (HV) after 30 experimental runs.
Problem | M | D | MOEAD | MOPSO | NSGAIII | MOBCA |
---|---|---|---|---|---|---|
IMOP1 | 2 | 10 | ||||
IMOP2 | 2 | 10 | ||||
IMOP3 | 2 | 10 | ||||
IMOP4 | 3 | 10 | ||||
IMOP5 | 3 | 10 | ||||
IMOP6 | 3 | 10 | ||||
IMOP7 | 3 | 10 | ||||
IMOP8 | 3 | 10 | ||||
ZDT1 | 2 | 30 | ||||
ZDT2 | 2 | 30 | ||||
ZDT3 | 2 | 30 | ||||
ZDT4 | 2 | 10 | ||||
ZDT6 | 2 | 10 | ||||
RWMOP5 | 2 | 4 | NaN (NaN) * | NaN (NaN) | ||
RWMOP9 | 2 | 4 | ||||
RWMOP11 | 5 | 3 | ||||
RWMOP14 | 2 | 5 | NaN (NaN) | |||
RWMOP16 | 2 | 2 | NaN (NaN) | |||
+/−/= | 2/12/1 | 2/14/1 | 8/8/2 |
* NaN indicates that no feasible solution was found. “+” and “−” indicate the number of test problems on which the compared algorithm shows significantly better or worse performance than the MOBCA, respectively. The symbol “=” indicates that there is no significant difference between the MOBCA and the compared algorithm.
References
1. Muthukumaran, S.; Geetha, P.; Ramaraj, E. Multi-Objective Optimization with Artificial Neural Network Based Robust Paddy Yield Prediction Model. Intell. Autom. Soft Comput.; 2023; 35, pp. 216-230. [DOI: https://dx.doi.org/10.32604/iasc.2023.027449]
2. Agushaka, J.O.; Ezugwu, A.E.; Abualigah, L. Gazelle Optimization Algorithm: A novel nature-inspired metaheuristic optimizer. Neural Comput. Appl.; 2023; 35, pp. 4099-4131. [DOI: https://dx.doi.org/10.1007/s00521-022-07854-6]
3. Huang, C.; Zhou, X.; Ran, X.; Liu, Y.; Deng, W.; Deng, W. Co-evolutionary competitive swarm optimizer with three-phase for large-scale complex optimization problem. Inf. Sci.; 2023; 619, pp. 2-18. [DOI: https://dx.doi.org/10.1016/j.ins.2022.11.019]
4. Rahimi, I.; Gandomi, A.H.; Chen, F.; Mezura-Montes, E. A Review on Constraint Handling Techniques for Population-based Algorithms: From single-objective to multi-objective optimization. Arch. Comput. Methods Eng.; 2023; 30, pp. 2181-2209. [DOI: https://dx.doi.org/10.1007/s11831-022-09859-9]
5. Mirjalili, S.Z.; Mirjalili, S.; Saremi, S.; Faris, H.; Aljarah, I. Grasshopper optimization algorithm for multi-objective optimization problems. Appl. Intell.; 2018; 48, pp. 805-820. [DOI: https://dx.doi.org/10.1007/s10489-017-1019-8]
6. Pilechiha, P.; Mahdavinejad, M.; Rahimian, F.P.; Carnemolla, P.; Seyedzadeh, S. Multi-objective optimisation framework for designing office windows: Quality of view, daylight and energy efficiency. Appl. Energy; 2020; 261, 114356. [DOI: https://dx.doi.org/10.1016/j.apenergy.2019.114356]
7. Branke, J.; Kaußler, T.; Schmeck, H. Guidance in evolutionary multi-objective optimization. Adv. Eng. Softw.; 2001; 32, pp. 499-507. [DOI: https://dx.doi.org/10.1016/S0965-9978(00)00110-1]
8. Mirjalili, S.; Saremi, S.; Mirjalili, S.M.; Coelho, L.d.S. Multi-objective grey wolf optimizer: A novel algorithm for multi-criterion optimization. Expert Syst. Appl.; 2016; 47, pp. 106-119. [DOI: https://dx.doi.org/10.1016/j.eswa.2015.10.039]
9. Konak, A.; Coit, D.W.; Smith, A.E. Multi-objective optimization using genetic algorithms: A tutorial. Reliab. Eng. Syst. Saf.; 2006; 91, pp. 992-1007. [DOI: https://dx.doi.org/10.1016/j.ress.2005.11.018]
10. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw.; 2017; 114, pp. 163-191. [DOI: https://dx.doi.org/10.1016/j.advengsoft.2017.07.002]
11. Mirjalili, S.; Jangir, P.; Saremi, S. Multi-objective ant lion optimizer: A multi-objective optimization algorithm for solving engineering problems. Appl. Intell.; 2017; 46, pp. 79-95. [DOI: https://dx.doi.org/10.1007/s10489-016-0825-8]
12. Liu, B.; Jiao, S.; Shen, Y.; Chen, Y.; Wu, G.; Chen, S. A dynamic hybrid trust network-based dual-path feedback consensus model for multi-attribute group decision-making in intuitionistic fuzzy environment. Inf. Fusion; 2022; 80, pp. 266-281. [DOI: https://dx.doi.org/10.1016/j.inffus.2021.09.020]
13. Li, X.; Yu, Y.; Huang, M. Multi-objective cooperative coevolution algorithm with a Master–Slave mechanism for Seru Production. Appl. Soft Comput.; 2022; 119, 108593. [DOI: https://dx.doi.org/10.1016/j.asoc.2022.108593]
14. Ghoddousi, P.; Eshtehardian, E.; Jooybanpour, S.; Javanmardi, A. Multi-mode resource-constrained discrete time–cost-resource optimization in project scheduling using non-dominated sorting genetic algorithm. Autom. Constr.; 2013; 30, pp. 216-227. [DOI: https://dx.doi.org/10.1016/j.autcon.2012.11.014]
15. Makhadmeh, S.N.; Alomari, O.A.; Mirjalili, S.; Al-Betar, M.A.; Elnagar, A. Recent advances in multi-objective grey wolf optimizer, its versions and applications. Neural Comput. Appl.; 2022; 34, pp. 19723-19749. [DOI: https://dx.doi.org/10.1007/s00521-022-07704-5]
16. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw.; 2014; 69, pp. 46-61. [DOI: https://dx.doi.org/10.1016/j.advengsoft.2013.12.007]
17. Kennedy, J.; Eberhart, R. Particle swarm optimization. Proceedings of the ICNN’95-International Conference on Neural Networks; Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942-1948.
18. Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw.; 2015; 83, pp. 80-98. [DOI: https://dx.doi.org/10.1016/j.advengsoft.2015.01.010]
19. Coello, C.C.; Lechuga, M.S. MOPSO: A proposal for multiple objective particle swarm optimization. Proceedings of the 2002 Congress on Evolutionary Computation. CEC’02 (Cat. No. 02TH8600); Honolulu, HI, USA, 12–17 May 2002; Volume 2, pp. 1051-1056.
20. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw.; 2016; 95, pp. 51-67. [DOI: https://dx.doi.org/10.1016/j.advengsoft.2016.01.008]
21. Dhiman, G.; Singh, K.K.; Soni, M.; Nagar, A.; Dehghani, M.; Slowik, A.; Kaur, A.; Sharma, A.; Houssein, E.H.; Cengiz, K. MOSOA: A new multi-objective seagull optimization algorithm. Expert Syst. Appl.; 2021; 167, 114150. [DOI: https://dx.doi.org/10.1016/j.eswa.2020.114150]
22. Srinivas, N.; Deb, K. Muiltiobjective optimization using nondominated sorting in genetic algorithms. Evol. Comput.; 1994; 2, pp. 221-248. [DOI: https://dx.doi.org/10.1162/evco.1994.2.3.221]
23. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput.; 2002; 6, pp. 182-197. [DOI: https://dx.doi.org/10.1109/4235.996017]
24. Deb, K.; Jain, H. An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: Solving problems with box constraints. IEEE Trans. Evol. Comput.; 2013; 18, pp. 577-601. [DOI: https://dx.doi.org/10.1109/TEVC.2013.2281535]
25. Liang, J.J.; Yue, C.; Qu, B.Y. Multimodal multi-objective optimization: A preliminary study. Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC); Vancouver, BC, Canada, 24–29 July 2016; pp. 2454-2461.
26. Zhang, Q.; Li, H. MOEA/D: A multiobjective evolutionary algorithm based on decomposition. IEEE Trans. Evol. Comput.; 2007; 11, pp. 712-731. [DOI: https://dx.doi.org/10.1109/TEVC.2007.892759]
27. Cheng, R.; Jin, Y.; Olhofer, M.; Sendhoff, B. A reference vector guided evolutionary algorithm for many-objective optimization. IEEE Trans. Evol. Comput.; 2016; 20, pp. 773-791. [DOI: https://dx.doi.org/10.1109/TEVC.2016.2519378]
28. Horn, J.; Nafpliotis, N.; Goldberg, D.E. A niched Pareto genetic algorithm for multiobjective optimization. Proceedings of the first IEEE Conference on Evolutionary Computation. IEEE World Congress on Computational Intelligence; Orlando, FL, USA, 27–29 June 1994; pp. 82-87.
29. Zitzler, E.; Laumanns, M.; Thiele, L. SPEA2: Improving the strength Pareto evolutionary algorithm. TIK-Report; 2001; 103. [DOI: https://dx.doi.org/10.3929/ethz-a-004284029]
30. Schaffer, J.D. Multiple objective optimization with vector evaluated genetic algorithms. Proceedings of the First International Conference on Genetic Algorithms and Their Applications; Psychology Press: London, UK, 2014; pp. 93-100.
31. Xiao, Y.; Wu, Y. Robust visual tracking based on modified mayfly optimization algorithm. Image Vis. Comput.; 2023; 135, 104691. [DOI: https://dx.doi.org/10.1016/j.imavis.2023.104691]
32. Qian, L.; Khishe, M.; Huang, Y.; Mirjalili, S. SEB-ChOA: An improved chimp optimization algorithm using spiral exploitation behavior. Neural Comput. Appl.; 2024; 36, pp. 4763-4786. [DOI: https://dx.doi.org/10.1007/s00521-023-09236-y]
33. Zhang, Y.; Tian, Y.; Zhang, X. Improved SparseEA for sparse large-scale multi-objective optimization problems. Complex Intell. Syst.; 2023; 9, pp. 1127-1142. [DOI: https://dx.doi.org/10.1007/s40747-021-00553-0]
34. Li, W.; Zhang, T.; Wang, R.; Huang, S.; Liang, J. Multimodal multi-objective optimization: Comparative study of the state-of-the-art. Swarm Evol. Comput.; 2023; 77, 101253. [DOI: https://dx.doi.org/10.1016/j.swevo.2023.101253]
35. Liu, S.; Lin, Q.; Li, J.; Tan, K.C. A survey on learnable evolutionary algorithms for scalable multiobjective optimization. IEEE Trans. Evol. Comput.; 2023; 27, pp. 1941-1961. [DOI: https://dx.doi.org/10.1109/TEVC.2023.3250350]
36. Ngatchou, P.; Zarei, A.; El-Sharkawi, A. Pareto multi objective optimization. Proceedings of the 13th International Conference on Intelligent Systems Application to Power Systems; Arlington, VA, USA, 6–10 November 2005; pp. 84-91.
37. Deb, K.; Agrawal, S.; Pratap, A.; Meyarivan, T. A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II. Parallel Problem Solving from Nature PPSN VI: 6th International Conference, Paris, France, 18–20 September 2000; Springer: Berlin/Heidelberg, Germany, 2000; pp. 849-858.
38. Ishibuchi, H.; Imada, R.; Setoguchi, Y.; Nojima, Y. Reference point specification in inverted generational distance for triangular linear Pareto front. IEEE Trans. Evol. Comput.; 2018; 22, pp. 961-975. [DOI: https://dx.doi.org/10.1109/TEVC.2017.2776226]
39. Tariq, I.; AlSattar, H.A.; Zaidan, A.; Zaidan, B.; Abu Bakar, M.; Mohammed, R.; Albahri, O.S.; Alsalem, M.; Albahri, A.S. MOGSABAT: A metaheuristic hybrid algorithm for solving multi-objective optimisation problems. Neural Comput. Appl.; 2020; 32, pp. 3101-3115. [DOI: https://dx.doi.org/10.1007/s00521-018-3808-3]
40. Zhan, Z.H.; Shi, L.; Tan, K.C.; Zhang, J. A survey on evolutionary computation for complex continuous optimization. Artif. Intell. Rev.; 2022; 55, pp. 59-110. [DOI: https://dx.doi.org/10.1007/s10462-021-10042-y]
41. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput.; 1997; 1, pp. 67-82. [DOI: https://dx.doi.org/10.1109/4235.585893]
42. Van den Bergh, F.; Engelbrecht, A.P. A study of particle swarm optimization particle trajectories. Inf. Sci.; 2006; 176, pp. 937-971. [DOI: https://dx.doi.org/10.1016/j.ins.2005.02.003]
43. Tian, Y.; Cheng, R.; Zhang, X.; Jin, Y. PlatEMO: A MATLAB platform for evolutionary multi-objective optimization [educational forum]. IEEE Comput. Intell. Mag.; 2017; 12, pp. 73-87. [DOI: https://dx.doi.org/10.1109/MCI.2017.2742868]
44. Sierra, M.R.; Coello Coello, C.A. Improving PSO-based multi-objective optimization using crowding, mutation and ε-dominance. Evolutionary Multi-Criterion Optimization: Third International Conference, EMO 2005, Guanajuato, Mexico, 9–11 March 2005; Springer: Berlin/Heidelberg, Germany, 2005; pp. 505-519.
45. López-Ruiz, S.; Hernández-Castellanos, C.I.; Rodríguez-Vázquez, K. Multi-objective optimization of neural network with stochastic directed search. Expert Syst. Appl.; 2024; 237, 121535. [DOI: https://dx.doi.org/10.1016/j.eswa.2023.121535]
46. Fromer, J.C.; Coley, C.W. Computer-aided multi-objective optimization in small molecule discovery. Patterns; 2023; 4, [DOI: https://dx.doi.org/10.1016/j.patter.2023.100678]
47. Palm, H.; Arndt, L. Reinforcement Learning-Based Hybrid Multi-Objective Optimization Algorithm Design. Information; 2023; 14, 299. [DOI: https://dx.doi.org/10.3390/info14050299]
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
The besiege and conquer algorithm (BCA) has shown excellent performance on single-objective optimization problems, but no published work has yet extended it to multi-objective optimization. This paper therefore proposes a multi-objective besiege and conquer algorithm (MOBCA). A grid mechanism, an archiving mechanism, and a leader selection mechanism are integrated into the BCA to estimate the Pareto optimal set and approach the Pareto optimal front. The proposed algorithm is compared with MOPSO, MOEA/D, and NSGA-III on the IMOP and ZDT benchmark suites. The experimental results show that the proposed algorithm achieves competitive accuracy in approximating the Pareto optimal solutions.
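The abstract names the three mechanisms but does not detail them in this excerpt. As an illustrative assumption (a MOPSO-style scheme, not MOBCA's actual code; `divisions` and all names are hypothetical), grid-based leader selection typically partitions the archived objective vectors into hypercubes and picks leaders by roulette wheel, weighting sparsely occupied cells more heavily so the search is steered toward under-explored regions of the front:

```python
import random
from collections import defaultdict

def select_leader(archive, divisions=10, seed=None):
    """Pick a leader from an external archive of objective vectors.

    Grid-based roulette selection: divide each objective's range into
    `divisions` cells, select a cell with probability inversely
    proportional to its occupancy (sparse cells win more often), then
    return a random member of that cell."""
    rng = random.Random(seed)
    mins = [min(vals) for vals in zip(*archive)]
    maxs = [max(vals) for vals in zip(*archive)]
    cells = defaultdict(list)
    for sol in archive:
        key = tuple(
            min(divisions - 1, int(divisions * (v - lo) / (hi - lo))) if hi > lo else 0
            for v, lo, hi in zip(sol, mins, maxs)
        )
        cells[key].append(sol)
    keys = list(cells)
    weights = [1.0 / len(cells[k]) for k in keys]  # sparse cells weigh more
    chosen_cell = rng.choices(keys, weights=weights)[0]
    return rng.choice(cells[chosen_cell])
```

A full implementation would also reuse the same grid to evict members of a full archive (densest cells first); this sketch covers only the selection step.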
1 Center for Artificial Intelligence, Jilin University of Finance and Economics, Changchun 130117, China
2 College of Foreign Languages, Jilin Agricultural University, Changchun 130118, China