Abstract-This paper proposes an algorithm for solving multiobjective optimization problems using the attack technique of the grey wolf. It is a metaheuristic method called the Multiobjective Optimizer based on Grey Wolf Attack Technique (MOGWAT). It is inspired by the modified Hybrid Grey Wolf Optimizer and Genetic Algorithm (HmGWOGA), a single objective optimization algorithm specially designed for positive objective functions. The MOGWAT method combines the multiple objective functions of the initial problem into a single objective function and then penalizes the constraint functions to get an unconstrained single-objective optimization problem. The use of an effective single-objective optimizer then makes it possible to reach the optimal solutions. These solutions are also Pareto optimal solutions of the initial problem for suitable parameter values. Through three theorems, we have established the theoretical foundation and performance of our method. Furthermore, in order to highlight its numerical performance, we have tackled three groups of problems: 16 test problems from the Zitzler-Deb-Thiele benchmarks, 2 instances from the CEC 2009 benchmarks, and 2 real-world problems from the literature. Our numerical results have been compared to those obtained with the NSGA-II method using several computed performance parameters. The outcomes of the comparison demonstrate the effectiveness and efficiency of our new approach in terms of speed and convergence.
Index Terms-Multiobjective optimization; Metaheuristic methods; Pareto optimality; Grey Wolf Optimizer
I. Introduction
THE multiobjective optimization concept is extensively employed for the modeling and resolution of real-life problems. To solve a real-life problem using mathematical tools, two important steps must be taken: the mathematical formulation and the choice of a suitable method. It is therefore essential to master the resolution of multiobjective optimization problems. It should be noted that there is no universal method for these kinds of problems in the literature. In these mathematical programs, several conflicting objectives are considered simultaneously, so that no single solution optimizes all objectives at once. The resolution of these problems leads to a set of solutions called Pareto optimal solutions [32]. In practice, the existing methods are predominantly approximation methods [15], [16], [25], and they are evaluated in terms of computational complexity, convergence, and distribution. Nowadays, it is almost impossible to find a method in the literature that can solve all multiobjective optimization problems efficiently. This is the reason why many researchers are still working on this topic.
The majority of existing methods for solving multiobjective optimization problems attempt to convert the initial problem into a single objective optimization problem through an aggregation function. The literature contains many aggregation functions [29], but in this work we have chosen the ε-constraint approach. It is one of the best transformations that preserve the Pareto optimality of solutions. It consists of selecting only one objective function to optimize and converting the others into constraints. After that, the Lagrangian penalty function is used to obtain an unconstrained single objective function. A good optimizer is then required to reach the optimal solutions.
Many works in the literature propose methods for solving single objective optimization problems. We are especially interested in works that focus on the Grey Wolf Optimizer (GWO). In 2019, Sawadogo et al. [17] developed a modified hybrid Grey Wolf Optimizer and genetic algorithm (HmGWOGA) for the global optimization of positive functions; Fu et al. [20] proposed a dynamically dimensioned search Grey Wolf Optimizer based on positional interaction information; Wen et al. [18] developed an efficient and robust Grey Wolf Optimizer algorithm for large-scale numerical optimization; Muhammed et al. [19] produced results on Grey Wolf Optimizer-based tuning of a hybrid LQR-PID controller for foot trajectory control of a quadruped robot. In 2020, Shubham et al. [14] presented an enhanced leadership-inspired Grey Wolf Optimizer for global optimization problems. In 2021, Farshad et al. [10] proposed an enhanced Grey Wolf Optimizer with a velocity-aided global search mechanism; Wei et al. [9] studied path planning of UAVs based on an improved adaptive Grey Wolf Optimization algorithm; Amir et al. [13] provided improved algorithms of the Grey Wolf Optimizer to solve global optimization problems. In 2022, Xinyang et al. [6] focused on a dimensional learning strategy-based Grey Wolf Optimizer for solving global optimization problems; Safora et al. [5] on a condition-based Grey Wolf Optimizer algorithm for global optimization problems; Eslam et al. [7] on a hybrid Grey Wolf Optimization-based Gaussian process regression model for simulating the deterioration behavior of highway tunnel components; and Zeynab et al. [4] on a new enhanced hybrid Grey Wolf Optimizer combined with an elephant herding optimization algorithm for engineering optimization.
In all the previous works, GWO was used to solve single objective optimization problems. According to Sawadogo et al. [17], the HmGWOGA method was built for positive functions without constraints, and some excellent optimal solutions were found. The challenge was to extend it in order to solve optimization problems with several objectives.
This work proposes a metaheuristic method based on the grey wolf attack technique for catching prey to solve multiobjective optimization problems. We call it Multiobjective Optimizer based on Grey Wolf Attack Technique (MOGWAT). MOGWAT is an extension of the HmGWOGA algorithm, which was originally designed to solve single-objective optimization problems. It arises from a combination of the ε-constraint approach and the HmGWOGA algorithm. In order to demonstrate the optimality of the obtained solutions and the good complexity of our algorithm, we have proposed three theorems and some numerical results. We have computed the Pareto optimal solutions of twenty test problems taken from the literature [16], [31], [32], [34], [35]. This allowed us to determine the computational time of our method. Moreover, we have computed some performance parameters on the convergence and distribution of the obtained solutions. Based on these results, we have conducted a comparative study with the NSGA-II method. According to this comparison, our proposed method can be presented as the best choice for solving multiobjective optimization problems when decision makers need a fast method with efficient convergence.
This paper is organized as follows. After Section I, devoted to the introduction, Section II describes the materials and methods. Section III provides details on the MOGWAT method. Section IV presents the results and discussion of this work. The conclusion of the study is drawn in Section V.
II. Materials and methods
A. Multiobjective optimization concepts
Let us consider the multiobjective optimization problem in the following formulation:
$$(P)\quad \begin{cases} \min\; f(x) = (f_1(x), f_2(x), \dots, f_p(x)) \\ \text{subject to } g_j(x) \le 0, \quad j = 1, \dots, m \end{cases}$$

where $f = (f_1, f_2, \dots, f_p)$ is the vector whose components are the objective functions, subjected to the constraint function $g = (g_1, g_2, \dots, g_m)$. Let us set $X = \{x \in \mathbb{R}^n : g(x) \le 0\}$ and $Y = f(X)$, respectively the decision space and the objective space of problem (P).
Definition 1. An objective vector $f(x^*) \in Y$ is said to be a nondominated point if there is no other objective vector $f(x)$ such that $f_i(x) \le f_i(x^*)$ for all $i = 1, \dots, p$ and $f_k(x) < f_k(x^*)$ for at least one index $k$.
In this case, if $x^* \in X$, it is called a Pareto optimal solution of problem (P), and the set of nondominated points of problem (P) is called the Pareto front.
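For illustration, this dominance relation can be checked with a few lines of Python (a minimal sketch; the helper name and the minimization convention are ours):

```python
def dominates(fa, fb):
    """True if objective vector fa dominates fb (minimization): fa is no worse
    in every component and strictly better in at least one."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

# (1, 2) dominates (1, 3); (1, 3) and (2, 1) are mutually nondominated.
assert dominates((1.0, 2.0), (1.0, 3.0))
assert not dominates((1.0, 3.0), (2.0, 1.0)) and not dominates((2.0, 1.0), (1.0, 3.0))
```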
Definition 2. The Pareto front is the direct representation of the Pareto optimal solutions by the objective functions.
Throughout the rest of this paper, we denote by $X_E$ the set of Pareto optimal solutions of problem (P) and by $Y_E$ the set of nondominated points.
For the resolution of problem (P), there are many methods using an aggregation function to transform the problem into a single-objective optimization problem. Here, we have used the ε-constraint approach. It allows rewriting the initial problem (P) as follows:
$$(P_\varepsilon)\quad \begin{cases} \min\; f_k(x) \\ \text{subject to } f_i(x) \le \varepsilon_i, \quad i = 1, \dots, p,\; i \ne k \\ \phantom{\text{subject to }} g_j(x) \le 0, \quad j = 1, \dots, m \end{cases}$$

where $\varepsilon \in \mathbb{R}^{p-1}$. Let us set $\Omega = \{x \in \mathbb{R}^n : f_i(x) \le \varepsilon_i,\ i = 1, \dots, p,\ i \ne k,\ \text{and } g_j(x) \le 0,\ j = 1, \dots, m\}$.
Let us consider the problem (P) with p = 1. That implies that we must minimize a single-objective function. In the literature, most methods for solving this kind of problem proceed by transforming it into an unconstrained optimization problem using a penalty function [1], [11], [12], [22]. One of the most commonly used is the Lagrangian penalty function, which combines the objective function and all constraint functions as follows:
$$L(x, \eta) = f(x) + \sum_{j=1}^{m} \eta_j \big( g_j(x) + |g_j(x)| \big)$$

where $\eta_j > 0$, $j = 1, 2, \dots, m$, are the Lagrangian penalty parameters.
Various methods have been proposed for obtaining the optimal solutions of a single-objective function such as $L$. In the nonlinear case, where the exact solution is hard to find, one has to use a method that gives a good approximation.
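As a minimal illustration of this penalization in Python (the function names are ours; the absolute-value form matches the one above, whose penalty term vanishes exactly on the feasible set):

```python
def penalized(f, constraints, etas):
    """Unconstrained objective L(x) = f(x) + sum_j eta_j * (g_j(x) + |g_j(x)|):
    each term vanishes where g_j(x) <= 0 and equals 2*eta_j*g_j(x) otherwise."""
    def L(x):
        return f(x) + sum(eta * (g(x) + abs(g(x)))
                          for eta, g in zip(etas, constraints))
    return L

# Example: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0 (i.e., x >= 1).
L = penalized(lambda x: x ** 2, [lambda x: 1 - x], etas=[100.0])
print(L(0.5), L(1.5))  # the infeasible point x = 0.5 is heavily penalized
```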
B. HmGWOGA method
In recent works, a modified hybrid Grey Wolf Optimizer and genetic algorithm (HmGWOGA) [17] was proposed for finding the optimal solutions of positive functions. This algorithm is a combination of the Grey Wolf Optimizer (GWO) [3], [18], [19], [23] and the Genetic Algorithm (GA) [2], [8], [20], [31], [32]. Its principle is to use genetic operators to obtain a high-performance population before applying the hunting steps of the grey wolves. The main steps of the grey wolves' hunting technique are: tracking and pursuing the prey, approaching it, encircling and harassing it until it stops moving, and finally attacking.
Note that the family of grey wolves is organized into four levels: the first level is occupied by the appointed leader (α), who is assisted by the wolf (β) at the second level. On the third level is the wolf (δ), and on the fourth level we have the rest of the wolves, called (ω). In hunting, this hierarchy is respected, which makes the wolf (α) the best hunting solution, followed by the wolf (β) and so on. The wolves initiate the pursuit, encircle the prey, and torment it until they immobilize it. At this moment, they can attack the prey. Mathematically, encirclement is modeled as follows [17], [19], [23]:
$$\vec{D} = |\vec{C} \cdot \vec{X}_p(t) - \vec{X}(t)|, \qquad \vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D} \quad (1)$$

where $t$ denotes the current iteration, $\vec{A} = 2a\vec{r}_1 - a$, $\vec{C} = 2\vec{r}_2$, and $a$ is a coefficient that decreases over the iterations. It is defined by [17], [19], [23]:

$$a = 2 - \frac{2t}{MaxIter} \quad (2)$$

where $t$ is the current iteration and $MaxIter$ is the maximal number of iterations. $\vec{X}_p$ is the vector giving the position of the prey, $\vec{X}$ is the vector giving the position of a grey wolf, and $\vec{r}_1$ and $\vec{r}_2$ are random vectors in $[0, 1]$.
When $|\vec{A}| < 1$, the wolf (α) converges toward the prey to attack it, as presented in Fig. 1 (a), and when $|\vec{A}| > 1$, the wolf (α) moves away to look for prey, as shown in Fig. 1 (b) [13].
The positions of the wolves (α), (β) and (δ) are individually adjusted according to the prey, and those of the wolves (ω) follow the principle of hierarchy. The mathematical modeling of the positions of these three wolves is [13], [17], [18], [19]:

$$\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha, \quad \vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot \vec{D}_\beta, \quad \vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot \vec{D}_\delta \quad (3)$$

where $\vec{C}_1$, $\vec{C}_2$ and $\vec{C}_3$ are random vectors and $\vec{X}_\alpha$, $\vec{X}_\beta$, $\vec{X}_\delta$ are respectively the positions of (α), (β) and (δ). The new best position of the wolves, which is the optimal solution, is:

$$\vec{X}(t+1) = \frac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3} \quad (4)$$

where:

$$\vec{D}_\alpha = |\vec{C}_1 \cdot \vec{X}_\alpha - \vec{X}|, \quad \vec{D}_\beta = |\vec{C}_2 \cdot \vec{X}_\beta - \vec{X}|, \quad \vec{D}_\delta = |\vec{C}_3 \cdot \vec{X}_\delta - \vec{X}| \quad (5)$$
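To make Eqs. (1)-(5) concrete, here is a minimal NumPy sketch of one position update (the function and variable names are ours; this is only the grey wolf update step, not the full HmGWOGA loop with its genetic operators):

```python
import numpy as np

def gwo_update(X, X_alpha, X_beta, X_delta, a, rng):
    """One grey wolf position update following Eqs. (3)-(5): every wolf moves to
    the average of three points steered by the alpha, beta and delta wolves."""
    new_positions = []
    for x in X:
        guided = []
        for leader in (X_alpha, X_beta, X_delta):
            A = 2 * a * rng.random(x.shape) - a   # A = 2a*r1 - a, cf. Eq. (1)
            C = 2 * rng.random(x.shape)           # C = 2*r2
            D = np.abs(C * leader - x)            # encircling distance, Eq. (5)
            guided.append(leader - A * D)         # X_i = X_leader - A*D, Eq. (3)
        new_positions.append(np.mean(guided, axis=0))  # Eq. (4)
    return np.array(new_positions)

# Example: 5 wolves in 3 dimensions, with a = 2 - 2t/MaxIter at some iteration t.
rng = np.random.default_rng(0)
X = rng.random((5, 3))
X_new = gwo_update(X, X[0], X[1], X[2], a=1.2, rng=rng)
```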
In practice, the HmGWOGA method has given better results (it is faster and converges better) than the original GWO method on single objective optimization problems [17].
C. Performance metrics
In the multiobjective optimization field, there are metrics that allow us to evaluate the performance of a given method. Some of these metrics evaluate the performance of a method on its own, and others are used to compare two methods directly. For the first group, we have used the generational distance metric (γ) [16], [29], [31], [32], the spread metric (Δ) [16], [29], [31], [32] and the spacing metric (S) [34], [35]. For the second group, we have used the contribution metric (Cont) and the C-metric for a direct comparison of MOGWAT with NSGA-II, a reference method, on some multiobjective optimization test problems.
Generational distance (γ)
γ is a metric that evaluates the convergence of a method toward the Pareto optimal solutions. It consists in evaluating the distance between the obtained solutions and the analytic solutions. It is denoted by γ and is computed as:
$$\gamma = \frac{\left( \sum_{i=1}^{K} d_i^{\,l} \right)^{1/l}}{K} \quad (6)$$

where $K$ is the number of obtained solutions, $l$ is an integer (usually $l = 2$), and $d_i$, $i = 1, \dots, K$, is the Euclidean distance between the $i$-th obtained solution and the nearest analytic solution. The value of γ is always between zero and one; when it is close to zero, the convergence of the used method is good.
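A direct Python transcription of Eq. (6), with the common choice $l = 2$ (function and argument names are ours):

```python
import numpy as np

def generational_distance(obtained, analytic, l=2):
    """Gamma of Eq. (6): the l-norm of the distances from each obtained point to
    its nearest analytic Pareto point, divided by the number of points K."""
    obtained = np.asarray(obtained)
    analytic = np.asarray(analytic)
    d = np.array([np.min(np.linalg.norm(analytic - p, axis=1)) for p in obtained])
    return np.sum(d ** l) ** (1.0 / l) / len(obtained)
```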
Spread (Δ)
Δ measures the degree of uniformity of the distribution achieved by the obtained solutions. A method has a good uniform distribution of solutions if the value of this metric is close to zero. It is denoted by Δ and is defined by:
$$\Delta = \frac{d_f + d_l + \sum_{i=1}^{N-1} |d_i - \bar{d}|}{d_f + d_l + (N-1)\,\bar{d}} \quad (7)$$

where $d_i$ are the Euclidean distances between neighboring solutions, $\bar{d}$ is their average value, and $d_f$ and $d_l$ are the distances between the extreme solutions of the analytic Pareto front and those of the obtained Pareto front.
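A sketch of Eq. (7) for a bi-objective front, assuming the front is sorted by the first objective and the analytic extreme points are supplied (names are ours):

```python
import numpy as np

def spread(front, extreme_low, extreme_high):
    """Delta of Eq. (7): d_f and d_l are the gaps between the analytic extremes
    and the obtained extremes; the sum measures non-uniformity of the gaps d_i."""
    front = np.asarray(sorted(front, key=lambda p: p[0]))
    d = np.linalg.norm(np.diff(front, axis=0), axis=1)  # neighboring distances
    d_f = np.linalg.norm(front[0] - np.asarray(extreme_low))
    d_l = np.linalg.norm(front[-1] - np.asarray(extreme_high))
    d_bar = d.mean()
    return (d_f + d_l + np.sum(np.abs(d - d_bar))) / (d_f + d_l + len(d) * d_bar)
```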
Spacing (S)
The spacing metric S is also used to measure the uniformity of the distribution of the Pareto optimal solutions. For a bi-objective problem, it is defined as follows:
$$S = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} (\bar{d} - d_i)^2} \quad (8)$$

where $n$ is the number of obtained solutions,

$$d_i = \min_{j \ne i} \left( |f_1^i - f_1^j| + |f_2^i - f_2^j| \right)$$

and $\bar{d} = \frac{1}{n}\sum_{i=1}^{n} d_i$. When the value of the spacing $S$ is close to $0$, the distribution of the obtained solutions on the Pareto front is good. However, if the Pareto front is discontinuous, the value of $S$ will be very large.
Contribution (Cont)
The contribution metric was proposed by Zitzler and allows a quick evaluation of the improvement brought by one method over another. We can use it to compare two sets of solutions obtained using two different methods. It is given by:
$$Cont(A/B) = \frac{\dfrac{|P_C|}{2} + |W_A| + |N_A|}{|P_C| + |W_A| + |N_A| + |W_B| + |N_B|} \quad (9)$$

where $A$ and $B$ denote the methods; $P_C$ denotes the set of identical solutions given by the two methods; $W_A$ and $N_A$ denote respectively the set of solutions of $A$ that dominate at least one solution of $B$ and the set of solutions of $A$ that are not comparable to those of $B$; $W_B$ and $N_B$ are defined in the same way [29], [30], [33].
C-metric
The C-metric is also a proposition of Zitzler; it calculates the ratio of solutions of one method that are dominated by those of another. It is defined by:
$$C(A, B) = \frac{|\{b \in B : \exists\, a \in A,\ a \preceq b\}|}{|B|} \quad (10)$$

where $a \preceq b$ means that solution $a$ is preferred to solution $b$.
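A compact Python sketch of both pairwise metrics, under our reading of Eqs. (9)-(10) above (the helper names and the simplified set handling, which excludes identical points from the dominance counts, are ours):

```python
def dominates(fa, fb):
    """True if fa dominates fb (minimization)."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def c_metric(A, B):
    """C(A, B) of Eq. (10): fraction of solutions of B dominated by A."""
    return sum(any(dominates(a, b) for a in A) for b in B) / len(B)

def contribution(A, B):
    """Cont(A/B) of Eq. (9): identical points count for 1/2, dominating and
    incomparable points count fully; values above 0.5 favor method A."""
    Bset = {tuple(b) for b in B}
    Aset = {tuple(a) for a in A}
    PC = [a for a in A if tuple(a) in Bset]              # identical solutions
    rest_A = [a for a in A if tuple(a) not in Bset]
    rest_B = [b for b in B if tuple(b) not in Aset]
    WA = [a for a in rest_A if any(dominates(a, b) for b in B)]
    NA = [a for a in rest_A if not any(dominates(a, b) or dominates(b, a) for b in B)]
    WB = [b for b in rest_B if any(dominates(b, a) for a in A)]
    NB = [b for b in rest_B if not any(dominates(b, a) or dominates(a, b) for a in A)]
    total = len(PC) + len(WA) + len(NA) + len(WB) + len(NB)
    return (len(PC) / 2 + len(WA) + len(NA)) / total
```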
D. Test problems
To evaluate the performance of our method, we have chosen some test problems from the literature [16], [32], [34], [35].
1) Test problems of Zitzler-Deb-Thiele:
We have dealt with sixteen test problems divided into two groups. Table I gives the test problems for which the analytic front is known and Table II gives the test problems for which it is not.
2) Test problems from CEC 2009:
We have selected two test problems from the CEC 2009 benchmark suite to evaluate our method against NSGA-II.
3) Test problems from Engineering area:
Two problems have been dealt with: the four-bar truss design problem and the cantilever beam design problem.
Example 1. The four-bar truss design problem is a well-known problem in the structural optimization field [34], [35], in which the structural volume ($f_1$) and the displacement ($f_2$) of a 4-bar truss should be minimized. As can be seen in the following equations, there are four design variables ($x_1$-$x_4$) related to the cross-sectional areas of members 1, 2, 3, and 4.
$$(11)\quad \begin{cases} \min\; f_1(x) = L \left( 2x_1 + \sqrt{2}\,x_2 + \sqrt{x_3} + x_4 \right) \\[4pt] \min\; f_2(x) = \dfrac{FL}{E} \left( \dfrac{2}{x_1} + \dfrac{2\sqrt{2}}{x_2} - \dfrac{2\sqrt{2}}{x_3} + \dfrac{2}{x_4} \right) \\[4pt] \text{subject to } \dfrac{F}{\sigma} \le x_1, x_4 \le 3\dfrac{F}{\sigma}, \quad \sqrt{2}\,\dfrac{F}{\sigma} \le x_2, x_3 \le 3\dfrac{F}{\sigma} \end{cases}$$

with $F = 10$, $E = 2 \times 10^5$, $L = 200$, and $\sigma = 10$ [34], [35].
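For concreteness, a small Python evaluation of the two objectives, assuming the standard formulation of this benchmark in [34], [35] with the constants above (the function name is ours):

```python
import numpy as np

F, E, L, SIGMA = 10.0, 2e5, 200.0, 10.0  # constants of the standard formulation

def four_bar_truss(x):
    """Objectives of Eq. (11): structural volume f1 and joint displacement f2
    for design variables x = (x1, x2, x3, x4)."""
    x1, x2, x3, x4 = x
    f1 = L * (2 * x1 + np.sqrt(2) * x2 + np.sqrt(x3) + x4)
    f2 = (F * L / E) * (2 / x1 + 2 * np.sqrt(2) / x2 - 2 * np.sqrt(2) / x3 + 2 / x4)
    return f1, f2

print(four_bar_truss((2.0, 2.0, 2.0, 2.0)))  # a feasible mid-range design
```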
Example 2. The cantilever beam design problem is another well-known problem in the field of concrete engineering [34], [35], in which the weight ($f_1$) and the end deflection ($f_2$) of a cantilever beam should be minimized. There are two design variables: the diameter ($x_1$) and the length ($x_2$).
$$(12)\quad \begin{cases} \min\; f_1(x) = 0.25\,\rho\,\pi\,x_1^2\,x_2 \\[4pt] \min\; f_2(x) = \delta = \dfrac{64\,P\,x_2^3}{3\,E\,\pi\,x_1^4} \\[4pt] \text{subject to } \sigma_{\max} = \dfrac{32\,P\,x_2}{\pi\,x_1^3} \le S_y \quad \text{and} \quad \delta \le \delta_{\max} \end{cases}$$

with $P = 1$, $E = 207 \times 10^6$, $S_y = 300 \times 10^3$, $\delta_{\max} = 0.005$, and $\rho = 7800$.
III. MOGWAT method
A. Description
The principle of the MOGWAT method is to transform any multiobjective optimization problem into an unconstrained single objective optimization problem before solving it completely. The main steps are described below.
Step 1: aggregation. It consists in using the ε-constraint approach to transform the multiple objective functions into a single objective function. Indeed, we choose one of the objective functions to minimize, while the others are transformed into constraint functions. This allows us to rewrite the initial problem (P) as follows:
$$(P_\varepsilon)\quad \begin{cases} \min\; f_k(x) \\ \text{subject to } f_i(x) \le \varepsilon_i, \quad i = 1, \dots, p,\; i \ne k \\ \phantom{\text{subject to }} g_j(x) \le 0, \quad j = 1, \dots, m \end{cases}$$

where $\varepsilon \in \mathbb{R}^{p-1}$. Let us set $\Omega = \{x \in \mathbb{R}^n : f_i(x) \le \varepsilon_i,\ i = 1, \dots, p,\ i \ne k,\ \text{and } g_j(x) \le 0,\ j = 1, \dots, m\}$. The component $\varepsilon_k$ is irrelevant for $(P_\varepsilon)$, but the convention to include it will be convenient in the following theorems.
Step 2: penalization. It aims to transform the problem $(P_\varepsilon)$ into an unconstrained problem by using a Lagrangian penalty function [16], [25], [27], which gives the new formulation of $(P_\varepsilon)$ as follows:
$$(P_\eta)\quad \min_{x \in \mathbb{R}^n} L(f(x), \varepsilon, \eta)$$

where

$$L(f(x), \varepsilon, \eta) = f_k(x) + \sum_{\substack{i=1 \\ i \ne k}}^{p} \eta_i \big( f_i(x) - \varepsilon_i + |f_i(x) - \varepsilon_i| \big) + \sum_{j=1}^{m} \eta_{p+j} \big( g_j(x) + |g_j(x)| \big)$$

and the $\eta_\ell > 0$ are the penalty parameters, one per constraint, i.e., $m + p - 1$ in total (the component $\eta_k$ is unused).
Step 3: resolution. It aims to propose solutions to the initial problem by solving the last formulation $(P_\eta)$. Using the HmGWOGA algorithm, we can write:

$$x^* = \arg\min_{x \in \mathbb{R}^n} L(f(x), \varepsilon, \eta).$$
B. Algorithm
The algorithm of the MOGWAT method can then be summarized as follows.
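As an illustration, the three steps can be sketched in Python as follows. This is a minimal sketch under stated assumptions: `solve` stands for any single-objective global optimizer (HmGWOGA in our case), `eps_grid` is a user-supplied list of ε vectors with one entry per objective (the entry at index k being unused, per the convention above), and a single penalty weight `eta` is used for all m + p - 1 constraints; all names are ours.

```python
def mogwat(objectives, constraints, solve, eps_grid, k=0, eta=1e4):
    """Sketch of MOGWAT: one epsilon-constraint + penalty + single-objective run
    per epsilon vector; each run contributes one candidate Pareto solution."""
    pareto = []
    for eps in eps_grid:
        def L(x, eps=eps):
            # Step 1 (aggregation): keep f_k, treat the other objectives as
            # constraints f_i(x) <= eps_i.
            # Step 2 (penalization): add eta * (r + |r|) for every residual r,
            # a term that vanishes exactly when the constraint is satisfied.
            value = objectives[k](x)
            residuals = [objectives[i](x) - eps[i]
                         for i in range(len(objectives)) if i != k]
            residuals += [g(x) for g in constraints]
            return value + eta * sum(r + abs(r) for r in residuals)
        pareto.append(solve(L))  # Step 3 (resolution), e.g. with HmGWOGA
    return pareto
```

Sweeping `eps_grid` over the range of each constrained objective yields the set of candidate Pareto optimal solutions returned by the method.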
IV. Results and discussion
A. Theoretical performance results
The theoretical performance study of the MOGWAT method has been done on the complexity of its algorithm and the optimality of obtained solutions.
The following two theorems guarantee the optimality of the solutions obtained by using the MOGWAT method.
Theorem 1. Let $\varepsilon_i \in [\min_{x \in X} f_i(x), \max_{x \in X} f_i(x)]$, $i = 1, \dots, p$, $i \ne k$, be some fixed parameters. Let $X_\varepsilon$ be the global optimal solution set of $(P_\varepsilon)$. Then $X_\varepsilon \setminus X_E = \emptyset$.

Proof of Theorem 1: Suppose $x^* \in X_\varepsilon \setminus X_E$; then $x^* \in X_\varepsilon$ and $x^* \notin X_E$. As $x^* \notin X_E$, there exists at least one $x \in X$ such that $f_k(x) < f_k(x^*)$ and $f_i(x) \le f_i(x^*)$ for all $i \ne k$. Since $x^* \in X_\varepsilon$, we have $f_i(x^*) \le \varepsilon_i$ for $i \ne k$, hence $f_i(x) \le \varepsilon_i$, so $x$ is feasible for $(P_\varepsilon)$ and satisfies $f_k(x) < f_k(x^*)$. That contradicts the assumption that $x^* \in X_\varepsilon$. Hence the result of the theorem. ∎
The following theorem is a sufficient condition proving that the penalized formulation $(P_\eta)$ preserves the optimality of its solutions:
Theorem 2. Let $\varepsilon_i \in [\min_{x \in X} f_i(x), \max_{x \in X} f_i(x)]$, $i = 1, \dots, p$, $i \ne k$, be some fixed parameters and the penalty parameters $\eta_\ell$ fixed and sufficiently large. Let $x^*$ be the global optimal solution of $(P_\eta)$. Then $x^*$ is also a global optimal solution of problem $(P_\varepsilon)$.

Proof of Theorem 2: Let $x^*$ be a global optimal solution of problem $(P_\eta)$. Then, for all $x \in \mathbb{R}^n$, $L(f(x^*), \varepsilon, \eta) \le L(f(x), \varepsilon, \eta)$, that is,

$$f_k(x^*) + \sum_{\substack{i=1 \\ i \ne k}}^{p} \eta_i \big( f_i(x^*) - \varepsilon_i + |f_i(x^*) - \varepsilon_i| \big) + \sum_{j=1}^{m} \eta_{p+j} \big( g_j(x^*) + |g_j(x^*)| \big) \le f_k(x) + \sum_{\substack{i=1 \\ i \ne k}}^{p} \eta_i \big( f_i(x) - \varepsilon_i + |f_i(x) - \varepsilon_i| \big) + \sum_{j=1}^{m} \eta_{p+j} \big( g_j(x) + |g_j(x)| \big).$$

Let us suppose that $x$ and $x^*$ are taken in $\Omega$. Then $f_i(x) - \varepsilon_i + |f_i(x) - \varepsilon_i| = 0$ and $g_j(x) + |g_j(x)| = 0$, and the same holds at $x^*$. In this case, the inequality reduces to $f_k(x^*) \le f_k(x)$. Hence $x^* \in X_\varepsilon$. ∎
As the problem $(P_\eta)$ is an unconstrained single-objective optimization problem, any good global numerical optimization method can be used to reach its solutions.
The following Theorem 3 guarantees that the computational complexity of the MOGWAT method is polynomial.
Theorem 3. Let $M$, $n$ and $MaxIter$ be respectively the size of the population of solutions, the number of variables, and the number of iterations. Then the computational complexity of the MOGWAT method is $O(M \cdot n \cdot MaxIter)$.
Proof of Theorem 3: The computational complexity of the aggregation stage is $O(1)$ because it only consists of selecting one objective function. The complexity of the penalty operation is $O(m + p - 1)$ because there are $m + p - 1$ constraint functions. The initialization stage of the HmGWOGA method requires $O(M \cdot n)$ time, where $M$ is the size of the population of solutions and $n$ is the number of variables. Computing the control parameters of HmGWOGA and updating the positions of the grey wolves costs $O(M \cdot n)$ per iteration [32], and the computation of the fitness value of each grey wolf also costs $O(M \cdot n)$. Summing these complexities over the $MaxIter$ iterations gives $O(M \cdot n \cdot MaxIter)$ as the computational complexity of MOGWAT. ∎
As the computational complexity of one generation of the NSGA-II method is $O(p \cdot N^2)$, with $p$ the number of objective functions and $N$ the population size [31], we can conclude that MOGWAT is the better option in terms of computational complexity.
B. Numerical performance results
The numerical performance study of the MOGWAT method has been carried out on the computational time, the convergence of the obtained solutions, and the distribution of the obtained solutions. All of these parameters have been evaluated using some test problems taken from the literature. In this work, these test problems have been divided into three groups.
1) Results of Zitzler-Deb-Thiele test problems: these are the representation of the Pareto optimal solutions of each test problem (see Fig. 2 and Fig. 3), the computational time of the used methods (see Table IV, Table V, Table X and Table XI), the convergence parameters (see Table VI and Table VII) and the distribution parameters (see Table VIII and Table IX). As MOGWAT is a stochastic algorithm, for each problem we computed one hundred solutions in ten independent runs. This allows us to calculate the average and variance of the performance parameters over these ten runs. We compare our results to those of NSGA-II because all of these test problems have been solved with that method [31], [32].
In the following, we will sometimes set A = MOGWAT and B = NSGA-II.
Here are the results for the problems with an analytic Pareto front. For these, we have plotted in the same figure the analytic Pareto front and the one given by the MOGWAT method.
Here are the results for the problems without an analytic Pareto front.
2) Results of CEC 2009 test problems:
The Pareto optimal solutions obtained by applying the MOGWAT method are given in Fig. 4.
3) Results of Engineering problems:
For Example 1, we have obtained the Pareto optimal solutions with MOGWAT and NSGA-II, which are given in Fig. 5.
For Example 2, the MOGWAT and NSGA-II methods have been applied and the Pareto optimal solutions are given in Fig. 6.
C. Discussions
According to the test problems with an analytic Pareto front, we have the following results: Table IV and Table V show that the MOGWAT method works faster than NSGA-II on all of these eight test problems; Table VI and Table VII show that the MOGWAT method outperforms NSGA-II on all of these eight test problems in terms of convergence; and Table VIII and Table IX show that the MOGWAT method is better than NSGA-II on five test problems, namely PL3, PL5, PL6, PL7 and PL8, in terms of distribution.
According to the test problems without an analytic Pareto front, we have the following results: Table X and Table XI show that the MOGWAT method works faster than NSGA-II on all of these eight test problems; Table XII and Table XIII show that the MOGWAT method is better than NSGA-II on only five test problems, namely PL11, PL13, PL14, PL15 and PL16, in terms of contribution; and Table XIV and Table XV show that the MOGWAT method outperforms NSGA-II on all of these eight test problems in terms of the ratio of dominated solutions.
According to the CEC 2009 test problems, we have the following results: Table XVI shows that the MOGWAT method converges better than NSGA-II, and Table XVII shows that the MOGWAT method has a better distribution than NSGA-II.
According to the engineering problems, Table XVIII shows that the MOGWAT method has a better distribution than NSGA-II.
Finally, with this combination of the ε-constraint approach and the HmGWOGA algorithm, we have created a method that is effective and efficient for solving multiobjective optimization problems. It is better than NSGA-II in terms of computational time and convergence of the obtained solutions. However, in terms of distribution, it is better than NSGA-II on some problems but not all. Improving the distribution of the solutions obtained by the MOGWAT method will therefore be a topic for future research.
V. Conclusion
A metaheuristic method for solving nonlinear multiobjective optimization problems was proposed in this work. We named it MOGWAT. It is a combination of the ε-constraint approach and the HmGWOGA algorithm. On the one hand, we have established the theoretical foundation of our algorithm and its good computational complexity by proposing three theorems. On the other hand, we have substantiated its numerical performance by successfully solving 20 test problems. The results have been compared to those of NSGA-II with respect to computational time, convergence of solutions, and distribution of solutions. On the 16 test problems of Zitzler-Deb-Thiele, MOGWAT is faster on 100% of them, converges better on 100% as well, and has a better distribution on 62.5%. Therefore, MOGWAT is better than NSGA-II on 87.5% of these performance parameters. For the CEC 2009 test problems and the engineering problems, MOGWAT outperforms NSGA-II in the convergence and distribution of the Pareto optimal solutions. According to these theoretical and numerical results, MOGWAT is the best choice for solving multiobjective optimization problems.
Our future study will focus on enhancing the distribution of our approach.
Manuscript received May 2, 2023; revised January 4, 2024.
References
[1] Yiyuan Wang, Xianfeng Ding, Tingting Wei, Jiaxin Li and Chao Min, "Penalty Function Method for Solving Multiobjective Interval Bilevel linear Programming Problem," IAENG International Journal of Computer Science, vol. 50, no. 1, pp209-218, 2023.
[2] Xijun Zhang, Yunfang Zhong, Baoqi Zhang, and Shengyuan Nie, "Optimization of the NSGA-III Algorithm Using Adaptive Scheduling," Engineering Letters, vol. 31, no. 2, pp459-466, 2023.
[3] M. Premkumar, Pradeep Jangir, B. Santhosh Kumar, Mohammad A. Alqudah and Kottakkaran Sooppy Nisar, "Multi-Objective Grey Wolf Optimization Algorithm for Solving Real-World BLDC Motor Design Problem," Computers, Materials and Continua, vol. 70, no. 2, pp2436-2452, 2022.
[4] Zeynab Hoseini, Hesam Varaee, Mahdi Rafieizonooz, Jang-Ho Jay Kim, "A New Enhanced Hybrid Grey Wolf Optimizer (GWO) Combined with Elephant Herding Optimization (EHO) Algorithm for Engineering Optimization," Journal of Soft Computing in Civil Engineering, vol. 6, no. 4, pp1-42, 2022.
[5] Safora Akhavan-Nasab, Zahra Beheshti, "A Condition-based Grey Wolf Optimizer Algorithm for Global Optimization Problems," Journal of Soft Computing and Information Technology, vol. 11, no. 2, pp26-40, 2022.
[6] Xinyang Liu, Yifan Wang and Miaolei Zhou, "Dimensional Learning Strategy-Based Grey Wolf Optimizer for Solving the Global Optimization Problem," Computational Intelligence and Neuroscience, vol. 2022, pp2-31, 2022.
[7] Eslam Mohammed Abdelkader, Abobakr Al-Sakkaf, Nehal Elshaboury and Ghasan Alfalah, "Hybrid Grey Wolf Optimization-Based Gaussian Process Regression Model for Simulating Deterioration Behavior of Highway Tunnel Components," Processes, vol. 10, no. 1, pp1-38, 2022.
[8] Minghui Zhao, Yongpeng Zhao, Changxi Ma, and Xu Wu, "Robust Optimization of Mixed-Load School Bus Route Based on Multi-Objective Genetic Algorithm," Engineering Letters, vol. 30, no. 4, pp1543-1550, 2022.
[9] Wei Zhang, Sai Zhang, Fengyan Wu and Yagang Wang, "Path Planning of UAV Based on Improved Adaptive Grey Wolf Optimization Algorithm," IEEE Access, vol. 9, pp89400-89411, 2021.
[10] Farshad Rezaei, Hamid Reza Safavi, Mohamed Abd Elaziz, Shaker H. Ali El-Sappagh, Mohammed Azmi Al-Betar and Tamer Abuhmed, "An Enhanced Grey Wolf Optimizer with a Velocity-Aided Global Search Mechanism," Mathematics, vol. 10, no. 3, pp1-32, 2021.
[11] A. Som, K. Some and A. Compaore, "Exponential penalty function with MOMA-Plus for the multiobjective optimization problems," Applied Analysis and Optimization, vol. 5, no. 3, pp323-334, 2021.
[12] Bingzhuang Liu, "A New Family of Smoothing Exact Penalty Functions for the Constrained Optimization Problem," Engineering Letters, vol. 29, no. 3, pp984-989, 2021.
[13] Amir Seyyedabbasi, Farzad Kiani, "I-GWO and Ex-GWO: Improved Algorithms of the Grey Wolf Optimizer to Solve Global Optimization Problems," Engineering with Computers, vol. 37, no. 1, pp509-532, 2021.
[14] Shubham Gupta, Kusum Deep, "Enhanced leadership-inspired grey wolf optimizer for global optimization problems," Engineering with Computers, vol. 36, no. 1, pp1777-1800, 2020.
[15] Yan-e Hou, Lanxue Dang, Yunfeng Kong, Zheye Wang and Qingjie Zhao, "A Hybrid Metaheuristic Algorithm for the Heterogeneous School Bus Routing Problem and a Real Case Study," IAENG International Journal of Computer Science, vol. 47, no. 4, pp775-785, 2020.
[16] Alexandre Som, Kounhinir Some, Abdoulaye Compaore, Blaise Some, "Performances assessment of MOMA-Plus method on multiobjective optimization problems," European Journal of Pure and Applied Mathematics, vol. 13, no. 1, pp48-68, 2020.
[17] W. O. Sawadogo, P. O. F. Ouedraogo, K. Some, N. Alaa and B. Some, "Modified hybrid grey wolf optimizer and genetic algorithm (HmGWOGA) for global optimization of positive functions," Advances in Differential Equations and Control Processes, vol. 20, no. 2, pp187-206, 2019.
[18] Wen Long, Shaohong Cai, Jianjun Jiao, Mingzhu Tang, "An efficient and robust grey wolf optimizer algorithm for large-scale numerical optimization," Soft Computing, vol. 24, no. 2, pp997-1026, 2019.
[19] Muhammed Arif Sen, Mete Kalyoncu, "Grey Wolf Optimizer Based Tuning of a Hybrid LQR-PID Controller for Foot Trajectory Control of a Quadruped Robot," Gazi University Journal of Science, vol. 32, no. 2, pp674-684, 2019.
[20] Fu Yan, Jianzhong Xu, and Kumchol Yun, "Dynamically Dimensioned Search Grey Wolf Optimizer Based on Positional Interaction Information," Complexity, vol. 2019, Article ID 7189653, pp1-36, 2019.
[21] Hossam Faris, Ibrahim Aljarah, Mohammed Azmi Al-Betar, Seyedali Mirjalili, "Grey wolf optimizer: a review of recent variants and applications," Neural Computing and Applications, vol. 30, no. 1, pp413-435, 2018.
[22] Y. Zheng and Z. Meng, "A new augmented Lagrangian objective penalty function for constrained optimization problems," Open Journal of Optimization, vol. 6, no. 1, pp39-46, 2017.
[23] Long Wen, "Grey Wolf Optimizer based on Nonlinear Adjustment Control Parameter," Advances in Intelligent Systems Research, vol. 136, no. 1, pp643-648, 2016.
[24] Seyedali Mirjalili, Seyed Mohammad Mirjalili, Andrew Lewis, "Grey Wolf Optimizer," Advances in Engineering Software, vol. 69, no. 1, pp46-61, 2014.
[25] Kounhinir Some, Berthold Ulungu, Wenddabo Olivier Sawadogo, Blaise Some, "A theoretical foundation metaheuristic method to solve some multiobjective optimization problems," International Journal of Applied Mathematical Research, vol. 2, no. 4, pp464-475, 2013.
[26] Mohsen Ejday, "Metamodel based multi-objective optimization for the shaping process," Doctorate dissertation, Paris Tech, France, 2011.
[27] Kounhinir Some, Berthold Ulungu, Ibrahim Himidi Mohamed and Blaise Some, "A new method for solving nonlinear multiobjective optimization problems," JP Journal of Mathematical Sciences, vol. 2, no. 1&2, pp1-18, 2011.
[28] T. Cormen, C. Leiserson, R. Rivest and C. Stein, "Introduction to Algorithms," 3rd ed., Cambridge, Massachusetts: The MIT Press, 2009.
[29] Yann Collette, Patrick Siarry, "Multiobjective Optimization," Editions Eyrolles, Paris, 2002.
[30] H. Meunier, "Parallel Evolutionary Algorithms for Multi-Objective Optimization of Mobile Telecommunications Networks," Doctorate dissertation, Universite des Sciences et Technologies de Lille, France, 2002.
[31] K. Deb, A. Pratap, S. Agarwal and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp182-197, 2002.
[32] Kalyanmoy Deb, "Multiobjective Optimization Using Evolutionary Algorithms," Chichester, UK: Wiley, 2001.
[33] E. Zitzler, "Evolutionary algorithms for multiobjective optimization: methods and applications," PhD dissertation, Swiss Federal Institute of Technology, Zurich, 1999.
[34] Seyedali Mirjalili, Pradeep Jangir, Shahrzad Saremi, "Multi-objective ant lion optimizer: a multi-objective optimization algorithm for solving engineering problems," Applied Intelligence, vol. 46, no. 1, pp79-95, 2017.
[35] Ali Sadollah, Hadi Eskandar, Joong Hoon Kim, "Water cycle algorithm for solving constrained multi-objective optimization problems," Applied Soft Computing, vol. 27, no. 1, pp279-298, 2015.
1. PhD student, Laboratoire de Mathématiques, Informatique et Applications, Université Norbert ZONGO, Burkina Faso
2. PhD graduate, Laboratoire d'Analyse Numérique, Informatique et de Biomathématique, Université Joseph KI-ZERBO, Burkina Faso
3. Professor, Unité de Formation et de Recherche en Sciences et Technologies, Université Norbert ZONGO, BP 376 Koudougou, Burkina Faso