
Abstract

This paper presents a novel meta-heuristic algorithm inspired by the visual capabilities of the mantis shrimp (Gonodactylus smithii), which can detect linearly and circularly polarized light and use these signals to infer information about their source. Inspired by these unique visual characteristics, the Mantis Shrimp Optimization Algorithm (MShOA) mathematically models three strategies triggered by the detected signals: random navigation while foraging, strike dynamics during prey engagement, and decision-making for defending or sheltering in the burrow. These strategies balance exploitation and exploration during local and global search over the solution space. MShOA’s performance was tested with 20 testbench functions and compared against 14 other optimization algorithms. Additionally, it was tested on 10 real-world optimization problems taken from the IEEE CEC2020 competition. Moreover, MShOA was applied to solve three study cases related to the optimal power flow problem in an IEEE 30-bus system. Wilcoxon and Friedman statistical tests were performed to demonstrate that MShOA offers competitive, efficient solutions in both the benchmark tests and the real-world applications.


1. Introduction

Metaheuristics are methods that can be applied to optimization problems to obtain optimal or near-optimal solutions, including non-linear problems with non-linear constraints [1]; real-world engineering applications [2,3,4,5], such as optimization problems in power electronics [6]; or even problems that traditional methods cannot solve [7]. However, as described in [8,9], no single method is capable of solving every optimization problem, since some approaches perform better than others on particular problems. Metaheuristics rely on exploring the solution search space efficiently at both the local and global levels [10]. The local search strategy focuses on a promising area of the space or domain, whereas global search focuses on discovering new areas within the problem domain. A good balance between global and local searches can improve a metaheuristic’s performance.

Metaheuristics can be categorized as evolutionary-based, trajectory-based, and nature-inspired; see Figure 1. A short description of these metaheuristic types can be summarized as follows. The process of biological evolution inspires evolutionary algorithms [11,12,13,14,15]. Nature-inspired algorithms include swarm algorithms, which are metaheuristics based on the collective behavior of biological systems [16,17,18,19,20,21,22,23,24,25,26,27,28]. They also include physics-inspired algorithms that use physical principles for optimization [29,30,31,32,33,34], and algorithms inspired by human behavior [35,36,37,38]. Moreover, trajectory-based metaheuristics follow a trajectory through the solution space, aiming to improve the solution found at each step. Unlike population-based algorithms, which work with multiple solutions simultaneously, trajectory-based metaheuristics focus on a single solution that evolves and improves iteratively. A more extensive metaheuristic classification can be found in [39,40].

A schematic representation of a metaheuristic algorithm is shown in Figure 2. In the first stage, a set of vectors, also known as search agents, is randomly generated. The size of each vector equals the number of variables of the problem, and its entries encode a candidate solution.

In the second stage, vectors are updated by functions representing living organisms’ behavior or physical and chemical phenomena as bio-inspired models. These vector modifications present an additional opportunity to investigate novel regions within the solution search space.

In the third stage, vectors are evaluated by the objective function, where the calculated value is referred to as the fitness. In minimization, the ideal fitness vector is the minimum within the population.

In the fourth stage, during each iteration, the vector exhibiting the best fitness is thereafter compared with the fitness of the previously recombined vectors.

This process is repeated until the stop criterion is reached, i.e., either the maximum number of iterations or the maximum number of objective function evaluations is satisfied, and the best solution found is then returned.

The contributions of this work are the following:

A novel bio-inspired optimization algorithm named MShOA, which is based on the mantis shrimp’s visual ability to detect polarized light, is proposed. Once the polarization state is determined, these detection capabilities allow the mantis shrimp to decide whether to search for food, strike prey or rivals, defend its territory, or shelter in its burrow.

The performance of MShOA was evaluated with a set of 20 unimodal and multimodal functions reported in the literature. Furthermore, it was applied to 10 real-world constrained optimization problems from CEC2020 and 3 study cases of the optimal power flow problem in electrical engineering.

In the Wilcoxon and Friedman statistical tests, MShOA delivered results that were superior or competitive with respect to 14 state-of-the-art algorithms.

MShOA’s source code is available for the scientific community.

This paper is structured as follows: Section 2 shows the MShOA bio-inspiration, outlines the mathematical model, provides the pseudocode, and includes the general flowchart of the algorithm. Section 3 shows the performance of MShOA compared with 14 recent bio-inspired algorithms. Section 4 presents the results and discusses the algorithm. Section 5 describes the application of MShOA to real-world optimization problems. Finally, Section 6 summarizes the conclusions and future work.

2. Mantis Shrimp Optimization Algorithm (MShOA)

2.1. Biological Fundamentals

The purple-spotted mantis shrimp (Gonodactylus smithii) is a stomatopod crustacean that primarily inhabits shallow tropical marine waters worldwide. They are sedentary animals that spend most of the day hidden in burrows on the seabed. However, they make forays to feed, exhibiting a remarkable ability to return to their burrows [41]. During these outings, they behave in a cautious and observant manner, using their complex visual systems to detect dangers and opportunities, especially when searching for food [42]. They have the most complex eyes in the animal kingdom, with up to 12 types of photoreceptors that allow them to discriminate linearly and circularly polarized light [43,44,45]. Their eyes are also extremely mobile, capable of performing proactive torsional rotations of up to 90 degrees. These movements can be coordinated or independent during visual scanning, allowing them to individually adjust the orientation of each eye in response to specific stimuli, such as polarized light signals [46]. Mantis shrimps use these specific polarization signals, both linear and circular, in critical social contexts, such as mating and territorial defense [47,48]. This adaptation enables them to accurately orient themselves in their surroundings to detect objects, predators, prey, and communicate visually with other mantis shrimp, optimizing their visual perception and behavior in their environment [49]. Additionally, they are also characterized by their fierce competition for their burrows, which are highly valuable resources due to their multifunctional use. They use their acute vision to determine the size of the opponent and their fighting ability. The opponent’s fighting ability is decisive in deciding whether to fight for the burrow or avoid the confrontation [50]. Figure 3 summarizes most of the aforementioned mantis shrimps’ behaviors. Figure 3a illustrates how shrimp hide in their shelter to increase their chances of staying safe. The shrimp’s eyes gaze in multiple directions at once for further decision-making, which may include strike, defense or shelter, burrow, or cautiously forage, as seen in Figure 3b. Figure 3c shows a lateral view of a red mantis shrimp. Additionally, Figure 3d illustrates the impact of a mantis shrimp’s strike on a prey shell.

Overall, when observing its surroundings, the mantis shrimp can see independently with both eyes. In this way, each eye detects a predominant type of polarization (vertical, horizontal, or circular) within its field of view, which allows it to identify prey, a threat, or a potential mate. Subsequently, the shrimp compares the information captured by both eyes and selects the detected signal that is the most predominant, which then guides its subsequent behavior. Based on this process, this study considered that the mantis shrimp engages in foraging when vertical polarization dominates; attacks when horizontal polarization is more intense; and burrows, defends, or shelters when circular polarization is detected, as depicted in Figure 4.

2.2. Initialization

The algorithm’s initialization consists of two phases to obtain the initial values. As seen in Equation (1), a set of multidimensional solutions representing the initial population is randomly generated in the first phase:

(1) $X_{i,j} = lb_{1,j} + \mathrm{rand}_{i,j}\cdot\left(ub_{1,j} - lb_{1,j}\right), \quad \text{for } i = 1, 2, \ldots, N \text{ and } j = 1, 2, \ldots, dim$

where Xi,j is the initial population, N is defined as the number of search agents, and dim represents the dimension of the problem (number of variables). Figure 5a presents a graphical representation of the initial population X. Finally, lb and ub represent the lower and upper bounds of the search space, respectively. As described in Equation (2), a vector is randomly generated in the second phase to represent the detected polarization’s Polarization Type Indicator (PTI):

(2) $PTI = \operatorname{round}\left(1 + 2\,\mathrm{rand}\right)$

In both phases, the rand function is assumed to follow a uniform distribution in the range [0,1]. The round function restricts the PTI value to 1, 2, or 3, as seen in Figure 5b. These values represent the reference angle set at (π/2), (0 or π), or (π/4 or 3π/4), which are related to vertical linearly, horizontal linearly, and circularly polarized light, respectively. Each of these polarization states is based on the visual capabilities of the mantis shrimp. In addition, a PTI value of 1 activates the mantis shrimp’s foraging strategy, while a PTI value of 2 activates the attack strategy. The final strategy, burrow, defense, or shelter, is triggered by a PTI of 3. Figure 5c illustrates the correlation of the PTI value and the type of polarized light detected, as well as its relationship with the forage; attack; or burrow, defense, or shelter strategies.
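For readers who want to experiment with the initialization stage, the following is a minimal NumPy sketch of Equations (1) and (2); the function and variable names are illustrative, and the authors’ reference implementation (deposited on MathWorks) is written in MATLAB.

```python
import numpy as np

def initialize_population(n_agents, dim, lb, ub, rng):
    """Equation (1): X_ij = lb_j + rand_ij * (ub_j - lb_j)."""
    lb = np.broadcast_to(np.asarray(lb, dtype=float), (dim,))
    ub = np.broadcast_to(np.asarray(ub, dtype=float), (dim,))
    return lb + rng.random((n_agents, dim)) * (ub - lb)

def initialize_pti(n_agents, rng):
    """Equation (2): PTI = round(1 + 2 * rand), giving values in {1, 2, 3}."""
    return np.round(1.0 + 2.0 * rng.random(n_agents)).astype(int)

rng = np.random.default_rng(seed=42)
X = initialize_population(n_agents=30, dim=10, lb=-100.0, ub=100.0, rng=rng)
PTI = initialize_pti(n_agents=30, rng=rng)
```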

2.3. Polarization Type Identifier (PTI) Vector Update Process

The following steps outline how the PTI vector is updated. The whole process is summarized in Figure 6.

The calculation of the polarization angle was inspired by the visual system of the mantis shrimp, where each eye performs independent perception. The left eye employs vectors from both the initial population (X) and the updated population (X′) to compute the Left Polarization Angle (LPA), as described in Equation (3). In contrast, the Right Polarization Angle (RPA) is determined through Equation (4).

(3) $LPA_i = \arccos\left(X_i \cdot X'_i\right)$

(4) $RPA_i = \mathrm{rand}\cdot\pi$

Furthermore, the angular difference between LPA and RPA is computed for subsequent consideration in the PTI update process.

Before the PTI value is updated, the detected left- and right-eye polarization types should be calculated. The Left Polarization Type (LPT) vector and the Right Polarization Type (RPT) vector are determined by the criteria described in Equation (5):

(5) Left eye: $LPT_i = \begin{cases} 1, & \text{if } \frac{3\pi}{8} \le LPA_i \le \frac{5\pi}{8} \\ 2, & \text{if } 0 \le LPA_i \le \frac{\pi}{8} \text{ or } \frac{7\pi}{8} \le LPA_i \le \pi \\ 3, & \text{if } \frac{\pi}{8} < LPA_i < \frac{3\pi}{8} \text{ or } \frac{5\pi}{8} < LPA_i < \frac{7\pi}{8} \end{cases}$ $\qquad$ Right eye: $RPT_i = \begin{cases} 1, & \text{if } \frac{3\pi}{8} \le RPA_i \le \frac{5\pi}{8} \\ 2, & \text{if } 0 \le RPA_i \le \frac{\pi}{8} \text{ or } \frac{7\pi}{8} \le RPA_i \le \pi \\ 3, & \text{if } \frac{\pi}{8} < RPA_i < \frac{3\pi}{8} \text{ or } \frac{5\pi}{8} < RPA_i < \frac{7\pi}{8} \end{cases}$

The Left-Eye Angular Difference (LAD) between the Left Polarization Angle (LPA) and the reference angles of the polarized light is calculated using Equation (6). Similarly, the Right-Eye Angular Difference (RAD) is calculated with the Right Polarization Angle (RPA).

(6) $LAD_i = \begin{cases} LPA_i, & \text{if } 0 \le LPA_i \le \frac{\pi}{8} \\ \pi - LPA_i, & \text{if } \frac{7\pi}{8} \le LPA_i \le \pi \\ \frac{\pi}{2} - LPA_i, & \text{if } \frac{3\pi}{8} \le LPA_i \le \frac{5\pi}{8} \\ \frac{\pi}{4} - LPA_i, & \text{if } \frac{\pi}{8} < LPA_i < \frac{3\pi}{8} \\ \frac{3\pi}{4} - LPA_i, & \text{if } \frac{5\pi}{8} < LPA_i < \frac{7\pi}{8} \end{cases}$ $\qquad$ $RAD_i = \begin{cases} RPA_i, & \text{if } 0 \le RPA_i \le \frac{\pi}{8} \\ \pi - RPA_i, & \text{if } \frac{7\pi}{8} \le RPA_i \le \pi \\ \frac{\pi}{2} - RPA_i, & \text{if } \frac{3\pi}{8} \le RPA_i \le \frac{5\pi}{8} \\ \frac{\pi}{4} - RPA_i, & \text{if } \frac{\pi}{8} < RPA_i < \frac{3\pi}{8} \\ \frac{3\pi}{4} - RPA_i, & \text{if } \frac{5\pi}{8} < RPA_i < \frac{7\pi}{8} \end{cases}$

Finally, by Equation (7), the PTI value is calculated. The pseudocode for the previously discussed polarization type identifier vector update process (PTI) is described in Algorithm 1 and in Figure 7.

(7) $PTI_i = \begin{cases} LPT_i, & \text{if } LAD_i < RAD_i \\ RPT_i, & \text{if } LAD_i > RAD_i \end{cases}$

Algorithm 1 Polarization Type Identifier (PTI) Vector Update Process

  1: procedure PTI_Vector_Update

  2:     Step 1: compute polarization angles.

  3:     Left-Eye Polarization Angle (LPA) calculated by Equation (3).

  4:     Right-Eye Polarization Angle (RPA) calculated by Equation (4).

  5:     Step 2: determine polarization type.

  6:     Left-Eye Polarization Type (LPT) calculated by Equation (5).

  7:     Right-Eye Polarization Type (RPT) calculated by Equation (5).

  8:     Step 3: compute angular differences.

  9:     Left-Eye Angular Difference (LAD) calculated by Equation (6).

10:     Right-Eye Angular Difference (RAD) calculated by Equation (6).

11:     Step 4: update PTI vector.

12:     if LAD_i < RAD_i then

13:         PTI_i ← LPT_i

14:     else

15:         PTI_i ← RPT_i

16:     end if

17:     Output: updated PTI vector.

18: end procedure
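The PTI update can be prototyped directly from Equations (3)–(7). The sketch below is a Python/NumPy interpretation, not the authors’ MATLAB code: it assumes the dot product in Equation (3) is normalized and clipped so that arccos is well defined, and it takes the absolute value of the differences in Equation (6) so they behave as distances to the reference angles.

```python
import numpy as np

def polarization_type(angle):
    """Equation (5): map an angle in [0, pi] to a polarization type (1, 2, or 3)."""
    if 3 * np.pi / 8 <= angle <= 5 * np.pi / 8:
        return 1                         # vertical linearly polarized light
    if angle <= np.pi / 8 or angle >= 7 * np.pi / 8:
        return 2                         # horizontal linearly polarized light
    return 3                             # circularly polarized light

def angular_difference(angle):
    """Equation (6): distance from the angle to its nearest reference angle."""
    if angle <= np.pi / 8:
        return angle
    if angle >= 7 * np.pi / 8:
        return np.pi - angle
    if 3 * np.pi / 8 <= angle <= 5 * np.pi / 8:
        return abs(np.pi / 2 - angle)
    if angle < 3 * np.pi / 8:
        return abs(np.pi / 4 - angle)
    return abs(3 * np.pi / 4 - angle)

def update_pti(X, X_new, rng):
    """Algorithm 1: recompute the PTI value of every search agent."""
    pti = np.empty(X.shape[0], dtype=int)
    for i in range(X.shape[0]):
        # Left eye, Equation (3): angle between the previous and updated agent.
        denom = np.linalg.norm(X[i]) * np.linalg.norm(X_new[i]) + 1e-300
        lpa = np.arccos(np.clip(np.dot(X[i], X_new[i]) / denom, -1.0, 1.0))
        # Right eye, Equation (4): random angle in [0, pi].
        rpa = rng.random() * np.pi
        # Equation (7): keep the eye whose angle is closest to a reference angle.
        if angular_difference(lpa) < angular_difference(rpa):
            pti[i] = polarization_type(lpa)
        else:
            pti[i] = polarization_type(rpa)
    return pti
```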

2.4. Mathematical Model and Optimization Algorithm

Overall, the mantis shrimp’s behavior is characterized by foraging trips, in which it remains cautious and observant to detect prey or predators. In its burrow, it remains vigilant for any danger or opportunity for food. As previously described, it performs all these activities thanks to its eye’s independent movement skills, making it capable of perceiving linearly horizontal, linearly vertical, and circularly polarized light. This study used the polarized light detection signal capabilities of the mantis shrimp to determine the crustacean’s movement strategy, whether it is a forage; attack; or burrow, defense, or shelter survival tactic.

A mathematical model that simulates each eye’s rotational and independent movements is introduced to explore the mantis shrimp’s polarized light detection skills. Each eye of the mantis shrimp is considered a polarizing filter. The left eye is set to observe a known environment, while the right eye randomly explores an area, searching for new interactions within the environment. The intensity of the polarized light detected by each eye is meticulously compared, and then, the final decision is based on the signal with the highest intensity, which is directly related to the proximity of the polarization angle. Table 1 summarizes the polarized light detection capabilities of the mantis shrimp and their relationships with their behavioral strategies.

2.5. Strategy 1: Foraging

The characteristic movement of the mantis shrimp when searching for food can be described as Brownian motion. Therefore, we can analyze this behavior starting with the generalized Langevin equation. This equation describes the motion of a particle without external forces, known as a free Brownian particle [51]. The dynamics of this particle are described by Equation (8). In this case, by assuming that no external forces are acting on the particle, W(q) is set to 0, which simplifies Equation (8) to Equation (9). This represents the equilibrium between friction (ζ0) and the noise term R(t). A detailed explanation of this simplification process can be found in [51].

(8) $\mu\ddot{q} = -\frac{dW}{dq} - \int_0^t q(\tau)\,\zeta(t-\tau)\,d\tau + R(t)$

(9) $\mu\ddot{q} = -\zeta_0\,\dot{q} + R(t)$

Given that only $\ddot{q}$ and $\dot{q}$ appear in the equation of motion, Equation (9) can be rewritten in terms of the velocity $v=\dot{q}$ to obtain Equation (10), where the quantity $\zeta(t)$ in the Langevin equation is called the dynamic friction kernel, while $R(t)$ is known as a random force. In Equation (11), a diffusion constant $D$, which allows the random component of the model to be scaled, is then incorporated:

(10) $\mu\dot{v} = -\zeta_0 v + R(t)$

(11) $\mu\dot{v} = -\zeta_0 v + D\cdot R(t)$

Moreover, the current position is updated by adding the random movement defined by the Langevin equation. For this work, the friction coefficient $\zeta_0$ is set to 1; thus, the equation can be rewritten as

(12) $x_i(t+1) = x_{best} - v + D\cdot R(t); \qquad v = x_i(t) - x_{best}; \qquad R(t) = x_r(t) - x_i(t)$

where $x_i(t+1)$ represents the new position of the mantis shrimp in the $(t+1)$-th iteration; $x_{best}$ represents the best position found by the mantis shrimp; $v$ is defined as the difference between the current position $x_i(t)$ and the best position $x_{best}$; $R(t)$ represents the difference between $x_r(t)$ and the current vector $x_i(t)$, where $r \in [1, \text{population size}]$ and $r \neq i$; and $D$ is a random diffusion value drawn from $[-1, 1]$. The foraging strategy is summarized in Figure 8.
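A compact Python/NumPy sketch of Equation (12) is given below (the authors’ implementation is in MATLAB); the only assumption beyond the text is that the diffusion value D is drawn once per position update.

```python
import numpy as np

def foraging_step(X, i, x_best, rng):
    """Strategy 1, Equation (12): Brownian-motion-inspired move around x_best."""
    v = X[i] - x_best                          # v = x_i(t) - x_best
    candidates = [r for r in range(X.shape[0]) if r != i]
    r = rng.choice(candidates)                 # random agent index, r != i
    R = X[r] - X[i]                            # R(t) = x_r(t) - x_i(t)
    D = rng.uniform(-1.0, 1.0)                 # random diffusion value in [-1, 1]
    return x_best - v + D * R                  # x_i(t+1)
```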

2.6. Strategy 2: Attack

The mantis shrimp’s strike is renowned in biology for its capacity to fracture hard shells, comparable to the impact of a bullet [52,53,54,55]. Equation (13) shows how this hit can be represented by a circular motion parametric equation in a two-dimensional plane:

(13) $x(t) = r\cos\theta$

where the vector $r$ represents the mantis shrimp’s front appendages and $\theta$ is the angle of the strike motion.

Finally, the mantis shrimp’s attack can be expressed as follows:

(14) $x_i(t+1) = x_{best}\cos\theta$

where $x_i(t+1)$ represents the new position of the mantis shrimp in the $(t+1)$-th iteration, $x_{best}$ represents the best position found for the mantis shrimp, and $\theta$ is randomly generated with $\theta \in [\pi, 2\pi]$. The graphical representation of the attack strategy is shown in Figure 9.
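A one-line sketch of Equation (14) (again in Python, as an illustration rather than the authors’ MATLAB code) is:

```python
import numpy as np

def attack_step(x_best, rng):
    """Strategy 2, Equation (14): strike modeled as a circular motion around x_best."""
    theta = rng.uniform(np.pi, 2.0 * np.pi)    # theta randomly drawn from [pi, 2*pi]
    return x_best * np.cos(theta)
```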

2.7. Strategy 3: Burrow, Defense, or Shelter

The mantis shrimp bases its decision-making strategy regarding defending, sheltering, or burrowing on its exceptional visual ability to assess an opponent’s threat. Using their keen vision, they determine whether the opponent is significantly larger and likely stronger, choosing to flee, or if similar in size or smaller, they aggressively decide to defend or shelter in their territory [56,57,58,59,60]. These animals hide in their shelters to increase the chance of staying safe [61]. Equation (15) is used to describe the mantis shrimp’s defense or shelter strategy:

(15) $\text{Defense: } x_i(t+1) = x_{best} + k\,(x_{best}); \qquad \text{Shelter: } x_i(t+1) = x_{best} - k\,(x_{best}); \qquad k \in [0, 0.3]$

where xi(t+1) represents the new position of the mantis shrimp in the (t+1)-th iteration, xbest represents the best position found for the mantis shrimp, and k is a scaling factor randomly generated between 0 and 0.3. The graphic representation of the burrow, defense, or shelter strategy is shown in Figure 10.   
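A sketch of Equation (15) follows; the text does not state how a shrimp chooses between defending and sheltering, so the 50/50 random choice below is an assumption made only for illustration.

```python
import numpy as np

def burrow_defense_shelter_step(x_best, rng):
    """Strategy 3, Equation (15): small scaled move relative to x_best."""
    k = rng.uniform(0.0, 0.3)                  # scaling factor k in [0, 0.3]
    if rng.random() < 0.5:                     # assumed 50/50 split (not specified in the text)
        return x_best + k * x_best             # defense
    return x_best - k * x_best                 # shelter
```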

Sensitivity Analysis of k Parameter

The third strategy introduces a scaling parameter k whose upper bound is fixed at 0.3. To evaluate the algorithm’s sensitivity to this parameter, nine alternative values of k, ranging from 0.1 to 0.9 in increments of 0.1, were studied. The analysis included 10 unimodal and 10 multimodal benchmark functions; see Table A1 and Table A2. Each test was executed independently 30 times, with a population size of 30 and 200 iterations. The statistical results of this sensitivity analysis are presented in Table 2, Table 3, Table 4, Table 5 and Table 6. Figure 11 illustrates the behavior of MShOA when using different values of k, as tested on one representative unimodal function and one multimodal function.

Subsequently, the nonparametric Wilcoxon signed-rank test was applied to determine whether any alternative value of k led to statistically significant performance differences when compared with k=0.3. The results of this test, as summarized in Table 7, indicate that no statistically significant differences were observed at the 5% significance level.
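The same pairwise comparison can be reproduced with SciPy. The sketch below uses hypothetical per-function mean errors (not the values from Tables 2–6) purely to illustrate the paired test applied at the 5% significance level.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical mean errors per benchmark function for two settings of k.
errors_k03 = np.array([1.2e-04, 7.0e-06, 3.4e-03, 2.5e-24, 1.9e-01])
errors_k05 = np.array([1.9e-04, 9.2e-06, 2.1e-03, 1.7e-37, 2.3e-01])

# Paired, two-sided Wilcoxon signed-rank test; ties (zero differences) would be
# dropped, which is how the (+/=/-) counts in Table 7 arise.
statistic, p_value = wilcoxon(errors_k03, errors_k05)
print(f"p-value = {p_value:.3f}, significant at 5%: {p_value < 0.05}")
```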

2.8. Pseudocode for MShOA

The pseudocode and flowchart of MShOA are described in Algorithm 2 and Figure 12, respectively.

Algorithm 2 Optimization Algorithm of the Mantis Shrimp

  1: procedure MShOA

  2: Initialization of parameters. Specify the number of search agents, population size, and maximum number of iterations.

  3: Randomly generate the initial population.

  4: Randomly generate the Polarization Type Indicator (PTI).

  5: while iteration < max number of iterations do

  6:     if PTI_i = 1 then                   % vertical linearly polarized light

  7:         Strategy 1: foraging (Equation (12)).

  8:     else if PTI_i = 2 then              % horizontal linearly polarized light

  9:         Strategy 2: attack (Equation (14)).

10:     else if PTI_i = 3 then             % circularly polarized light

11:         Strategy 3: burrow, defense, or shelter (Equation (15)).

12:     end if

13:     Update the Polarization Type Indicator (PTI) vector (Algorithm 1).

14:     Update the population.

15:     Calculate the fitness value from the new population.

16:     Update the fitness and the best solution found.

17:     iteration = iteration + 1.

18: end while

19: Display the best solution found.

20: end procedure
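Putting the pieces together, a minimal main loop consistent with Algorithm 2 could look as follows. It reuses the sketch functions defined earlier in this section, clips agents to the box bounds (a repair step the pseudocode leaves implicit), and keeps a greedy record of the best solution; it is an illustrative reading of the algorithm rather than the authors’ MATLAB implementation.

```python
import numpy as np

def mshoa(objective, lb, ub, dim, n_agents=30, max_iter=200, seed=0):
    """Minimal MShOA loop following Algorithm 2, built on the earlier sketches."""
    rng = np.random.default_rng(seed)
    X = initialize_population(n_agents, dim, lb, ub, rng)        # Eq. (1)
    pti = initialize_pti(n_agents, rng)                          # Eq. (2)
    fitness = np.array([objective(x) for x in X])
    best_idx = int(np.argmin(fitness))
    x_best, f_best = X[best_idx].copy(), float(fitness[best_idx])

    for _ in range(max_iter):
        X_new = np.empty_like(X)
        for i in range(n_agents):
            if pti[i] == 1:                                      # vertical linear polarization
                X_new[i] = foraging_step(X, i, x_best, rng)      # Eq. (12)
            elif pti[i] == 2:                                    # horizontal linear polarization
                X_new[i] = attack_step(x_best, rng)              # Eq. (14)
            else:                                                # circular polarization
                X_new[i] = burrow_defense_shelter_step(x_best, rng)  # Eq. (15)
        X_new = np.clip(X_new, lb, ub)                           # keep agents inside the bounds
        pti = update_pti(X, X_new, rng)                          # Algorithm 1
        X = X_new
        fitness = np.array([objective(x) for x in X])
        if fitness.min() < f_best:
            best_idx = int(np.argmin(fitness))
            x_best, f_best = X[best_idx].copy(), float(fitness[best_idx])
    return x_best, f_best

# Example usage on the sphere function.
best_x, best_f = mshoa(lambda x: float(np.sum(x**2)), lb=-100.0, ub=100.0, dim=10)
```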

MShOA’s Time Complexity

The computational complexity of an optimization method is characterized by a function that relates the method’s runtime to the size of the problem’s input, and Big-O notation is the widely accepted way of expressing it. The time complexity of MShOA is described as follows:

(16) O(MShOA) = O(Initial population) + O(Generate Polarization Type Indicator) + O(Strategy 1) + O(Strategy 2) + O(Strategy 3) + O(Update Polarization Type Indicator by Algorithm 1) + O(Update population) + O(Update fitness)

In addition to Equation (16), the time complexity of MShOA depends on the number of iterations (MaxIter), the population size of mantis shrimps (nMS), the dimensionality of the problem (dim), and the cost of the objective function (f). Table 8 describes each part of Equation (16).

Therefore, the overall time complexity of MShOA can be computed as follows:

(17) O(MShOA) = O(nMS · dim) + O(dim · 1) + O(MaxIter · 1) + O(MaxIter · 1) + O(MaxIter · 1) + O(MaxIter · 1) + O(MaxIter · nMS) + O(MaxIter · nMS · dim · f)

In O notation, when we add several terms, the fastest-growing one dominates. Hence, the time complexity of MShOA can be expressed as follows:

(18) O(MShOA) = O(MaxIter · nMS · dim · f)

3. Experimental Setup

The efficiency and stability of MShOA were evaluated by solving 20 optimization functions from the literature; see Appendix A. MShOA was compared with the 14 bio-inspired algorithms described below:

Ant Lion Optimizer (ALO): The algorithm presents the predatory behavior of the ant lion in nature and mathematically models five stages: random movement, building traps, entrapment of ants in traps, catching prey, and rebuilding traps [21].

Arithmetic Optimization Algorithm (AOA): This algorithm takes advantage of the distribution behavior of fundamental arithmetic operations in mathematics, including multiplication (M), division (D), subtraction (S), and addition (A) [62].

Beluga Whale Optimization (BWO): This algorithm was inspired by the natural behaviors of beluga whales and mathematically models their behavior in pair swimming, preying, and whale fall [63].

Dandelion Optimizer (DO): This algorithm simulates the long-distance flight of dandelion seeds relying on the wind and is divided into three stages: the rising stage, the descending stage, and the landing stage [64].

Evolutionary Mating Algorithm (EMA): the evolutionary algorithm is based on a random mating concept from the Hardy–Weinberg equilibrium [65].

Grey Wolf Optimizer (GWO): This algorithm is inspired by grey wolves (Canis lupus) and mimics their leadership hierarchy and hunting mechanisms in nature [22].

Liver Cancer Algorithm (LCA): This algorithm mimics liver tumor’s growth and takeover processes and mathematically models their ability to replicate and spread to other organs [66].

Mexican Axolotl Optimization Algorithm (MAO): This algorithm was inspired by the way axolotls live in their aquatic environment and is modeled after their processes of regeneration, reproduction, and tissue restoration [67].

Marine Predators Algorithm (MPA): This algorithm is inspired by the interactions between predators and prey in marine ecosystems and models the widespread foraging strategies and optimal encounter rate policies [68].

Salp Swarm Algorithm (SSA): This algorithm is inspired by the behavior of salps in nature and primarily models their swarming behavior when navigating and foraging in oceans [17].

Synergistic Swarm Optimization Algorithm (SSOA): This algorithm combines swarm intelligence with synergistic cooperation to find optimal solutions. It mathematically models a cooperation mechanism, where particles exchange information and learn from one another, enhancing their search behaviors and improving the overall performance [69].

Tunicate Swarm Algorithm (TSA): this algorithm imitates the behavior of tunicates and models their use of jet propulsion and collective movements while navigating and searching for food [70].

Whale Optimization Algorithm (WOA): this algorithm imitates the social behavior of humpback whales in nature and mathematically models their bubble-net hunting strategy [18].

Catch Fish Optimization Algorithm (CFOA): this algorithm is inspired by traditional rural fishing practices and models the strategic process of fish capture through two main phases: an exploration phase combining individual intuition and group collaboration, and an exploitation phase based on coordinated collective action [71].

Each algorithm was executed independently 30 times, with a population size of 30 and 200 iterations. The initial configuration parameters of all the employed algorithms are detailed in Table 9. The Wilcoxon test was applied to compare the performances of the algorithms. The four best-ranked algorithms, as computed using the Friedman test, were selected for further evaluation on 10 optimization problems taken from the CEC2020 benchmark suite detailed in Table 24. Moreover, three engineering study cases based on the optimal power flow optimization problem were also studied.

All experiments were conducted on a standard desktop with the following specifications: Intel Core i9-13900K 5.8 GHz processor, 192 GB RAM, Linux Ubuntu 24.04 LTS operating system, and MATLAB R2024a.

4. Results and Discussion

The computational results of MShOA, GWO, BWO, DO, WOA, MPA, LCA, SSA, EMA, ALO, MAO, AOA, SSOA, TSA, and CFOA on 20 benchmark test functions are presented in Table 10, Table 11, Table 12, Table 13, Table 14, Table 15, Table 16 and Table 17, where the average, standard deviation, and best values are provided for comparison measurements.

Three stages of nonparametric Wilcoxon signed-rank tests, at a 5% significance level, and Friedman tests determined the algorithms’ performances. In stage 1, 10 unimodal functions were analyzed, whereas in stage 2, 10 multimodal functions were studied. The unimodal and multimodal functions evaluated the algorithms’ capabilities in exploitation and exploration in the solution space, respectively. In stage 3, the set of 20 previously used functions—unimodal and multimodal—was analyzed with the Wilcoxon and Friedman statistical tests; see Table A1 and Table A2. The Wilcoxon test assessed whether MShOA exhibited statistically superior performance; a p-value below 0.05 signified that MShOA outperformed the algorithm under comparison. The Friedman test evaluated algorithms by ranking them according to their average performance and benchmark scores.

In stage 1, according to the Wilcoxon test (see Table 18), the MShOA algorithm was better than all the others in local searches. The results of the Friedman test (see Table 19) show that MShOA ranked first among the analyzed algorithms. Figure 13 presents the convergence curves for each unimodal function.

In stage 2, the algorithms’ performances were evaluated with 10 multimodal functions; see Table A1. The Wilcoxon test results (see Table 20) indicate that MShOA outperformed SSA, EMA, ALO, MAO, and CFOA, while also demonstrating competitive results compared with GWO, BWO, DO, WOA, MPA, LCA, AOA, SSOA, and TSA. The Friedman test results (see Table 21) indicate that the BWO algorithm ranked first. In the global search, MShOA demonstrated a competitive performance by ranking in second place. Figure 14 presents the convergence curves for each multimodal function.

In stage 3, the set of 20 previously used functions—unimodal and multimodal—was analyzed with the Wilcoxon and Friedman statistical tests; see Table A1. The Wilcoxon test results, shown in Table 22, indicate that MShOA outperformed the following optimization algorithms: ALO, DO, EMA, GWO, LCA, MAO, MPA, SSA, TSA, and WOA. However, no significant differences were observed with BWO, AOA, and SSOA. The Friedman test analysis (see Table 23) shows that BWO ranked first, MShOA ranked second, and SSOA and AOA ranked third and fourth, respectively.

5. Real-World Applications

The performance of MShOA was evaluated by solving 10 optimization problems from CEC 2020 [72]. These problems are presented in Table 24 and described in Section 5.2, Section 5.3, Section 5.4, Section 5.5, Section 5.6, Section 5.7, Section 5.8, Section 5.9, Section 5.10 and Section 5.11. In addition, MShOA was tested on three different cases of the optimal power flow problem for the IEEE 30-bus configuration: fuel cost, active power losses, and reactive power losses. The results for each of these engineering problems were obtained with MShOA and the top three algorithms ranked according to the Friedman test: BWO, SSOA, and AOA; see Table 23. The results for each real-world optimization problem described in Table 24 are summarized in table form, including the decision variables and the feasible objective function value found, as can be seen in Table 25, Table 26, Table 27, Table 28, Table 29, Table 30, Table 31, Table 32, Table 33 and Table 34. The constraint-handling method used for each of the real-world optimization problems is described in Section 5.1.

5.1. Constraint Handling

The penalization method taken from [39], which is applied to engineering problems with constraints, is presented in Equation (19):

(19) $F(x) = \begin{cases} f(x), & \text{if } MCV(x) \le 0 \\ f_{max} + MCV(x), & \text{otherwise} \end{cases}$

where $f(x)$ is the fitness function value of a feasible solution (i.e., a solution that satisfies all the constraints). Meanwhile, $f_{max}$ represents the fitness function value of the worst solution in the population, and $MCV(x)$ is the Mean Constraint Violation [39], given in Equation (20):

(20) $MCV(x) = \frac{\sum_{i=1}^{p} G_i(x) + \sum_{j=1}^{m} H_j(x)}{p + m}$

In this case, $MCV(x)$ represents the average sum of the inequality constraint violations $G_i(x)$ and the equality constraint violations $H_j(x)$, defined in Equations (21) and (22), respectively. It is important to note that the inequality constraints $g_i(x)$ and the equality constraints $h_j(x)$ each contribute a single value, namely the penalty applied when the corresponding constraint is violated.

(21) $G_i(x) = \begin{cases} 0, & \text{if } g_i(x) \le 0 \\ g_i(x), & \text{otherwise} \end{cases}$

(22) $H_j(x) = \begin{cases} 0, & \text{if } |h_j(x)| - \delta \le 0 \\ |h_j(x)|, & \text{otherwise} \end{cases}$
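A straightforward Python sketch of this penalty scheme is shown below; the equality tolerance δ is an assumed value (the excerpt does not state it), and the constraints are passed as plain callables.

```python
import numpy as np

def mean_constraint_violation(x, ineq_constraints, eq_constraints, delta=1e-4):
    """MCV(x), Equations (20)-(22). `delta` is an assumed equality tolerance."""
    G = [max(0.0, g(x)) for g in ineq_constraints]                          # Eq. (21)
    H = [0.0 if abs(h(x)) <= delta else abs(h(x)) for h in eq_constraints]  # Eq. (22)
    total = len(G) + len(H)
    return (sum(G) + sum(H)) / total if total else 0.0

def penalized_fitness(x, f, ineq_constraints, eq_constraints, f_max):
    """F(x), Equation (19): feasible points keep f(x); infeasible ones exceed f_max."""
    mcv = mean_constraint_violation(x, ineq_constraints, eq_constraints)
    return f(x) if mcv <= 0.0 else f_max + mcv
```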

5.2. Process Synthesis Problem

The process synthesis problem includes two decision variables and two inequality constraints. The mathematical representation of the problem is shown in Equation (23). The best-known feasible objective value taken from Table 24 is 2.

(23) $\begin{aligned} \text{Minimize } & f(\bar{x}) = x_2 + 2x_1 \\ \text{Subject to: } & g_1(\bar{x}) = -x_1^2 - x_2 + 1.25 \le 0 \\ & g_2(\bar{x}) = x_1 + x_2 - 1.6 \le 0 \\ \text{with bounds: } & 0 \le x_1 \le 1.6, \quad x_2 \in \{0, 1\} \end{aligned}$

Table 25 compares MShOA, BWO, AOA, and SSOA, all of which provided effective and competitive solutions. Nonetheless, SSOA and AOA did not satisfy one or more problem constraints. MShOA’s difference from the best-known feasible objective function value was 7.2546608720 × 10−09. The convergence graph is shown in Figure 15.
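To make the constraint handling concrete, the sketch below encodes Equation (23) with the penalized_fitness helper sketched in Section 5.1; rounding the binary variable x2 and the placeholder value used for f_max are illustrative choices, not part of the original formulation.

```python
import numpy as np

# Process synthesis problem, Equation (23).
f = lambda x: x[1] + 2.0 * x[0]
ineq = [
    lambda x: -x[0]**2 - x[1] + 1.25,     # g1(x) <= 0
    lambda x: x[0] + x[1] - 1.6,          # g2(x) <= 0
]

def evaluate(x, f_max=100.0):
    """Repair the variables to their domains, then apply the penalty of Eq. (19)."""
    x = np.array([np.clip(x[0], 0.0, 1.6), round(np.clip(x[1], 0.0, 1.0))])
    return penalized_fitness(x, f, ineq, eq_constraints=[], f_max=f_max)

print(evaluate([0.5, 1.0]))   # the known optimum x = (0.5, 1) gives f = 2
```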

5.3. Process Synthesis and Design Problem

The process synthesis and design problem included three decision variables, one inequality, and one equality constraint. The mathematical representation of the problem is shown in Equation (24). The best-known feasible objective value taken from Table 24 is 2.5576545740.

(24) $\begin{aligned} \text{Minimize } & f(\bar{x}) = x_3 + x_2 + 2x_1 \\ \text{Subject to: } & h_1(\bar{x}) = -2\exp(-x_2) + x_1 = 0 \\ & g_1(\bar{x}) = x_2 - x_1 + x_3 \le 0 \\ \text{with bounds: } & 0.5 \le x_1, x_2 \le 1.4, \quad x_3 \in \{0, 1\} \end{aligned}$

Table 26 compares MShOA, BWO, AOA, and SSOA, all of which provided effective and competitive solutions. However, only the results of MShOA did not violate any problem constraints. MShOA’s difference from the best-known feasible objective function value was 1.8912987813 × 10−04. The convergence graph is shown in Figure 16.

5.4. Process Flow Sheeting Problem

The process flow sheeting problem includes three decision variables and three inequality constraints. The mathematical representation of the problem is shown in Equation (25). The best-known feasible objective value from Table 24 is 1.0765430833.

(25) $\begin{aligned} \text{Minimize } & f(\bar{x}) = -0.7x_3 + 0.8 + 5(0.5 - x_1)^2 \\ \text{Subject to: } & g_1(\bar{x}) = -\exp(x_1 - 0.2) - x_2 \le 0 \\ & g_2(\bar{x}) = x_2 + 1.1x_3 \le -1.0 \\ & g_3(\bar{x}) = x_1 - x_3 \le 0.2 \\ \text{with bounds: } & -2.22554 \le x_2 \le -1, \quad 0.2 \le x_1 \le 1, \quad x_3 \in \{0, 1\} \end{aligned}$

Table 27 compares MShOA, BWO, AOA, and SSOA, all of which provided effective and competitive solutions. Nonetheless, BWO and AOA did not satisfy one or more problem constraints. MShOA’s difference from the best-known feasible objective function value was 3.4569167 × 10−03. The convergence graph is shown in Figure 17.

5.5. Weight Minimization of a Speed Reducer

The weight minimization of a speed reducer includes seven decision variables and eleven inequality constraints. The mathematical representation of the problem is shown in Equation (26). The best-known feasible objective value taken from Table 24 is 2.9944244658×10+03.

(26) $\begin{aligned} \text{Minimize } f(\bar{x}) ={}& 0.7854x_2^2x_1\left(14.9334x_3 - 43.0934 + 3.3333x_3^2\right) + 0.7854\left(x_5x_7^2 + x_4x_6^2\right) \\ & - 1.508x_1\left(x_7^2 + x_6^2\right) + 7.477\left(x_7^3 + x_6^3\right) \\ \text{Subject to: } & g_1(\bar{x}) = -x_1x_2^2x_3 + 27 \le 0 \\ & g_2(\bar{x}) = -x_1x_2^2x_3^2 + 397.5 \le 0 \\ & g_3(\bar{x}) = -x_2x_6^4x_3x_4^{-3} + 1.93 \le 0 \\ & g_4(\bar{x}) = -x_2x_7^4x_3x_5^{-3} + 1.93 \le 0 \\ & g_5(\bar{x}) = 10x_6^{-3}\sqrt{16.91\times 10^6 + \left(745x_4x_2^{-1}x_3^{-1}\right)^2} - 1100 \le 0 \\ & g_6(\bar{x}) = 10x_7^{-3}\sqrt{157.5\times 10^6 + \left(745x_5x_2^{-1}x_3^{-1}\right)^2} - 850 \le 0 \\ & g_7(\bar{x}) = x_2x_3 - 40 \le 0 \\ & g_8(\bar{x}) = -x_1x_2^{-1} + 5 \le 0 \\ & g_9(\bar{x}) = x_1x_2^{-1} - 12 \le 0 \\ & g_{10}(\bar{x}) = 1.5x_6 - x_4 + 1.9 \le 0 \\ & g_{11}(\bar{x}) = 1.1x_7 - x_5 + 1.9 \le 0 \\ \text{with bounds: } & 0.7 \le x_2 \le 0.8, \; 17 \le x_3 \le 28, \; 2.6 \le x_1 \le 3.6, \; 5 \le x_7 \le 5.5, \\ & 7.3 \le x_5, x_4 \le 8.3, \; 2.9 \le x_6 \le 3.9. \end{aligned}$

Table 28 compares MShOA, BWO, AOA, and SSOA, all of which provided effective and competitive solutions. Nonetheless, SSOA did not satisfy one or more problem constraints. MShOA’s difference from the best-known feasible objective function value was 8.3485313744×10+1. The convergence graph is shown in Figure 18.

5.6. Tension/Compression Spring Design (Case 1)

The tension/compression spring design (case 1) problem includes three decision variables and four inequality constraints. The mathematical representation of the problem is shown in Equation (27). The best-known feasible objective value taken from Table 24 is 1.2665232788 × 10−02.

(27) $\begin{aligned} \text{Minimize } & f(\bar{x}) = x_1^2x_2(2 + x_3) \\ \text{Subject to: } & g_1(\bar{x}) = 1 - \frac{x_2^3x_3}{71785x_1^4} \le 0, \\ & g_2(\bar{x}) = \frac{4x_2^2 - x_1x_2}{12566\left(x_2x_1^3 - x_1^4\right)} + \frac{1}{5108x_1^2} - 1 \le 0, \\ & g_3(\bar{x}) = 1 - \frac{140.45x_1}{x_2^2x_3} \le 0, \\ & g_4(\bar{x}) = \frac{x_1 + x_2}{1.5} - 1 \le 0, \\ \text{with bounds: } & 0.05 \le x_1 \le 2.00, \quad 0.25 \le x_2 \le 1.30, \quad 2.00 \le x_3 \le 15.00. \end{aligned}$

Table 29 compares MShOA, BWO, AOA, and SSOA, all of which provided effective and competitive solutions. Nonetheless, SSOA did not satisfy one or more problem constraints. MShOA’s difference from the best-known feasible objective function value was 1.3085251562 × 10−04. The convergence graph is shown in Figure 19.

5.7. Welded Beam Design

The welded beam design includes four decision variables and five inequality constraints. The mathematical representation of the problem is shown in Equation (28). The best-known feasible objective value from Table 24 is 1.6702177263.

(28)Minimizef(x¯)=0.04811x3x4(x2+14)+1.10471x12x2,Subjectto:g1(x¯)=x1x40g2(x¯)=δ(x¯)δmax0g3(x¯)=PPc(x¯)g4(x¯)=τmaxτ(x¯)g5(x¯)=σ(x¯)σmax0Where:τ=τ2+τ2+2ττx22R,τ=RMJ,τ=P2x2x1,M=Px22+L,R=x224+x1+x322,J=2x224+x1+x3222x1x2,σ(x¯)=6PLx4x32,δ(x¯)=6PL3Ex32x4,Pc(x¯)=4.013Ex3x436L21x32LE4G,Constants:L=14in,P=6000lb,E=30×106psi,σmax=30,000psi,τmax=13,600psi,G=12.10×106psi,δmax=0.25in,Withbounds:0.125x12,0.1x2,x310,0.1x42.

Table 30 compares MShOA, BWO, AOA, and SSOA, all of which provided effective and competitive solutions. Nonetheless, BWO and SSOA did not satisfy one or more problem constraints. MShOA’s difference from the best-known feasible objective function value was 6.7148143232 × 10−02. The convergence graph is shown in Figure 20.

5.8. Multiple Disk Clutch Brake Design Problem

The multiple disk clutch brake design problem includes five decision variables and eight inequality constraints. The mathematical representation of the problem is shown in Equation (29). The best-known feasible objective value taken from Table 24 is 2.3524245790 × 10−01.

(29)Minimizef(x¯)=πx22x12x3(x5+1)ρSubjectto:g1(x¯)=pmax+ptr0,g2(x¯)=przVsrVsr,maxpmax0,g3(x¯)=R+x1x20,g4(x¯)=Lmax+(x5+1)(x3+δ)0,g5(x¯)=sMsMh0,g6(x¯)=T0,g7(x¯)=Vsr,max+Vsr0,g8(x¯)=TTmax0.Where:Mh=23μx4x5x23x13x22x12N.mm,ω=πn30rad/s,A=π(x22x12)mm2,prz=x4AN/mm2,Vsr=πRsrn30mm/s,Rsr=23x23x13x22x12mm,T=IzωMh+Mf,Constants:R=20mm,Lmax=30mm,μ=0.6,Vsr,max=10m/s,δ=0.5mm,s=1.5,Tmax=15s,n=250rpm,Iz=55kg.m2.Ms=40Nm,Mf=3Nm,andpmax=1Withbounds:60x180,90x2110,1x330x41000,2x59

Table 31 compares MShOA, BWO, AOA, and SSOA, all of which provided effective and competitive solutions. Nonetheless, BWO, SSOA, and AOA did not satisfy one or more problem constraints. MShOA’s difference from the best-known feasible objective function value was 3.8493744275 × 10−04. The convergence graph is shown in Figure 21.

5.9. Planetary Gear Train Design Optimization Problem

The planetary gear train design optimization problem includes six decision variables and eleven inequality constraints. The mathematical representation of the problem is shown in Equation (30). The best-known feasible objective value from Table 24 is 5.2576870748 × 10−01.

(30)Minimizef(x¯)=max|iki0k|,k={1,2,,R}Subjectto:g1(x¯)=m3N6+2.5Dmax0,g2(x¯)=m1N1+N2+m1N2+2Dmax0,g3(x¯)=m3N4+N5+m3N5+2Dmax0,g4(x¯)=m1N1+N2m3N6N3m1m30,g5(x¯)=N1+N2sin(π/p)+N2+2+δ220,g6(x¯)=N6N3sin(π/p)+N3+2+δ330,g7(x¯)=N4+N5sin(π/p)+N5+2+δ550,g8(x¯)=N3+N5+2+δ352N6N32N4+N52+2N6N3N4+N5cos2πpβ0,g9(x¯)=N4N6+2N5+2δ56+40,g10(x¯)=2N3N6+N4+2δ34+40,h1(x¯)=N6N4p=integer,Where:i1=N6N4,i01=3.11,i2=N6N1N3+N2N4N1N3N6N4,i0R=3.11,IR=N2N6N1N3,i02=1.84,x¯=p,N6,N5,N4,N3,N2,N1,m2,m1δ22=δ33=δ55=δ35=δ56=0.5.β=cos1N4+N52+N6N32N3+N522N6N3N4+N5,Dmax=220,Withbounds:p=(3,4,5),m1=(1.75,2.0,2.25,2.5,2.75,3.0),m3=(1.75,2.0,2.25,2.5,2.75,3.0),17N196,14N254,14N35117N446,14N551,48N6124,andNi=integer.

Table 32 compares MShOA, BWO, AOA, and SSOA, all of which provided effective and competitive solutions. Nonetheless, SSOA and AOA did not satisfy one or more problem constraints. MShOA’s difference from the best-known feasible objective function value was 4.2312925200 × 10−03. The convergence graph is shown in Figure 22.

5.10. Tension/Compression Spring Design (Case 2)

The tension/compression spring design (case 2) problem includes three decision variables and eight inequality constraints. The mathematical representation of the problem is shown in Equation (31). The best-known feasible objective value taken from Table 24 is 2.6138840583.

(31)Minimizef(x¯)=π2x2x32x1+24Subjectto:g1(x¯)=8000Cfx2πx331890000,g2(x¯)=lf140,g3(x¯)=0.2x30,g4(x¯)=x230,g5(x¯)=3x2x30,g6(x¯)=σp60,g7(x¯)=σp+700K+1.05x1+2x3lf0,g8(x¯)=1.25700K0,Where:Cf=4x2x314x2x34+0.615x3x2,K=11.5×106x348x1x23,σp=300K,lf=1000K+1.05x1+2x3Withbounds:1x1(integer)70,x3(discreate){0.009,0.0095,0.0104,0.0118,0.0128,0.0132,0.014,0.015,0.0162,0.0173,0.018,0.020,0.023,0.025,0.028,0.032,0.035,0.041,0.047,0.054,0.063,0.072,0.080,0.092,0.0105,0.120,0.135,0.148,0.162,0.177,0.192,0.207,0.225,0.244,0.263,0.283,0.307,0.0331,0.362,0.394,0.4375,0.500}0.6x2(continuous)3.

Table 33 compares MShOA, BWO, AOA, and SSOA, all of which provided effective and competitive solutions. MShOA’s difference from the best-known feasible objective function value was 8.5610383350 × 10−02. The convergence graph is shown in Figure 23.

5.11. Himmelblau’s Function

The Himmelblau’s function problem includes five decision variables and six inequality constraints. The mathematical representation of the problem is shown in Equation (32). The best-known feasible objective value taken from Table 24 is −3.0665538672 × 10+04.

(32) $\begin{aligned} \text{Minimize } & f(\bar{x}) = 5.3578547x_3^2 + 0.8356891x_1x_5 + 37.293239x_1 - 40792.141 \\ \text{Subject to: } & g_1(\bar{x}) = -G_1 \le 0, \quad g_2(\bar{x}) = G_1 - 92 \le 0, \\ & g_3(\bar{x}) = 90 - G_2 \le 0, \quad g_4(\bar{x}) = G_2 - 110 \le 0, \\ & g_5(\bar{x}) = 20 - G_3 \le 0, \quad g_6(\bar{x}) = G_3 - 25 \le 0, \\ \text{where: } & G_1 = 85.334407 + 0.0056858x_2x_5 + 0.0006262x_1x_4 - 0.0022053x_3x_5, \\ & G_2 = 80.51249 + 0.0071317x_2x_5 + 0.0029955x_1x_2 + 0.0021813x_3^2, \\ & G_3 = 9.300961 + 0.0047026x_3x_5 + 0.00125447x_1x_3 + 0.0019085x_3x_4 \\ \text{with bounds: } & 78 \le x_1 \le 102, \; 33 \le x_2 \le 45, \; 27 \le x_3 \le 45, \; 27 \le x_4 \le 45, \; 27 \le x_5 \le 45. \end{aligned}$

Table 34 compares MShOA, BWO, AOA, and SSOA, all of which provided effective and competitive solutions. Nonetheless, BWO, SSOA, and AOA did not satisfy one or more problem constraints. MShOA’s difference from the best-known feasible objective function value was 1.9624794082 × 10+02. The convergence graph is shown in Figure 24.

5.12. Optimal Power Flow

The optimal power flow (OPF) problem was initially considered an annex to the conventional economic dispatch (ED) problem because both problems were solved simultaneously [73]. The OPF has changed over time into a non-linear optimization problem that tries to find the operating conditions of an electrical system while considering the power balance equations and the constraints of the transmission network [74,75]. The mathematical formulation of the OPF is described in Equation (33); this equation is called the canonical form:

(33) $\begin{aligned} \text{Minimize } & J(x, u), \\ \text{Subject to: } & g(x, u) = 0, \\ & h(x, u) \le 0 \end{aligned}$

where $J(x,u)$ is the objective function, $g(x,u)$ is the set of equality constraints, and $h(x,u)$ is the set of inequality constraints.

In Equation (34), the set of control variables u is defined, which includes PG (active power generation at PV buses), VG (voltage magnitudes at PV buses), QC (VAR compensators), and T (transformer tap settings). The parameter NG is the number of generators, NC is the number of VAR compensators, and NT is the number of regulating transformers.

(34) $u^T = \left[P_{G_2}, \ldots, P_{G_{NG}},\; V_{G_1}, \ldots, V_{G_{NG}},\; Q_{C_1}, \ldots, Q_{C_{NC}},\; T_1, \ldots, T_{NT}\right]$

In Equation (35), the set of state variables $x^T$ comprises the following:

PG1—active output power (generation) of the reference node.

VL—voltage at the load node.

QG—reactive power at the output (generation) of all generators.

Sl—the loading of the transmission lines.

(35) $x^T = \left[P_{G_1},\; V_{L_1}, \ldots, V_{L_{NL}},\; Q_{G_1}, \ldots, Q_{G_{NG}},\; S_{l_1}, \ldots, S_{l_{nl}}\right]$

where the parameter NL is the number of load nodes and nl is the number of transmission lines.

The constraints of the active power equality (P) are defined in Equation (36), which indicates that the active power injected into node (i) is equal to the sum of the active power demanded at node (i) plus the active power losses in the transmission lines connected to the node:

(36) $P_{G_i} - P_{D_i} - V_i\sum_{j=1}^{NB} V_j\left[G_{ij}\cos(\theta_{ij}) + B_{ij}\sin(\theta_{ij})\right] = 0$

The constraints of the reactive power equality (Q) are defined in Equation (37), which indicates that the reactive power injected into node (i) is equal to the reactive power demanded at node (i) plus the reactive power associated with the flows in the transmission lines connected to the node:

(37) $Q_{G_i} - Q_{D_i} - V_i\sum_{j=1}^{NB} V_j\left[G_{ij}\sin(\theta_{ij}) - B_{ij}\cos(\theta_{ij})\right] = 0$

The real and reactive power equality constraints are defined in Equations (36) and (37), respectively, where

PG—the active power generation.

QG—the reactive power generation.

PD—the active load demand.

QD—the reactive load demand.

NB—the number of buses.

Gij—the conductance between buses i and j.

Bij—the susceptance between buses i and j.

Yij=Gij+jBij (the admittance matrix).
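For illustration, the bus power balance residuals of Equations (36) and (37) can be computed in vectorized form as sketched below, assuming Y = G + jB is the full bus admittance matrix and θ_ij = θ_i − θ_j; this is a generic power-flow mismatch computation, not the authors’ OPF code.

```python
import numpy as np

def power_mismatches(V, theta, Ybus, Pg, Qg, Pd, Qd):
    """Residuals of the active (Eq. (36)) and reactive (Eq. (37)) power balance, per bus."""
    G, B = Ybus.real, Ybus.imag
    dtheta = theta[:, None] - theta[None, :]          # theta_ij = theta_i - theta_j
    P_inj = V * np.sum(V[None, :] * (G * np.cos(dtheta) + B * np.sin(dtheta)), axis=1)
    Q_inj = V * np.sum(V[None, :] * (G * np.sin(dtheta) - B * np.cos(dtheta)), axis=1)
    dP = Pg - Pd - P_inj                              # should be ~0 when Eq. (36) holds
    dQ = Qg - Qd - Q_inj                              # should be ~0 when Eq. (37) holds
    return dP, dQ
```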

The inequality constraints in an optimal power flow (OPF) problem [76] cover the operational limits of the system’s components. The generator constraints are represented by Equation (38), and these constraints are shown in Table 35:

(38) $V_{G_i}^{min} \le V_{G_i} \le V_{G_i}^{max}, \quad P_{G_i}^{min} \le P_{G_i} \le P_{G_i}^{max}, \quad Q_{G_i}^{min} \le Q_{G_i} \le Q_{G_i}^{max}, \quad i = 1, \ldots, NG$

The transformer constraints are described in Equation (39) and in Table 36:

(39) $T_i^{min} \le T_i \le T_i^{max}, \quad i = 1, \ldots, NT$

The shunt VAR compensator constraints are defined by Equation (40) and in Table 37:

(40) $Q_{C_i}^{min} \le Q_{C_i} \le Q_{C_i}^{max}, \quad i = 1, \ldots, NC$

Finally, the security constraints are represented in Equation (41):

(41) $V_{L_i}^{min} \le V_{L_i} \le V_{L_i}^{max}, \quad i = 1, \ldots, NL; \qquad S_{l_i} \le S_{l_i}^{max}, \quad i = 1, \ldots, nl$

Following the previously described methodology in Section 5, the Mantis Shrimp Optimization Algorithm (MShOA), the Beluga Whale Optimization (BWO), the Arithmetic Optimization Algorithm (AOA), and the Synergistic Swarm Optimization Algorithm (SSOA) are applied to the optimal power flow problem for the IEEE-30 Bus test system; see Figure 25. Under these tests, three case studies were analyzed: the optimization of the total fuel cost for generation, the optimization of active power losses, and the optimization of reactive power losses.

Case 1 included the total fuel cost of the six generating units connected at buses 1, 2, 5, 8, 11, and 13. The objective function was defined as follows:

(42) $J = \sum_{i=1}^{NG} f_i \; (\$/\mathrm{h})$

where the quadratic function fi is described in [76].

The objective functions for Case 2 and Case 3, which involved optimizing the active power losses and optimizing the reactive power losses, respectively, are described in Equations (43) and (44):

(43) $J = \sum_{i=1}^{NB} P_i = \sum_{i=1}^{NB} P_{G_i} - \sum_{i=1}^{NB} P_{D_i}$

(44) $J = \sum_{i=1}^{NB} Q_i = \sum_{i=1}^{NB} Q_{G_i} - \sum_{i=1}^{NB} Q_{D_i}$

Analysis of the convergence graphs showed that MShOA outperformed the Beluga Whale Optimization (BWO), Arithmetic Optimization Algorithm (AOA), and Synergistic Swarm Optimization Algorithm (SSOA) in all three cases of this study, as illustrated in Figure 26, Figure 27 and Figure 28, respectively. The tests were conducted with a population size of 30 and 500 iterations. The minimum values obtained by each algorithm for each case are summarized in Table 38.

6. Conclusions

This paper discusses a novel metaheuristic optimization technique inspired by the behavior of the mantis shrimp, called the Mantis Shrimp Optimization Algorithm (MShOA). The shrimp’s behavior includes three strategies: forage (food search); attack (the mantis shrimp strike); and burrow, defense, or shelter (defending or sheltering in the burrow). These strategies are triggered by the type of polarized light detected by the mantis shrimp.

The algorithm’s performance was evaluated on 20 testbench functions, on 10 real-world optimization problems taken from IEEE CEC2020, and on 3 study cases related to the optimal power flow problem: optimization of fuel cost in electric power generation, and optimization of active and reactive power losses. In addition, the algorithm’s performance was compared with fourteen algorithms selected from the scientific literature.

The statistical analysis results indicate that MShOA outperformed CFOA, ALO, DO, EMA, GWO, LCA, MAO, MPA, SSA, TSA, and WOA. In addition, it proved to be competitive with BWO, AOA, and SSOA.

Results and conclusions of this study:

The mantis shrimp’s biological strategies modeled in this study included polarization principles in optics as new methods for optimization goals.

MShOA has no parameters to configure in the modeling of strategies. The only parameters are those common to all bio-inspired algorithms, e.g., population size and number of iterations.

The Wilcoxon rank and Friedman tests were performed to analyze the 10 unimodal and 10 multimodal functions. The Wilcoxon test results indicate that MShOA outperformed all the other algorithms in unimodal functions. In addition, it ranked first in the Friedman test. Furthermore, in the Wilcoxon test, MShOA outperformed the following algorithms: SSA, EMA, ALO, MAO, and CFOA in multimodal functions. Additionally, it ranked second in the Friedman statistical tests. These results demonstrate that MShOA has a remarkable balance between local and global searches.

The set of 20 functions, unimodal and multimodal, was analyzed with Wilcoxon and Friedman statistical tests. The Wilcoxon test results indicate that MShOA outperformed the following optimization algorithms: CFOA, ALO, DO, EMA, GWO, LCA, MAO, MPA, SSA, TSA, and WOA. On the other hand, the Friedman test analysis shows that MShOA was ranked second.

MShOA demonstrated outstanding results in 80% of the real-world IEEE CEC2020 engineering problems: process synthesis problem, process synthesis and design problem, process flow sheeting problem, welded beam design, planetary gear train design optimization problem, and tension/compression spring design.

In the optimal power flow (OPF) problem study cases, MShOA obtained better solutions than BWO, AOA, and SSOA.

MShOA was able to effectively solve real-world problems with unknown search spaces.

The proposed algorithm demonstrated competitive performances. However, it has some limitations that open avenues for future work. Currently, it is designed for single-objective optimization and has not yet been extended to handle multi-objective scenarios. Additionally, while it performed well on the tested optimal power flow (OPF) cases, its generalization to more complex or large-scale OPF models remains to be fully explored. Future work will address these limitations by extending the algorithm to multi-objective optimization and evaluating its applicability to broader and more diverse OPF problem instances.

Author Contributions

Conceptualization, J.A.S.C., H.P.V. and A.F.P.D.; Methodology, H.P.V. and A.F.P.D.; Software, J.A.S.C.; Validation, H.P.V. and A.F.P.D.; Formal analysis, J.A.S.C., H.P.V. and A.F.P.D.; Investigation, J.A.S.C.; Resources, H.P.V.; Writing—original draft preparation, J.A.S.C.; Writing—review and editing, H.P.V. and A.F.P.D.; Visualization, J.A.S.C.; Supervision, H.P.V. and A.F.P.D.; Project administration, H.P.V. and A.F.P.D.; Funding acquisition, H.P.V. All authors have read and agreed to the published version of this manuscript.

Data Availability Statement

The source code used to support the findings of this study has been deposited in the MathWorks repository at https://www.mathworks.com/matlabcentral/fileexchange/180937-mantis-shrimp-optimization-algorithm-mshoa, available since 30 April 2025.

Acknowledgments

The first author acknowledges support from SECIHTI to pursue his Ph.D. in advanced technology at the Instituto Politécnico Nacional (IPN)–CICATA Altamira.

Conflicts of Interest

The authors declared no potential conflicts of interest with respect to the research, authorship, funding, and/or publication of this article.

Footnotes

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Figures and Tables

Figure 1 Metaheuristic classification.

Figure 2 Schematic representation of a metaheuristic algorithm.

Figure 3 (a) The shrimp inside its shelter. Photograph (published under a CC BY license). Author unknown. (b) The shrimp observing in different directions simultaneously. Photograph (published under a CC BY-SA license). Author unknown. (c) The purple-spotted mantis shrimp (Gonodactylus smithii). Photograph by Roy L. Caldwell (published under a CC BY-SA license). (d) Strike of a mantis shrimp. Photograph (published under a CC BY license). Author unknown.

Figure 4 Shrimp strategies based on detected polarized light.

Figure 5 (a) Schematic representation of the initial population, where dim stands for the vector size; (b) Polarization Type Identifier (PTI) vector representation; (c) strategy activated by the type of polarized light detected.

Figure 6 Schematic diagram of the Polarization Type Identifier (PTI) vector update process.

Figure 7 Algorithm 1’s Polarization Type Identifier (PTI) vector update process—flowchart.

Figure 8 Foraging strategy.

Figure 9 Mantis shrimp’s attack strike.

Figure 10 Burrow, defense, or shelter strategy.

Figure 11 Convergence performances of unimodal function F3 and multimodal function F14.

Figure 12 MShOA flowchart.

Figure 13 Convergence curves of 10 unimodal functions.

Figure 14 Convergence curves of 10 multimodal functions.

Figure 15 Convergence graph of the process synthesis problem.

Figure 16 Convergence graph of the process synthesis and design problem.

Figure 17 Convergence graph of the process flow sheeting problem.

Figure 18 Convergence graph of the weight minimization of a speed reducer.

Figure 19 Convergence graph of the tension/compression spring design (case 1).

Figure 20 Convergence graph of the welded beam design.

Figure 21 Convergence graph of the multiple disk clutch brake design problem.

Figure 22 Convergence graph of the planetary gear train design optimization problem.

Figure 23 Convergence graph of the tension/compression spring design (case 2).

Figure 24 Convergence graph of Himmelblau’s function.

Figure 25 IEEE 30-bus system.

Figure 26 Case 1: minimization of fuel cost.

Figure 27 Case 2: minimization of active power transmission losses.

Figure 28 Case 3: minimization of reactive power transmission losses.

Table 1. Behavioral strategies of the mantis shrimp based on detected polarized light.

Type of Detected Light by Mantis Shrimp Action to Take
Vertical linearly polarized light Foraging
Horizontal linearly polarized light Attack
Circularly polarized light Burrow, defense, or shelter

Table 2. Comparative performances of MShOA with k = 0.1 to 0.9 on benchmark functions.

K Value K = 0.1 K = 0.2
F f_min Best Ave Std Best Ave Std
F1 0 0 0 0 0 0 0
F2 0 0 0 0 0 0 0
F3 0 0 3.84 × 10−279 0 0 1.08 × 10−283 0
F4 0 0 2.15 × 10−290 0 0 1.05 × 10−289 0
F5 0 0 8.90 × 10−285 0 0 5.42 × 10−287 0
F6 0 0 0 0 0 0 0
F7 0 0 0 0 0 0 0
F8 0 0 0 0 0 0 0
F9 −1 −1.00 × 10+00 −1.00 × 10+00 0 −1.00 × 10+00 −1.00 × 10+00 0
F10 0 0 0 0 0 0 0
F11 0 4.44 × 10−16 4.44 × 10−16 0 4.44 × 10−16 4.44 × 10−16 0
F12 −4.59 −4.59 × 10+00 −2.89 × 10+00 1.65 × 10+00 −4.59 × 10+00 −2.85 × 10+00 1.60 × 10+00
F13 0.9 9.00 × 10−01 9.00 × 10−01 4.52 × 10−16 9.00 × 10−01 9.00 × 10−01 4.52 × 10−16
F14 0 4.22 × 10−06 1.72 × 10−04 1.42 × 10−04 1.63 × 10−05 2.36 × 10−04 2.02 × 10−04
F15 0 0 0 0 0 0 0
F16 0 2.90 × 10+01 2.90 × 10+01 8.43 × 10−03 2.89 × 10+01 2.90 × 10+01 2.63 × 10−02
F17 0 0 0 0 0 0 0
F18 0 1.02 × 10−161 3.43 × 10−33 1.88 × 10−32 1.59 × 10−169 4.68 × 10−13 1.96 × 10−12
F19 0 7.51 × 10−08 1.17 × 10−05 2.59 × 10−05 8.19 × 10−08 6.22 × 10−06 1.20 × 10−05
F20 −1 −1.00 × 10+00 −1.00 × 10+00 0 −1.00 × 10+00 −1.00 × 10+00 0

Table 3. Continuation of comparative performances of MShOA with k = 0.1 to 0.9 on benchmark functions.

K Value K = 0.3 K = 0.4
F f_min Best Ave Std Best Ave Std
F1 0 0 0 0 0 0 0
F2 0 0 0 0 0 0 0
F3 0 0 1.99 × 10−283 0 0 2.44 × 10−281 0
F4 0 0 3.83 × 10−285 0 0 1.92 × 10−286 0
F5 0 0 5.36 × 10−285 0 0 9.14 × 10−287 0
F6 0 0 0 0 0 0 0
F7 0 0 0 0 0 0 0
F8 0 0 0 0 0 0 0
F9 −1 −1.00 × 10+00 −1.00 × 10+00 0 −1.00 × 10+00 −1.00 × 10+00 0
F10 0 0 0 0 0 0 0
F11 0 4.44 × 10−16 4.44 × 10−16 0 4.44 × 10−16 4.44 × 10−16 0
F12 −4.59 −4.59 × 10+00 −3.39 × 10+00 1.22 × 10+00 −4.59 × 10+00 −2.72 × 10+00 1.61 × 10+00
F13 0.9 9.00 × 10−01 9.00 × 10−01 4.52 × 10−16 9.00 × 10−01 9.00 × 10−01 4.52 × 10−16
F14 0 7.37 × 10−06 1.23 × 10−04 1.33 × 10−04 7.17 × 10−06 1.55 × 10−04 1.61 × 10−04
F15 0 0 0 0 0 0 0
F16 0 2.90 × 10+01 2.90 × 10+01 1.16 × 10−02 2.89 × 10+01 2.90 × 10+01 1.77 × 10−02
F17 0 0 0 0 0 0 0
F18 0 3.41 × 10−170 2.45 × 10−24 1.31 × 10−23 5.08 × 10−174 4.79 × 10−34 2.62 × 10−33
F19 0 5.96 × 10−08 7.01 × 10−06 8.81 × 10−06 2.10 × 10−238 1.99 × 10−05 5.03 × 10−05
F20 −1 −1.00 × 10+00 −1.00 × 10+00 0 −1.00 × 10+00 −1.00 × 10+00 0

Table 4. Continuation of comparative performances of MShOA with k = 0.1 to 0.9 on benchmark functions.

K Value K = 0.5 K = 0.6
F f_min Best Ave Std Best Ave Std
F1 0 0 0 0 0 0 0
F2 0 0 0 0 0 0 0
F3 0 0 5.68 × 10−286 0 0 6.66 × 10−284 0
F4 0 0 1.35 × 10−286 0 0 5.04 × 10−285 0
F5 0 0 7.30 × 10−281 0 0 1.52 × 10−285 0
F6 0 0 0 0 0 0 0
F7 0 0 0 0 0 0 0
F8 0 0 0 0 0 0 0
F9 −1 −1.00 × 10+00 −1.00 × 10+00 0 −1.00 × 10+00 −1.00 × 10+00 0
F10 0 0 0 0 0 0 0
F11 0 4.44 × 10−16 4.44 × 10−16 0 4.44 × 10−16 4.44 × 10−16 0
F12 −4.59 −4.59 × 10+00 −2.48 × 10+00 1.61 × 10+00 −4.59 × 10+00 −3.18 × 10+00 1.49 × 10+00
F13 0.9 9.00 × 10−01 9.00 × 10−01 4.52 × 10−16 9.00 × 10−01 9.00 × 10−01 4.52 × 10−16
F14 0 4.95 × 10−06 1.87 × 10−04 1.89 × 10−04 1.09 × 10−06 1.69 × 10−04 1.37 × 10−04
F15 0 0 0 0 0 0 0
F16 0 2.89 × 10+01 2.90 × 10+01 2.02 × 10−02 2.89 × 10+01 2.90 × 10+01 1.69 × 10−02
F17 0 0 0 0 0 0 0
F18 0 2.35 × 10−157 1.69 × 10−37 9.26 × 10−37 9.64 × 10−158 1.53 × 10−07 8.35 × 10−07
F19 0 1.87 × 10−07 9.20 × 10−06 1.16 × 10−05 3.15 × 10−07 1.85 × 10−05 2.58 × 10−05
F20 −1 −1.00 × 10+00 −1.00 × 10+00 0 −1.00 × 10+00 −1.00 × 10+00 0

Table 5. Continuation of comparative performances of MShOA with k = 0.1 to 0.9 on benchmark functions.

K Value K = 0.7 K = 0.8
F f_min Best Ave Std Best Ave Std
F1 0 0 0 0 0 0 0
F2 0 0 0 0 0 0 0
F3 0 0 9.43 × 10−284 0 0 5.55 × 10−290 0
F4 0 0 1.55 × 10−279 0 0 6.40 × 10−286 0
F5 0 0 9.67 × 10−291 0 0 5.62 × 10−293 0
F6 0 0 0 0 0 0 0
F7 0 0 0 0 0 0 0
F8 0 0 0 0 0 0 0
F9 −1 −1.00 × 10+00 −1.00 × 10+00 0 −1.00 × 10+00 −1.00 × 10+00 0
F10 0 0 0 0 0 0 0
F11 0 4.44 × 10−16 4.44 × 10−16 0 4.44 × 10−16 4.44 × 10−16 0
F12 −4.59 −4.59 × 10+00 −3.19 × 10+00 1.28 × 10+00 −4.58 × 10+00 −2.91 × 10+00 1.47 × 10+00
F13 0.9 9.00 × 10−01 9.00 × 10−01 4.52 × 10−16 9.00 × 10−01 9.00 × 10−01 4.52 × 10−16
F14 0 5.75 × 10−06 1.78 × 10−04 1.76 × 10−04 5.16 × 10−06 1.34 × 10−04 1.54 × 10−04
F15 0 0 0 0 0 0 0
F16 0 2.89 × 10+01 2.90 × 10+01 1.80 × 10−02 2.90 × 10+01 2.90 × 10+01 1.19 × 10−02
F17 0 0 0 0 0 0 0
F18 0 8.57 × 10−166 1.04 × 10−23 5.67 × 10−23 4.63 × 10−152 4.56 × 10−11 2.50 × 10−10
F19 0 3.55 × 10−07 2.48 × 10−05 5.23 × 10−05 9.23 × 10−08 1.00 × 10−05 1.41 × 10−05
F20 −1 −1.00 × 10+00 −1.00 × 10+00 0 −1.00 × 10+00 −1.00 × 10+00 0

Continuation of comparative performances of MShOA with k=0.1 to 0.9 on benchmark functions.

K Value K = 0.9
F f min Best Ave Std
F1 0 0 0 0
F2 0 0 0 0
F3 0 0 8.01 × 10 289 0
F4 0 0 5.30 × 10 291 0
F5 0 0 7.29 × 10 290 0
F6 0 0 0 0
F7 0 0 0 0
F8 0 0 0 0
F9 −1 1.00 × 10 + 00 1.00 × 10 + 00 0
F10 0 0 0 0
F11 0 4.44 × 10 16 4.44 × 10 16 0
F12 −4.59 4.59 × 10 + 00 2.77 × 10 + 00 1.66 × 10 + 00
F13 0.9 9.00 × 10 01 9.00 × 10 01 4.52 × 10 16
F14 0 1.71 × 10 06 1.51 × 10 04 1.31 × 10 04
F15 0 0 0 0
F16 0 2.90 × 10 + 01 2.90 × 10 + 01 1.11 × 10 02
F17 0 0 0 0
F18 0 2.01 × 10 165 3.79 × 10 14 2.07 × 10 13
F19 0 4.03 × 10 08 6.07 × 10 06 7.16 × 10 06
F20 − 1 1.00 × 10 + 00 1.00 × 10 + 00 0

Statistical analysis of the Wilcoxon signed-rank test that compared different values of the parameter k in MShOA for 10 unimodal and 10 multimodal functions using a 5% significance level.

k = 0.3 vs. k = 0.1 k = 0.3 vs. k = 0.2 k = 0.3 vs. k = 0.4
(+/=/−) p-Value (+/=/−) p-Value (+/=/−) p-Value
(5/13/2) 1.76 × 10^−01 (3/13/4) 6.12 × 10^−01 (4/13/3) 2.37 × 10^−01
k = 0.3 vs. k = 0.5 k = 0.3 vs. k = 0.6 k = 0.3 vs. k = 0.7
(+/=/−) p-Value (+/=/−) p-Value (+/=/−) p-Value
(4/13/3) 2.37 × 10^−01 (5/13/2) 1.28 × 10^−01 (5/13/2) 6.30 × 10^−02
k = 0.3 vs. k = 0.8 k = 0.3 vs. k = 0.9
(+/=/−) p-Value (+/=/−) p-Value
(4/13/3) 1.76 × 10^−01 (3/13/4) 6.12 × 10^−01
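
For reference, a pairwise comparison of this kind can be reproduced with a standard Wilcoxon signed-rank routine. The sketch below uses SciPy and synthetic stand-in data (the two arrays are not the paper's results); with the "Ave" columns of the tables above in place of the synthetic vectors, the same procedure yields the (+/=/−) counts and p-values reported here.

```python
"""Sketch of a pairwise Wilcoxon signed-rank comparison (synthetic stand-in data)."""
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
# Hypothetical stand-ins for the 20 per-function average results ("Ave" columns).
mean_k03 = rng.random(20) * 1e-3
mean_k05 = mean_k03 + rng.normal(scale=1e-4, size=20)

diff = mean_k03 - mean_k05
wins = int(np.sum(diff < 0))     # k = 0.3 better (smaller mean, minimization) -> "+"
ties = int(np.sum(diff == 0))    # identical means -> "="
losses = int(np.sum(diff > 0))   # k = 0.3 worse -> "-"

# zero_method="wilcox" discards zero differences, which matches reporting ties separately.
stat, p_value = wilcoxon(mean_k03, mean_k05, zero_method="wilcox")
print(f"(+/=/-) = ({wins}/{ties}/{losses}), p-value = {p_value:.3e}")
```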

Description of the MShOA algorithm’s computational complexity.

Instruction Big-O Notation Observation
Initial population O(n_MS × dim) No loop is related.
Polarization Type Indicator O(dim × 1) No loop is related.
Strategy 1: foraging (Equation (12)) O(MaxIter × 1) Consists of a sequence of constant operations; it does not depend on the input size or iterable data structures, and the loop is related to MaxIter.
Strategy 2: attack (Equation (14)) O(MaxIter × 1) Consists of a sequence of constant operations; it does not depend on the input size or iterable data structures, and the loop is related to MaxIter.
Strategy 3: burrow, defense, or shelter (Equation (15)) O(MaxIter × 1) Consists of a sequence of constant operations; it does not depend on the input size or iterable data structures, and the loop is related to MaxIter.
Update Polarization Type Indicator (PTI) by Algorithm 1 O(MaxIter × 1) Algorithm 1 consists of a sequence of constant operations; it does not depend on the input size or iterable data structures, and the loop is related to MaxIter.
Update population O(MaxIter × n_MS) The loop is related to MaxIter.
Update fitness O(MaxIter × n_MS × dim × f) The loop is related to MaxIter.
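
The loop structure summarized in this table can be made concrete with a short skeleton. The sketch below is not the authors' implementation: the three strategy updates (Equations (12), (14), and (15)) and the PTI update of Algorithm 1 are replaced by simple placeholders, so only the O(n_MS × dim) initialization and the O(MaxIter × n_MS × dim × f) evaluation loop are illustrated.

```python
"""Loop-structure sketch matching the complexity table; placeholder update rules only."""
import numpy as np

rng = np.random.default_rng(42)

def mshoa_skeleton(fitness, dim, n_ms=30, max_iter=200, lb=-100.0, ub=100.0):
    pop = lb + (ub - lb) * rng.random((n_ms, dim))      # initial population: O(n_MS * dim)
    fit = np.array([fitness(x) for x in pop])           # first evaluation
    best = pop[np.argmin(fit)].copy()
    best_fit = float(fit.min())

    for _ in range(max_iter):                           # every block below runs MaxIter times
        pti = rng.random()                              # stands in for the PTI update of Algorithm 1: O(MaxIter * 1)
        for i in range(n_ms):                           # update population: O(MaxIter * n_MS)
            # Placeholder move (NOT Equations (12), (14), (15)): a PTI-scaled step toward the best agent.
            pop[i] = pop[i] + pti * rng.random(dim) * (best - pop[i])
        pop = np.clip(pop, lb, ub)
        fit = np.array([fitness(x) for x in pop])       # update fitness: O(MaxIter * n_MS * dim * f)
        if fit.min() < best_fit:
            best, best_fit = pop[np.argmin(fit)].copy(), float(fit.min())
    return best, best_fit

# Example run of the skeleton on the Sphere function (F7 in Appendix A).
best_x, best_f = mshoa_skeleton(lambda x: float(np.sum(x ** 2)), dim=30)
print(best_f)
```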

Initial parameters for each algorithm.

Algorithm Parameters Value
For all algorithms Population size for all problems 30
Maximum iterations for testbench functions and real-world problems 200
Number of repetitions for testbench functions 30
MShOA Does not use additional parameters -
ALO I ratio 10
w 2.0–0.6
AOA α 5.0
μ 0.5
BWO W_f [0.1, 0.05]
DO α [0, 1]
k [0, 1]
EMA r [0, 0.2]
C_r [0, 1]
GWO α 2.0–0.0
LCA f 1
MAO d_p 0.5
r_p 0.1
k 3
λ 0.5
MPA P 0.5
FADs 0.2
SSA v 0.0
SSOA w 0.7
C_1 2.0
C_2 2.0
k 0.5
TSA P_min 1
P_max 4
WOA α 2.0–0.0
b 2.0
CFOA Does not use additional parameters -

Comparative performances of MShOA and 14 algorithms on benchmark functions.

Algorithms MShOA GWO
F f min Best Ave Std Best Ave Std
F1 0 0 0 0 1.12 × 10 12 1.51 × 10 11 1.39 × 10 11
F2 0 0 0 0 1.07 × 10 09 1.08 × 10 02 1.54 × 10 02
F3 0 0 1.16 × 10 286 0 2.14 × 10 05 4.62 × 10 05 1.73 × 10 05
F4 0 0 1.21 × 10 288 0 4.92 × 10 03 3.08 × 10 02 1.86 × 10 02
F5 0 0 5.63 × 10 284 0 3.26 × 10 05 8.29 × 10 05 3.22 × 10 05
F6 0 0 0 0 1.43 × 10 39 2.07 × 10 31 9.76 × 10 31
F7 0 0 0 0 4.10 × 10 13 1.95 × 10 11 2.26 × 10 11
F8 0 0 0 0 1.97 × 10 10 1.22 × 10 09 1.51 × 10 09
F9 −1 −1.00 −1.00 0 9.96 × 10 01 9.97 × 10 01 2.53 × 10 04
F10 0 0 0 0 1.73 × 10 02 3.47 × 10 01 3.22 × 10 01
F11 0 4.44 × 10 16 4.44 × 10 16 0 6.44 × 10 06 1.67 × 10 05 7.44 × 10 06
F12 −4.59 −4.59 2.79 1.36 −4.59 −4.59 1.73 × 10 06
F13 0.9 9.00 × 10 01 9.00 × 10 01 4.52 × 10 16 1.19 2.36 1.95
F14 0 3.68 × 10 07 1.51 × 10 04 1.46 × 10 04 2.39 × 10 03 7.00 × 10 03 4.16 × 10 03
F15 0 0 0 0 4.43 1.62 × 10 + 01 8.43
F16 0 2.89 × 10 + 01 2.90 × 10 + 01 1.58 × 10 02 2.61 × 10 + 01 2.73 × 10 + 01 7.56 × 10 01
F17 0 0 0 0 2.00 × 10 01 2.98 × 10 01 5.61 × 10 02
F18 0 2.75 × 10 162 1.33 × 10 27 7.27 × 10 27 1.40 × 10 16 9.98 × 10 06 5.46 × 10 05
F19 0 4.34 × 10 07 1.54 × 10 05 2.70 × 10 05 1.67 × 10 10 1.27 × 10 07 2.58 × 10 07
F20 −1 −1.00 −1.00 0 3.56 × 10 15 7.86 × 10 15 3.17 × 10 15

Continuation of comparative performances of MShOA and 14 algorithms on benchmark functions.

Algorithms BWO DO
F f min Best Ave Std Best Ave Std
F1 0 9.60 × 10 117 7.51 × 10 107 2.97 × 10 106 1.18 × 10 05 9.70 × 10 05 5.74 × 10 05
F2 0 0 0 0 1.25 × 10 02 6.76 × 10 02 4.38 × 10 02
F3 0 1.97 × 10 56 4.47 × 10 52 1.09 × 10 51 2.29 × 10 01 4.54 × 10 01 1.57 × 10 01
F4 0 4.29 × 10 55 5.33 × 10 52 1.00 × 10 51 2.71 1.09 × 10 + 01 5.30
F5 0 2.07 × 10 56 1.95 × 10 52 4.28 × 10 52 1.77 × 10 01 8.69 4.33 × 10 + 01
F6 0 0 0 0 5.34 × 10 16 1.34 × 10 09 5.13 × 10 09
F7 0 5.11 × 10 118 7.58 × 10 106 3.97 × 10 105 1.57 × 10 05 5.59 × 10 05 4.28 × 10 05
F8 0 1.01 × 10 112 9.62 × 10 103 5.19 × 10 102 1.22 × 10 03 3.74 × 10 03 2.10 × 10 03
F9 −1 −1.00 −1.00 0 9.95 × 10 01 9.96 × 10 01 3.48 × 10 04
F10 0 1.47 × 10 101 3.55 × 10 93 8.37 × 10 93 1.77 8.49 6.83
F11 0 4.44 × 10 16 4.44 × 10 16 0 1.02 × 10 02 3.83 × 10 01 6.86 × 10 01
F12 −4.59 −4.59 4.58 1.53 × 10 02 −4.59 −4.59 1.35 × 10 10
F13 0.9 9.00 × 10 01 9.00 × 10 01 4.52 × 10 16 1.03 1.13 7.30 × 10 02
F14 0 1.24 × 10 06 2.76 × 10 04 2.13 × 10 04 2.20 × 10 02 6.61 × 10 02 3.19 × 10 02
F15 0 0 0 0 5.38 4.29 × 10 + 01 2.29 × 10 + 01
F16 0 2.26 × 10 04 7.66 × 10 03 1.16 × 10 02 2.57 × 10 + 01 2.76 × 10 + 01 7.95 × 10 01
F17 0 6.65 × 10 51 5.03 × 10 43 1.72 × 10 42 7.00 × 10 01 1.08 2.25 × 10 01
F18 0 4.73 × 10 36 1.84 × 10 17 1.01 × 10 16 1.49 × 10 05 1.73 × 10 01 2.36 × 10 01
F19 0 3.51 × 10 12 3.69 × 10 12 7.63 × 10 13 3.04 × 10 11 1.26 × 10 10 1.07 × 10 10
F20 −1 −1.00 −1.00 0 1.20 × 10 15 3.28 × 10 15 1.44 × 10 15

Continuation of comparative performances of MShOA and 14 algorithms on benchmark functions.

Algorithms WOA MPA
F f min Best Ave Std Best Ave Std
F1 0 4.28 × 10 38 3.44 × 10 31 1.06 × 10 30 1.24 × 10 10 1.09 × 10 09 9.38 × 10 10
F2 0 0 3.68 × 10 02 1.41 × 10 01 1.83 × 10 07 1.15 × 10 05 4.85 × 10 05
F3 0 1.10 × 10 23 2.81 × 10 18 1.02 × 10 17 1.90 × 10 04 1.27 × 10 03 7.01 × 10 04
F4 0 6.78 5.70 × 10 + 01 2.16 × 10 + 01 2.03 × 10 03 4.22 × 10 03 1.16 × 10 03
F5 0 3.53 × 10 22 3.44 × 10 17 1.63 × 10 16 2.66 × 10 04 1.28 × 10 03 5.84 × 10 04
F6 0 1.65 × 10 102 4.46 × 10 66 1.96 × 10 65 7.15 × 10 41 3.13 × 10 36 1.40 × 10 35
F7 0 2.56 × 10 38 4.80 × 10 27 2.62 × 10 26 7.58 × 10 11 1.29 × 10 09 9.13 × 10 10
F8 0 5.50 × 10 34 2.96 × 10 28 1.24 × 10 27 2.71 × 10 09 6.67 × 10 08 5.53 × 10 08
F9 −1 −1.00 −1.00 1.09 × 10 16 9.73 × 10 01 2.56 × 10 01 8.42 × 10 01
F10 0 3.47 × 10 + 02 5.08 × 10 + 02 1.02 × 10 + 02 4.35 × 10 01 1.39 6.78 × 10 01
F11 0 3.11 × 10 15 2.23 × 10 14 1.81 × 10 14 6.58 × 10 05 1.57 × 10 04 6.56 × 10 05
F12 −4.59 −4.59 −4.59 2.17 × 10 05 −4.59 −4.59 2.08 × 10 13
F13 0.9 9.00 × 10 01 1.41 6.32 × 10 01 1.01 1.14 1.00 × 10 01
F14 0 6.84 × 10 04 1.08 × 10 02 8.88 × 10 03 1.44 × 10 03 3.89 × 10 03 1.89 × 10 03
F15 0 0 1.33 × 10 14 3.23 × 10 14 4.65 × 10 07 1.30 × 10 02 3.78 × 10 02
F16 0 2.76 × 10 + 01 2.85 × 10 + 01 2.97 × 10 01 2.62 × 10 + 01 2.69 × 10 + 01 4.03 × 10 01
F17 0 9.89 × 10 12 1.30 × 10 01 6.51 × 10 02 2.00 × 10 01 2.07 × 10 01 2.54 × 10 02
F18 0 6.80 × 10 16 6.75 × 10 + 03 3.70 × 10 + 04 6.95 × 10 16 1.11 × 10 08 3.87 × 10 08
F19 0 3.51 × 10 12 5.63 × 10 12 3.42 × 10 12 3.51 × 10 12 8.06 × 10 12 4.12 × 10 12
F20 −1 −1.00 1.67 × 10 01 3.79 × 10 01 1.47 × 10 14 1.46 × 10 13 1.01 × 10 13

Continuation of comparative performances of MShOA and 14 algorithms on benchmark functions.

Algorithms LCA SSA
F f min Best Ave Std Best Ave Std
F1 0 1.22 × 10 06 7.62 × 10 04 1.10 × 10 03 3.82 × 10 02 2.85 4.62
F2 0 5.22 × 10 03 3.62 × 10 01 3.26 × 10 01 1.08 1.45 4.03 × 10 01
F3 0 1.39 × 10 01 2.13 1.59 2.75 × 10 + 01 8.03 × 10 + 01 2.92 × 10 + 01
F4 0 1.72 × 10 03 8.44 × 10 02 6.41 × 10 02 1.00 × 10 + 01 1.70 × 10 + 01 4.35
F5 0 2.93 × 10 01 1.98 1.45 4.20 × 10 + 03 1.17 × 10 + 19 4.18 × 10 + 19
F6 0 4.60 × 10 35 1.66 × 10 16 8.89 × 10 16 1.01 × 10 03 1.67 × 10 + 01 2.80 × 10 + 01
F7 0 2.22 × 10 05 7.77 × 10 04 1.47 × 10 03 2.29 × 10 02 1.32 × 10 01 8.38 × 10 02
F8 0 5.85 × 10 05 5.90 × 10 02 1.51 × 10 01 7.19 3.18 × 10 + 01 2.17 × 10 + 01
F9 −1 −1.00 9.97 × 10 01 4.25 × 10 03 9.96 × 10 01 9.96 × 10 01 1.61 × 10 04
F10 0 1.09 × 10 02 7.77 1.25 × 10 + 01 1.34 × 10 + 02 2.39 × 10 + 02 7.25 × 10 + 01
F11 0 2.96 × 10 03 1.57 × 10 01 1.70 × 10 01 2.90 5.38 1.41
F12 −4.59 4.56 3.62 9.77 × 10 01 −4.59 −4.59 7.94 × 10 12
F13 0.9 9.00 × 10 01 5.48 4.76 1.01 1.16 1.51 × 10 01
F14 0 9.68 × 10 05 1.73 × 10 03 1.54 × 10 03 1.38 × 10 01 3.52 × 10 01 1.58 × 10 01
F15 0 6.90 × 10 04 1.53 × 10 01 2.86 × 10 01 2.43 × 10 + 01 5.58 × 10 + 01 2.03 × 10 + 01
F16 0 2.54 × 10 03 1.34 × 10 01 1.32 × 10 01 4.63 × 10 + 01 1.18 × 10 + 02 4.82 × 10 + 01
F17 0 1.01 × 10 01 2.23 × 10 01 1.18 × 10 01 2.90 4.59 7.56 × 10 01
F18 0 9.52 × 10 06 1.15 × 10 04 1.08 × 10 04 8.17 × 10 02 8.98 × 10 + 01 2.71 × 10 + 02
F19 0 3.51 × 10 12 3.52 × 10 12 1.27 × 10 14 1.70 × 10 11 1.40 × 10 10 1.87 × 10 10
F20 −1 9.66 × 10 01 8.06 × 10 01 1.43 × 10 01 8.25 × 10 14 4.88 × 10 13 2.67 × 10 13

Continuation of comparative performances of MShOA and 14 algorithms on benchmark functions.

Algorithms EMA ALO
F f min Best Ave Std Best Ave Std
F1 0 1.16 × 10 04 1.50 × 10 03 1.26 × 10 03 1.21 × 10 05 2.72 × 10 02 5.38 × 10 02
F2 0 4.72 × 10 02 5.24 × 10 01 2.43 × 10 01 1.17 9.93 6.33
F3 0 2.87 × 10 02 2.64 × 10 01 1.82 × 10 01 1.73 × 10 + 02 2.19 × 10 + 02 3.65 × 10 + 01
F4 0 3.06 × 10 + 01 6.15 × 10 + 01 1.70 × 10 + 01 1.47 × 10 + 01 2.30 × 10 + 01 5.59
F5 0 1.40 × 10 02 2.77 × 10 01 1.90 × 10 01 4.80 × 10 + 02 3.42 × 10 + 24 1.54 × 10 + 25
F6 0 9.16 × 10 06 6.04 × 10 + 01 2.20 × 10 + 02 8.06 × 10 01 3.01 × 10 + 03 1.12 × 10 + 04
F7 0 2.10 × 10 04 6.44 × 10 03 9.76 × 10 03 1.06 × 10 01 2.52 1.76
F8 0 6.14 × 10 03 9.74 × 10 02 9.60 × 10 02 1.34 × 10 + 01 1.11 × 10 + 02 5.17 × 10 + 01
F9 −1 9.95 × 10 01 9.95 × 10 01 5.22 × 10 16 −1.00 9.83 × 10 01 2.85 × 10 02
F10 0 2.93 × 10 + 02 4.40 × 10 + 02 8.74 × 10 + 01 1.74 × 10 + 02 3.85 × 10 + 02 9.94 × 10 + 01
F11 0 1.81 × 10 02 1.12 × 10 01 7.69 × 10 02 7.97 1.24 × 10 + 01 2.09
F12 −4.59 −4.59 4.53 2.23 × 10 01 −4.59 −4.59 2.61 × 10 11
F13 0.9 2.51 3.37 6.31 × 10 01 9.40 × 10 01 1.07 5.54 × 10 02
F14 0 4.97 × 10 02 1.45 × 10 01 7.41 × 10 02 4.80 × 10 01 1.04 5.72 × 10 01
F15 0 1.29 × 10 01 1.58 × 10 + 01 2.08 × 10 + 01 3.81 × 10 + 01 8.90 × 10 + 01 2.55 × 10 + 01
F16 0 2.88 × 10 + 01 3.53 × 10 + 01 7.84 8.56 × 10 + 01 3.69 × 10 + 02 2.39 × 10 + 02
F17 0 8.03 × 10 01 1.15 2.30 × 10 01 5.30 7.89 1.49
F18 0 1.03 × 10 06 4.35 × 10 02 8.95 × 10 02 6.14 × 10 + 03 3.37 × 10 + 09 7.40 × 10 + 09
F19 0 2.07 × 10 11 2.51 × 10 11 1.88 × 10 12 0 3.27 × 10 12 6.66 × 10 12
F20 −1 3.27 × 10 13 9.86 × 10 13 4.08 × 10 13 1.02 × 10 14 6.56 × 10 13 2.72 × 10 12

Continuation of comparative performances of MShOA and 14 algorithms on benchmark functions.

Algorithms MAO AOA
F f min Best Ave Std Best Ave Std
F1 0 9.33 × 10 + 01 2.68 × 10 + 02 1.47 × 10 + 02 2.27 6.62 8.88
F2 0 3.40 × 10 + 01 6.16 × 10 + 01 1.45 × 10 + 01 9.00 × 10 03 3.99 × 10 01 2.05 × 10 01
F3 0 2.78 × 10 + 02 3.57 × 10 + 02 4.38 × 10 + 01 2.03 × 10 35 3.33 × 10 10 1.82 × 10 09
F4 0 2.59 × 10 + 01 3.59 × 10 + 01 4.45 1.11 × 10 17 2.80 × 10 02 2.15 × 10 02
F5 0 4.35 × 10 + 20 8.75 × 10 + 25 2.39 × 10 + 26 7.48 × 10 46 6.62 × 10 08 3.62 × 10 07
F6 0 1.26 × 10 + 05 7.67 × 10 + 05 6.73 × 10 + 05 0 0 0
F7 0 1.27 × 10 + 01 1.86 × 10 + 01 3.71 0 0 0
F8 0 5.66 × 10 + 02 9.44 × 10 + 02 2.35 × 10 + 02 0 2.62 × 10 240 0
F9 −1 9.99 × 10 01 9.99 × 10 01 1.01 × 10 04 −1.00 7.34 × 10 01 6.91 × 10 01
F10 0 1.36 × 10 + 02 7.07 × 10 + 05 2.25 × 10 + 06 2.52 × 10 + 02 3.98 × 10 + 02 6.68 × 10 + 01
F11 0 1.29 × 10 + 01 1.44 × 10 + 01 8.01 × 10 01 4.44 × 10 16 4.44 × 10 16 0
F12 −4.59 −4.59 3.96 5.87 × 10 01 −4.59 −4.59 1.14 × 10 05
F13 0.9 6.83 8.54 8.00 × 10 01 9.00 × 10 01 9.00 × 10 01 4.52 × 10 16
F14 0 1.00 2.31 7.98 × 10 01 1.19 × 10 07 1.44 × 10 04 1.31 × 10 04
F15 0 2.00 × 10 + 02 2.38 × 10 + 02 1.49 × 10 + 01 0 0 0
F16 0 2.18 × 10 + 03 3.90 × 10 + 03 1.23 × 10 + 03 2.87 × 10 + 01 2.88 × 10 + 01 4.70 × 10 02
F17 0 6.91 8.86 9.47 × 10 01 9.99 × 10 02 9.99 × 10 02 6.56 × 10 08
F18 0 2.75 × 10 + 01 2.25 × 10 + 05 1.09 × 10 + 06 0 0 0
F19 0 2.06 × 10 07 3.66 × 10 06 4.38 × 10 06 6.21 × 10 08 4.46 × 10 05 1.16 × 10 04
F20 −1 1.65 × 10 11 8.97 × 10 11 7.30 × 10 11 1.13 × 10 08 9.19 × 10 08 6.28 × 10 08

Continuation of comparative performances of MShOA and 14 algorithms on benchmark functions.

Algorithms SSOA TSA
F f min Best Ave Std Best Ave Std
F1 0 1.45 × 10 170 7.68 × 10 155 4.18 × 10 154 4.61 × 10 79 4.25 × 10 76 6.57 × 10 76
F2 0 0 0 0 0 3.11 × 10 03 5.48 × 10 03
F3 0 6.92 × 10 84 6.63 × 10 78 3.31 × 10 77 3.38 × 10 40 2.07 × 10 38 2.36 × 10 38
F4 0 7.10 × 10 82 3.21 × 10 76 9.69 × 10 76 3.94 × 10 37 4.64 × 10 36 5.72 × 10 36
F5 0 6.86 × 10 83 2.68 × 10 78 9.53 × 10 78 7.84 × 10 41 2.26 × 10 38 2.74 × 10 38
F6 0 0 0 0 0 0 0
F7 0 2.75 × 10 168 1.05 × 10 156 5.72 × 10 156 3.56 × 10 82 8.62 × 10 78 2.69 × 10 77
F8 0 6.39 × 10 167 9.21 × 10 157 3.74 × 10 156 9.46 × 10 82 3.99 × 10 76 1.62 × 10 75
F9 −1 9.98 × 10 01 9.99 × 10 01 1.69 × 10 04 9.97 × 10 01 9.98 × 10 01 2.59 × 10 04
F10 0 1.39 × 10 104 8.76 × 10 92 2.42 × 10 91 2.31 × 10 74 3.94 × 10 71 9.39 × 10 71
F11 0 4.44 × 10 16 4.44 × 10 16 0 3.11 × 10 15 3.35 × 10 15 9.01 × 10 16
F12 −4.59 −4.59 4.46 1.77 × 10 01 −4.59 4.32 4.11 × 10 01
F13 0.9 9.00 × 10 01 9.00 × 10 01 4.52 × 10 16 4.00 7.48 1.33
F14 0 4.57 × 10 06 2.09 × 10 04 2.25 × 10 04 2.62 × 10 05 2.90 × 10 04 2.01 × 10 04
F15 0 0 0 0 0 1.58 × 10 + 01 4.69 × 10 + 01
F16 0 2.88 × 10 + 01 2.89 × 10 + 01 7.06 × 10 02 2.79 × 10 + 01 2.87 × 10 + 01 3.02 × 10 01
F17 0 3.20 × 10 69 8.63 × 10 02 3.96 × 10 02 9.99 × 10 02 1.10 × 10 01 3.05 × 10 02
F18 0 3.37 × 10 75 1.24 × 10 31 6.82 × 10 31 7.70 × 10 23 4.18 × 10 07 1.64 × 10 06
F19 0 1.56 × 10 07 2.28 × 10 05 2.25 × 10 05 2.80 × 10 08 7.30 × 10 06 1.82 × 10 05
F20 −1 3.99 × 10 10 1.84 × 10 09 1.24 × 10 09 1.99 × 10 11 1.18 × 10 10 1.34 × 10 10

Continuation of comparative performances of MShOA and 14 algorithms on benchmark functions.

Algorithms CFOA
F f min Best Ave Std
F1 0 2.45 × 10 + 02 3.89 × 10 + 04 1.33 × 10 + 05
F2 0 6.35 × 10 + 01 1.23 × 10 + 02 5.28 × 10 + 01
F3 0 3.15 × 10 + 02 4.98 × 10 + 02 1.06 × 10 + 02
F4 0 2.92 × 10 + 01 4.77 × 10 + 01 9.91 × 10 + 00
F5 0 5.05 × 10 + 24 6.42 × 10 + 33 2.83 × 10 + 34
F6 0 2.35 × 10 + 05 5.12 × 10 + 07 1.21 × 10 + 08
F7 0 8.95 × 10 + 00 3.36 × 10 + 01 1.45 × 10 + 01
F8 0 6.66 × 10 + 02 1.64 × 10 + 03 7.98 × 10 + 02
F9 −1 9.95 × 10 01 9.96 × 10 01 5.13 × 10 04
F10 0 2.73 × 10 + 02 1.01 × 10 + 05 5.46 × 10 + 05
F11 0 1.16 × 10 + 01 1.57 × 10 + 01 1.46 × 10 + 00
F12 −4.59 4.59 × 10 + 00 3.36 × 10 + 00 1.16 × 10 + 00
F13 0.9 6.40 × 10 + 00 8.54 × 10 + 00 1.23 × 10 + 00
F14 0 1.41 × 10 + 00 7.13 × 10 + 00 5.45 × 10 + 00
F15 0 1.87 × 10 + 02 2.65 × 10 + 02 4.74 × 10 + 01
F16 0 2.85 × 10 + 03 1.31 × 10 + 04 9.58 × 10 + 03
F17 0 7.40 × 10 + 00 1.20 × 10 + 01 2.76 × 10 + 00
F18 0 3.40 × 10 + 01 5.11 × 10 + 09 2.06 × 10 + 10
F19 0 1.60 × 10 09 3.38 × 10 07 6.10 × 10 07
F20 −1 5.66 × 10 11 5.60 × 10 10 8.06 × 10 10

Statistical analysis of the Wilcoxon signed-rank test comparing MShOA with other algorithms for 10 unimodal functions using a 5% significance level.

MShOA–GWO MShOA–BWO MShOA–DO
(+/=/−) p-Value (+/=/−) p-Value (+/=/−) p-Value
(10/0/0) 5.06 × 10^−03 (7/3/0) 1.80 × 10^−02 (10/0/0) 5.06 × 10^−03
MShOA–WOA MShOA–MPA MShOA–LCA
(+/=/−) p-Value (+/=/−) p-Value (+/=/−) p-Value
(9/1/0) 7.69 × 10^−03 (10/0/0) 5.06 × 10^−03 (10/0/0) 5.06 × 10^−03
MShOA–SSA MShOA–EMA MShOA–ALO
(+/=/−) p-Value (+/=/−) p-Value (+/=/−) p-Value
(10/0/0) 5.06 × 10^−03 (10/0/0) 5.06 × 10^−03 (10/0/0) 5.06 × 10^−03
MShOA–MAO MShOA–AOA MShOA–SSOA
(+/=/−) p-Value (+/=/−) p-Value (+/=/−) p-Value
(10/0/0) 5.06 × 10^−03 (8/2/0) 1.17 × 10^−02 (8/2/0) 1.17 × 10^−02
MShOA–TSA MShOA–CFOA
(+/=/−) p-Value (+/=/−) p-Value
(9/1/0) 7.69 × 10^−03 (10/0/0) 5.06 × 10^−03

The bold numbers in this table indicate a significant difference between the two compared algorithms, where MShOA demonstrated a superior performance.

Performance comparison on 10 unimodal functions by Friedman test.

Algorithm MShOA GWO BWO DO WOA
Mean of ranks 1.45 7.20 2.90 9.15 6.90
Global ranking 1 8 2 10 7
Algorithm MPA LCA SSA EMA ALO
Mean of ranks 6.80 8.70 11.15 10.80 11.50
Global ranking 6 9 12 11 13
Algorithm MAO AOA SSOA TSA CFOA
Mean of ranks 13.95 6.45 3.65 5.10 14.30
Global ranking 14 5 3 4 15

The five algorithms with the best overall performance according to the Friedman test are highlighted in bold.
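
A Friedman ranking of this kind can be reproduced as follows. The sketch below uses a synthetic matrix for a three-algorithm subset (the data are stand-ins, not the paper's results); replacing the matrix with the per-function average results reproduces the mean ranks and the Friedman statistic reported in these tables.

```python
"""Sketch of a Friedman ranking over benchmark functions (synthetic stand-in data)."""
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

algorithms = ["MShOA", "GWO", "BWO"]            # illustrative subset of the compared algorithms
rng = np.random.default_rng(1)
results = rng.random((10, len(algorithms)))      # synthetic: 10 functions x 3 algorithms (lower is better)

# Mean rank per algorithm (rank 1 = best on a given function), as reported in the tables.
ranks = rankdata(results, axis=1)
mean_ranks = ranks.mean(axis=0)
print(dict(zip(algorithms, np.round(mean_ranks, 2))))

# Friedman test across the algorithm columns.
stat, p_value = friedmanchisquare(*[results[:, j] for j in range(len(algorithms))])
print(f"Friedman chi-square = {stat:.3f}, p-value = {p_value:.3e}")
```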

Statistical analysis of the Wilcoxon signed-rank test comparing MShOA with other algorithms for 10 multimodal functions using a 5% significance level.

MShOA–GWO MShOA–BWO MShOA–DO
(+/=/−) p-Value (+/=/−) p-Value (+/=/−) p-Value
(7/0/3) 3.86 × 10^−01 (3/4/3) 4.63 × 10^−01 (7/0/3) 3.33 × 10^−01
MShOA–WOA MShOA–MPA MShOA–LCA
(+/=/−) p-Value (+/=/−) p-Value (+/=/−) p-Value
(7/0/3) 3.33 × 10^−01 (7/0/3) 5.08 × 10^−01 (7/0/3) 3.86 × 10^−01
MShOA–SSA MShOA–EMA MShOA–ALO
(+/=/−) p-Value (+/=/−) p-Value (+/=/−) p-Value
(8/0/2) 2.84 × 10^−02 (8/0/2) 4.69 × 10^−02 (8/0/2) 2.84 × 10^−02
MShOA–MAO MShOA–AOA MShOA–SSOA
(+/=/−) p-Value (+/=/−) p-Value (+/=/−) p-Value
(8/0/2) 1.66 × 10^−02 (3/3/4) 8.66 × 10^−01 (4/3/3) 8.66 × 10^−01
MShOA–TSA MShOA–CFOA
(+/=/−) p-Value (+/=/−) p-Value
(7/0/3) 2.85 × 10^−01 (8/0/2) 1.25 × 10^−02

The bold numbers in this table indicate a significant difference between the two compared algorithms, where MShOA demonstrated a superior performance.

Performance comparison on 10 multimodal functions by Friedman test.

Algorithm MShOA GWO BWO DO WOA
Mean of ranks 5.30 7.40 3.10 7.90 6.50
Global ranking 2 8 1 9 6
Algorithm MPA LCA SSA EMA ALO
Mean of ranks 5.90 7.10 9.90 9.55 9.80
Global ranking 4 7 13 11 12
Algorithm MAO AOA SSOA TSA CFOA
Mean of ranks 13.20 5.55 6.25 8.45 14.10
Global ranking 14 3 5 10 15

The four algorithms with the best overall performance according to the Friedman test are highlighted in bold.

Statistical analysis of the Wilcoxon signed-rank test comparing MShOA with other algorithms for 20 functions using a 5% significance level.

MShOA–GWO MShOA–BWO MShOA–DO
(+/=/−) p-Value (+/=/−) p-Value (+/=/−) p-Value
(17/0/3) 1.69 × 10^−02 (10/7/3) 4.63 × 10^−01 (17/0/3) 5.73 × 10^−03
MShOA–WOA MShOA–MPA MShOA–LCA
(+/=/−) p-Value (+/=/−) p-Value (+/=/−) p-Value
(16/1/3) 2.18 × 10^−02 (17/0/3) 2.76 × 10^−02 (17/0/3) 1.11 × 10^−02
MShOA–SSA MShOA–EMA MShOA–ALO
(+/=/−) p-Value (+/=/−) p-Value (+/=/−) p-Value
(18/0/2) 2.93 × 10^−04 (18/0/2) 6.81 × 10^−04 (18/0/2) 2.93 × 10^−04
MShOA–MAO MShOA–AOA MShOA–SSOA
(+/=/−) p-Value (+/=/−) p-Value (+/=/−) p-Value
(18/0/2) 1.63 × 10^−04 (11/5/4) 7.83 × 10^−02 (12/5/3) 1.40 × 10^−01
MShOA–TSA MShOA–CFOA
(+/=/−) p-Value (+/=/−) p-Value
(16/1/3) 2.69 × 10^−02 (18/0/2) 1.40 × 10^−04

The bold values in this table indicate a significant difference between the two compared algorithms, where MShOA demonstrated a superior performance.

Performance comparison of algorithms on 20 unimodal and multimodal functions by Friedman test.

Algorithm MShOA GWO BWO DO WOA
Mean of ranks 3.38 7.30 3.00 8.53 6.70
Global ranking 2 8 1 10 6
Algorithm MPA LCA SSA EMA ALO
Mean of ranks 6.35 7.90 10.53 10.18 10.65
Global ranking 5 9 12 11 13
Algorithm MAO AOA SSOA TSA CFOA
Mean of ranks 13.58 6.00 4.95 6.78 14.20
Global ranking 14 4 3 7 15

The four algorithms with the best overall performance according to the Friedman test are highlighted in bold.

Real-world optimization problems from the CEC2020 benchmark, where D is the problem dimension, g is the number of inequality constraints, h is the number of equality constraints, and f(x̄*) is the best-known feasible objective function value.

Prob Name D g h f(x̄*)
CEC20-RC08 Process synthesis problem 2 2 0 2.0000000000
CEC20-RC09 Process synthesis and design problem 3 1 1 2.5576545740
CEC20-RC10 Process flow sheeting problem 3 3 0 1.0765430833
CEC20-RC15 Weight minimization of a speed reducer 7 11 0 2.9944244658 × 10^+03
CEC20-RC17 Tension/compression spring design (case 1) 3 3 0 1.2665232788 × 10^−02
CEC20-RC19 Welded beam design 4 5 0 1.6702177263
CEC20-RC21 Multiple disk clutch brake design problem 5 8 0 2.3524245790 × 10^−01
CEC20-RC22 Planetary gear train design optimization problem 9 10 1 5.2576870748 × 10^−01
CEC20-RC30 Tension/compression spring design (case 2) 3 8 0 2.6138840583
CEC20-RC32 Himmelblau’s function 5 6 0 −3.0665538672 × 10^+04
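
These problems are constrained, which is why several of the solutions reported below are flagged as infeasible. One common way to attach such constraints to a metaheuristic is a static penalty; the sketch below illustrates the idea on a hypothetical two-variable problem in the spirit of CEC20-RC08 and is not necessarily the constraint-handling scheme used in the paper.

```python
"""Illustrative static-penalty wrapper for constrained problems (g inequality, h equality)."""
import numpy as np

def penalized(f, g_list, h_list, rho=1e6, eps=1e-4):
    """Return F(x) = f(x) + rho * (sum of squared constraint violations)."""
    def wrapped(x):
        viol = sum(max(0.0, g(x)) ** 2 for g in g_list)               # g_i(x) <= 0
        viol += sum(max(0.0, abs(h(x)) - eps) ** 2 for h in h_list)   # |h_j(x)| <= eps
        return f(x) + rho * viol
    return wrapped

# Hypothetical 2-variable example (D = 2, g = 2, h = 0), in the spirit of CEC20-RC08.
f = lambda x: 2.0 * x[0] + x[1]
g1 = lambda x: 1.25 - x[0] ** 2 - x[1]   # feasible when <= 0
g2 = lambda x: x[0] + x[1] - 1.6         # feasible when <= 0
F = penalized(f, [g1, g2], [])

x = np.array([0.5, 1.0])
print(F(x))   # equals f(x) when both constraints are satisfied
```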

Results of the process synthesis problem.

Algorithms x_1 x_2 f_min
BWO 5.00 × 10^−01 0.149 × 10^+01 0.200 × 10^+01
MShOA 5.00 × 10^−01 9.37 × 10^−01 0.200 × 10^+01
SSOA 0 8.94 × 10^−01 0.201 × 10^+01 *
AOA 4.98 × 10^−01 9.24 × 10^−01 0.200 × 10^+01 *

* This solution does not satisfy one or more constraints.

Comparison results of the process synthesis and design problem.

Algorithms x_1 x_2 x_3 f_min
BWO 8.60 × 10^−01 8.44 × 10^−01 1.09 × 10^−01 0.257 × 10^+01 *
MShOA 8.53 × 10^−01 8.52 × 10^−01 8.45 × 10^−02 0.256 × 10^+01
SSOA 9.03 × 10^−01 7.93 × 10^−01 3.65 × 10^−01 0.261 × 10^+01 *
AOA 8.62 × 10^−01 8.41 × 10^−01 2.95 × 10^−04 0.257 × 10^+01 *

* This solution does not satisfy one or more constraints.

Comparison results of the process flow sheeting problem.

Algorithms x_1 x_2 x_3 f_min
BWO 9.48 × 10^−01 0.209 × 10^+01 6.11 × 10^−01 0.118 × 10^+01 *
MShOA 9.43 × 10^−01 0.210 × 10^+01 8.50 × 10^−01 0.108 × 10^+01
SSOA 9.67 × 10^−01 0.211 × 10^+01 0.142 × 10^+01 0.119 × 10^+01
AOA 9.38 × 10^−01 0.209 × 10^+01 0.149 × 10^+01 0.114 × 10^+01 *

* This solution does not satisfy one or more constraints.

Comparison results of the weight minimization of a speed reducer.

Parameters BWO MShOA SSOA AOA
x_1 0.354 × 10^+01 0.350 × 10^+01 0.358 × 10^+01 0.360 × 10^+01
x_2 7.00 × 10^−01 7.00 × 10^−01 7.13 × 10^−01 7.00 × 10^−01
x_3 1.70 × 10^+01 1.72 × 10^+01 2.56 × 10^+01 1.70 × 10^+01
x_4 0.790 × 10^+01 0.734 × 10^+01 0.766 × 10^+01 0.730 × 10^+01
x_5 0.815 × 10^+01 0.796 × 10^+01 0.777 × 10^+01 0.830 × 10^+01
x_6 0.340 × 10^+01 0.339 × 10^+01 0.385 × 10^+01 0.350 × 10^+01
x_7 0.529 × 10^+01 0.534 × 10^+01 0.532 × 10^+01 0.531 × 10^+01
f_min 3.04 × 10^+03 3.08 × 10^+03 6.04 × 10^+03 * 3.10 × 10^+03

* This solution does not satisfy one or more constraints.

Comparison results of the tension/compression spring design (case 1).

Algorithms x_1 x_2 x_3 f_min
BWO 5.00 × 10^−02 3.17 × 10^−01 1.41 × 10^+01 1.27 × 10^−02
MShOA 5.43 × 10^−02 4.23 × 10^−01 0.827 × 10^+01 1.28 × 10^−02
SSOA 6.17 × 10^−02 7.09 × 10^−01 0.351 × 10^+01 2.68 × 10^−02 *
AOA 5.00 × 10^−02 3.11 × 10^−01 1.50 × 10^+01 1.32 × 10^−02

* This solution does not satisfy one or more constraints.

Comparison results of the welded beam design.

Algorithms x_1 x_2 x_3 x_4 f_min
BWO 2.40 × 10^−01 0.280 × 10^+01 0.959 × 10^+01 2.00 × 10^−01 0.178 × 10^+01 *
MShOA 1.89 × 10^−01 0.394 × 10^+01 0.917 × 10^+01 2.00 × 10^−01 0.174 × 10^+01
SSOA 2.58 × 10^−01 0.383 × 10^+01 0.763 × 10^+01 3.74 × 10^−01 0.273 × 10^+01 *
AOA 1.25 × 10^−01 0.604 × 10^+01 1.00 × 10^+01 1.96 × 10^−01 0.199 × 10^+01

* This solution does not satisfy one or more constraints.

Comparison results of the multiple disk clutch brake design problem.

Variables BWO MShOA SSOA AOA
x_1 7.00 × 10^+01 7.01 × 10^+01 6.61 × 10^+01 7.00 × 10^+01
x_2 9.00 × 10^+01 9.01 × 10^+01 9.00 × 10^+01 9.00 × 10^+01
x_3 0.10 × 10^+01 0.10 × 10^+01 0.10 × 10^+01 0.10 × 10^+01
x_4 7.71 × 10^+01 2.04 × 10^+02 0.415 × 10^+01 1.00 × 10^+03
x_5 0.20 × 10^+01 0.20 × 10^+01 0.20 × 10^+01 0.20 × 10^+01
f_min 2.35 × 10^−01 * 2.36 × 10^−01 2.74 × 10^−01 * 2.35 × 10^−01 *

* This solution does not satisfy one or more constraints.

Comparison results of the planetary gear train design optimization problem (transposed).

Variables BWO MShOA SSOA AOA
x_1 2.44 × 10^+01 4.53 × 10^+01 2.40 × 10^+01 2.27 × 10^+01
x_2 1.35 × 10^+01 3.05 × 10^+01 1.35 × 10^+01 1.35 × 10^+01
x_3 1.35 × 10^+01 2.42 × 10^+01 1.35 × 10^+01 1.35 × 10^+01
x_4 1.65 × 10^+01 2.48 × 10^+01 1.65 × 10^+01 1.65 × 10^+01
x_5 1.35 × 10^+01 1.82 × 10^+01 1.35 × 10^+01 1.35 × 10^+01
x_6 5.57 × 10^+01 9.11 × 10^+01 5.87 × 10^+01 5.81 × 10^+01
x_7 5.10 × 10^−01 9.06 × 10^−01 5.10 × 10^−01 0.105 × 10^+01
x_8 0.207 × 10^+01 0.101 × 10^+01 5.10 × 10^−01 0.649 × 10^+01
x_9 5.10 × 10^−01 0.164 × 10^+01 6.15 × 10^−01 0.371 × 10^+01
f_min 7.77 × 10^−01 5.30 × 10^−01 8.70 × 10^−01 * 6.66 × 10^−01 *

* This solution does not satisfy one or more constraints.

Comparison results of the tension/compression spring design (case 2).

Algorithms x_1 x_2 x_3 f_min
BWO 1.05 × 10^+01 0.116 × 10^+01 3.55 × 10^+01 0.299 × 10^+01
MShOA 0.548 × 10^+01 0.166 × 10^+01 3.69 × 10^+01 0.270 × 10^+01
SSOA 0.631 × 10^+01 0.163 × 10^+01 3.73 × 10^+01 0.304 × 10^+01
AOA 1.72 × 10^+01 8.98 × 10^−01 3.45 × 10^+01 0.291 × 10^+01

Comparison results of Himmelblau’s function.

Variables BWO MShOA SSOA AOA
x_1 7.80 × 10^+01 7.87 × 10^+01 8.33 × 10^+01 7.80 × 10^+01
x_2 3.30 × 10^+01 3.44 × 10^+01 3.67 × 10^+01 3.36 × 10^+01
x_3 3.01 × 10^+01 3.08 × 10^+01 3.25 × 10^+01 3.07 × 10^+01
x_4 4.50 × 10^+01 4.40 × 10^+01 3.79 × 10^+01 4.50 × 10^+01
x_5 3.65 × 10^+01 3.50 × 10^+01 3.43 × 10^+01 3.42 × 10^+01
f_min −3.06 × 10^+04 * −3.05 × 10^+04 −2.96 × 10^+04 * −3.02 × 10^+04 *

* This solution does not satisfy one or more constraints.

Generator inequality constraints.

Min Max Initial
P_G1 50 200 99.23
P_G2 20 80 80
P_G5 15 50 50
P_G8 10 35 20
P_G11 10 30 20
P_G13 12 40 20
V_G1 0.95 1.1 1.05
V_G2 0.95 1.1 1.04
V_G5 0.95 1.1 1.01
V_G8 0.95 1.1 1.01
V_G11 0.95 1.1 1.05
V_G13 0.95 1.1 1.05

Transformer inequality constraints.

Min Max Initial
T_11 0.9 1.1 1.078
T_12 0.9 1.1 1.069
T_15 0.9 1.1 1.032
T_36 0.9 1.1 1.068

Shunt VAR compensator inequality constraints.

Min Max Initial
Q_C10 0 5 0
Q_C12 0 5 0
Q_C15 0 5 0
Q_C17 0 5 0
Q_C20 0 5 0
Q_C21 0 5 0
Q_C23 0 5 0
Q_C24 0 5 0
Q_C29 0 5 0
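
For the optimal power flow cases, the limits in the three tables above define box constraints on the control variables. The sketch below shows one way these could be assembled into lower/upper bound vectors for a population-based optimizer; the variable ordering is an assumption made here only for illustration.

```python
"""Assemble the control-variable box bounds listed in the tables above (illustrative ordering)."""
import numpy as np

# Generator active power limits (P_G1, P_G2, P_G5, P_G8, P_G11, P_G13)
pg_lb, pg_ub = [50, 20, 15, 10, 10, 12], [200, 80, 50, 35, 30, 40]
# Generator voltage limits (V_G1, V_G2, V_G5, V_G8, V_G11, V_G13)
vg_lb, vg_ub = [0.95] * 6, [1.1] * 6
# Transformer tap limits (T_11, T_12, T_15, T_36)
t_lb, t_ub = [0.9] * 4, [1.1] * 4
# Shunt VAR compensator limits (Q_C10 ... Q_C29), nine devices
qc_lb, qc_ub = [0] * 9, [5] * 9

lb = np.array(pg_lb + vg_lb + t_lb + qc_lb, dtype=float)
ub = np.array(pg_ub + vg_ub + t_ub + qc_ub, dtype=float)
assert lb.shape == ub.shape == (25,)   # 6 + 6 + 4 + 9 entries in this ordering

# A candidate solution can then be sampled (and later clipped) within these box bounds.
rng = np.random.default_rng(3)
x0 = lb + (ub - lb) * rng.random(lb.shape)
print(x0)
```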

Results obtained for the three optimal power flow minimization studies (fuel cost, active power, and reactive power).

Minimization Studies BWO MShOA SSOA AOA
Fuel cost 2.237 × 10^4 8.054 × 10^2 1.797 × 10^4 8.102 × 10^2
Active power 7.640 × 10^0 4.758 × 10^0 1.158 × 10^1 4.936 × 10^0
Reactive power 1.065 × 10^1 1.923 × 10^1 3.288 × 10^1 6.280 × 10^1

Appendix A

Classification of testbench functions.

ID Function Name Unimodal Multimodal n-Dimensional Non-Separable Convex Differentiable Continuous Non-Convex Non-Differentiable Separable Random
F1 Brown x x x x x
F2 Griewank x x x x x
F3 Schwefel 2.20 x x x x x x
F4 Schwefel 2.21 x x x x x x
F5 Schwefel 2.22 x x x x x x
F6 Schwefel 2.23 x x x x x x
F7 Sphere x x x x x x
F8 Sum Squares x x x x x x
F9 Xin-She Yang N. 3 x x x x x
F10 Zakharov x x x x
F11 Ackley x x x x x
F12 Ackley N. 4 x x x x x
F13 Periodic x x x x x x
F14 Quartic x x x x x x
F15 Rastrigin x x x x x x
F16 Rosenbrock x x x x x x
F17 Salomon x x x x x x
F18 Xin-She Yang x x x x x
F19 Xin-She Yang N. 2 x x x x x
F20 Xin-She Yang N. 4 x x x x x

Descriptions of the testbench functions.

ID Function Dim Interval f_min
F1 $f(x) = \sum_{i=1}^{n-1} \left[ (x_i^2)^{(x_{i+1}^2 + 1)} + (x_{i+1}^2)^{(x_i^2 + 1)} \right]$ 30 $[-1, 4]$ 0
F2 $f(x) = 1 + \sum_{i=1}^{n} \frac{x_i^2}{4000} - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right)$ 30 $[-600, 600]$ 0
F3 $f(x) = \sum_{i=1}^{n} |x_i|$ 30 $[-100, 100]$ 0
F4 $f(x) = \max_{i=1,\dots,n} |x_i|$ 30 $[-100, 100]$ 0
F5 $f(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ 30 $[-100, 100]$ 0
F6 $f(x) = \sum_{i=1}^{n} x_i^{10}$ 30 $[-10, 10]$ 0
F7 $f(x) = \sum_{i=1}^{n} x_i^2$ 30 $[-5.12, 5.12]$ 0
F8 $f(x) = \sum_{i=1}^{n} i\, x_i^2$ 30 $[-10, 10]$ 0
F9 $f(x) = \exp\left(-\sum_{i=1}^{n} (x_i/\beta)^{2m}\right) - 2\exp\left(-\sum_{i=1}^{n} x_i^2\right) \prod_{i=1}^{n} \cos^2 x_i$ 30 $[-2\pi, 2\pi]$, $m = 5$, $\beta = 15$ −1
F10 $f(x) = \sum_{i=1}^{n} x_i^2 + \left(\sum_{i=1}^{n} 0.5\, i\, x_i\right)^2 + \left(\sum_{i=1}^{n} 0.5\, i\, x_i\right)^4$ 30 $[-5, 10]$ 0
F11 $f(x) = -a \exp\left(-b \sqrt{\tfrac{1}{n} \sum_{i=1}^{n} x_i^2}\right) - \exp\left(\tfrac{1}{n} \sum_{i=1}^{n} \cos(c\, x_i)\right) + a + \exp(1)$ 30 $[-32, 32]$, $a = 20$, $b = 0.3$, $c = 2\pi$ 0
F12 $f(x) = \sum_{i=1}^{n-1} \left[ e^{-0.2} \sqrt{x_i^2 + x_{i+1}^2} + 3\left(\cos 2x_i + \sin 2x_{i+1}\right) \right]$ 2 $[-35, 35]$ −4.59
F13 $f(x) = 1 + \sum_{i=1}^{n} \sin^2 x_i - 0.1\, e^{-\sum_{i=1}^{n} x_i^2}$ 30 $[-10, 10]$ 0.9
F14 $f(x) = \sum_{i=1}^{n} i\, x_i^4 + \mathrm{random}[0, 1)$ 30 $[-1.28, 1.28]$ 0 + random noise
F15 $f(x) = 10n + \sum_{i=1}^{n} \left[ x_i^2 - 10 \cos(2\pi x_i) \right]$ 30 $[-5.12, 5.12]$ 0
F16 $f(x) = \sum_{i=1}^{n-1} \left[ 100\left(x_i^2 - x_{i+1}\right)^2 + \left(1 - x_i\right)^2 \right]$ 30 $[-5, 10]$ 0
F17 $f(x) = 1 - \cos\left(2\pi \sqrt{\sum_{i=1}^{D} x_i^2}\right) + 0.1 \sqrt{\sum_{i=1}^{D} x_i^2}$ 30 $[-100, 100]$ 0
F18 $f(x) = \sum_{i=1}^{n} \epsilon_i |x_i|^i$ 30 $[-5, 5]$, $\epsilon_i$ random 0
F19 $f(x) = \left(\sum_{i=1}^{n} |x_i|\right) \exp\left(-\sum_{i=1}^{n} \sin x_i^2\right)$ 30 $[-2\pi, 2\pi]$ 0
F20 $f(x) = \left(\sum_{i=1}^{n} \sin^2 x_i - e^{-\sum_{i=1}^{n} x_i^2}\right) e^{-\sum_{i=1}^{n} \sin^2 \sqrt{|x_i|}}$ 30 $[-10, 10]$ −1
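
For readers who want to re-run the experiments, the definitions above translate directly into code. The sketch below transcribes three of them (F7 Sphere, F15 Rastrigin, F16 Rosenbrock) in NumPy; the remaining functions follow the same pattern.

```python
"""NumPy transcriptions of three testbench functions from the table above."""
import numpy as np

def sphere(x):        # F7: sum of x_i^2, domain [-5.12, 5.12]
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def rastrigin(x):     # F15: 10n + sum(x_i^2 - 10 cos(2 pi x_i)), domain [-5.12, 5.12]
    x = np.asarray(x, dtype=float)
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def rosenbrock(x):    # F16: sum of 100(x_i^2 - x_{i+1})^2 + (1 - x_i)^2, domain [-5, 10]
    x = np.asarray(x, dtype=float)
    return float(np.sum(100.0 * (x[:-1] ** 2 - x[1:]) ** 2 + (1.0 - x[:-1]) ** 2))

# Quick sanity checks against the f_min column above (dim = 30 as in the experiments).
assert sphere(np.zeros(30)) == 0.0
assert rastrigin(np.zeros(30)) == 0.0
assert rosenbrock(np.ones(30)) == 0.0
```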

References

1. Sang-To, T.; Le-Minh, H.; Wahab, M.A.; Thanh, C.L. A new metaheuristic algorithm: Shrimp and Goby association search algorithm and its application for damage identification in large-scale and complex structures. Adv. Eng. Softw.; 2023; 176, 103363. [DOI: https://dx.doi.org/10.1016/j.advengsoft.2022.103363]

2. Lodewijks, G.; Cao, Y.; Zhao, N.; Zhang, H. Reducing CO2 Emissions of an Airport Baggage Handling Transport System Using a Particle Swarm Optimization Algorithm. IEEE Access; 2021; 9, pp. 121894-121905. [DOI: https://dx.doi.org/10.1109/ACCESS.2021.3109286]

3. Kumar, A. Chapter 5—Application of nature-inspired computing paradigms in optimal design of structural engineering problems—A review. Nature-Inspired Computing Paradigms in Systems; Intelligent Data-Centric Systems Mellal, M.A.; Pecht, M.G. Academic Press: Cambridge, MA, USA, 2021; pp. 63-74. [DOI: https://dx.doi.org/10.1016/B978-0-12-823749-6.00010-6]

4. Sharma, S.; Saha, A.K.; Lohar, G. Optimization of weight and cost of cantilever retaining wall by a hybrid metaheuristic algorithm. Eng. Comput.; 2022; 38, pp. 2897-2923. [DOI: https://dx.doi.org/10.1007/s00366-021-01294-x]

5. Peraza-Vázquez, H.; Peña-Delgado, A.; Ranjan, P.; Barde, C.; Choubey, A.; Morales-Cepeda, A.B. A bio-inspired method for mathematical optimization inspired by arachnida salticidade. Mathematics; 2022; 10, 102. [DOI: https://dx.doi.org/10.3390/math10010102]

6. Peña-Delgado, A.F.; Peraza-Vázquez, H.; Almazán-Covarrubias, J.H.; Cruz, N.T.; García-Vite, P.M.; Morales-Cepeda, A.B.; Ramirez-Arredondo, J.M. A Novel Bio-Inspired Algorithm Applied to Selective Harmonic Elimination in a Three-Phase Eleven-Level Inverter. Math. Probl. Eng.; 2020; 8856040. [DOI: https://dx.doi.org/10.1155/2020/8856040]

7. Tzanetos, A.; Dounias, G. Nature inspired optimization algorithms or simply variations of metaheuristics?. Artif. Intell. Rev.; 2021; 54, pp. 1841-1862. [DOI: https://dx.doi.org/10.1007/s10462-020-09893-8]

8. Joyce, T.; Herrmann, J.M. A Review of no free lunch theorems, and their implications for metaheuristic optimisation. Nature-Inspired Algorithms and Applied Optimization; Yang, X.S. Springer International Publishing: Cham, Switzerland, 2018; pp. 27-51. [DOI: https://dx.doi.org/10.1007/978-3-319-67669-2_2]

9. Almazán-Covarrubias, J.H.; Peraza-Vázquez, H.; Peña-Delgado, A.F.; García-Vite, P.M. An Improved Dingo Optimization Algorithm Applied to SHE-PWM Modulation Strategy. Appl. Sci.; 2022; 12, 992. [DOI: https://dx.doi.org/10.3390/app12030992]

10. Abualigah, L.; Hanandeh, E.S.; Zitar, R.A.; Thanh, C.L.; Khatir, S.; Gandomi, A.H. Revolutionizing sustainable supply chain management: A review of metaheuristics. Eng. Appl. Artif. Intell.; 2023; 126, 106839. [DOI: https://dx.doi.org/10.1016/j.engappai.2023.106839]

11. Beyer, H.G.; Beyer, H.G.; Schwefel, H.P.; Schwefel, H.P. Evolution strategies—A comprehensive introduction. Nat. Comput.; 2002; 1, pp. 3-52. [DOI: https://dx.doi.org/10.1023/A:1015059928466]

12. Yao, X.; Liu, Y.; Lin, G. Evolutionary programming made faster. IEEE Trans. Evol. Comput.; 1999; 3, pp. 82-102. [DOI: https://dx.doi.org/10.1109/4235.771163]

13. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim.; 1997; 11, pp. 341-359. [DOI: https://dx.doi.org/10.1023/A:1008202821328]

14. Holland, J.H. Genetic algorithms. Sci. Am.; 1992; 267, pp. 66-73. [DOI: https://dx.doi.org/10.1038/scientificamerican0792-66]

15. Koza, J.R. Genetic programming as a means for programming computers by natural selection. Stat. Comput.; 1994; 4, pp. 87-112. [DOI: https://dx.doi.org/10.1007/BF00175355]

16. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst.; 2019; 97, pp. 849-872. [DOI: https://dx.doi.org/10.1016/j.future.2019.02.028]

17. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw.; 2017; 114, pp. 163-191. [DOI: https://dx.doi.org/10.1016/j.advengsoft.2017.07.002]

18. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw.; 2016; 95, pp. 51-67. [DOI: https://dx.doi.org/10.1016/j.advengsoft.2016.01.008]

19. Askarzadeh, A. A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Comput. Struct.; 2016; 169, pp. 1-12. [DOI: https://dx.doi.org/10.1016/j.compstruc.2016.03.001]

20. Mirjalili, S. Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl.; 2016; 27, pp. 1053-1073. [DOI: https://dx.doi.org/10.1007/s00521-015-1920-1]

21. Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw.; 2015; 83, pp. 80-98. [DOI: https://dx.doi.org/10.1016/j.advengsoft.2015.01.010]

22. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw.; 2014; 69, pp. 46-61. [DOI: https://dx.doi.org/10.1016/j.advengsoft.2013.12.007]

23. Kaveh, A.; Farhoudi, N. A new optimization method: Dolphin echolocation. Adv. Eng. Softw.; 2013; 59, pp. 53-70. [DOI: https://dx.doi.org/10.1016/j.advengsoft.2013.03.004]

24. Gandomi, A.H.; Yang, X.S.; Alavi, A.H.; Talatahari, S. Bat algorithm for constrained optimization tasks. Neural Comput. Appl.; 2013; 22, pp. 1239-1255. [DOI: https://dx.doi.org/10.1007/s00521-012-1028-9]

25. Gandomi, A.H.; Alavi, A.H. Krill herd: A new bio-inspired optimization algorithm. Commun. Nonlinear Sci. Numer. Simul.; 2012; 17, pp. 4831-4845. [DOI: https://dx.doi.org/10.1016/j.cnsns.2012.05.010]

26. Yang, X.S. Firefly algorithms for multimodal optimization. Stochastic Algorithms: Foundations and Applications; Lecture Notes in Computer Science Springer: Berlin/Heidelberg, Germany, 2009; Volume 5792, [DOI: https://dx.doi.org/10.1007/978-3-642-04944-6_14]

27. Dorigo, M.; Caro, G.D. Ant Colony Optimization: A New Meta-Heuristic. Proceedings of the 1999 Congress on Evolutionary Computation (CEC 1999); Washington, DC, USA, 6–9 July 1999; [DOI: https://dx.doi.org/10.1109/CEC.1999.782657]

28. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. Proceedings of the IEEE International Conference on Neural Networks; Perth, WA, Australia, 27 November–1 December 1995; [DOI: https://dx.doi.org/10.4018/ijmfmp.2015010104]

29. Kaveh, A.; Dadras, A. A novel meta-heuristic optimization algorithm: Thermal exchange optimization. Adv. Eng. Softw.; 2017; 110, pp. 69-84. [DOI: https://dx.doi.org/10.1016/j.advengsoft.2017.03.014]

30. Kaveh, A.; Bakhshpoori, T. Water Evaporation Optimization: A novel physically inspired optimization algorithm. Comput. Struct.; 2016; 167, pp. 69-85. [DOI: https://dx.doi.org/10.1016/j.compstruc.2016.01.008]

31. Kashan, A.H. A new metaheuristic for optimization: Optics inspired optimization (OIO). Comput. Oper. Res.; 2015; 55, pp. 99-125. [DOI: https://dx.doi.org/10.1016/j.cor.2014.10.011]

32. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci.; 2009; 179, pp. 2232-2248. [DOI: https://dx.doi.org/10.1016/j.ins.2009.03.004]

33. Formato, R.A. Central force optimization: A new metaheuristic with applications in applied electromagnetics. Prog. Electromagn. Res.; 2007; 77, pp. 425-491. [DOI: https://dx.doi.org/10.2528/PIER07082403]

34. Erol, O.K.; Eksin, I. A new optimization method: Big Bang-Big Crunch. Adv. Eng. Softw.; 2006; 37, pp. 106-111. [DOI: https://dx.doi.org/10.1016/j.advengsoft.2005.04.005]

35. Zhu, A.; Gu, Z.; Hu, C.; Niu, J.; Xu, C.; Li, Z. Political optimizer with interpolation strategy for global optimization. PLoS ONE; 2021; 16, e0251204. [DOI: https://dx.doi.org/10.1371/journal.pone.0251204]

36. Fadakar, E.; Ebrahimi, M. A new metaheuristic football game inspired algorithm. Proceedings of the 1st Conference on Swarm Intelligence and Evolutionary Computation (CSIEC 2016); Bam, Iran, 9–11 March 2016; [DOI: https://dx.doi.org/10.1109/CSIEC.2016.7482120]

37. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching-learning-based optimization: A novel method for constrained mechanical design optimization problems. CAD—Comput.-Aided Des.; 2011; 43, pp. 303-315. [DOI: https://dx.doi.org/10.1016/j.cad.2010.12.015]

38. Atashpaz-Gargari, E.; Lucas, C. Imperialist Competitive Algorithm: An Algorithm for Optimization Inspired by Imperialistic Competition. Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC 2007); Singapore, 25–28 September 2007; [DOI: https://dx.doi.org/10.1109/CEC.2007.4425083]

39. Peraza-Vázquez, H.; Peña-Delgado, A.; Merino-Treviño, M.; Morales-Cepeda, A.B.; Sinha, N. A novel metaheuristic inspired by horned lizard defense tactics. Artif. Intell. Rev.; 2024; 57, pp. 1-65. [DOI: https://dx.doi.org/10.1007/s10462-023-10653-7]

40. Harifi, S.; Mohammadzadeh, J.; Khalilian, M.; Ebrahimnejad, S. Giza Pyramids Construction: An ancient-inspired metaheuristic algorithm for optimization. Evol. Intell.; 2021; 14, pp. 1743-1761. [DOI: https://dx.doi.org/10.1007/s12065-020-00451-3]

41. Patel, R.N.; Khil, V.; Abdurahmonova, L.; Driscoll, H.; Patel, S.; Pettyjohn-Robin, O.; Shah, A.; Goldwasser, T.; Sparklin, B.; Cronin, T.W. Mantis shrimp identify an object by its shape rather than its color during visual recognition. J. Exp. Biol.; 2021; 224, 242256. [DOI: https://dx.doi.org/10.1242/jeb.242256] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33737389]

42. Patel, R.N.; Cronin, T.W. Landmark navigation in a mantis shrimp. Proc. R. Soc. B; 2020; 287, 20201898. [DOI: https://dx.doi.org/10.1098/rspb.2020.1898] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33023415]

43. Streets, A.; England, H.; Marshall, J. Colour vision in stomatopod crustaceans: More questions than answers. J. Exp. Biol.; 2022; 225, jeb243699. [DOI: https://dx.doi.org/10.1242/jeb.243699]

44. Thoen, H.H.; How, M.J.; Chiou, T.H.; Marshall, J. A different form of color vision in mantis shrimp. Science; 2014; 343, pp. 411-413. [DOI: https://dx.doi.org/10.1126/science.1245824]

45. Chiou, T.H.; Kleinlogel, S.; Cronin, T.; Caldwell, R.; Loeffler, B.; Siddiqi, A.; Goldizen, A.; Marshall, J. Circular polarization vision in a stomatopod crustacean. Curr. Biol.; 2008; 18, pp. 429-434. [DOI: https://dx.doi.org/10.1016/j.cub.2008.02.066]

46. Zhong, B.; Wang, X.; Gan, X.; Yang, T.; Gao, J. A biomimetic model of adaptive contrast vision enhancement from mantis shrimp. Sensors; 2020; 20, 4588. [DOI: https://dx.doi.org/10.3390/s20164588]

47. Cronin, T.W.; Chiou, T.H.; Caldwell, R.L.; Roberts, N.; Marshall, J. Polarization signals in mantis shrimps. Proceedings of the Polarization Science and Remote Sensing IV; San Diego, CA, USA, 3–4 August 2009; Volume 7461, [DOI: https://dx.doi.org/10.1117/12.828492]

48. Wang, T.; Wang, S.; Gao, B.; Li, C.; Yu, W. Design of Mantis-Shrimp-Inspired Multifunctional Imaging Sensors with Simultaneous Spectrum and Polarization Detection Capability at a Wide Waveband. Sensors; 2024; 24, 1689. [DOI: https://dx.doi.org/10.3390/s24051689]

49. Daly, I.M.; How, M.J.; Partridge, J.C.; Temple, S.E.; Marshall, N.J.; Cronin, T.W.; Roberts, N.W. Dynamic polarization vision in mantis shrimps. Nat. Commun.; 2016; 7, pp. 1-9. [DOI: https://dx.doi.org/10.1038/ncomms12140] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/27401817]

50. Gagnon, Y.L.; Templin, R.M.; How, M.J.; Marshall, N.J. Circularly polarized light as a communication signal in mantis shrimps. Curr. Biol.; 2015; 25, pp. 3074-3078. [DOI: https://dx.doi.org/10.1016/j.cub.2015.10.047] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26585281]

51. Alavi, S. Statistical Mechanics: Theory and Molecular Simulation. By Mark E. Tuckerman. Angew. Chem. Int. Ed.; 2011; 50, pp. 12138-12139. [DOI: https://dx.doi.org/10.1002/anie.201105752]

52. Patek, S.N.; Caldwell, R.L. Extreme impact and cavitation forces of a biological hammer: Strike forces of the peacock mantis shrimp Odontodactylus scyllarus. J. Exp. Biol.; 2005; 208, pp. 3655-3664. [DOI: https://dx.doi.org/10.1242/jeb.01831]

53. Patek, S.N.; Nowroozi, B.N.; Baio, J.E.; Caldwell, R.L.; Summers, A.P. Linkage mechanics and power amplification of the mantis shrimp’s strike. J. Exp. Biol.; 2007; 210, pp. 3677-3688. [DOI: https://dx.doi.org/10.1242/jeb.006486]

54. DeVries, M.S.; Murphy, E.A.; Patek, S.N. Research article: Strike mechanics of an ambush predator: The spearing mantis shrimp. J. Exp. Biol.; 2012; 215, pp. 4374-4384. [DOI: https://dx.doi.org/10.1242/jeb.075317]

55. Cox, S.M.; Schmidt, D.; Modarres-Sadeghi, Y.; Patek, S.N. A physical model of the extreme mantis shrimp strike: Kinematics and cavitation of Ninjabot. Bioinspiration Biomimetics; 2014; 9, 016014. [DOI: https://dx.doi.org/10.1088/1748-3182/9/1/016014] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/24503516]

56. Caldwell, R.L.; Dingle, J. The Influence of Size Differential On Agonistic Encounters in the Mantis Shrimp, Gonodactylus Viridis. Behaviour; 2008; 69, pp. 255-264. [DOI: https://dx.doi.org/10.1163/156853979X00502]

57. Caldwell, R.L. Cavity occupation and defensive behaviour in the stomatopod Gonodactylus festai: Evidence for chemically mediated individual recognition. Anim. Behav.; 1979; 27, pp. 194-201. [DOI: https://dx.doi.org/10.1016/0003-3472(79)90139-8]

58. Caldwell, R.L.; Dingle, H. Ecology and evolution of agonistic behavior in stomatopods. Die Naturwissenschaften; 1975; 62, pp. 214-222. [DOI: https://dx.doi.org/10.1007/BF00603166]

59. Berzins, I.K.; Caldwell, R.L. The effect of injury on the agonistic behavior of the Stomatopod, Gonodactylus Bredini (manning). Mar. Behav. Physiol.; 1983; 10, pp. 83-96. [DOI: https://dx.doi.org/10.1080/10236248309378609]

60. Steger, R.; Caldwell, R.L. Intraspecific deception by bluffing: A defense strategy of newly molted stomatopods (Arthropoda: Crustacea). Science; 1983; 221, pp. 558-560. [DOI: https://dx.doi.org/10.1126/science.221.4610.558] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/17830957]

61. Caldwell, R.L. The Deceptive Use of Reputation by Stomatopods; State University of New York Press: New York, NY, USA, 1986; pp. 129-145.

62. Abualigah, L.; Diabat, A.; Mirjalili, S.; Elaziz, M.A.; Gandomi, A.H. The Arithmetic Optimization Algorithm. Comput. Methods Appl. Mech. Eng.; 2021; 376, 113609. [DOI: https://dx.doi.org/10.1016/j.cma.2020.113609]

63. Zhong, C.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl.-Based Syst.; 2022; 251, 109215. [DOI: https://dx.doi.org/10.1016/j.knosys.2022.109215]

64. Zhao, S.; Zhang, T.; Ma, S.; Chen, M. Dandelion Optimizer: A nature-inspired metaheuristic algorithm for engineering applications. Eng. Appl. Artif. Intell.; 2022; 114, 105075. [DOI: https://dx.doi.org/10.1016/j.engappai.2022.105075]

65. Sulaiman, M.H.; Mustaffa, Z.; Saari, M.M.; Daniyal, H.; Mirjalili, S. Evolutionary mating algorithm. Neural Comput. Appl.; 2023; 35, pp. 487-516. [DOI: https://dx.doi.org/10.1007/s00521-022-07761-w]

66. Houssein, E.H.; Oliva, D.; Samee, N.A.; Mahmoud, N.F.; Emam, M.M. Liver Cancer Algorithm: A novel bio-inspired optimizer. Comput. Biol. Med.; 2023; 165, 107389. [DOI: https://dx.doi.org/10.1016/j.compbiomed.2023.107389] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/37678138]

67. Villuendas-Rey, Y.; Velázquez-Rodríguez, J.L.; Alanis-Tamez, M.D.; Moreno-Ibarra, M.A.; Yáñez-Márquez, C. Mexican axolotl optimization: A novel bioinspired heuristic. Mathematics; 2021; 9, 781. [DOI: https://dx.doi.org/10.3390/math9070781]

68. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl.; 2020; 152, 113377. [DOI: https://dx.doi.org/10.1016/j.eswa.2020.113377]

69. Alzoubi, S.; Abualigah, L.; Sharaf, M.; Daoud, M.S.; Khodadadi, N.; Jia, H. Synergistic Swarm Optimization Algorithm. CMES Comput. Model. Eng. Sci.; 2024; 139, pp. 2557-2604. [DOI: https://dx.doi.org/10.32604/cmes.2023.045170]

70. Kaur, S.; Awasthi, L.K.; Sangal, A.L.; Dhiman, G. Tunicate Swarm Algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell.; 2020; 90, 103541. [DOI: https://dx.doi.org/10.1016/j.engappai.2020.103541]

71. Jia, H.; Wen, Q.; Wang, Y.; Mirjalili, S. Catch fish optimization algorithm: A new human behavior algorithm for solving clustering problems. Clust. Comput.; 2024; 27, pp. 13295-13332. [DOI: https://dx.doi.org/10.1007/s10586-024-04618-w]

72. Kumar, A.; Wu, G.; Ali, M.Z.; Mallipeddi, R.; Suganthan, P.N.; Das, S. A test-suite of non-convex constrained optimization problems from the real-world and some baseline results. Swarm Evol. Comput.; 2020; 56, 100693. [DOI: https://dx.doi.org/10.1016/j.swevo.2020.100693]

73. Lin, J.; Magnago, F.H. Optimal power flow In Electricity Markets: Theories and Applications; Wiley: Hoboken, NJ, USA, 2017; pp. 147-171. [DOI: https://dx.doi.org/10.1002/9781119179382.CH6]

74. Nucci, C.A.; Borghetti, A.; Napolitano, F.; Tossani, F. Basics of Power systems analysis. Springer Handbook of Power Systems; Papailiou, K.O. Springer: Singapore, 2021; pp. 273-366. [DOI: https://dx.doi.org/10.1007/978-981-32-9938-2_5]

75. Huneault, M.; Galiana, F.D. A Survey of the Optimal Power Flow Literature. IEEE Trans. Power Syst.; 1991; 6, pp. 762-770. [DOI: https://dx.doi.org/10.1109/59.76723]

76. Ela, A.A.A.E.; Abido, M.A.; Spea, S.R. Optimal power flow using differential evolution algorithm. Electr. Power Syst. Res.; 2010; 80, pp. 878-885. [DOI: https://dx.doi.org/10.1016/j.epsr.2009.12.018]

© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).