Optimization techniques have received significant attention for reliably addressing practical problems. The elk herd optimizer (EHO) is a promising meta-heuristic inspired by the social behavior and reproduction of elks. However, EHO has drawbacks, including poor convergence and a tendency to fall into local optima on various optimization problems. Furthermore, the algorithm does not account for the memory of its search agents and has difficulty balancing exploration and exploitation, which can lead to premature convergence toward a local optimum. This study addresses these issues by proposing an ameliorated EHO (AEHO) that incorporates several modifications into the basic EHO algorithm, as follows: A new hybrid memory-based EHO is developed that uses the particle swarm optimization (PSO) algorithm to guide EHO toward reasonable candidate solutions; this hybrid approach enhances EHO’s diversity and balances its search capabilities to achieve strong search performance. First, a memory component is added to EHO using the pbest idea from PSO to exploit promising search regions, focusing on improving the best solutions and preventing the algorithm from getting stuck in a local optimum. In addition, the PSO concepts of gbest and pbest are used to enhance the best positions of the search agents in EHO. Finally, a greedy selection method is used to improve the efficiency of exploration in AEHO, using the fitness values before and after updates as an indicator of the efficacy of the best solutions. To evaluate the performance of the AEHO algorithm against a group of well-known competitors, we use ten complex test functions from the CEC2022 test suite and thirty complex test functions from the CEC2014 test suite.
Analysis of the experimental findings shows that AEHO performed best on 84% of the CEC2014 functions and 74% of the CEC2022 functions, ranking first in both suites with average rankings of 3.11 and 1.62, respectively. The mean computation time of AEHO is about one-third of the average computation time of the best-ranked competing method, indicating that AEHO not only performs very well in global searches but also exhibits greater search efficiency than newer optimization algorithms. The applicability and reliability of AEHO were further studied on four constrained engineering design problems and a real-world industrial process. The results demonstrate the superiority and promising potential of AEHO in addressing a wide range of challenging real-world problems.
Introduction
In recent years, optimization methods have been extensively studied for a variety of real-world problems (Panagant et al. 2023; Kumar et al. 2024). Optimization is a common procedure in both work and daily life that involves finding the best possible combination of decision variables to solve a given problem (Rezk et al. 2024). It has become increasingly necessary to find efficient and effective solutions to optimization problems (Tejani et al. 2016). Many optimization problems are nonlinear and non-convex, with numerous decision variables and occasionally complex objective functions subject to several constraints. Moreover, such problems may have variable or abrupt peaks, as well as multiple local optima (Sowmya et al. 2024). Every branch of research and engineering needs to solve optimization problems to identify ever-better solutions (Nonut et al. 2022). Accordingly, capable methods are required to match the complexity of contemporary scientific and engineering problems (Zouache et al. 2024; Izci et al. 2025; Abdel-Salam et al. 2025). The need for precise and efficient optimization approaches is growing as optimization problems become more complex and multifaceted (Cao et al. 2020; Aye et al. 2023). Thus, throughout the past decade, scholars have studied optimization methods such as dynamic programming, machine learning, and linear programming. Optimization methods are essential for significantly increasing problem-solving efficiency, reducing computational burden, and conserving financial and computational resources (Tejani et al. 2017).
A comprehensive analysis of optimization methods in the current literature reveals a variety of approaches (Braik 2021; Zhu et al. 2024), each with its own benefits and drawbacks, ranging from traditional linear and nonlinear mathematical methods (Vagaská and Gombár 2021) to nature-inspired methods (Braik et al. 2021; Zhu et al. 2024). Mathematical approaches encompass optimization strategies that iteratively find the best solution to a problem using a clearly established mathematical model and a starting condition; examples include Newton’s method (Bertsekas 2022) and the Nelder-Mead algorithm (Shirgir et al. 2024). Large-scale optimization problems have been solved with some degree of success using such standard methods (Alavi et al. 2021). Nevertheless, these techniques require a feasible initial vector inside the search space and have an intrinsic dependency on gradient information. They may not be the most effective way to solve contemporary large-scale multimodal optimization problems and often require a thorough understanding of the problem (Rezk et al. 2024). When applied to complex optimization problems with nonlinear search spaces, numerous constraints, or many decision variables, conventional methods may only produce local optimum solutions. Finding local optima is straightforward, yet there is no assurance of discovering the global optimum. Nature-inspired algorithms, or meta-heuristics, are algorithmic frameworks that employ stochastic operators and heuristics inspired by natural phenomena (Comert and Yazgan 2023). Meta-heuristics offset the drawbacks of mathematical approaches with the benefits of stochasticity and simplicity of use, and their application to challenging scientific and technical problems has gained favor in academia (Rezk et al. 2024).
The artificial lemming algorithm (ALA), a newly published meta-heuristic algorithm, mathematically simulates four lemming behaviors found in nature: long-distance migration, hole-digging, foraging, and predator avoidance (Xiao et al. 2025). Compared with other promising state-of-the-art meta-heuristics, Xiao et al. (2025) reported that ALA exhibits robust and reliable overall optimization efficiency, achieving superior solution accuracy, convergence speed, and stability on the CEC2017 and CEC2022 benchmark test sets. Another recently introduced meta-heuristic is the snow ablation optimizer (SAO), which simulates the melting and sublimation of snow to find the best solution to challenging optimization problems (Deng and Liu 2023). Xiao et al. (2024) introduced a captivating new version of SAO called the multi-strategy boosted SAO (MSOA), which incorporates four enhancement strategies into SAO to address its limitations, such as premature convergence and lack of population diversity, especially when dealing with high-dimensional complex problems.
In many real-world problems, meta-heuristics can be highly successful, yet they may not solve other problems adequately (Ghoneim et al. 2025; Abualigah et al. 2025). This is partly because these methods are similar and settle on local or suboptimal solutions (Kumar et al. 2023; Ghasemi et al. 2024). The literature is now replete with meta-heuristics inspired by natural, artificial, and sometimes supernatural events (Sharma and Raju 2024). Inspirations for new optimization techniques range from simulated annealing to evolutionary theory, swarm intelligence, and even human behavior, as well as the work of musicians. Natural processes have played a significant role in the history of meta-heuristics (Gendreau et al. 2010). However, over the last 20 years, numerous self-described innovative algorithms based on metaphors have been presented in the literature. In most cases, it is unfortunately unclear why these metaphors are used and what additional insight they offer the meta-heuristics community. Some of the more problematic aspects of research on so-called novel metaphor-based methods are as follows: First, they introduce new terminology through a metaphor, renaming previously known concepts in the optimization field. Second, mathematical models are constructed based on the proposed metaphor; however, these models are simplistic, overly reliant on the metaphor itself, and thus do not correctly represent it. Further, the proposed algorithms are often inconsistent with the mathematical models generated by the metaphor.
Next, they employ arguments such as “it has never been used before” or “the mathematical models are different from those used in the past” to defend the use of an unfamiliar metaphor, rather than providing a solid scientific basis, explaining the optimization strategy the metaphor implements, and showing how it informs practical design choices in the presented algorithm. Last but not least, they provide biased evaluations and comparisons using various strategies, such as empirical evaluation on a small number of low-complexity problems and/or comparing the presented approach with older techniques whose operators are far from modern ones (Camacho-Villalón et al. 2023).
As per the points above, any newly developed optimization approach is expected to improve on current algorithms and offer unique benefits not present in algorithms previously documented in the literature. To tackle complex real-world problems, innovative optimization approaches provide an opportunity for information exchange, and new methods often use fast integration techniques or engines to improve the efficiency of existing ones. Thus, the optimization community benefits from innovative optimization techniques that remain relevant for testing different search strategies on specific real-world problems (Abdollahzadeh et al. 2024). These essential findings are the main driving force behind the current study. The three fundamental approaches to developing meta-heuristic algorithms in the literature are developing hyper-heuristics, combining existing optimization algorithms, and developing new optimization algorithms. Creating new optimization algorithms and integrating them with preexisting ones are complementary rather than adversarial endeavors. In addition to making up for the flaws of current optimization methods in certain situations or on certain problems, new optimization techniques may also provide superior solutions to complex real-world problems. Many contemporary optimization methods use operators or procedures with unique search properties, and these elements reveal ways to raise the accuracy of current optimization methods. Exploitation and exploration are the two most important elements of meta-heuristic algorithms for addressing optimization problems (Daliri et al. 2024). The subsection below explains these elements.
Exploration and exploitation
Regardless of the variations among meta-heuristic methods, they adhere to the same approach to estimating or finding global optimal solutions. The optimization process starts with random solutions that must be quickly and easily combined and modified. Consequently, the solutions spread over the search space. Owing to abrupt changes, solutions target various areas of the search space during this stage, known as “exploration” of the search space (Askarzadeh 2016). The main objectives of this stage are to escape local optima and locate the most promising areas inside the search space. After the search space has been broadly surveyed, solutions move gradually and locally toward the best solutions found so far, potentially improving their quality. The primary objective of this stage, referred to as “exploitation” (Xue and Shen 2020), is to refine the best solutions found during exploration. The search region is covered less in the exploitation phase than in the exploration stage, even though local optima may still be avoided; in this case, solutions steer clear of local solutions close to the global optimum. It follows that the exploration and exploitation stages have different objectives.
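Abstractly, many meta-heuristics realize this exploration-to-exploitation transition by shrinking a step-size coefficient over the iterations. The following minimal Python sketch is a generic illustration, not the EHO or AEHO update rule: a linearly decaying coefficient makes early iterations take large, exploratory steps and late iterations take small steps that exploit the neighborhood of the current best.

```python
import random

def decay(t, T, a_max=2.0):
    """Linearly decaying control coefficient: large values favour
    exploration (large steps), small values favour exploitation."""
    return a_max * (1 - t / T)

def step(position, best, t, T):
    """One position update: the step toward the current best shrinks
    as the decaying coefficient approaches zero."""
    a = decay(t, T)
    r = random.random()  # stochastic component typical of meta-heuristics
    return position + a * r * (best - position)
```

At t = 0 the coefficient is at its maximum, so an update can jump far across the search space; at t = T it reaches zero and the position is left unchanged, i.e., the search has fully shifted to exploitation.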
The most popular method for assessing the feasibility of a novel meta-heuristic algorithm with respect to exploration, exploitation, and their equilibrium is to illustrate how reliably it solves optimization problems in comparison with mathematical programming approaches and other current meta-heuristics. Although current meta-heuristics have shown their effectiveness in consistently locating global optimum solutions for many real-life problems, they cannot effectively identify the global optimum for all types of problems (Youfa et al. 2024). According to the no-free-lunch (NFL) theorem (Wolpert and Macready 1997), no generic optimization algorithm can identify the optimal solutions for all types of problems. Accordingly, it is critical for researchers to find, develop, or advocate novel meta-heuristics that can significantly improve on existing optimization methods for handling optimization problems.
Motivations of the proposed work
This work aims to improve the elk herd optimizer (EHO) (Al-Betar et al. 2024), a cutting-edge meta-heuristic. The capacity of EHO to divide the solution set into subsets according to patterns of animal behavior led to its selection as the optimizer in this investigation. The algorithm is inspired by the natural reproductive process of elk herds and divides the whole working process into two main breeding seasons: rutting and calving. Although EHO has demonstrated impressive results on many optimization problems (Al-Betar et al. 2024), it still suffers from insufficient search productivity and a tendency to become trapped in local optima, leading to poor outputs. This encouraged us to create an enhanced EHO algorithm that incorporates noticeable enhancements into the original EHO algorithm to overcome its shortcomings. We therefore propose an ameliorated version of EHO, referred to as AEHO, which operates in four phases: First, two adaptive functions replace two random numbers generated during the iterative loops of EHO. These functions are essential for improving the exploration and exploitation abilities of EHO and strike a better equilibrium between exploration and exploitation in AEHO. Second, a memory element is added to the elks in EHO, reinforced by employing the best solutions of the search agents of the particle swarm optimization (PSO) algorithm. This preserves the positions in the solution space that correspond to the best solutions EHO has produced so far, considerably improving the exploitation capability of AEHO. Third, EHO is supplied with the best solutions of PSO for future use: the most significant positions of elks are identified and enhanced using the personal best solutions (pbest) and global best solution (gbest) of PSO.
This phase further improves AEHO’s exploration ability. Lastly, AEHO’s global exploration is made more effective by employing a greedy selection strategy, which uses the search agents’ fitness values before and after updates to measure the effectiveness of the best solutions. For both EHO and PSO, the iteration-level hybridization mechanism thus ensures initial exploration and eventual exploitation, which helps AEHO reach the global optimum with an outstanding level of performance. This method significantly increases EHO’s ability to quickly arrive at optimal or nearly optimal solutions while maintaining a wide range of solutions. Furthermore, a comprehensive literature search reveals few EHO refinements aimed at increasing its local and global search capabilities across different domains. The presented AEHO algorithm was evaluated on demanding global optimization problems from the CEC2014 and CEC2022 test suites, benchmarks widely acknowledged by the optimization community for their difficult search spaces. The applicability of the AEHO algorithm was demonstrated by comparing it with other optimization methods and by solving well-known engineering design and industrial problems.
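The four phases described above can be summarized in a short Python sketch. This is an illustrative outline, not the authors' implementation: the function names, the adaptive weight `w`, and the coefficients `c1` and `c2` are assumptions, since the exact update equations are not reproduced in this section; only the roles of the pbest memory, the gbest guidance, and the greedy selection mirror the description.

```python
import random

def sphere(x):
    """Toy objective for demonstration (minimum 0 at the origin)."""
    return sum(v * v for v in x)

def greedy_update(pos, candidate, fitness):
    """Greedy selection: keep the candidate only if it improves fitness."""
    return candidate if fitness(candidate) < fitness(pos) else pos

def aeho_iteration(positions, pbest, gbest, fitness, t, T, c1=1.5, c2=1.5):
    """One illustrative AEHO-style iteration: each agent is nudged toward
    its own memory (pbest) and the global best (gbest), then filtered by
    greedy selection; memories and the global best are refreshed."""
    w = 1.0 - t / T  # assumed form of an adaptive function replacing a random number
    new_positions = []
    for i, x in enumerate(positions):
        cand = [xi + w * (c1 * random.random() * (pb - xi)
                          + c2 * random.random() * (gb - xi))
                for xi, pb, gb in zip(x, pbest[i], gbest)]
        x_new = greedy_update(x, cand, fitness)  # fitness before/after update
        new_positions.append(x_new)
        if fitness(x_new) < fitness(pbest[i]):   # refresh the memory component
            pbest[i] = x_new
    gbest = min(new_positions + [gbest], key=fitness)
    return new_positions, pbest, gbest
```

Because the greedy selection never accepts a worse position and gbest is updated by comparison, the best fitness found is non-increasing across iterations, which is the convergence behavior the paper attributes to this mechanism.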
Contributions of the work
The main goal of this study is to develop the AEHO algorithm and use it to solve global optimization benchmarks and other real-world applications to investigate its efficiency and applicability. The contributions to this study are as follows:
The AEHO algorithm, which incorporates a memory component, a hybridization method, a greedy selection strategy, and adaptive search parameters into the original EHO algorithm, is presented.
The proposed AEHO algorithm aims to improve exploration and exploitation capacities by guaranteeing adequate convergence speed and a balanced exploration-exploitation trade-off. The quality and effectiveness of the AEHO solutions were evaluated using the CEC2014 and CEC2022 test sets.
The suitability of AEHO in solving engineering design problems is investigated and compared to other meta-heuristic methods.
The results show how well AEHO solves complex optimization problems, making it suitable for several different practical uses.
Related works
Meta-heuristics were founded as problem-independent algorithmic frameworks, as they rely on heuristics drawn from natural events, animal life, biological processes, mathematics, and even human behavior. Given their advantages over mathematical methods - such as stochasticity, ease of comprehension, and the ability to treat black-box problems - meta-heuristics may be used in their place. Meta-heuristics can be broadly classified into the following classes: human-based algorithms (HAs), swarm-based algorithms (SAs), physics-based algorithms (PAs), sport-based algorithms (SBAs), music-based algorithms (MBAs), chemistry-based algorithms (CBAs), mathematics-based algorithms (MAs), and evolutionary-based algorithms (EAs). The algorithms in each of these classes are derived from the processes that gave rise to them (Zhao et al. 2024). These groups, and several well-known algorithms that fall under them, can be outlined as follows:
Evolutionary-based algorithms: EAs, historically among the most sophisticated meta-heuristic algorithms, were inspired by biological evolutionary phenomena including inheritance, natural selection, and other evolution-related processes (Back 1996). Genetic algorithms (GAs) have become the most widely used EAs and embody Darwin’s theory of evolution in biology; GA is recognized as one of the most popular and established meta-heuristics available. A set of individuals is selected at random as the initial population in the search space, and during the iterative phase it is evolved by a range of evolutionary operators, including selection, mutation, and reproduction. After the iteration cycles, the best individual found so far is regarded as the optimum solution. A classic type of EA that uses some of the same evolutionary operators as GAs is differential evolution (DE). GAs and DE differ primarily in that GAs focus more on the crossover operator, while DE places more emphasis on the mutation operator (Holland et al. 1992). Some of the well-known EAs reported in the literature are shown in Table 1.
Table 1. A brief summary of some popular evolutionary algorithms
| Algorithm | Inspiration (contribution) | Advantage | Disadvantage | Year |
|---|---|---|---|---|
| Genetic algorithm (GA) Holland et al. (1992) | Evolutionary concepts | Handles complex problems | No guarantee of optimality | 1975 |
| Differential evolution (DE) Storn and Price (1997) | Darwin’s theory of evolution | Simplicity and robustness | Premature convergence | 1997 |
| Biogeography-based optimization (BBO) Simon (2008) | Geographical distribution of living organisms | Handles multiple objectives | High computational cost | 2008 |
| Wildebeests herd optimization (WHO) Motevali et al. (2019) | Wildebeest herding behavior | Rapid convergence capability | Local optima trapping | 2019 |
| Liver cancer algorithm (LCA) Houssein et al. (2023) | Liver tumor growth | Parallelism and robustness | Premature convergence | 2023 |
| Genetic engineering algorithm (GEA) Sohrabi et al. (2024) | Genetic engineering concepts | Efficient global search | Computational complexity | 2024 |
Swarm-based algorithms: SAs, which rely on the collective activities of biological populations - such as microbes and animals - found in nature, are among the fastest-growing classes of meta-heuristic methods. A few conventional SAs have had a major impact on the optimization field. Ant colony optimization models the foraging strategy of an ant colony (Dorigo et al. 1996): while foraging, ants leave behind chemicals known as pheromones, and other ants detect the strength of these chemicals and follow the trail that leads to food. The PSO algorithm (Kennedy 1995) simulates the social interactions of flocking birds or schooling fish. The collaboration and division of labor that individual bees use to gather nectar in their environment are mimicked by the artificial bee colony (ABC) Karaboga and Basturk (2007). The whale optimization algorithm (WOA) Mirjalili and Lewis (2016) imitates encircling prey, bubble-net attacking, and searching for prey. The chameleon swarm algorithm (CSA) Braik (2021) mimics the hunting and foraging habits of chameleons in the wild, while the capuchin search algorithm (CapSA) baba et al. (2022) simulates the collective hunting behaviors of capuchin monkeys. The white shark optimizer (WSO) Braik et al. (2022) is another recent popular SA that imitates the foraging activities of great white sharks in the ocean. The Draco lizard optimizer (DLO) is a promising new swarm intelligence algorithm (Wang 2025) based on the distinctive features of the Draco lizard, especially its adept gliding mechanics and ecologically adaptive strategies. The frigate bird optimizer (FBO) is a promising SA inspired by the distinctive flight and feeding habits of frigate birds (Wang 2024). The fishing cat optimizer (FCO) is a recently developed meta-heuristic algorithm based on the distinctive hunting strategies of fishing cats, including ambush, detection, diving, and hunting (Wang 2025).
Table 2 lists some popular meta-heuristic algorithms in this category.
Table 2. A brief synopsis of some popular swarm-based methods
| Algorithm | Inspiration (contribution) | Advantage | Disadvantage | Year |
|---|---|---|---|---|
| Ant colony optimization (ACO) Dorigo et al. (1996) | Ant colony | Suitable for complex problems | Computationally expensive | 1992 |
| Particle swarm optimization (PSO) Kennedy (1995) | Birds’ social behavior | Ease of implementation | Possible early convergence | 1995 |
| Artificial bee colony (ABC) Karaboga and Basturk (2007) | Honey bees’ foraging habits | Simplicity and ease of use | Premature convergence | 2007 |
| Cuckoo search (CS) Yang et al. (2009) | Brood parasites of some cuckoo species | Simplicity | Parameter sensitivity | 2009 |
| Bat algorithm (BA) Yang (2010) | Bats’ echolocation habit | Fast convergence | Potentially trapped in local optima | 2010 |
| Firefly algorithm (FA) Yang (2009) | Fireflies’ social behavior | Global search capability | Convergence issues | 2010 |
| Fruit fly optimization (FOA) Pan (2012) | Fruit fly foraging behavior | Computational efficiency | Limited global search ability | 2011 |
| Moth-flame optimization (MFO) Mirjalili (2015) | Moth navigation in nature | Simplicity and efficiency | Local optima trapping | 2015 |
| Whale optimization algorithm (WOA) Mirjalili and Lewis (2016) | Social conduct of humpback whales | Powerful global search ability | Limited ability in high dimensions | 2016 |
| Salp swarm algorithm (SSA) Mirjalili et al. (2017) | Foraging and navigation conduct of salps | Fewer control parameters | Poor population diversity | 2017 |
| Colony predation algorithm (CPA) Jiaze et al. (2021) | Mass predation of animals | Excellent exploration ability | Limited exploitation | 2021 |
| Dwarf mongoose optimization (DMO) Agushaka et al. (2022) | Social conduct of dwarf mongoose | Effective exploration and exploitation | Slow convergence | 2022 |
| Artificial lemming algorithm (ALA) Xiao et al. (2025) | Lemming behaviors found in nature | Parallelism and scalability | Limited exploration | 2025 |
| Frigate bird optimizer (FBO) Wang (2024) | Flight and feeding habits of frigate birds | Robustness | Parameter sensitivity | 2024 |
| Fishing cat optimizer (FCO) Wang (2025) | Hunting strategies of fishing cats | Strong convergence and robustness | Complexity and parameter tuning | 2025 |
| Draco lizard optimizer (DLO) Wang (2025) | Features of the Draco lizard found in nature | Exploration-exploitation balance | Potential for premature convergence | 2025 |
| Rüppell’s fox optimizer (RFO) Braik and Al-Hiary (2025) | Foraging practices of Rüppell’s foxes | Ability to escape from local optima | Modest global exploration | 2025 |
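Since AEHO borrows the pbest memory and gbest guidance from PSO, the canonical PSO update referenced above is worth recalling. The sketch below uses typical default coefficients (`w`, `c1`, `c2`), which are common choices rather than values prescribed by this survey.

```python
import random

def pso_update(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Canonical PSO velocity/position update. pbest is the particle's own
    best-known position (the 'memory' that AEHO borrows); gbest is the
    swarm's best-known position; w is the inertia weight."""
    r1, r2 = random.random(), random.random()
    v_new = [w * vi + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    x_new = [xi + vi for xi, vi in zip(x, v_new)]
    return x_new, v_new
```

The cognitive term pulls a particle toward its own memory while the social term pulls it toward the swarm's best; when a resting particle already sits at both bests, the update leaves it in place.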
Human-based algorithms: The literature has given increasing attention to HAs, which are recently established meta-heuristics. The two primary factors that influenced the development of HAs were human social connections and non-physical activities. To simulate the dynamics of imperial rivalry and colonial assimilation, the imperialist competitive algorithm (ICA) was developed from human interactions (Atashpaz-Gargari 2007). The teaching-learning-based optimization (TLBO) approach draws on student collaboration and teacher guidance and is divided into two distinct phases: in the teaching phase, learners acquire knowledge from the teacher, while in the learning phase, they learn by interacting with one another. Some prominent examples of HAs are given in Table 3.
Table 3. A brief synopsis of some well-known human-based algorithms
| Algorithm | Inspiration (contribution) | Advantage | Disadvantage | Year |
|---|---|---|---|---|
| Society civilization algorithm (SCA) Ray and Liew (2003) | Human leadership phenomenon | Efficient for complex problems | Very complex | 2003 |
| League championship algorithm (LCA) Kashan (2009) | Sports leagues’ championship process | Simplicity of application | Sensitivity to parameter tuning | 2009 |
| Human-inspired algorithm (HIA) Zhang et al. (2009) | Human intelligence | Effectiveness in challenging problems | Potential for local optima | 2009 |
| Social emotional optimization algorithm (SEOA) Xu et al. (2010) | Social habits of humans | Rapid convergence in complex problems | Risk of early convergence | 2010 |
| Skill optimization algorithm (SOA) Givi and Hubalovska (2023) | Human efforts to acquire skills | Robustness | No guarantee of global optimum | 2023 |
| Revolution optimization algorithm (ROA) Hamadneh et al. (2025) | Societal revolutions | Adaptability | Slow convergence | 2025 |
| Builder optimization algorithm (BOA) Hamadneh et al. (2025) | Human construction strategies | High performance | Slow convergence | 2025 |
Physics-based algorithms: PAs form an essential subset of meta-heuristics. These models cover a wide range of physical principles, processes, events, concepts, and movements, including elements of heat, electricity, mechanics, atomic physics, and others. The gravitational search algorithm (GSA) is a well-known PA driven by the law of gravitation (Rashedi et al. 2009): a set of search agents gravitate toward one another, with a heavier search agent attracting others more strongly. Atom search optimization (ASO) Zhao et al. (2019) is another popular PA that uses the forces between atoms to mimic atomic motion; here, the interaction between atoms is driven by constraining forces derived from the Lennard-Jones potential and bond-length constraints. Table 4 contains a list of several well-known PAs.
Table 4. A brief synopsis of some popular physics-based algorithms
| Algorithm | Inspiration (contribution) | Advantage | Disadvantage | Year |
|---|---|---|---|---|
| Simulated annealing (SA) Kirkpatrick (1983) | The metallurgical process of annealing | Robustness and flexibility | Single-solution approach | 1983 |
| Gravitational search algorithm (GSA) Rashedi et al. (2009) | Mass interactions and gravity law | Relatively fast convergence | Potential accuracy issues | 2009 |
| Multi-verse optimizer (MVO) Mirjalili et al. (2016) | Multi-verse theory | Simple structure | Prone to local optima | 2016 |
| Thermal exchange optimization (TEO) (2017) | Newton’s cooling law | Effective exploration and exploitation | Modest diversity | 2017 |
| Henry gas solubility optimization (HGSO) Hashim et al. (2019) | Huddling behavior of gas | Self-organization | Local optima convergence | 2019 |
| Equilibrium optimizer (EO) Faramarzi et al. (2020) | Source-sink models based on physics | Simple structure | Slow convergence speed | 2020 |
| Slime mould algorithm (SMA) Li et al. (2020) | Slime mould oscillation pattern | Possibility of good performance | Convergence to local optimum | 2020 |
| Snow ablation optimizer (SAO) Deng and Liu (2023) | Sublimation and melting behavior of snow | Relatively simple structure | Premature convergence | 2023 |
| Geyser inspired algorithm (GEA) Ghasemi et al. (2024) | An unusual geological phenomenon | Reasonable exploration | Local optima entrapment | 2024 |
| Water uptake and transport in plants (WUTP) Braik and Al-Hiary (2025) | Water movement across plant membranes | Balanced exploration and exploitation | Entrapment in local optima | 2025 |
Mathematics-based algorithms: MAs are a new and crucial component of meta-heuristics, significantly advancing the optimization field. Specific mathematical procedures, rules, formulas, and theories form the basis of MAs. The sine cosine algorithm (SCA) Mirjalili (2016) and the arithmetic optimization algorithm (AOA) Abualigah et al. (2021) are representative instances of this class. The distribution behavior of the four fundamental arithmetic operations - addition, subtraction, division, and multiplication - forms the foundation of AOA, while SCA exploits the regularity and fluctuation of the sine and cosine functions. Even though MAs are currently less competitive than other meta-heuristic methods, this class shows promise. Other methods in this class include the circle search algorithm (CSA) Qais et al. (2022), which models the geometric properties of circles; the golden sine algorithm (GSA) Tanyildizi and Demir (2017), which builds on the sine function; and the Lévy flight distribution (LFD) method (Houssein et al. 2020), which models a Lévy flight random walk.
Sport-based algorithms: SBAs are driven by fitness regimens and human-centered physical activities. The league championship algorithm (LCA) Kashan (2009) is a traditional SBA fueled by competition between sports teams in a league: various teams compete in a simulated league over many weeks, and the outcome of each weekly match is influenced by the teams’ fitness values and win/loss records. A team’s lineup and style of play are modified during the recovery period in preparation for the following week’s match. The championship runs according to the league schedule until a termination criterion is satisfied, or for a set number of seasons. World cup optimization (WCO), which replicates FIFA’s world championships, tug of war optimization (TWO) Kaveh and Zolghadr (2016), which simulates a tug of war, and soccer league competition (SLC) Moosavian and Roodsari (2014), which models competition among soccer teams, are three additional SBAs commonly used to handle complex optimization problems.
Music-based algorithms: MBAs are a creative class of meta-heuristic algorithms inspired by melody and the music of nature. One famous MBA is harmony search (HS) Geem et al. (2001), which resembles musical improvisation in that musicians adjust the pitch of their instruments to achieve the most pleasing harmony. Another prominent instance of an MBA is the method of musical composition (MMC) Mora-Gutiérrez et al. (2014), which mimics a dynamic system of musical composition.
Chemistry-based algorithms: Chemical reactions and thermodynamics, the fundamental components of chemical reaction principles, form the basis of many CBAs. Chemical reaction optimization, or CRO (Lam and Li 2012), is a representative CBA. It relies on the theory that molecular collisions steer a chain reaction toward the lowest and most stable path on the attainable energy surface; four fundamental collision reactions are constructed according to the energy conservation concept. Another familiar CBA, inspired by the variety of chemical reactions and their frequency, is the artificial chemical reaction optimization algorithm, or ACROA (Alatas 2011). Although the aforementioned meta-heuristics are all unquestionably effective and appealing in many ways, we believe more can be done to extend them to additional real-world problems with intricate search spaces. To this end, new optimization methods may be developed, or existing ones improved. Accordingly, we created a new ameliorated version of the EHO algorithm (Al-Betar et al. 2024), called AEHO, and investigated its performance in addressing unconstrained and constrained benchmark optimization problems.
Elk herd optimizer
Meta-heuristics are stochastic population-based techniques inspired by biological computation, swarm behavior models, and many other natural phenomena. These techniques are frequently useful and efficient for handling complex optimization problems; a thorough survey of their categories and representative algorithms may be found elsewhere (Braik 2023). The main benefits of meta-heuristics are their ability to perform both local and global searches, their accuracy, and their minimal input requirements for handling constraints. They are highly adaptable and capable of handling challenging benchmarks and engineering design problems. The basic EHO algorithm is represented mathematically as follows:
Mathematical model of EHO
First, the number of families in a herd is determined by the proportion of bulls. Each family has a bull to guide it throughout the rutting season; the number of cows (harems) in a family is determined by the strength of its bull, and bulls engage in combat for supremacy to assert their dominance. During the calving season, each family then delivers calves in proportion to the number of its members. After the selection season, the best performers from each family are paired up and brought back for the next rutting season. This cycle is repeated to ensure that the final elk herd is ready to handle any challenges in the habitat. As described below, EHO maps the breeding cycle of elk herds onto an optimization framework through a number of procedural steps (Al-Betar et al. 2024):
Step 1: Initialize the parameters of EHO: The objective function, which assesses candidate solutions, and the solution representation, which describes the kind of search space, are the two prerequisites that allow EHO to incorporate problem-specific information. Typically, each decision variable in a simple continuous optimization problem has a range of values. Equation 1 (Al-Betar et al. 2024) provides the objective function's generic form.
$$\min_{x} f(x), \qquad x = (x_1, x_2, \ldots, x_d), \qquad x_j \in [lb_j, ub_j],\ j = 1, \ldots, d \tag{1}$$
where f(x) is the cost function used to calculate the fitness of each elk solution x, lb represents the lower limit, ub the upper bound, and d the total number of variables (also called the solution dimensionality). In Eq. 1, x_j denotes the jth attribute of an elk, constrained to the range [lb_j, ub_j], where lb_j is the bottom boundary at index j and ub_j is the upper bound at index j.

Step 2: Create the initial elk herd: Males and harems form the population of elk solutions, which is the original elk herd, EH. As per Eq. 2, EH is an n × d matrix (Al-Betar et al. 2024).
$$EH = \begin{bmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,d} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n,1} & x_{n,2} & \cdots & x_{n,d} \end{bmatrix} \tag{2}$$
where x_{i,j} is the ith elk in the jth dimension, n is the population size (i.e., the number of elks in the herd), and each row of EH is the starting position of one elk in the search space. Equation 3 (Al-Betar et al. 2024) provides a strategy for producing each solution in the continuous domain.
$$x_{i,j} = lb_j + U(0, 1) \times (ub_j - lb_j) \tag{3}$$
where U(0, 1) is a uniformly distributed random number between 0 and 1, and x_{i,j}, i = 1, …, n, j = 1, …, d, is the position of the ith elk at the jth dimension, offering a potential solution to the problem. The fitness score of every elk solution in EHO is determined using the objective function in Eq. 1. Based on their fitness scores, the elks are arranged in ascending order, where the value of the fitness function of elk i is f(x_i).
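Steps 1 and 2 can be sketched in a few lines of Python. This is a minimal illustration of Eqs. 2 and 3, not the authors' implementation; the function name and the sphere cost function standing in for Eq. 1 are our own choices.

```python
import numpy as np

def initialize_herd(n, d, lb, ub, rng=None):
    """Create the initial elk herd EH as an n-by-d matrix (Eq. 2), with each
    attribute drawn uniformly inside its bounds as in Eq. 3:
    x_ij = lb_j + U(0, 1) * (ub_j - lb_j)."""
    rng = np.random.default_rng(rng)
    lb = np.broadcast_to(np.asarray(lb, dtype=float), (d,))
    ub = np.broadcast_to(np.asarray(ub, dtype=float), (d,))
    return lb + rng.random((n, d)) * (ub - lb)

# Toy example: a sphere cost function standing in for Eq. 1
herd = initialize_herd(n=10, d=5, lb=-100.0, ub=100.0, rng=0)
fitness = np.sum(herd**2, axis=1)   # evaluate every elk
order = np.argsort(fitness)         # ascending order, best elk first
herd, fitness = herd[order], fitness[order]
```

Sorting the herd by ascending fitness here mirrors the ordering step that precedes the rutting season.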
Step 3: Rutting season: Elk families are produced during the rutting season according to Eq. 4 (Al-Betar et al. 2024).
$$B = \lceil Br \times n \rceil \tag{4}$$
where B is the total number of families and Br is the bull ratio, i.e., the initial rate of bulls in the herd. In the rutting season, the bulls begin fighting among themselves to form families according to Eq. 4, so elk families are formed based on the bull rate Br. Bulls are chosen from EH by taking the B elks with the best fitness ratings at the top of the sorted herd. To mimic natural circumstances, the elks that are more dominant in combat contests are given larger harems. The harems are allocated to each bull using the roulette wheel selection process, as per Eq. 5 (Al-Betar et al. 2024).
$$p_i = \frac{|f(x_i)|}{\sum_{k=1}^{B} |f(x_k)|} \tag{5}$$
where |f(x_i)| reflects bull i's absolute fitness value, the denominator represents the total absolute fitness values of all bulls, p_i represents the selection likelihood of each bull, and B defines the number of families. The allocation of harems to their bulls in Eq. 5 is thus determined by the bulls' fitness values relative to the overall fitness values: technically, each bull's selection probability is its absolute fitness score divided by the sum of all bulls' absolute fitness values. Algorithm 1's pseudocode yields the selection probability p_i that establishes which bulls will receive harems. In the approach, the harems are represented by a vector H of length k = n − B. Based on a roulette wheel decision, each harem is assigned the index of its bull. For instance, if the elk herd size is 10 (n = 10) and the bull rate is 30%, then B = 3 families result. In this case, an assignment such as H = [1, 1, 1, 2, 2, 3, 3] can be generated for the remaining n − B = 7 elks, where the primary bull has three harems and the second and third bulls have two harems each (Al-Betar et al. 2024).
[See PDF for image]
Algorithm 1
A pseudo-code that uses a roulette wheel method to describe the selection procedure
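The roulette-wheel assignment of Algorithm 1 can be rendered as a short Python sketch. This follows the description above (probability proportional to absolute fitness); the function name and the cumulative-distribution implementation are our own.

```python
import numpy as np

def assign_harems(bull_fitness, n_harems, rng=None):
    """Roulette-wheel harem assignment sketching Eq. 5 / Algorithm 1: each
    bull's selection probability is its absolute fitness divided by the
    sum of all bulls' absolute fitness values; one spin of the wheel per
    harem returns the index of the winning bull."""
    rng = np.random.default_rng(rng)
    f = np.abs(np.asarray(bull_fitness, dtype=float))
    p = f / f.sum()                  # Eq. 5: selection probability per bull
    cdf = np.cumsum(p)               # cumulative wheel segments
    return np.searchsorted(cdf, rng.random(n_harems))

# Herd of 10 with a 30% bull rate: B = 3 bulls and n - B = 7 harems
harems = assign_harems(bull_fitness=[9.0, 5.0, 3.0], n_harems=7, rng=1)
```

Each entry of `harems` is the (0-based) index of the bull that wins that harem, matching the vector H described above.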
Step 4: Calving season: According to Eq. 6 (Al-Betar et al. 2024), each family's calf is created during the calving season using the traits mostly found in its mother harem and father bull.
6
where γ is a random value between 0 and 1, x_{i,j}(t+1) represents the ith elk in dimension j at the (t+1)th iteration, h_{i,j}(t) stands for the ith mother harem elk in dimension j at the tth iteration, and b_{h_i,j}(t) stands for the father bull at the jth dimension and iteration t. According to Eq. 6, a calf reproduces in this way if it shares the same index i as its bull father in the family. The random number γ, γ ∈ [0, 1], determines the rate of characteristics inherited from a randomly picked elk in the herd; Eq. 6 thus substantially increases diversity by raising the probability that random components participate in the new calf. In Eq. 7 (Al-Betar et al. 2024), when a calf's index matches that of its mother, the elk inherits the features of both its mother, a harem, and its father, a bull.
7
where α and β are two randomly generated values in the range [0, 2], and r is a random bull's index in the current bull set. The family's bull and the current attribute of elk i at dimension j at iteration t both contribute to the attribute maintained in the calf. Sometimes, if the mother harem in Eq. 7 is not adequately safeguarded by her bull, she may mate with other bulls in the wild; the portions of features inherited from the previously created calves are identified using the random α and β parameters (Al-Betar et al. 2024).
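The two calving rules just described can be sketched as follows. This is an illustrative rendering based on the prose (a father-index calf mixes with a random elk at rate γ ∈ [0, 1]; any other calf moves toward its family bull and a random bull at rates α, β ∈ [0, 2]); the function and variable names are our own and the exact update forms are assumptions, not the authors' code.

```python
import numpy as np

def calve(herd, families, bulls, rng=None):
    """One calving step, assuming the inheritance rules described above:
    gamma in [0, 1] for father-index calves, alpha and beta in [0, 2]
    for mother-index calves."""
    rng = np.random.default_rng(rng)
    n, _ = herd.shape
    calves = herd.copy()
    for i in range(n):
        if i in bulls:                           # father-index calf
            k = rng.integers(n)                  # a random elk in the herd
            gamma = rng.random()
            calves[i] += gamma * (herd[k] - herd[i])
        else:                                    # mother-index calf
            h = bulls[families[i]]               # this family's bull
            r = bulls[rng.integers(len(bulls))]  # a random bull
            alpha, beta = 2 * rng.random(), 2 * rng.random()
            calves[i] += alpha * (herd[h] - herd[i]) + beta * (herd[r] - herd[i])
    return calves

herd = np.arange(12.0).reshape(4, 3)
bulls = [0]                     # a single bull, heading family 0
families = {1: 0, 2: 0, 3: 0}   # the remaining elks belong to family 0
calves = calve(herd, families, bulls, rng=2)
```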
Step 5: Selection season: Every family unites its bulls, calves, and harems. The solutions of the calves and bulls, which are technically preserved in separate matrices, are combined to form a single matrix. Based on their fitness ratings, the elks in this list are arranged in increasing order, and the top n elks are retained for the following generation. This kind of selection is known as (μ + λ)-selection in the context of evolution strategies, where λ is the offspring population and μ is the parent population (Al-Betar et al. 2024).
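The selection season reduces to a merge-sort-truncate operation. A minimal Python sketch (our own naming, with a sphere function as a stand-in cost):

```python
import numpy as np

def select_next_herd(parents, calves, cost, n):
    """Selection season: merge the parent and calf matrices, sort by fitness
    in ascending order, and keep the best n elks ((mu + lambda)-style
    survivor selection)."""
    merged = np.vstack([parents, calves])
    fitness = np.apply_along_axis(cost, 1, merged)
    order = np.argsort(fitness)[:n]
    return merged[order], fitness[order]

sphere = lambda x: float(np.sum(x**2))
parents = np.array([[3.0, 3.0], [1.0, 1.0]])
calves = np.array([[0.0, 0.0], [2.0, 2.0]])
next_herd, best_fit = select_next_herd(parents, calves, sphere, n=2)
# next_herd keeps the two best of the four merged elks
```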
Step 6: Termination criteria: The previously indicated steps are repeated until the termination condition is fulfilled. The termination criterion is often the maximum number of iterations, although it may also be the attainment of a target optimum value.
Implementation of EHO
In a nutshell, Fig. 1 contains the flowchart of the basic EHO algorithm (Al-Betar et al. 2024), whereas Algorithm 2 contains the pseudo-code.
[See PDF for image]
Algorithm 2
A pseudo-code describing the basic stages of the EHO algorithm
[See PDF for image]
Fig. 1
A flow diagram outlining the fundamental EHO algorithm
Because EHO is an effective meta-heuristic technique (Al-Betar et al. 2024), it can be used in many situations, such as those mentioned above. However, EHO's ability to escape local optima is typically limited, especially when dealing with complex optimization problems involving many local optima, even though it achieves good results when addressing real-life problems, as mentioned earlier. To further improve its performance, EHO's exploration and exploitation abilities should be strengthened while maintaining an equilibrium between the two. Furthermore, EHO has not been applied to high-dimensional benchmark functions, despite its undeniable value in optimization problems. In this study, we analyze the significance of EHO in high-dimensional problems, engineering, and industrial applications, given its promising outcomes in several research fields (Al-Betar et al. 2024). By offering a broader version of the EHO optimizer for optimizing high-dimensional problems, this work tackles the aforementioned difficulties. An in-depth mathematical analysis and explanations of the problem definition and proposed algorithm are provided in the next section.
Ameliorated elk herd optimizer
In this section, the proposed ameliorated elk herd optimizer (AEHO), a memory-based hybrid of EHO with the PSO algorithm, is described. This algorithm operates in four stages: (1) first, an exponential function is laid out for each of the two randomly generated parameters α and β; (2) the memory of the elks is added utilizing the best solutions of PSO; (3) the positions of the elks are then supported by the personal best and global best solutions of PSO; and (4) finally, a greedy selection method uses the fitness values of the search agents before and after updates as an indicator of the efficacy of the solutions. The ensuing subsections explain these phases.
Exponential adaptive parameters in AEHO
In order to balance local and global search abilities, EHO has only two parameters for regulating elk swarming behavior: 1) the first parameter, α, controls the exploration capability of EHO, and 2) the second parameter, β, governs the exploitation ability of EHO throughout the calving season. Both values fluctuate randomly between 0 and 2 during the iterative process of EHO, which may compromise its exploration and exploitation potential. Due to the randomization of these parameters, two drawbacks arise: 1) elks in EHO should search more intensively over time, and 2) the difference between the elks' current best position and their current random position should shrink during the calving season, where the rate of inherited traits from randomly selected elk in the herd should be highly effective. Accordingly, these two random numbers could weaken EHO's exploration and exploitation, turning the search into loosely structured exploration and exploitation. To address the shortcomings caused by these two parameters, Eqs. 8 and 9 define the two adaptive exponential functions stated for α and β in AEHO, respectively.
8
9
where α1 and α2 denote two essential coefficients of the α(t) parameter that play an important role in the iterative process of the elks in the search space, and β1 and β2 denote two basic factors of the β(t) parameter that play a significant role in the iteration loops of the elks during the search process. The values of α1, α2, β1, and β2 are equal to 1.0, 0.1, 2.0, and 0.1, respectively, for all test problems solved in this work. Pilot testing on several test problems across various classes was used to choose these settings; they can, however, be changed as needed for other problems. The exponential functions specified in Eqs. 8 and 9 are updated iteratively during the run of AEHO. These parameters are updated exponentially with respect to the time t and lifetime T of the elks, which stand for the current and maximum number of iterations, respectively. The values of α(t) and β(t) of the proposed AEHO algorithm over 1000 iterations are displayed in Fig. 2.
[See PDF for image]
Fig. 2
Proposed adaptive functions for the elks in the AEHO algorithm
To systematically conduct the tests, this parameter was first examined within the range [0.1, 2] in order to expand the search toward more suitable regions of the search space while ensuring that AEHO achieves high performance; the range from 0.5 to 1.0 was then selected as the appropriate range. The appropriate values for α1 and α2 were experimentally determined to be 1.0 and 0.1, respectively, and the optimal values for β1 and β2 were found to be 2.0 and 0.1, respectively. These settings proved best for the proposed AEHO algorithm, as it achieved the highest performance on the optimization problems examined in this study.
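As a purely illustrative sketch of such adaptive parameters, the snippet below evaluates a decaying exponential of the form c1·exp(−c2·t/T) over the elks' lifetime. The exact functional forms of Eqs. 8 and 9 are not reproduced here; this shape and the function name are our own assumptions, used only to show how the coefficients (1.0, 0.1) and (2.0, 0.1) would be applied per iteration.

```python
import math

def adaptive_param(t, T, c1, c2):
    """One plausible shape for an adaptive exponential parameter: a base
    coefficient c1 decayed exponentially at rate c2 over the lifetime T.
    (Assumed form, for illustration only.)"""
    return c1 * math.exp(-c2 * t / T)

T = 1000
alpha = [adaptive_param(t, T, 1.0, 0.1) for t in range(T + 1)]  # alpha1, alpha2
beta = [adaptive_param(t, T, 2.0, 0.1) for t in range(T + 1)]   # beta1, beta2
```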
Implementation of the memory of AEHO
Elks use the seasons of rutting, calving, and selection in their present habitats to help them track other elk in the herd and locate the best solutions in a promising region of the search space. However, the basic EHO algorithm does not track previously visited candidate solutions using the elks' memory, so not every elk in the herd benefits equally from the solutions EHO develops. This may weaken EHO's exploitation potential and cause elk i to lose its route to the optimal solution. Additionally, EHO never retains the feasible solutions that may converge to the global optimum during the swarm-based process; instead, it discards all fitness values that fall short of the current best solution. This weakens EHO's exploitation ability, since it converges slowly and occasionally stalls at local optima. The pbest concept of PSO represents the elks' memory, which may be utilized to find the best harem (i.e., mother) or bull (i.e., father); in this sense, this memory concept in AEHO best describes the features of the elks' habitats. The quality of each proposed solution, or what is called the elks' memory (i.e., m), is assessed using a predetermined fitness criterion, as follows: each elk is permitted to monitor its location in the problem hyperspace along with its fitness value. In each iteration, the best fitness value attained so far is compared with the fitness values of the elks in the current population. The memory of the elks is represented by a matrix Cpbest that preserves the best solutions found by each elk. In addition, the elks in AEHO note the best value attained thus far by any elk in the population that chooses another harem or bull in the immediate vicinity; this is referred to here as Cgbest, equivalent to the gbest concept of PSO.
In AEHO, the notions of Cgbest and Cpbest, which stand for the memory characteristic of elks, can 1) improve AEHO’s capacity for exploitation, 2) break out of local optimums, and 3) outperform the mainstream EHO and PSO. This means that the pbest and gbest concepts of PSO will be used to determine the present position of the harems or bulls in EHO and follow up another harem or bull in EHO, improving the exploitation and exploration aspects of AEHO.
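The Cpbest/Cgbest bookkeeping described above can be sketched as a standard PSO-style memory update (our own naming; minimization assumed):

```python
import numpy as np

def update_memory(herd, fitness, cpbest, cpbest_fit, cgbest, cgbest_fit):
    """PSO-style memory for AEHO: Cpbest stores each elk's best position so
    far, Cgbest the best position any elk has reached (minimization)."""
    improved = fitness < cpbest_fit          # which elks improved on memory
    cpbest[improved] = herd[improved]
    cpbest_fit[improved] = fitness[improved]
    best = int(np.argmin(cpbest_fit))        # best personal memory overall
    if cpbest_fit[best] < cgbest_fit:
        cgbest, cgbest_fit = cpbest[best].copy(), float(cpbest_fit[best])
    return cpbest, cpbest_fit, cgbest, cgbest_fit

herd = np.array([[1.0], [2.0]])
fitness = np.array([5.0, 1.0])
cpbest, cpbest_fit = herd.copy(), np.array([6.0, 3.0])
cgbest, cgbest_fit = np.array([0.0]), 4.0
cpbest, cpbest_fit, cgbest, cgbest_fit = update_memory(
    herd, fitness, cpbest, cpbest_fit, cgbest, cgbest_fit)
```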
Iterative level hybridization of PSO with EHO
Iteration-level hybridization is a straightforward method of sequentially and iteratively executing two optimization algorithms to enhance their performance (Braik et al. 2023). Two iterative stages are used in this hybridization approach: PSO uses the previously constrained area of optimization to find a tenable solution, and then EHO proceeds with its search agents' memory, i.e., the best solutions of all the search agents in the population. The memory of the elks in EHO (i.e., Cpbest) initializes the best solutions of the particles of PSO in AEHO, whereas the global position of the elks in EHO (i.e., Cgbest) initializes the best global solution of the particles. Equation 10 is used in the proposed AEHO algorithm in place of Eq. 6 in the basic EHO algorithm, which asserts that each family's calf is formed during the calving season utilizing the features primarily found in its mother harem.
10
where r1, r2, and r3 are three randomly generated numbers uniformly distributed in the range [0, 1], x_i(t+1) represents the new position of the ith elk in the herd, x_i(t) is the present position vector of the ith elk, Q holds the indexes of the elks with the best memory as defined in Eq. 11, ub_j and lb_j are the upper and lower bounds at the jth dimension, respectively, and an adaptive exponential function governs the exploitation ability.

11
where rand(1, n) is a 1 × n matrix of random values in the range [0, 1]. Equation 12 indicates that the controlling parameter decreases as a function of the iteration loops (t) over the total number of iterations (T), as seen in Fig. 3.
12
where the initial coefficient is a positive value used to control the exploration and exploitation abilities of the AEHO algorithm.

[See PDF for image]
Fig. 3
A function of iterations ( ) to ensure that exploration and exploitation are properly balanced
Equation 12 was developed to enhance exploration and exploitation in the first and subsequent phases of AEHO while also ensuring appropriate convergence by reducing the search pace. For all the benchmark problems examined in this study, the initial coefficient is set to 2.0; this value was identified after a thorough examination of the optimization outcomes of several benchmark problems. Adding this parameter to the elks' randomization and swarming conduct in AEHO helps solutions avoid local optima and enhances both local and global searches. Accordingly, Eq. 13 may be utilized in AEHO in place of Eq. 7, which explains how elks inherit their parents' features throughout the calving season.
$$x_{i,j}(t+1) = x_{i,j}(t) + \alpha(t)\, r_4 \big( Cpbest_{i,j}(t) - x_{i,j}(t) \big) + \beta(t)\, r_5 \big( Cgbest_{j}(t) - x_{i,j}(t) \big) \tag{13}$$
where x_{i,j}(t+1) and x_{i,j}(t) represent the following and current locations of the ith elk in the jth dimension at the (t+1)th and tth iterations, respectively, Cgbest_j(t) is the global best solution of all elks in the jth dimension of the search space up to the tth iteration, Cpbest_{i,j}(t) is the best solution of the elk at the ith index in the jth dimension at iteration t, r4 and r5 are uniformly produced random numbers that fall inside the range [0, 1], and α(t) and β(t) are two adaptive parameters over AEHO's iterative process, specified in Eqs. 8 and 9, respectively.

Greedy selection mechanism
The greedy selection process in classic meta-heuristic optimization techniques continually updates and records the best fitness value and its related solution. The fitness value of each newly updated solution is typically compared to the global optimum after each update; if the updated value improves upon the current global optimum, the best fitness value is replaced, and the solution is designated as the new optimum. This approach is simple and efficient because it is primarily a record-keeping strategy, but it does not directly support population exploration or exploitation. Meta-heuristic optimization algorithms often employ a more aggressive greedy selection technique to increase the effectiveness of the global exploration component, using search agent fitness values before and after updates as a stand-in for the success of global exploration or the proximity of optimal solutions. This method evaluates each update by comparing the search agents' current and prior fitness values: better fitness scores replace the old ones, and the corresponding solutions are updated to reflect the search agent's success. The new fitness scores may also be used as a measure of how successfully a search agent solves the given problem. The two implications of this search mechanism during the rutting season of AEHO are as follows:
1) Search agents are actively replaced with better ones, improving the population as a whole. 2) Good search agents, i.e., those with better fitness values, are consistently maintained in the population to encourage improved solutions.
This technique keeps the population stocked with high-quality search agents, but it comes with potential hazards because the search agents’ placements in the population substantially shift with each iteration. These modifications may lead specific search agents to perform worse than before the update mechanism. This degradation may cause the population to suffer in the next iteration because not all changes lead to improvements.
[See PDF for image]
Algorithm 3
The selection process for the rutting season
The harems in Algorithm 3 are paired with their bulls based on the bull search agents' fitness scores relative to the total fitness values. Technically, each bull has a selection probability calculated by dividing its absolute fitness score by the sum of the absolute fitness values of all bull search agents. Algorithm 3 contains pseudocode that provides the selection probability needed to determine which bulls will be given harems. For the harems, the method uses a vector of length k = n − B. The greedy selection method outlined in Algorithm 3 determines the bull index that is assigned to each harem.
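The accept-only-if-better rule at the core of the greedy selection can be sketched as follows (illustrative naming, minimization assumed, with a sphere function as a stand-in cost):

```python
import numpy as np

def greedy_update(herd, fitness, candidates, cost):
    """Greedy selection: a candidate position replaces an elk only if it
    lowers that elk's fitness; otherwise the old position is kept."""
    new_fit = np.apply_along_axis(cost, 1, candidates)
    better = new_fit < fitness                       # compare before/after
    herd = np.where(better[:, None], candidates, herd)
    fitness = np.where(better, new_fit, fitness)
    return herd, fitness

sphere = lambda x: float(np.sum(x**2))
herd = np.array([[2.0, 2.0], [1.0, 0.0]])
fitness = np.array([8.0, 1.0])
candidates = np.array([[1.0, 1.0], [3.0, 3.0]])  # second candidate is worse
herd, fitness = greedy_update(herd, fitness, candidates, sphere)
```

The first elk accepts its improved candidate while the second keeps its old position, illustrating the hazard noted above: not every proposed update survives.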
Computational complexity of AEHO
An optimization algorithm’s computational complexity may be calculated using a formula that connects the algorithm’s runtime to the problem’s input size. This is accomplished by defining the time complexity of AEHO using Big-O notation, which is as follows:
$$O(\text{AEHO}) = O(nd + tcn + tnd) \tag{14}$$
The number of search agents (n), the problem's dimension (d), the cost of one objective function evaluation (c), and the iteration course (t) all affect the time complexity of AEHO in Eq. 14. More simply, the total time-based complexity of AEHO over all runs is as follows:

$$O(\text{AEHO}) = O\big(v\,(nd + tcn + tnd)\big) \tag{15}$$
where v is the total number of experiments. The first component, nd, may be excluded from Eq. 15 since nd ≪ tnd and nd ≪ tcn for t ≫ 1. Moreover, tcn + tnd = tn(c + d). Thus, AEHO's time complexity may be condensed down to:
$$O(\text{AEHO}) = O\big(v\,t\,n\,(c + d)\big) \tag{16}$$
Equation 16 illustrates that AEHO's time complexity is of polynomial order, making it a potentially efficient optimization technique.

Implementation of AEHO
In the first and last phases of AEHO, respectively, the exploration and exploitation aspects of PSO and EHO were combined to assist in identifying the global optimal solutions. AEHO’s pseudocode can be found in Algorithm 4.
[See PDF for image]
Algorithm 4
A pseudo code outlining the proposed AEHO algorithm
According to Algorithm 4, the exploration and exploitation capabilities provided by PSO and EHO, along with the additional proposed improvements, amplify the equilibrium between exploration and exploitation in AEHO. Therefore, AEHO is expected to outperform the original EHO and PSO algorithms.
Experimental results and comparisons
This section presents AEHO's findings on publicly available benchmark problems. These findings were investigated and contrasted with those of other meta-heuristic algorithms that have shown effective performance in the literature.
Description of the benchmark functions
To assess the effectiveness of the proposed AEHO method, it was applied to the benchmark function set of the CEC2014 competition on Real-Parameter Numerical Optimization (Liang et al. 2013). The selected set consists of thirty problems with varying levels of difficulty, assessed in search spaces of dimensions 10 and 30 with a range of [-100, 100]. Following the CEC2014 benchmark standards, each task is performed 30 times independently; the seed is selected according to the system clock, and each run utilizes a uniformly generated starting population within the range [-100, 100]. For each method, the stopping condition is set to a maximum number of function evaluations that depends on the dimension of the function. This study also applies the proposed AEHO method to the CEC2022 benchmark functions with dimensions 10 and 20 (Bujok and Kolenovsky 2022). CEC2022 is a modern collection of twelve test functions, all with a search space of [-100, 100]; the set covers both hybrid and composition problems. Hybrid functions can behave in a unimodal or multimodal manner, depending on their underlying components, and composition functions are created by combining existing, rotated, and shifted functions. It is difficult to determine the optimal solution of these multimodal, hybrid, and composition functions; however, they provide an excellent opportunity to assess the effectiveness, scalability, quality, and searchability of the proposed AEHO method.
Experimental setup
The performance of the proposed AEHO method is shown in this section while optimizing a range of numerical benchmarks. We used two benchmark suites for evaluation: CEC2022 (Liang et al. 2020) and CEC2014 (Liang et al. 2013); interested readers can refer to these sources for the definitions of the benchmark functions, which are not reproduced here. The findings are shown as the mean and standard deviation of 30 distinct runs for these benchmark test functions, with a population size of 100 and a maximum number of iterations of 1000. A modest PC operating MATLAB R2021a is used to collect the results. The competing algorithms are: the hippopotamus optimization (HO) algorithm (Amiri et al. 2024), particle swarm optimization (PSO) Kennedy (1995), polar lights optimizer (PLO) Yuan et al. (2024), the EHO optimizer (Al-Betar et al. 2024), transit search (TS) Mirrashid and Naderpour (2022), sinh cosh optimizer (SCHO) Bai et al. (2023), electric eel foraging optimization (EEFO) Zhao et al. (2024), triangulation topology aggregation optimizer (TTAO) Zhao et al. (2024), RUNge Kutta optimizer (RUN) Ahmadianfar et al. (2021), educational competition optimizer (ECO) Lian et al. (2024), and the osprey optimization algorithm (OOA) Dehghani and Trojovskỳ (2023). The parameters of the competing algorithms and the proposed AEHO algorithm are given in Table 5.
Table 5. Parameter sets of the competing algorithms and the proposed AEHO algorithm
Algorithm | Parameter | Value |
|---|---|---|
All algorithms | Population size | 100 |
No. of function evaluations | ||
AEHO | 0.2 | |
, , | Adaptive parameters | |
EHO | 0.2 | |
, | (0, 2) | |
EEFO | , , | (0, 1) |
, , , , , E(t) | Dynamic parameters | |
b, 1.5. | ||
HO | , , | Random values . |
PSO | Social coefficient () | 2.0 |
Cognitive coefficient () | 2.0 | |
Min. inertia weight () | 0.2 | |
Max. inertia weight () | 0.9 | |
PLO | m, a | 100, [1, 1.5]. |
TS | , | Random vectors. |
, | Random values | |
, | Random values | |
SCHO | ct | 3.6. |
, , , | Random values . | |
, | Adaptive parameters. | |
TTAO | l | Dynamic parameter. |
random vector. | ||
RUN | , , , | Adaptive vectors. |
ECO | w, P, E | Adaptive parameters. |
OOA | – | –. |
It is important to note that the settings indicated in their native references were used to establish the parameters of the current optimization algorithms, which are displayed in Table 5. These settings are present in the software that implements these algorithms. Two typical key parameter choices for the algorithms in this table that are well documented in the literature are the number of search agents and the number of function evaluations. Furthermore, AEHO’s initialization procedure is similar to that of other rival algorithms. This makes it possible to compare AEHO and other rival algorithms fairly. The same parameter values of the proposed AEHO method were used to evaluate each benchmark test function.
The average (Ave) and standard deviation (Std) metrics were used to assess the stability and accuracy of the proposed AEHO approach. These statistical evaluation criteria were applied to each test problem and optimization process as the top two performance scores. The mean measure was used to assess the approaches' accuracy, while the analysis of the standard deviation results helps to guarantee that the optimization techniques function consistently across the many runs. The best-performing algorithms are expected to outperform the others in every test during this study. The following subsections examine the AEHO algorithm's performance on the CEC2014 and CEC2022 benchmark problems in comparison to several meta-heuristics. Friedman's and Holm's tests are used in this study at a 5% significance level, and the statistical results are reported as win(W)/tie(T)/loss(L). For these tests, which rank the algorithms from best to worst, a "+" indicates that the proposed AEHO algorithm performs better than the test algorithm, a "-" indicates that the test algorithm performs better than the proposed AEHO algorithm, and a "=" implies that the two methods do not demonstrate any statistically significant difference.
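The Friedman ranking used here can be reproduced with SciPy. The snippet below is a sketch on a small hypothetical error matrix (the numbers are illustrative, not the paper's results): ranks are computed per benchmark function, averaged per algorithm, and `scipy.stats.friedmanchisquare` tests whether the algorithms differ at the 5% level.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Hypothetical mean errors: rows are benchmark functions, columns algorithms
errors = np.array([[1.2, 3.4, 2.2],
                   [0.8, 2.9, 1.5],
                   [2.0, 4.1, 2.6],
                   [1.1, 3.0, 1.9]])

avg_rank = rankdata(errors, axis=1).mean(axis=0)  # Friedman average ranks
stat, p = friedmanchisquare(*errors.T)            # one sample per algorithm
significant = p < 0.05                            # 5% significance level
```

A lower average rank means a better algorithm; a post-hoc procedure such as Holm's test would then compare the best-ranked algorithm pairwise against the others.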
Performance evaluation of AEHO on CEC2014 functions
Scalability is a feature of the IEEE CEC2014 benchmark test functions, a set of thirty benchmark test functions designed to assess single-objective optimization problems. Each competing method searches for the global optimum thirty times for each benchmark function in CEC2014, with a different random starting population each time. The maximum number of search agents and iterations for every method is 100 and 1000, respectively. The parameter settings shown in Table 5 follow those reported in the literature and the algorithms' original works. For AEHO to be compared fairly with the other competing meta-heuristics, its initialization strategy was the same as that of the other candidate algorithms. The mean values and standard deviations of each evaluated test function are calculated and used to assess how well AEHO performs against the other competing algorithms, and the advantage of AEHO is demonstrated by a comprehensive statistical study. The first row of the findings reports three indicators (W|T|L) showing whether a method performed best (W, win), comparably (T, tie), or least effectively (L, loss) on specific functions. The second row averages the mean efficiency of all algorithms, while the third row displays the final Friedman's test rating, which provides the overall rankings. The Ave and Std values achieved by the AEHO method and the other competing algorithms on the CEC2014 benchmark functions with 10 and 30 dimensions are shown in Tables 6 and 7, where the most important results are highlighted.
Table 6. Assessment results of AEHO and other algorithms on CEC2014 functions with d = 10
F | AEHO | EHO | TS | SCHO | ||||
|---|---|---|---|---|---|---|---|---|
Ave | Std | Ave | Std | Ave | Std | Ave | Std | |
CEC2014-f1 | 1.0000E+02 | 3.8880E+04 | 5.7425E+06 | 2.4728E+06 | 1.0000E+02 | 7.3324E-09 | 1.3739E+05 | 5.2166E+04 |
CEC2014-f2 | 2.0000E+02 | 1.0761E+03 | 5.5490E+08 | 3.1305E+08 | 6.3024E+03 | 1.5620E+04 | 6.2507E+03 | 3.2015E+03 |
CEC2014-f3 | 3.0000E+02 | 1.1380E+02 | 3.0956E+03 | 1.6144E+03 | 3.0000E+02 | 3.1667E-14 | 1.4433E+03 | 1.7161E+03 |
CEC2014-f4 | 4.0000E+02 | 1.6574E+01 | 4.5480E+02 | 2.2420E+01 | 4.0178E+02 | 2.1398E+00 | 4.1924E+02 | 1.6951E+01 |
CEC2014-f5 | 5.1582E+02 | 8.2438E+00 | 5.2035E+02 | 7.8979E-02 | 5.2029E+02 | 8.4974E-02 | 5.2008E+02 | 5.5658E-02 |
CEC2014-f6 | 6.0170E+02 | 1.1996E+00 | 6.0612E+02 | 1.3633E+00 | 6.0390E+02 | 2.6485E+00 | 6.0436E+02 | 1.6353E+00 |
CEC2014-f7 | 7.0011E+02 | 4.7192E-02 | 7.0845E+02 | 2.1401E+00 | 7.0016E+02 | 9.5990E-02 | 7.0024E+02 | 9.0602E-02 |
CEC2014-f8 | 8.1741E+02 | 6.4890E+00 | 8.3637E+02 | 5.5539E+00 | 8.1446E+02 | 5.4231E+00 | 8.0013E+02 | 3.4403E-01 |
CEC2014-f9 | 9.1386E+02 | 5.0526E+00 | 9.3731E+02 | 5.2725E+00 | 9.1237E+02 | 4.6203E+00 | 9.1314E+02 | 6.2736E+00 |
CEC2014-f10 | 1.5698E+03 | 2.4986E+02 | 1.9165E+03 | 1.6784E+02 | 1.7757E+03 | 2.9625E+02 | 1.1359E+03 | 9.0128E+01 |
CEC2014-f11 | 1.6510E+03 | 2.1469E+02 | 2.2317E+03 | 2.0748E+02 | 1.9290E+03 | 3.2193E+02 | 1.6513E+03 | 2.4499E+02 |
CEC2014-f12 | 1.2001E+03 | 9.4302E-02 | 1.2011E+03 | 2.0702E-01 | 1.2010E+03 | 3.5215E-01 | 1.2001E+03 | 7.4939E-02 |
CEC2014-f13 | 1.3002E+03 | 6.0651E-02 | 1.3005E+03 | 1.1125E-01 | 1.3001E+03 | 8.2955E-02 | 1.3002E+03 | 6.3494E-02 |
CEC2014-f14 | 1.4002E+03 | 7.1609E-02 | 1.4008E+03 | 3.8355E-01 | 1.4002E+03 | 6.2988E-02 | 1.4002E+03 | 4.8405E-02 |
CEC2014-f15 | 1.5009E+03 | 3.3284E-01 | 1.5067E+03 | 1.6285E+00 | 1.5010E+03 | 3.6965E-01 | 1.5010E+03 | 3.9259E-01 |
CEC2014-f16 | 1.6024E+03 | 4.1228E-01 | 1.6032E+03 | 2.9367E-01 | 1.6028E+03 | 4.5477E-01 | 1.6023E+03 | 5.2947E-01 |
CEC2014-f17 | 1.9961E+03 | 1.5911E+02 | 1.9556E+04 | 1.3452E+04 | 2.8496E+03 | 1.6992E+03 | 8.9260E+03 | 5.5851E+03 |
CEC2014-f18 | 1.8680E+03 | 3.9313E+01 | 1.6251E+04 | 8.2803E+03 | 2.7387E+03 | 7.6032E+02 | 2.2943E+04 | 1.5537E+04 |
CEC2014-f19 | 1.9019E+03 | 5.9299E-01 | 1.9046E+03 | 7.0978E-01 | 1.9017E+03 | 4.0591E-01 | 1.9017E+03 | 5.8893E-01 |
CEC2014-f20 | 2.0415E+03 | 3.0094E+01 | 3.1676E+03 | 1.2631E+03 | 2.0145E+03 | 5.0696E+01 | 4.0991E+03 | 4.5491E+03 |
CEC2014-f21 | 2.2179E+03 | 9.3389E+01 | 7.7280E+03 | 3.9742E+03 | 2.7957E+03 | 7.1512E+02 | 2.8283E+03 | 1.0998E+03 |
CEC2014-f22 | 2.2680E+03 | 5.4575E+01 | 2.2497E+03 | 9.0009E+00 | 2.2305E+03 | 1.1328E+01 | 2.2233E+03 | 2.7084E+01 |
CEC2014-f23 | 2.6295E+03 | 4.6409E-06 | 2.6398E+03 | 4.2767E+00 | 2.6295E+03 | 1.1010E-12 | 2.5000E+03 | 0.00E+00 |
CEC2014-f24 | 2.5267E+03 | 9.5716E+00 | 2.5462E+03 | 6.4766E+00 | 2.5205E+03 | 5.5596E+00 | 2.5271E+03 | 1.5942E+01 |
CEC2014-f25 | 2.6543E+03 | 2.2191E+01 | 2.6911E+03 | 1.7653E+01 | 2.6936E+03 | 1.9866E+01 | 2.6869E+03 | 2.5480E+01 |
CEC2014-f26 | 2.7001E+03 | 4.8040E-02 | 2.7005E+03 | 8.3967E-02 | 2.7001E+03 | 6.2061E-02 | 2.7002E+03 | 5.6158E-02 |
CEC2014-f27 | 2.7037E+03 | 1.0280E+00 | 2.9242E+03 | 1.9528E+02 | 3.0325E+03 | 1.5486E+02 | 2.8802E+03 | 6.0370E+01 |
CEC2014-f28 | 3.0000E+03 | 1.7478E+02 | 3.2439E+03 | 3.0795E+01 | 3.1094E+03 | 1.7103E+01 | 3.0000E+03 | 0.00E+00 |
CEC2014-f29 | 3.1698E+03 | 4.0392E+01 | 6.2834E+03 | 2.5276E+03 | 3.1027E+03 | 1.0779E+00 | 3.5187E+03 | 2.2726E+02 |
CEC2014-f30 | 4.0552E+03 | 3.2629E+02 | 4.4148E+03 | 5.3358E+02 | 3.2521E+03 | 2.8746E+01 | 3.6779E+03 | 2.8998E+02 |
(W|T|L) | (10|7|13) | (0|0|30) | (3|7|20) | (1|4|25) | ||||
Mean | 3.50 | 8.20 | 4.10 | 3.87 | ||||
Ranking | 1 | 8 | 4 | 3 | ||||
F | EEFO | TTAO | HO | OOA | ||||
|---|---|---|---|---|---|---|---|---|
Ave | Std | Ave | Std | Ave | Std | Ave | Std | |
CEC2014-f1 | 1.0000E+02 | 7.5116E-04 | 1.1670E+05 | 7.6716E+04 | 4.0565E+07 | 2.8614E+07 | 4.7995E+06 | 6.5517E+06 |
CEC2014-f2 | 2.0000E+02 | 3.5793E-03 | 2.6938E+03 | 3.6737E+03 | 3.8448E+09 | 1.6633E+09 | 3.2418E+08 | 3.2742E+08 |
CEC2014-f3 | 3.0000E+02 | 3.6295E-05 | 4.9272E+02 | 2.8552E+02 | 4.2597E+04 | 2.0170E+04 | 9.1202E+03 | 7.9253E+03 |
CEC2014-f4 | 4.0066E+02 | 1.5111E+00 | 4.2490E+02 | 1.5395E+01 | 6.2355E+02 | 2.8751E+02 | 4.6171E+02 | 3.3774E+01 |
CEC2014-f5 | 5.1974E+02 | 3.4180E+00 | 5.2008E+02 | 4.6798E-02 | 5.2075E+02 | 1.1088E-01 | 5.2038E+02 | 7.4494E-02 |
CEC2014-f6 | 6.0445E+02 | 1.6076E+00 | 6.0528E+02 | 1.9318E+00 | 6.0955E+02 | 1.3209E+00 | 6.0667E+02 | 1.7186E+00 |
CEC2014-f7 | 7.0027E+02 | 1.4050E-01 | 7.0019E+02 | 9.8939E-02 | 7.6524E+02 | 3.1065E+01 | 7.0971E+02 | 9.6983E+00 |
CEC2014-f8 | 8.1646E+02 | 9.5393E+00 | 8.1203E+02 | 5.9182E+00 | 8.7217E+02 | 1.6254E+01 | 8.3281E+02 | 1.1587E+01 |
CEC2014-f9 | 9.2633E+02 | 1.2231E+01 | 9.2434E+02 | 1.1128E+01 | 9.6834E+02 | 1.1120E+01 | 9.3514E+02 | 1.2790E+01 |
CEC2014-f10 | 1.4463E+03 | 2.4965E+02 | 1.2298E+03 | 1.2926E+02 | 2.5854E+03 | 3.6313E+02 | 1.9488E+03 | 2.5870E+02 |
CEC2014-f11 | 1.9030E+03 | 2.2773E+02 | 1.8013E+03 | 2.6602E+02 | 2.9085E+03 | 3.2086E+02 | 2.2520E+03 | 3.4631E+02 |
CEC2014-f12 | 1.2008E+03 | 4.6315E-01 | 1.2003E+03 | 1.7566E-01 | 1.2025E+03 | 4.8076E-01 | 1.2008E+03 | 3.0759E-01 |
CEC2014-f13 | 1.3003E+03 | 1.1302E-01 | 1.3002E+03 | 1.0272E-01 | 1.3013E+03 | 8.8776E-01 | 1.3005E+03 | 2.3249E-01 |
CEC2014-f14 | 1.4002E+03 | 2.1207E-01 | 1.4002E+03 | 1.7911E-01 | 1.4126E+03 | 1.0672E+01 | 1.4028E+03 | 3.5995E+00 |
CEC2014-f15 | 1.5023E+03 | 1.7149E+00 | 1.5021E+03 | 1.3713E+00 | 2.1760E+03 | 1.7352E+03 | 1.5250E+03 | 2.7534E+01 |
CEC2014-f16 | 1.6029E+03 | 4.5730E-01 | 1.6027E+03 | 3.9233E-01 | 1.6039E+03 | 2.2205E-01 | 1.6033E+03 | 3.4953E-01 |
CEC2014-f17 | 2.1224E+03 | 2.7078E+02 | 5.8120E+03 | 4.1607E+03 | 6.9857E+05 | 5.2560E+05 | 4.3745E+03 | 3.3837E+03 |
CEC2014-f18 | 1.8361E+03 | 2.7002E+01 | 1.9236E+03 | 9.0414E+01 | 7.4010E+05 | 2.0217E+06 | 3.7241E+03 | 7.1223E+03 |
CEC2014-f19 | 1.9023E+03 | 1.0591E+00 | 1.9037E+03 | 1.3615E+00 | 1.9145E+03 | 1.5413E+01 | 1.9065E+03 | 2.1178E+00 |
CEC2014-f20 | 2.0149E+03 | 1.2919E+01 | 2.1955E+03 | 5.3444E+02 | 7.4629E+04 | 9.7445E+04 | 2.6541E+03 | 1.1554E+03 |
CEC2014-f21 | 2.3198E+03 | 1.5268E+02 | 2.5698E+03 | 2.2308E+02 | 2.7460E+05 | 2.8992E+05 | 4.6959E+03 | 6.1693E+03 |
CEC2014-f22 | 2.2806E+03 | 7.5690E+01 | 2.2297E+03 | 1.1844E+01 | 2.4397E+03 | 9.4923E+01 | 2.2898E+03 | 6.3657E+01 |
CEC2014-f23 | 2.6295E+03 | 5.0086E-12 | 2.5000E+03 | 0.00E+00 | 2.5000E+03 | 0.00E+00 | 2.5276E+03 | 3.3797E+01 |
CEC2014-f24 | 2.5489E+03 | 2.6769E+01 | 2.5484E+03 | 2.3296E+01 | 2.5961E+03 | 1.1660E+01 | 2.5534E+03 | 2.5180E+01 |
CEC2014-f25 | 2.6933E+03 | 1.6975E+01 | 2.6868E+03 | 1.6018E+01 | 2.6994E+03 | 3.1223E+00 | 2.6960E+03 | 9.5261E+00 |
CEC2014-f26 | 2.7003E+03 | 1.6197E-01 | 2.7002E+03 | 9.6250E-02 | 2.7020E+03 | 9.1410E-01 | 2.7005E+03 | 3.0201E-01 |
CEC2014-f27 | 3.0027E+03 | 1.8541E+02 | 2.9211E+03 | 1.2012E+02 | 2.8981E+03 | 7.6045E+01 | 3.0909E+03 | 1.5995E+02 |
CEC2014-f28 | 3.1490E+03 | 7.2157E+01 | 3.0554E+03 | 1.1513E+02 | 3.1303E+03 | 2.4928E+02 | 3.1551E+03 | 1.7204E+02 |
CEC2014-f29 | 3.1046E+03 | 1.3993E+00 | 1.8837E+05 | 5.6726E+05 | 1.0089E+06 | 1.5650E+06 | 7.5111E+05 | 1.3584E+06 |
CEC2014-f30 | 3.3937E+03 | 1.5787E+02 | 4.0513E+03 | 6.6816E+02 | 1.8703E+04 | 2.0199E+04 | 7.3946E+03 | 1.2084E+04 |
(W|T|L) | (0|2|28) | (5|4|21) | (0|1|29) | (0|0|30) | ||||
Mean | 4.63 | 4.65 | 10.50 | 8.47 | ||||
Ranking | 5 | 6 | 11 | 10 | ||||
F | RUN | ECO | PSO | PLO | ||||
|---|---|---|---|---|---|---|---|---|
Ave | Std | Ave | Std | Ave | Std | Ave | Std | |
CEC2014-f1 | 4.0891E+06 | 2.9337E+06 | 4.9697E+07 | 3.1114E+07 | 2.1930E+07 | 3.1445E+07 | 2.1618E+04 | 2.4627E+04 |
CEC2014-f2 | 1.4483E+04 | 1.2783E+04 | 2.5673E+09 | 1.6807E+09 | 3.1498E+09 | 1.1228E+09 | 2.6402E+03 | 2.9217E+03 |
CEC2014-f3 | 2.3994E+03 | 1.5995E+03 | 5.3394E+04 | 2.7146E+04 | 1.6142E+04 | 6.3651E+03 | 3.3716E+02 | 7.0384E+01 |
CEC2014-f4 | 4.3321E+02 | 1.6973E+01 | 7.1022E+02 | 3.3795E+02 | 8.4426E+02 | 2.8717E+02 | 4.2075E+02 | 1.6376E+01 |
CEC2014-f5 | 5.2007E+02 | 5.9934E-02 | 5.2074E+02 | 1.3891E-01 | 5.2007E+02 | 2.1257E-02 | 5.2022E+02 | 1.1182E-01 |
CEC2014-f6 | 6.0575E+02 | 1.7531E+00 | 6.0936E+02 | 1.6602E+00 | 6.0873E+02 | 1.0222E+00 | 6.0292E+02 | 1.5990E+00 |
CEC2014-f7 | 7.0087E+02 | 6.5704E-01 | 7.6756E+02 | 3.5918E+01 | 7.6879E+02 | 2.9299E+01 | 7.0018E+02 | 1.0995E-01 |
CEC2014-f8 | 8.2980E+02 | 1.0635E+01 | 8.6851E+02 | 1.1676E+01 | 8.2548E+02 | 8.2591E+00 | 8.1061E+02 | 4.8079E+00 |
CEC2014-f9 | 9.3283E+02 | 1.0743E+01 | 9.6830E+02 | 1.3911E+01 | 9.3031E+02 | 7.1027E+00 | 9.1105E+02 | 4.9400E+00 |
CEC2014-f10 | 1.6867E+03 | 3.1768E+02 | 2.6074E+03 | 2.4917E+02 | 1.4437E+03 | 1.8290E+02 | 1.3646E+03 | 2.1738E+02 |
CEC2014-f11 | 1.9349E+03 | 2.8188E+02 | 2.7926E+03 | 3.0243E+02 | 1.7479E+03 | 2.4592E+02 | 1.7321E+03 | 2.5581E+02 |
CEC2014-f12 | 1.2006E+03 | 2.2093E-01 | 1.2020E+03 | 6.3289E-01 | 1.2003E+03 | 1.5640E-01 | 1.2003E+03 | 1.3108E-01 |
CEC2014-f13 | 1.3004E+03 | 1.3333E-01 | 1.3017E+03 | 1.1415E+00 | 1.3017E+03 | 9.0639E-01 | 1.3002E+03 | 8.7001E-02 |
CEC2014-f14 | 1.4005E+03 | 2.2677E-01 | 1.4141E+03 | 9.0022E+00 | 1.4134E+03 | 5.1661E+00 | 1.4002E+03 | 7.5830E-02 |
CEC2014-f15 | 1.5035E+03 | 1.7808E+00 | 2.2775E+03 | 2.0592E+03 | 1.6793E+03 | 2.7440E+02 | 1.5008E+03 | 2.9043E-01 |
CEC2014-f16 | 1.6030E+03 | 3.3270E-01 | 1.6040E+03 | 2.2637E-01 | 1.6034E+03 | 2.9877E-01 | 1.6024E+03 | 5.0137E-01 |
CEC2014-f17 | 7.7220E+03 | 7.4029E+03 | 9.6799E+05 | 7.1855E+05 | 2.0198E+05 | 1.3973E+05 | 2.1419E+03 | 2.1715E+02 |
CEC2014-f18 | 1.0903E+04 | 7.7874E+03 | 8.6305E+05 | 2.1654E+06 | 1.1861E+04 | 7.6483E+03 | 1.8599E+03 | 3.2573E+01 |
CEC2014-f19 | 1.9041E+03 | 1.3855E+00 | 1.9148E+03 | 1.5520E+01 | 1.9227E+03 | 1.5416E+01 | 1.9033E+03 | 1.4592E+00 |
CEC2014-f20 | 4.0196E+03 | 1.7198E+03 | 4.8397E+05 | 2.0120E+06 | 4.7376E+03 | 1.9096E+03 | 2.0334E+03 | 1.9334E+01 |
CEC2014-f21 | 7.1852E+03 | 3.0910E+03 | 6.1791E+05 | 8.4950E+05 | 2.0214E+05 | 5.7584E+05 | 2.2310E+03 | 1.2375E+02 |
CEC2014-f22 | 2.2600E+03 | 3.2109E+01 | 2.4765E+03 | 1.2482E+02 | 2.3586E+03 | 6.2639E+01 | 2.2264E+03 | 4.9580E+00 |
CEC2014-f23 | 2.5000E+03 | 1.2924E-03 | 2.5435E+03 | 7.7224E+01 | 2.5000E+03 | 0.00E+00 | 2.5951E+03 | 5.8051E+01 |
CEC2014-f24 | 2.5625E+03 | 2.9713E+01 | 2.5907E+03 | 1.5598E+01 | 2.5623E+03 | 2.5709E+01 | 2.5267E+03 | 1.5745E+01 |
CEC2014-f25 | 2.6960E+03 | 1.0190E+01 | 2.6997E+03 | 1.0731E+00 | 2.6965E+03 | 8.1020E+00 | 2.6670E+03 | 3.0607E+01 |
CEC2014-f26 | 2.7003E+03 | 1.3045E-01 | 2.7019E+03 | 1.1443E+00 | 2.7019E+03 | 7.9378E-01 | 2.7001E+03 | 5.3426E-02 |
CEC2014-f27 | 2.8489E+03 | 8.6256E+01 | 3.0938E+03 | 2.0051E+02 | 2.8879E+03 | 3.4709E+01 | 3.0159E+03 | 1.6264E+02 |
CEC2014-f28 | 3.0000E+03 | 2.5359E-03 | 3.3299E+03 | 2.8807E+02 | 3.0000E+03 | 0.00E+00 | 3.2518E+03 | 7.1509E+01 |
CEC2014-f29 | 5.1447E+03 | 2.5074E+03 | 1.2414E+06 | 2.5538E+06 | 5.8937E+04 | 3.0246E+05 | 3.3086E+05 | 8.8834E+05 |
CEC2014-f30 | 4.4794E+03 | 8.3148E+02 | 3.2954E+04 | 4.1503E+04 | 1.7916E+04 | 2.6668E+04 | 3.8578E+03 | 3.4586E+02 |
(W|T|L) | (0|2|28) | (0|0|30) | (0|2|28) | (2|2|26) | ||||
Mean | 6.62 | 11.37 | 8.42 | 3.68 | ||||
Ranking | 7 | 12 | 9 | 2 | ||||
Table 7. Assessment results of AEHO and other algorithms on CEC2014 functions with d = 30
F | AEHO | EHO | TS | SCHO | ||||
|---|---|---|---|---|---|---|---|---|
Ave | Std | Ave | Std | Ave | Std | Ave | Std | |
CEC2014-f1 | 5.8988E+06 | 2.6932E+06 | 2.8058E+08 | 6.4000E+07 | 7.2205E+06 | 6.9056E+06 | 1.1316E+07 | 4.5948E+06 |
CEC2014-f2 | 1.0754E+04 | 1.1643E+04 | 2.0393E+10 | 3.9746E+09 | 1.2289E+10 | 5.9520E+09 | 3.0999E+06 | 3.9013E+06 |
CEC2014-f3 | 1.8629E+03 | 1.7961E+03 | 4.5664E+04 | 6.9432E+03 | 3.2929E+04 | 1.0899E+04 | 1.6332E+04 | 3.8311E+03 |
CEC2014-f4 | 5.1421E+02 | 3.9973E+01 | 1.6453E+03 | 3.1254E+02 | 1.0456E+03 | 6.6088E+02 | 5.7747E+02 | 6.1848E+01 |
CEC2014-f5 | 5.2094E+02 | 1.1152E-01 | 5.2097E+02 | 4.8996E-02 | 5.2095E+02 | 6.6995E-02 | 5.2094E+02 | 1.4160E-01 |
CEC2014-f6 | 6.1536E+02 | 2.7626E+00 | 6.3449E+02 | 2.0981E+00 | 6.3012E+02 | 6.2675E+00 | 6.2370E+02 | 3.5394E+00 |
CEC2014-f7 | 7.0085E+02 | 1.0103E-01 | 8.6211E+02 | 3.2937E+01 | 8.4188E+02 | 4.6236E+01 | 7.0090E+02 | 1.1459E-01 |
CEC2014-f8 | 8.1568E+02 | 3.3172E+00 | 1.0586E+03 | 1.7326E+01 | 9.5922E+02 | 2.6148E+01 | 9.1539E+02 | 1.9567E+01 |
CEC2014-f9 | 1.0078E+03 | 2.3488E+01 | 1.1861E+03 | 2.1905E+01 | 1.0726E+03 | 2.9695E+01 | 1.0179E+03 | 1.9352E+01 |
CEC2014-f10 | 1.7182E+03 | 2.2076E+02 | 7.0533E+03 | 5.2117E+02 | 5.9967E+03 | 8.4284E+02 | 4.1186E+03 | 5.4454E+02 |
CEC2014-f11 | 4.1217E+03 | 6.9504E+02 | 8.2540E+03 | 3.4683E+02 | 6.4825E+03 | 7.2026E+02 | 4.4274E+03 | 6.2189E+02 |
CEC2014-f12 | 1.2004E+03 | 1.3104E-01 | 1.2027E+03 | 3.0377E-01 | 1.2026E+03 | 4.7573E-01 | 1.2004E+03 | 1.5348E-01 |
CEC2014-f13 | 1.3006E+03 | 8.9698E-02 | 1.3032E+03 | 2.9082E-01 | 1.3024E+03 | 9.9697E-01 | 1.3005E+03 | 1.0672E-01 |
CEC2014-f14 | 1.4008E+03 | 3.0330E-01 | 1.4530E+03 | 8.8880E+00 | 1.4401E+03 | 2.2458E+01 | 1.4002E+03 | 5.1592E-02 |
CEC2014-f15 | 1.5079E+03 | 2.8390E+00 | 7.6548E+03 | 6.3849E+03 | 4.5410E+03 | 3.8215E+03 | 1.5145E+03 | 4.8220E+00 |
CEC2014-f16 | 1.6114E+03 | 6.0496E-01 | 1.6130E+03 | 1.9933E-01 | 1.6125E+03 | 6.5714E-01 | 1.6117E+03 | 4.0243E-01 |
CEC2014-f17 | 1.0314E+06 | 4.7292E+05 | 9.1678E+06 | 3.3821E+06 | 9.7670E+04 | 1.4406E+05 | 1.1930E+05 | 8.2669E+04 |
CEC2014-f18 | 1.8764E+04 | 9.7593E+03 | 2.6515E+08 | 1.3512E+08 | 9.2424E+03 | 1.0376E+04 | 2.3867E+03 | 6.7482E+02 |
CEC2014-f19 | 1.9204E+03 | 2.6164E+01 | 2.0095E+03 | 2.5912E+01 | 1.9167E+03 | 2.6587E+00 | 1.9246E+03 | 2.1152E+01 |
CEC2014-f20 | 8.3989E+03 | 6.7409E+03 | 2.5528E+04 | 9.0052E+03 | 3.2830E+03 | 1.1032E+03 | 4.3241E+03 | 2.1987E+03 |
CEC2014-f21 | 3.6656E+05 | 2.9476E+05 | 2.3336E+06 | 1.7498E+06 | 5.9204E+04 | 5.8401E+04 | 4.6917E+04 | 4.8065E+04 |
CEC2014-f22 | 2.6904E+03 | 2.0127E+02 | 3.1253E+03 | 1.4295E+02 | 2.6452E+03 | 1.7921E+02 | 2.7029E+03 | 1.6524E+02 |
CEC2014-f23 | 2.5000E+03 | 0.00E+00 | 2.6785E+03 | 1.3728E+01 | 2.6318E+03 | 1.4291E+01 | 2.6244E+03 | 3.6141E+00 |
CEC2014-f24 | 2.6000E+03 | 0.00E+00 | 2.6031E+03 | 6.8294E+00 | 2.6603E+03 | 1.0167E+01 | 2.6191E+03 | 8.2693E+00 |
CEC2014-f25 | 2.7000E+03 | 0.00E+00 | 2.7318E+03 | 1.0698E+01 | 2.7049E+03 | 5.2884E+00 | 2.7021E+03 | 2.8390E+00 |
CEC2014-f26 | 2.7006E+03 | 1.3248E-01 | 2.7029E+03 | 5.3274E-01 | 2.7011E+03 | 8.0225E-01 | 2.7005E+03 | 1.3502E-01 |
CEC2014-f27 | 2.9000E+03 | 0.00E+00 | 3.5102E+03 | 3.1948E+02 | 3.7471E+03 | 1.9557E+02 | 3.2196E+03 | 2.2398E+02 |
CEC2014-f28 | 3.0000E+03 | 0.00E+00 | 5.1655E+03 | 4.6900E+02 | 3.2656E+03 | 4.2605E+01 | 5.8949E+03 | 7.8847E+02 |
CEC2014-f29 | 3.8951E+03 | 1.0774E+03 | 2.1024E+07 | 1.1315E+07 | 3.1328E+03 | 3.6970E+01 | 8.3446E+03 | 2.7377E+03 |
CEC2014-f30 | 6.5820E+03 | 1.5976E+03 | 3.0683E+05 | 1.2227E+05 | 4.0856E+03 | 3.0352E+02 | 2.4142E+04 | 1.8182E+04 |
(W|T|L) | (12|4|14) | (0|0|30) | (3|0|27) | (3|2|25) | ||||
Mean | 2.73 | 8.65 | 5.78 | 4.02 | ||||
Ranking | 1 | 9 | 7 | 4 | ||||
F | EEFO | TTAO | HO | OOA | ||||
|---|---|---|---|---|---|---|---|---|
Ave | Std | Ave | Std | Ave | Std | Ave | Std | |
CEC2014-f1 | 1.5646E+06 | 1.0653E+06 | 1.0862E+07 | 6.1247E+06 | 8.2850E+08 | 3.4634E+08 | 5.6932E+08 | 3.6499E+08 |
CEC2014-f2 | 2.3970E+07 | 4.5767E+07 | 4.4057E+06 | 6.7753E+06 | 5.7955E+10 | 1.2016E+10 | 4.6152E+10 | 1.1250E+10 |
CEC2014-f3 | 5.2941E+03 | 3.6177E+03 | 6.5939E+03 | 2.6811E+03 | 1.7741E+05 | 6.5766E+04 | 7.2803E+04 | 9.3715E+03 |
CEC2014-f4 | 4.8947E+02 | 3.3887E+01 | 5.4593E+02 | 4.1477E+01 | 7.3800E+03 | 2.9413E+03 | 6.3728E+03 | 2.7297E+03 |
CEC2014-f5 | 5.2099E+02 | 4.8412E-02 | 5.2072E+02 | 1.4354E-01 | 5.2118E+02 | 6.8750E-02 | 5.2096E+02 | 6.3720E-02 |
CEC2014-f6 | 6.2500E+02 | 4.1256E+00 | 6.2774E+02 | 3.1053E+00 | 6.3881E+02 | 1.8219E+00 | 6.3724E+02 | 2.1293E+00 |
CEC2014-f7 | 7.0126E+02 | 5.9146E-01 | 7.0098E+02 | 2.2972E-01 | 1.1705E+03 | 8.7365E+01 | 1.0600E+03 | 9.7066E+01 |
CEC2014-f8 | 8.9617E+02 | 2.2633E+01 | 9.0098E+02 | 2.3751E+01 | 1.1388E+03 | 3.2112E+01 | 1.0556E+03 | 3.2718E+01 |
CEC2014-f9 | 1.0603E+03 | 2.6500E+01 | 1.0567E+03 | 3.0787E+01 | 1.2584E+03 | 3.0801E+01 | 1.1759E+03 | 3.0633E+01 |
CEC2014-f10 | 3.0620E+03 | 6.8207E+02 | 3.1726E+03 | 4.9791E+02 | 8.7286E+03 | 6.5679E+02 | 6.7683E+03 | 7.3241E+02 |
CEC2014-f11 | 4.7235E+03 | 6.1253E+02 | 4.8389E+03 | 7.8122E+02 | 9.5611E+03 | 4.1312E+02 | 7.5848E+03 | 6.9057E+02 |
CEC2014-f12 | 1.2022E+03 | 1.0370E+00 | 1.2011E+03 | 4.9528E-01 | 1.2042E+03 | 6.1590E-01 | 1.2023E+03 | 5.8959E-01 |
CEC2014-f13 | 1.3006E+03 | 1.3168E-01 | 1.3005E+03 | 1.2241E-01 | 1.3058E+03 | 6.4930E-01 | 1.3052E+03 | 8.1877E-01 |
CEC2014-f14 | 1.4002E+03 | 5.2668E-02 | 1.4002E+03 | 1.4515E-01 | 1.5433E+03 | 2.7875E+01 | 1.5368E+03 | 2.7628E+01 |
CEC2014-f15 | 1.5305E+03 | 7.1963E+00 | 1.5307E+03 | 1.6920E+01 | 1.6799E+05 | 1.1993E+05 | 7.8454E+04 | 5.7470E+04 |
CEC2014-f16 | 1.6120E+03 | 6.6323E-01 | 1.6120E+03 | 5.4384E-01 | 1.6137E+03 | 2.4220E-01 | 1.6130E+03 | 3.0386E-01 |
CEC2014-f17 | 3.4778E+04 | 2.7067E+04 | 6.0980E+05 | 4.7713E+05 | 3.7353E+07 | 2.6183E+07 | 6.3145E+06 | 7.5785E+06 |
CEC2014-f18 | 5.7442E+03 | 8.0041E+03 | 1.2286E+04 | 1.0011E+04 | 1.6478E+09 | 1.8911E+09 | 1.3149E+08 | 2.0461E+08 |
CEC2014-f19 | 1.9178E+03 | 3.5206E+00 | 1.9257E+03 | 2.3422E+01 | 2.1404E+03 | 8.1646E+01 | 2.1165E+03 | 1.0208E+02 |
CEC2014-f20 | 2.9540E+03 | 9.7366E+02 | 6.7315E+03 | 6.8620E+03 | 5.2670E+05 | 5.5277E+05 | 3.7775E+04 | 2.5014E+04 |
CEC2014-f21 | 1.1830E+04 | 1.1093E+04 | 1.8568E+05 | 1.9003E+05 | 1.8635E+07 | 1.2095E+07 | 2.0302E+06 | 5.3690E+06 |
CEC2014-f22 | 2.9307E+03 | 2.1486E+02 | 2.8213E+03 | 2.6225E+02 | 4.1893E+03 | 1.3714E+03 | 3.1295E+03 | 3.1456E+02 |
CEC2014-f23 | 2.6154E+03 | 1.3749E+00 | 2.5077E+03 | 2.9361E+01 | 2.5000E+03 | 0.00E+00 | 2.5514E+03 | 7.7935E+01 |
CEC2014-f24 | 2.6315E+03 | 4.4655E+00 | 2.6000E+03 | 0.00E+00 | 2.6000E+03 | 8.4444E-14 | 2.6000E+03 | 0.00E+00 |
CEC2014-f25 | 2.7154E+03 | 6.2537E+00 | 2.7000E+03 | 0.00E+00 | 2.7000E+03 | 0.00E+00 | 2.7000E+03 | 0.00E+00 |
CEC2014-f26 | 2.7007E+03 | 1.5696E-01 | 2.7005E+03 | 1.3417E-01 | 2.7524E+03 | 4.8407E+01 | 2.7502E+03 | 4.7426E+01 |
CEC2014-f27 | 3.5066E+03 | 2.3566E+02 | 3.0396E+03 | 3.2571E+02 | 3.1339E+03 | 4.5590E+02 | 3.8400E+03 | 4.3801E+02 |
CEC2014-f28 | 3.2843E+03 | 8.4281E+01 | 3.1569E+03 | 4.1609E+02 | 3.2927E+03 | 1.1167E+03 | 3.6313E+03 | 1.0425E+03 |
CEC2014-f29 | 3.1345E+03 | 4.1680E+01 | 6.4671E+06 | 4.7251E+06 | 3.0893E+07 | 5.2065E+07 | 5.3145E+07 | 3.1640E+07 |
CEC2014-f30 | 4.1723E+03 | 3.4817E+02 | 1.2249E+04 | 7.1066E+03 | 1.3143E+06 | 1.2176E+06 | 4.7851E+05 | 3.5439E+05 |
(W|T|L) | (5|1|24) | (1|4|25) | (0|3|27) | (0|2|28) | ||||
Mean | 4.0 | 3.73 | 9.83 | 8.52 | ||||
Ranking | 3 | 2 | 11 | 8 | ||||
F | RUN | ECO | PSO | PLO | ||||
|---|---|---|---|---|---|---|---|---|
Ave | Std | Ave | Std | Ave | Std | Ave | Std | |
CEC2014-f1 | 7.1040E+07 | 2.6814E+07 | 9.0586E+08 | 3.6855E+08 | 7.9558E+08 | 2.3598E+08 | 3.9063E+07 | 2.7624E+07 |
CEC2014-f2 | 2.1075E+08 | 1.5741E+08 | 5.7549E+10 | 8.3576E+09 | 5.0998E+10 | 8.8232E+09 | 1.1181E+10 | 5.8291E+09 |
CEC2014-f3 | 3.7770E+04 | 6.4685E+03 | 2.1805E+05 | 6.2975E+04 | 6.5906E+04 | 1.0616E+04 | 3.8801E+04 | 9.3980E+03 |
CEC2014-f4 | 6.1695E+02 | 7.1750E+01 | 6.4466E+03 | 3.0720E+03 | 7.5652E+03 | 2.9197E+03 | 1.1987E+03 | 4.1215E+02 |
CEC2014-f5 | 5.2057E+02 | 1.3544E-01 | 5.2121E+02 | 6.4603E-02 | 5.2084E+02 | 6.4015E-02 | 5.2073E+02 | 1.7626E-01 |
CEC2014-f6 | 6.3217E+02 | 3.6524E+00 | 6.4097E+02 | 3.5313E+00 | 6.3530E+02 | 2.2603E+00 | 6.2641E+02 | 2.4158E+00 |
CEC2014-f7 | 7.0297E+02 | 1.0740E+00 | 1.1760E+03 | 9.2653E+01 | 1.1676E+03 | 7.6152E+01 | 7.8053E+02 | 3.7042E+01 |
CEC2014-f8 | 9.7002E+02 | 2.7456E+01 | 1.1323E+03 | 4.0888E+01 | 1.0594E+03 | 2.2626E+01 | 9.4498E+02 | 2.1444E+01 |
CEC2014-f9 | 1.0783E+03 | 2.7994E+01 | 1.2507E+03 | 3.6466E+01 | 1.1705E+03 | 2.5967E+01 | 1.0708E+03 | 3.0215E+01 |
CEC2014-f10 | 4.9731E+03 | 7.8953E+02 | 8.6453E+03 | 7.2483E+02 | 5.9385E+03 | 4.0038E+02 | 4.8244E+03 | 5.8915E+02 |
CEC2014-f11 | 5.8946E+03 | 6.9950E+02 | 9.1526E+03 | 5.8003E+02 | 7.1002E+03 | 5.6862E+02 | 5.7210E+03 | 6.4056E+02 |
CEC2014-f12 | 1.2014E+03 | 3.9614E-01 | 1.2043E+03 | 8.5686E-01 | 1.2016E+03 | 4.5079E-01 | 1.2010E+03 | 4.4496E-01 |
CEC2014-f13 | 1.3006E+03 | 1.1429E-01 | 1.3059E+03 | 6.7340E-01 | 1.3058E+03 | 7.6866E-01 | 1.3018E+03 | 1.0772E+00 |
CEC2014-f14 | 1.4002E+03 | 1.3437E-01 | 1.5610E+03 | 2.4960E+01 | 1.5645E+03 | 3.6459E+01 | 1.4335E+03 | 1.8899E+01 |
CEC2014-f15 | 1.5548E+03 | 3.0540E+01 | 1.9859E+05 | 1.8196E+05 | 7.0581E+04 | 5.1473E+04 | 4.1569E+03 | 2.9927E+03 |
CEC2014-f16 | 1.6124E+03 | 4.9373E-01 | 1.6137E+03 | 3.0445E-01 | 1.6125E+03 | 3.2591E-01 | 1.6116E+03 | 4.2516E-01 |
CEC2014-f17 | 3.4325E+06 | 2.5580E+06 | 5.1100E+07 | 4.4042E+07 | 4.3093E+07 | 2.6345E+07 | 1.9053E+05 | 1.5190E+05 |
CEC2014-f18 | 4.3998E+04 | 9.5186E+04 | 9.0407E+08 | 9.8405E+08 | 1.0746E+09 | 7.5075E+08 | 3.4865E+03 | 3.1921E+03 |
CEC2014-f19 | 1.9511E+03 | 3.3176E+01 | 2.2149E+03 | 1.2849E+02 | 2.1892E+03 | 7.3983E+01 | 1.9443E+03 | 3.1687E+01 |
CEC2014-f20 | 3.1695E+04 | 1.4924E+04 | 1.6387E+06 | 1.5603E+06 | 7.3532E+04 | 3.5990E+04 | 5.6372E+03 | 4.1605E+03 |
CEC2014-f21 | 9.5363E+05 | 6.1925E+05 | 2.4277E+07 | 2.3985E+07 | 1.3988E+07 | 1.0487E+07 | 2.8536E+04 | 2.0646E+04 |
CEC2014-f22 | 2.9736E+03 | 2.1704E+02 | 3.9627E+03 | 7.4497E+02 | 3.5017E+03 | 5.9771E+02 | 2.6402E+03 | 1.5747E+02 |
CEC2014-f23 | 2.5000E+03 | 2.5255E-03 | 2.5555E+03 | 1.2781E+02 | 2.5262E+03 | 7.8831E+01 | 2.6428E+03 | 3.9727E+01 |
CEC2014-f24 | 2.6000E+03 | 6.6061E-03 | 2.6000E+03 | 1.5418E-04 | 2.6000E+03 | 7.0151E-02 | 2.6037E+03 | 4.4678E+00 |
CEC2014-f25 | 2.7000E+03 | 1.2235E-04 | 2.7000E+03 | 2.9413E-05 | 2.7000E+03 | 0.00E+00 | 2.7035E+03 | 6.9985E+00 |
CEC2014-f26 | 2.7271E+03 | 4.4703E+01 | 2.7563E+03 | 4.7516E+01 | 2.7600E+03 | 4.5788E+01 | 2.7047E+03 | 1.8041E+01 |
CEC2014-f27 | 2.9000E+03 | 1.3194E-03 | 3.8483E+03 | 5.4898E+02 | 3.6409E+03 | 4.5771E+02 | 3.6015E+03 | 3.3297E+02 |
CEC2014-f28 | 3.0000E+03 | 1.1818E-03 | 5.6871E+03 | 2.5170E+03 | 5.4699E+03 | 2.5969E+03 | 4.7513E+03 | 5.0396E+02 |
CEC2014-f29 | 1.4686E+04 | 1.6939E+04 | 9.8405E+07 | 1.0093E+08 | 3.1357E+08 | 1.1174E+08 | 1.5505E+07 | 1.6649E+07 |
CEC2014-f30 | 2.3595E+04 | 3.3231E+04 | 1.8777E+06 | 1.3512E+06 | 3.2004E+06 | 1.9733E+06 | 5.2036E+04 | 5.8851E+04 |
(W|T|L) | (3|4|23) | (0|2|28) | (0|2|28) | (1|0|29) | ||||
Mean | 5.28 | 10.78 | 9.10 | 5.57 | ||||
Ranking | 5 | 12 | 10 | 6 | ||||
According to the results displayed in Tables 6 and 7, the AEHO algorithm outperformed the other competing algorithms in average accuracy on 10 of the examined test functions in 10 dimensions and on 12 in 30 dimensions. Counting ties as well, it reached the highest accuracy on a total of 17 and 16 benchmark test functions in 10 and 30 dimensions, respectively. The PLO method achieved the best average values on the CEC2014-f8, CEC2014-f14, CEC2014-f15, and CEC2014-f26 functions, ranking second on the 10-dimensional CEC2014 functions. On the CEC2014-f13, CEC2014-f14, CEC2014-f24, CEC2014-f25, and CEC2014-f26 test functions, the TTAO method obtained the best Ave values, securing second position on the 30-dimensional CEC2014 functions. The stability of AEHO was evaluated by comparing its standard deviation values with those of the competing methods on this challenging benchmark set. The AEHO method yielded the lowest Std values on several of the 30 CEC2014 test functions, which again demonstrates that AEHO produced reliable results in comparison to the other competing algorithms.
Convergence analysis of AEHO on CEC2014
This subsection provides the convergence analysis of the proposed AEHO algorithm compared to other competitive algorithms. Figures 4 and 5 display the convergence curves of AEHO and the other methods on the CEC2014 test functions CEC2014-f1, CEC2014-f2, CEC2014-f4, CEC2014-f5, CEC2014-f6, CEC2014-f10, CEC2014-f13, CEC2014-f16, CEC2014-f19, CEC2014-f22, CEC2014-f24, and CEC2014-f27 in 10 and 30 dimensions, respectively. For most functions, the proposed AEHO approach reaches a stable state.
[See PDF for image]
Fig. 4
Convergence behaviors of the AEHO algorithm and other algorithms for some selected 10-dimensional CEC2014 functions
[See PDF for image]
Fig. 5
Convergence behaviors of the AEHO algorithm and other algorithms for some selected 30-dimensional CEC2014 functions
Figures 4 and 5 show the convergence curves on the 10- and 30-dimensional CEC2014 test functions, respectively, to illustrate AEHO’s convergence rate compared to its competitors. The AEHO method outperforms the other optimizers on CEC2014-f1, CEC2014-f5, CEC2014-f6, CEC2014-f10, CEC2014-f13, and others, attaining the best fitness values over the iterations. On several test functions, AEHO obtained the best outcomes with the lowest fitness values in the early iterations, before the 200th iteration. The addition of a memory component, an adaptive strategy, a greedy selection strategy, and the hybrid approach further enhanced AEHO’s exploration and exploitation capabilities. As a result, exploration is improved, so new search regions are identified and good solutions are obtained in the first rounds. The memory component also strengthens AEHO’s overall search capability, which helps explain its early dominance, while the adaptive approach enhances its exploitation relative to the other algorithms and accelerates its convergence rate. The EEFO method performs better than the AEHO algorithm on some test functions, and TTAO starts with a higher convergence rate in the early iterations and achieves the best results there, albeit by a narrow margin. For CEC2014-f3, AEHO has the fastest convergence rate among all competitors. Although EHO also performs well, there is a noticeable gap between EHO and AEHO because the proposed modifications enhance the quality of the solutions and the balance between the exploration and exploitation phases. On CEC2014-f4, AEHO fared better than any other algorithm in the initial rounds, although OOA also contended for the best fitness value in the final iterations. AEHO’s performance on this test function is satisfactory, and it outperforms its rivals in some iterations by achieving the best convergence speed.
Due to its robust search capability, the proposed AEHO performed significantly better than all its rivals for the CEC2014-f5 and CEC2014-f16 test functions. The efficacy of AEHO remains superior to other classical and modern algorithms for functions CEC2014-f19 and CEC2014-f22. The experimental results showed that the proposed AEHO algorithm effectively handled 30 complex benchmark tasks in 30 separate runs, highlighting the significant efficiency gain achieved using the memory component, adaptive operators, greedy selection mechanism, and the hybrid strategies in AEHO. The superiority of the AEHO method was made clear by the fact that, for CEC2014-f27 and others, the convergence rate of AEHO was the fastest at the beginning of iterations, and that, after 100 iterations, the performance of SCHO and AEHO was comparable to, or even better than, the original EHO.
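Convergence curves like those in Figs. 4 and 5 are typically produced by logging the best fitness found so far at every iteration, which yields a monotonically non-increasing curve. The random-sampling "optimizer" below is a placeholder, not EHO or AEHO.

```python
import numpy as np

def run_and_trace(func, dim, n_agents=20, n_iters=100, seed=0):
    """Record the best-so-far fitness at every iteration (the convergence curve)."""
    rng = np.random.default_rng(seed)
    best_val = float("inf")
    curve = []
    for _ in range(n_iters):
        pop = rng.uniform(-100.0, 100.0, (n_agents, dim))  # placeholder update
        best_val = min(best_val, min(float(func(x)) for x in pop))
        curve.append(best_val)  # best-so-far, so the curve never rises
    return curve

curve = run_and_trace(lambda x: np.sum(x ** 2), dim=10)
assert all(a >= b for a, b in zip(curve, curve[1:]))  # non-increasing
print(f"start = {curve[0]:.2E}, end = {curve[-1]:.2E}")
```

Plotting `curve` against the iteration index for each algorithm on the same axes reproduces the style of comparison shown in the figures.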
Boxplot analysis of AEHO on CEC2014
A boxplot is a valuable tool for showing data according to quartiles. It compares the performance of AEHO with alternative approaches and analyzes the data distribution. Figures 6 and 7 display the boxplot analysis findings for test functions CEC2014-f1, CEC2014-f2, CEC2014-f4, CEC2014-f5, CEC2014-f6, CEC2014-f10, CEC2014-f13, CEC2014-f16, CEC2014-f19, CEC2014-f22, CEC2014-f24, and CEC2014-f27 with 10 and 30 dimensions, respectively. The lower and upper quartiles for each competing technique are displayed as the edges of the rectangular boxes, while the whiskers on each boxplot indicate the lowest and highest values the approaches achieved.
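The boxplot quantities described above can be computed directly for one algorithm's run results (synthetic values here): box edges at the lower and upper quartiles, a line at the median, and whiskers at the lowest and highest values achieved.

```python
import numpy as np

# Synthetic fitness values from repeated runs of one algorithm.
runs = np.array([100.0, 100.1, 100.2, 100.3, 100.4,
                 100.5, 100.6, 100.7, 100.8, 101.0])

q1, median, q3 = np.percentile(runs, [25, 50, 75])  # box edges and median line
lo_whisker, hi_whisker = runs.min(), runs.max()     # whiskers at min/max values
print(f"box = [{q1:.3f}, {q3:.3f}], median = {median:.3f}, "
      f"whiskers = [{lo_whisker:.1f}, {hi_whisker:.1f}]")
```

A compact box (small interquartile range) is what signals stability in Figs. 6 and 7: the runs cluster tightly around the same fitness value.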
[See PDF for image]
Fig. 6
Boxplot charts for AEHO and other algorithms for some specific CEC2014 functions with 10 dimensions
[See PDF for image]
Fig. 7
Boxplot charts for AEHO and other algorithms for some specific CEC2014 functions with 30 dimensions
Figures 6 and 7 show that the proposed AEHO method produces compact boxplots for a range of test functions, such as CEC2014-f1, CEC2014-f4 through CEC2014-f6, CEC2014-f22, CEC2014-f24, and CEC2014-f27. This demonstrates that TTAO and AEHO are comparable in terms of stability. In other words, AEHO matches or outperforms many alternative algorithms in consistency as well as accuracy. The memory component, adaptive parameters, greedy selection process, and hybridization keep the obtained solution values close together across runs, which improves both speed and stability. The adaptive strategy reduces large differences in fitness values across iterations by improving solution quality and balancing the exploitation of known solutions with the exploration of new ones. The compact rectangular boxes of AEHO, OOA, SCHO, and TTAO reflect their consistency in attaining the best fitness scores.
Statistical test of AEHO on CEC2014 benchmark
Friedman’s test was applied to the average accuracy findings in Tables 6 and 7 for dimensions 10 and 30, respectively, to get the ranking results shown in Table 8.
Table 8. The ranking results obtained using Friedman’s test on the CEC2014 functions with d = 10 and d = 30
Algorithm | Rank (d = 10) | Rank (d = 30) |
|---|---|---|
AEHO | 3.50 | 2.73 |
EHO | 8.20 | 8.65 |
TS | 4.10 | 5.78 |
SCHO | 3.87 | 4.02 |
EEFO | 4.63 | 4.0 |
TTAO | 4.65 | 3.73 |
HO | 10.50 | 9.83 |
OOA | 8.47 | 8.52 |
RUN | 6.62 | 5.28 |
ECO | 11.37 | 10.78 |
PSO | 8.42 | 9.10 |
PLO | 3.68 | 5.57 |
Applying Friedman’s test to the mean accuracy results on the CEC2014 functions with dimensions 10 and 30 in Table 8 yields p-values of 9.29E-11 and 8.0288E-11, respectively, indicating statistically significant differences among the algorithms. The AEHO algorithm obtained the best (lowest) rank among all the algorithms. The 10-dimensional CEC2014 ranking is AEHO, PLO, SCHO, TS, EEFO, TTAO, RUN, EHO, PSO, OOA, HO, and ECO. The 30-dimensional CEC2014 ranking is AEHO, TTAO, EEFO, SCHO, RUN, PLO, TS, OOA, EHO, PSO, HO, and ECO, with ECO rated last in both cases.
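For reference, Friedman's test can be computed by hand for a small illustrative case with k = 3 algorithms over n = 6 functions (the mean-accuracy values below are made up; rows are functions, columns are algorithms, lower is better). For k = 3, the chi-square survival function with k - 1 = 2 degrees of freedom is exactly exp(-stat / 2).

```python
import math
import numpy as np

scores = np.array([
    [1.0, 2.0, 3.0],
    [2.0, 3.0, 1.0],
    [1.5, 2.5, 3.5],
    [1.2, 2.2, 3.2],
    [1.1, 2.1, 3.1],
    [1.3, 2.3, 3.3],
])
n, k = scores.shape
ranks = scores.argsort(axis=1).argsort(axis=1) + 1  # rank 1 = best (no ties here)
mean_ranks = ranks.mean(axis=0)                     # analogous to Table 8's Rank column
stat = 12.0 * n / (k * (k + 1)) * np.sum((mean_ranks - (k + 1) / 2) ** 2)
p = math.exp(-stat / 2)                             # exact for df = 2 only
print(f"mean ranks = {mean_ranks}, chi2 = {stat:.2f}, p = {p:.4f}")
```

A p-value below 0.05 means the mean ranks differ significantly, which is the condition that justifies proceeding to the post-hoc (Holm's) analysis.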
Holm’s test was then applied as a post-hoc statistical technique to reveal the significant differences between the control algorithm and its competitors; the control algorithm is the one that performs best under Friedman’s test for each assessment metric. Table 9 presents the statistical results of applying Holm’s procedure to the results of the competing algorithms on the CEC2014 test functions with 10 and 30 dimensions. In this table, z measures the statistical difference between the Friedman mean ranks of the control algorithm and the ith algorithm, the p-value is the corresponding significance probability, and α/i is Holm’s adjusted significance level.
Table 9. Holm’s procedure results for CEC2014 functions
Average accuracy on CEC2014 functions with 10 dimensions (AEHO is the control algorithm) | |||||
|---|---|---|---|---|---|
i | Method | z | p-value | α/i | Hypothesis |
11 | ECO | 8.45015551 | 2.9091E-17 | 0.00454545 | Rejected |
10 | HO | 7.51920617 | 5.5109E-14 | 0.00500000 | Rejected |
9 | OOA | 5.33505581 | 9.5515E-08 | 0.00555555 | Rejected |
8 | PSO | 5.28134719 | 1.2823E-07 | 0.00625000 | Rejected |
7 | EHO | 5.048609868 | 4.4503E-07 | 0.00714285 | Rejected |
6 | RUN | 3.34783703 | 8.1444E-04 | 0.00833333 | Rejected |
5 | TTAO | 1.23529815 | 0.21671955 | 0.01000000 | Not rejected |
4 | EEFO | 1.21739528 | 0.22345385 | 0.01250000 | Not rejected |
3 | TS | 0.64450338 | 0.51924906 | 0.01666666 | Not rejected |
2 | SCHO | 0.39386318 | 0.69368205 | 0.02500000 | Not rejected |
1 | PLO | 0.19693159 | 0.84388107 | 0.05000000 | Not rejected |
Average accuracy on CEC2014 functions with 30 dimensions (AEHO is the control algorithm) | |||||
|---|---|---|---|---|---|
i | Method | z | p-value | α/i | Hypothesis |
11 | ECO | 8.64708710 | 5.2830E-18 | 0.00454545 | Rejected |
10 | HO | 7.62662340 | 2.4098E-14 | 0.00500000 | Rejected |
9 | PSO | 6.83889704 | 7.9805E-12 | 0.00555555 | Rejected |
8 | EHO | 6.35551950 | 2.0772E-10 | 0.00625000 | Rejected |
7 | OOA | 6.21229653 | 5.2215E-10 | 0.00714285 | Rejected |
6 | TS | 3.27622554 | 0.00105204 | 0.00833333 | Rejected |
5 | PLO | 3.04348821 | 0.00233852 | 0.01000000 | Rejected |
4 | RUN | 2.73913939 | 0.00616002 | 0.01250000 | Rejected |
3 | SCHO | 1.37852113 | 0.16804244 | 0.01666666 | Not rejected |
2 | EEFO | 1.36061826 | 0.17363435 | 0.02500000 | Not rejected |
1 | TTAO | 1.07417231 | 0.28274545 | 0.05000000 | Not rejected |
Table 9 demonstrates that, for both 10 and 30 dimensions, the p-values of the worst-performing competitors fall below Holm’s step-down thresholds, so Holm’s test rejected the corresponding hypotheses. The CEC2014 test set was successfully optimized using the AEHO method, as indicated by Tables 8 and 9. The statistical information in these tables indicates that AEHO has very well-balanced exploration and exploitation capabilities, as can be seen from its performance on the intricate benchmark functions in CEC2014.
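The step-down logic behind the “Rejected”/“Not rejected” column of Table 9 can be sketched as follows. This is an illustrative Python sketch, not the authors’ implementation: unadjusted p-values are sorted in ascending order and compared against shrinking thresholds α/k, α/(k−1), …, α/1, stopping at the first failure.

```python
# Illustrative sketch (not the authors' code) of Holm's step-down procedure.
def holm_step_down(p_values, alpha=0.05):
    """Return the number of rejected hypotheses."""
    rejected = 0
    k = len(p_values)
    for rank, p in enumerate(sorted(p_values)):
        if p <= alpha / (k - rank):   # alpha/k for the smallest p, ..., alpha/1
            rejected += 1
        else:
            break                      # step-down: stop at the first failure
    return rejected

# The 11 p-values from the 10-dimensional half of Table 9:
pvals = [2.9091e-17, 5.5109e-14, 9.5515e-08, 1.2823e-07, 4.4503e-07,
         8.1444e-04, 0.21671955, 0.22345385, 0.51924906, 0.69368205,
         0.84388107]
print(holm_step_down(pvals))  # -> 6, matching the six "Rejected" rows
```

Applied to the 10-dimensional p-values of Table 9, the first six hypotheses (ECO through RUN) are rejected, and the procedure stops at TTAO, exactly as the table reports.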
AEHO analysis using CEC2022 benchmark functions
This section assesses the algorithm’s effectiveness using the more recent CEC2022 benchmark, which contains twelve test functions, focusing on dimensions 10 and 20. All algorithms are limited to 100 search agents, 1000 iterations, and 30 runs. The average (Ave) and standard deviation (Std) values for dimensions 10 and 20 are reported in Tables 10 and 11, respectively.
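The Ave and Std entries in these tables are simple aggregates over the independent runs. A minimal sketch (the run values below are hypothetical stand-ins for 30 actual run results):

```python
# A minimal sketch of how the Ave and Std entries are aggregated: each
# algorithm is executed in 30 independent runs per function, and the best
# fitness of each run is summarized by its mean and sample standard deviation.
import statistics

def summarize_runs(best_fitness_per_run):
    ave = statistics.mean(best_fitness_per_run)
    std = statistics.stdev(best_fitness_per_run)  # sample standard deviation
    return ave, std

runs = [300.0, 300.1, 300.0, 299.9, 300.0]  # hypothetical run results
ave, std = summarize_runs(runs)
print(f"Ave = {ave:.4E}, Std = {std:.4E}")
```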
Table 10. Assessment results of AEHO and other algorithms on CEC2022 functions with d=10
Functions | AEHO | EHO | TS | SCHO | ||||
|---|---|---|---|---|---|---|---|---|
Ave | Std | Ave | Std | Ave | Std | Ave | Std | |
CEC2022-f1 | 3.0000E+02 | 2.3587E-06 | 9.4198E+02 | 6.4942E+02 | 3.0000E+02 | 4.3522E-14 | 3.0000E+02 | 2.6274E-04 |
CEC2022-f2 | 4.0009E+02 | 9.5212E-01 | 4.5171E+02 | 1.4713E+01 | 4.0148E+02 | 1.9961E+00 | 4.1019E+02 | 1.2344E+01 |
CEC2022-f3 | 6.0002E+02 | 3.0619E+00 | 6.1571E+02 | 2.9526E+00 | 6.0063E+02 | 1.3550E+00 | 6.0005E+02 | 3.0669E-02 |
CEC2022-f4 | 8.1280E+02 | 4.7518E+00 | 8.3369E+02 | 7.5149E+00 | 8.1234E+02 | 4.2432E+00 | 8.2311E+02 | 1.0020E+01 |
CEC2022-f5 | 9.0011E+02 | 1.8503E-01 | 9.8328E+02 | 4.3779E+01 | 9.0198E+02 | 3.3591E+00 | 9.0029E+02 | 3.6716E-01 |
CEC2022-f6 | 1.8139E+03 | 3.0933E+01 | 1.2779E+06 | 1.2874E+06 | 5.0981E+03 | 6.6751E+03 | 5.3597E+03 | 2.0972E+03 |
CEC2022-f7 | 2.0255E+03 | 1.0771E+01 | 2.0534E+03 | 8.1873E+00 | 2.0231E+03 | 7.3526E+00 | 2.0194E+03 | 5.1750E+00 |
CEC2022-f8 | 2.2185E+03 | 7.4800E+00 | 2.2310E+03 | 2.1676E+00 | 2.2200E+03 | 8.4283E+00 | 2.2208E+03 | 6.0467E-01 |
CEC2022-f9 | 2.4766E+03 | 6.7380E-01 | 2.5532E+03 | 1.0542E+01 | 2.4855E+03 | 5.4823E-03 | 2.5293E+03 | 8.4413E-05 |
CEC2022-f10 | 2.5005E+03 | 1.3202E-01 | 2.5065E+03 | 2.6318E+01 | 2.5153E+03 | 3.8748E+01 | 2.5092E+03 | 3.8205E+01 |
CEC2022-f11 | 2.6000E+03 | 1.4987E-12 | 2.8213E+03 | 1.5706E+02 | 2.6224E+03 | 5.2282E+01 | 2.6000E+03 | 0.0000E+00 |
CEC2022-f12 | 2.8577E+03 | 1.9741E+00 | 2.8633E+03 | 1.1412E+00 | 2.8630E+03 | 1.4437E+00 | 2.8621E+03 | 1.7097E+00 |
(W|T|L) | (6|3|3) | (0|0|10) | (0|1|9) | (1|2|8) | ||||
Mean | 1.75 | 7.63 | 3.71 | 3.71 | ||||
Ranking | 1 | 8 | 4 | 3 | ||||
Functions | EEFO | TTAO | HO | OOA | ||||
|---|---|---|---|---|---|---|---|---|
Ave | Std | Ave | Std | Ave | Std | Ave | Std | |
CEC2022-f1 | 3.0000E+02 | 1.2808E-10 | 3.0001E+02 | 1.2906E-02 | 1.4314E+04 | 6.7324E+03 | 2.5169E+03 | 2.0469E+03 |
CEC2022-f2 | 4.0040E+02 | 1.2164E+00 | 4.0664E+02 | 3.1937E+00 | 6.2573E+02 | 1.6194E+02 | 4.6325E+02 | 3.2848E+01 |
CEC2022-f3 | 6.0460E+02 | 5.2831E+00 | 6.0422E+02 | 3.6791E+00 | 6.4317E+02 | 9.7830E+00 | 6.2323E+02 | 7.3018E+00 |
CEC2022-f4 | 8.2259E+02 | 9.5902E+00 | 8.2013E+02 | 8.5017E+00 | 8.5773E+02 | 1.0335E+01 | 8.3227E+02 | 8.1867E+00 |
CEC2022-f5 | 9.8522E+02 | 8.6192E+01 | 9.8262E+02 | 1.3651E+02 | 1.5622E+03 | 3.2065E+02 | 1.1667E+03 | 1.8284E+02 |
CEC2022-f6 | 1.8139E+03 | 1.2498E+01 | 3.4070E+03 | 1.7015E+03 | 1.6610E+07 | 1.6007E+07 | 3.2298E+03 | 2.0412E+03 |
CEC2022-f7 | 2.0270E+03 | 9.9374E+00 | 2.0258E+03 | 9.2881E+00 | 2.1066E+03 | 3.6311E+01 | 2.0611E+03 | 2.2754E+01 |
CEC2022-f8 | 2.2319E+03 | 3.7818E+01 | 2.2220E+03 | 8.4297E+00 | 2.3031E+03 | 6.1458E+01 | 2.2306E+03 | 1.0159E+01 |
CEC2022-f9 | 2.4982E+03 | 4.8386E+01 | 2.5293E+03 | 1.7305E-09 | 2.7067E+03 | 5.1475E+01 | 2.5641E+03 | 3.8928E+01 |
CEC2022-f10 | 2.5538E+03 | 6.2445E+01 | 2.5626E+03 | 6.3344E+01 | 2.6392E+03 | 3.0087E+02 | 2.5706E+03 | 6.6783E+01 |
CEC2022-f11 | 2.7467E+03 | 1.8096E+02 | 2.6868E+03 | 1.1895E+02 | 2.6580E+03 | 1.1653E+02 | 2.6050E+03 | 2.7464E+01 |
CEC2022-f12 | 2.8694E+03 | 7.3135E+00 | 2.8657E+03 | 1.7614E+01 | 2.8663E+03 | 6.8275E+00 | 2.8649E+03 | 1.7585E+00 |
(W|T|L) | (1|1|8) | (0|0|10) | (0|0|10) | (0|0|10) | ||||
Mean | 5.92 | 5.83 | 10.91 | 7.96 | ||||
Ranking | 6 | 5 | 12 | 9 | ||||
Functions | RUN | ECO | PSO | PLO | ||||
|---|---|---|---|---|---|---|---|---|
Ave | Std | Ave | Std | Ave | Std | Ave | Std | |
CEC2022-f1 | 3.4221E+02 | 3.4933E+01 | 2.2875E+04 | 1.2185E+04 | 8.4387E+03 | 4.8964E+03 | 3.0307E+02 | 1.2976E+01 |
CEC2022-f2 | 4.2467E+02 | 3.1741E+01 | 7.1952E+02 | 2.3478E+02 | 8.2759E+02 | 3.1484E+02 | 4.0913E+02 | 1.5899E+01 |
CEC2022-f3 | 6.2005E+02 | 8.7631E+00 | 6.4619E+02 | 1.4054E+01 | 6.3083E+02 | 5.8647E+00 | 6.0341E+02 | 2.5607E+00 |
CEC2022-f4 | 8.2433E+02 | 9.6257E+00 | 8.5341E+02 | 1.5659E+01 | 8.2570E+02 | 9.5718E+00 | 8.1141E+02 | 4.4705E+00 |
CEC2022-f5 | 1.0016E+03 | 9.6911E+01 | 1.5921E+03 | 3.2376E+02 | 1.2658E+03 | 1.8743E+02 | 9.0596E+02 | 5.4730E+00 |
CEC2022-f6 | 3.9638E+03 | 1.9604E+03 | 1.6831E+07 | 4.5950E+07 | 7.4766E+06 | 2.2417E+07 | 1.8811E+03 | 4.8259E+01 |
CEC2022-f7 | 2.0468E+03 | 1.1523E+01 | 2.1085E+03 | 4.1189E+01 | 2.0813E+03 | 2.2875E+01 | 2.0213E+03 | 6.9369E+00 |
CEC2022-f8 | 2.2294E+03 | 4.2918E+00 | 2.2728E+03 | 5.5432E+01 | 2.2360E+03 | 1.7642E+01 | 2.2175E+03 | 8.4903E+00 |
CEC2022-f9 | 2.5460E+03 | 2.6914E+01 | 2.7095E+03 | 5.9671E+01 | 2.6723E+03 | 4.1323E+01 | 2.5293E+03 | 1.0252E-01 |
CEC2022-f10 | 2.5090E+03 | 3.1347E+01 | 2.6290E+03 | 3.0199E+02 | 2.5813E+03 | 1.0951E+02 | 2.5199E+03 | 4.4532E+01 |
CEC2022-f11 | 2.6434E+03 | 9.0823E+01 | 2.6000E+03 | 1.5435E-04 | 2.6100E+03 | 3.8161E+01 | 2.6050E+03 | 2.7471E+01 |
CEC2022-f12 | 2.8633E+03 | 1.5193E+00 | 2.8735E+03 | 1.3139E+01 | 2.8643E+03 | 1.4925E+00 | 2.8629E+03 | 1.2900E+00 |
(W|T|L) | (0|0|10) | (0|1|10) | (0|0|10) | (2|0|8) | ||||
Mean | 6.62 | 10.83 | 9.41 | 3.71 | ||||
Ranking | 7 | 11 | 10 | 2 | ||||
Table 11. Assessment results of AEHO and other algorithms on CEC2022 functions with d=20
Functions | AEHO | EHO | TS | SCHO | ||||
|---|---|---|---|---|---|---|---|---|
Ave | Std | Ave | Std | Ave | Std | Ave | Std | |
CEC2022-f1 | 3.0004E+02 | 1.5717E+02 | 9.4810E+03 | 2.1556E+03 | 5.8984E+03 | 3.5782E+03 | 3.0004E+02 | 2.3924E-02 |
CEC2022-f2 | 4.6572E+02 | 1.8177E+01 | 6.3886E+02 | 5.1027E+01 | 4.6490E+02 | 6.6307E+01 | 4.5887E+02 | 3.6607E+01 |
CEC2022-f3 | 6.2243E+02 | 8.1450E+00 | 6.3970E+02 | 6.5341E+00 | 6.2414E+02 | 9.8069E+00 | 6.0062E+02 | 4.8645E-01 |
CEC2022-f4 | 8.5051E+02 | 9.5350E+00 | 9.3497E+02 | 1.6118E+01 | 8.6317E+02 | 1.3198E+01 | 8.6548E+02 | 1.8995E+01 |
CEC2022-f5 | 1.0506E+03 | 1.6926E+02 | 2.0630E+03 | 3.8987E+02 | 1.4906E+03 | 3.5597E+02 | 1.3123E+03 | 6.2320E+02 |
CEC2022-f6 | 2.1611E+03 | 6.2689E+02 | 1.1219E+08 | 5.8567E+07 | 3.2022E+03 | 2.9841E+03 | 2.2203E+04 | 4.6567E+03 |
CEC2022-f7 | 2.0873E+03 | 2.1424E+01 | 2.1237E+03 | 2.0042E+01 | 2.0539E+03 | 1.6104E+01 | 2.0564E+03 | 3.7358E+01 |
CEC2022-f8 | 2.2275E+03 | 4.8153E+00 | 2.2548E+03 | 1.1260E+01 | 2.2310E+03 | 6.4314E+00 | 2.2422E+03 | 4.0711E+01 |
CEC2022-f9 | 2.4654E+03 | 6.8370E+00 | 2.5657E+03 | 2.1142E+01 | 2.4665E+03 | 1.1752E+00 | 2.4808E+03 | 4.0690E-02 |
CEC2022-f10 | 2.5055E+03 | 2.5602E+01 | 2.5326E+03 | 5.0498E+01 | 4.0548E+03 | 1.3417E+03 | 2.7819E+03 | 2.5600E+02 |
CEC2022-f11 | 2.8494E+03 | 1.0605E+02 | 2.9171E+03 | 9.6155E+01 | 2.9467E+03 | 5.0742E+01 | 2.9454E+03 | 9.1425E+01 |
CEC2022-f12 | 2.9000E+03 | 6.2730E-05 | 2.9494E+03 | 7.9969E+00 | 3.0577E+03 | 9.3433E+01 | 2.9334E+03 | 1.1857E+00 |
(W|T|L) | (5|4|3) | (0|0|10) | (1|0|9) | (1|1|8) | ||||
Mean | 1.50 | 6.16 | 4.33 | 8.58 | ||||
Ranking | 1 | 7 | 4 | 9 | ||||
Functions | EEFO | TTAO | HO | OOA | ||||
|---|---|---|---|---|---|---|---|---|
Ave | Std | Ave | Std | Ave | Std | Ave | Std | |
CEC2022-f1 | 7.4970E+02 | 5.4966E+02 | 1.4061E+03 | 1.4002E+03 | 6.8138E+04 | 2.0733E+04 | 2.7831E+04 | 1.0688E+04 |
CEC2022-f2 | 4.1910E+02 | 1.5837E+01 | 4.6856E+02 | 3.0053E+01 | 1.2465E+03 | 4.0498E+02 | 1.1048E+03 | 3.6699E+02 |
CEC2022-f3 | 6.3153E+02 | 9.4720E+00 | 6.2647E+02 | 1.1615E+01 | 6.7861E+02 | 1.1751E+01 | 6.5095E+02 | 1.1159E+01 |
CEC2022-f4 | 8.7970E+02 | 2.7557E+01 | 8.7602E+02 | 2.0843E+01 | 9.7817E+02 | 1.7581E+01 | 9.1918E+02 | 1.8641E+01 |
CEC2022-f5 | 1.9649E+03 | 3.8106E+02 | 1.8902E+03 | 4.0224E+02 | 3.9161E+03 | 6.9989E+02 | 2.6584E+03 | 8.4749E+02 |
CEC2022-f6 | 2.2263E+03 | 8.3683E+02 | 8.4850E+03 | 7.0309E+03 | 2.6627E+08 | 1.9907E+08 | 2.7660E+07 | 4.1310E+07 |
CEC2022-f7 | 2.1091E+03 | 4.4853E+01 | 2.0953E+03 | 3.6416E+01 | 2.3026E+03 | 8.1423E+01 | 2.1783E+03 | 6.8222E+01 |
CEC2022-f8 | 2.3228E+03 | 9.5230E+01 | 2.2545E+03 | 5.0315E+01 | 2.4682E+03 | 1.2825E+02 | 2.3230E+03 | 8.9655E+01 |
CEC2022-f9 | 2.4654E+03 | 5.6693E-02 | 2.4810E+03 | 1.0416E+00 | 2.8094E+03 | 1.2517E+02 | 2.6707E+03 | 8.5784E+01 |
CEC2022-f10 | 3.0800E+03 | 6.4084E+02 | 3.2538E+03 | 8.1803E+02 | 5.8198E+03 | 2.1621E+03 | 4.9498E+03 | 1.4967E+03 |
CEC2022-f11 | 2.9233E+03 | 4.3018E+01 | 2.9346E+03 | 4.7913E+01 | 2.9220E+03 | 8.6599E+01 | 2.9073E+03 | 6.9199E+01 |
CEC2022-f12 | 2.9000E+03 | 6.4286E-03 | 2.9944E+03 | 2.2206E+01 | 2.9403E+03 | 3.5601E+00 | 3.0928E+03 | 6.4220E+01 |
(W|T|L) | (1|2|8) | (0|0|10) | (0|0|10) | (0|0|10) | ||||
Mean | 4.00 | 4.83 | 10.00 | 8.58 | ||||
Ranking | 2 | 5 | 11 | 8 | ||||
Functions | RUN | ECO | PSO | PLO | ||||
|---|---|---|---|---|---|---|---|---|
Ave | Std | Ave | Std | Ave | Std | Ave | Std | |
CEC2022-f1 | 7.1062E+03 | 2.6416E+03 | 9.5792E+04 | 3.0645E+04 | 3.2150E+04 | 9.6861E+03 | 7.2680E+03 | 3.1190E+03 |
CEC2022-f2 | 4.7168E+02 | 1.8285E+01 | 1.3657E+03 | 4.0106E+02 | 1.3462E+03 | 3.7800E+02 | 5.2719E+02 | 4.0870E+01 |
CEC2022-f3 | 6.4851E+02 | 1.0096E+01 | 6.7703E+02 | 1.2818E+01 | 6.5058E+02 | 6.3137E+00 | 6.2217E+02 | 6.3523E+00 |
CEC2022-f4 | 8.7910E+02 | 1.5777E+01 | 9.6458E+02 | 2.6981E+01 | 9.2355E+02 | 2.3272E+01 | 8.5835E+02 | 1.2915E+01 |
CEC2022-f5 | 2.2421E+03 | 4.5457E+02 | 4.3512E+03 | 9.9389E+02 | 2.4392E+03 | 4.4995E+02 | 1.3067E+03 | 2.2022E+02 |
CEC2022-f6 | 1.8266E+04 | 2.8492E+04 | 4.0232E+08 | 3.6518E+08 | 4.4796E+08 | 5.5295E+08 | 3.3189E+03 | 3.2662E+03 |
CEC2022-f7 | 2.1310E+03 | 2.5184E+01 | 2.2960E+03 | 8.9374E+01 | 2.1628E+03 | 4.9977E+01 | 2.0680E+03 | 2.3496E+01 |
CEC2022-f8 | 2.2596E+03 | 4.8251E+01 | 2.6466E+03 | 7.6486E+02 | 2.3393E+03 | 8.7589E+01 | 2.2670E+03 | 5.4535E+01 |
CEC2022-f9 | 2.5311E+03 | 4.0187E+01 | 2.8885E+03 | 1.6970E+02 | 2.8939E+03 | 1.9254E+02 | 2.5005E+03 | 1.8143E+01 |
CEC2022-f10 | 2.6171E+03 | 4.9019E+02 | 6.4227E+03 | 1.4355E+03 | 4.5878E+03 | 8.0782E+02 | 3.3905E+03 | 1.1466E+03 |
CEC2022-f11 | 2.8984E+03 | 1.3855E+02 | 2.9251E+03 | 1.4104E+02 | 2.9391E+03 | 3.3469E+01 | 2.9000E+03 | 2.0457E-02 |
CEC2022-f12 | 2.9797E+03 | 2.5195E+01 | 2.9539E+03 | 1.4614E+01 | 3.0747E+03 | 5.1953E+01 | 2.9578E+03 | 9.9760E+00 |
(W|T|L) | (0|0|10) | (0|0|10) | (0|0|10) | (0|1|9) | ||||
Mean | 5.41 | 10.75 | 9.75 | 4.08 | ||||
Ranking | 6 | 12 | 10 | 3 | ||||
Table 10 displays the CEC2022 findings for dimension (d) = 10, using the Ave and Std metrics to evaluate the performance of the competing methods. As the results in Table 10 show, AEHO obtained the lowest Std values and reached the global optima for CEC2022-f1 and other test functions. AEHO outperforms most of its competitors on the CEC2022 test functions, except CEC2022-f4, CEC2022-f7, and CEC2022-f8. Regarding the average value, EEFO outperforms many of the competing algorithms on the CEC2022-f1 and CEC2022-f6 test functions. Nevertheless, the total Friedman’s rank (FR) results clearly show that the proposed AEHO algorithm achieves the best performance compared with all other competitors. The performance of the proposed AEHO algorithm is stable, as evidenced by its low standard deviation values on these benchmark functions. These results demonstrate AEHO’s strong exploration and exploitation capabilities.
Table 11 displays the outcomes of the competing algorithms on 20-dimensional CEC2022 benchmark functions. The results show that the presented AEHO algorithm produces the lowest average values in nine out of twelve optimization cases. Furthermore, SCHO outperforms the other algorithms in optimizing functions CEC2022-f1 and CEC2022-f3. Moreover, the TS technique outperforms the other algorithms on the CEC2022-f7 test function. Across all CEC2022 test functions, the modest standard deviation results of the proposed AEHO algorithm demonstrate its stability and high performance. The features of the CEC2022 functions under study demonstrate that AEHO benefits from its powerful exploitation and exploration capabilities.
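The per-suite “Mean” and “Ranking” rows in Tables 10 and 11 can be derived as follows: for each function, the algorithms are ranked by their average fitness (1 = best), and those per-function ranks are averaged over the suite. A sketch with hypothetical toy values (the full Friedman test would additionally assign tied values their average rank):

```python
# Illustrative sketch of the mean-rank computation behind the "Mean" rows.
def mean_ranks(results):
    """results: {algorithm: [ave fitness per function]} -> {algorithm: mean rank}."""
    algos = list(results)
    n_funcs = len(next(iter(results.values())))
    totals = dict.fromkeys(algos, 0.0)
    for f in range(n_funcs):
        # Rank algorithms on function f by average fitness (smaller is better).
        for rank, algo in enumerate(sorted(algos, key=lambda a: results[a][f]), 1):
            totals[algo] += rank
    return {a: totals[a] / n_funcs for a in algos}

toy = {"AEHO": [300.0, 400.1], "EHO": [941.9, 451.7], "TS": [300.2, 401.5]}
ranks = mean_ranks(toy)
print(ranks["AEHO"], ranks["TS"], ranks["EHO"])  # -> 1.0 2.0 3.0
```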
Convergence analysis of AEHO on CEC2022
This subsection presents the convergence analysis of the proposed AEHO algorithm compared with its rivals. The convergence curves of the proposed AEHO method on the CEC2022 test functions with dimensions 10 and 20 are displayed in Figs. 8 and 9, respectively, alongside those of the other methods. For most functions, the curves indicate that the proposed AEHO method successfully reaches a stable state.
[See PDF for image]
Fig. 8
Convergence curves of many algorithms including AEHO for 10-dimensional CEC2022 test functions
[See PDF for image]
Fig. 9
Convergence curves of many algorithms including AEHO for 20-dimensional CEC2022 test functions
The convergence curves for the CEC2022 test functions are displayed in Figs. 8 and 9 to illustrate how quickly AEHO converges compared with its rivals. As iterations proceed, AEHO achieves the best fitness values on CEC2022-f1, outperforming the other optimization algorithms. Before the 200th iteration, AEHO produced the best results with the lowest fitness values, outperforming competing algorithms on most test functions. This is because AEHO’s ability to explore and exploit was enhanced by the memory component, adaptive strategy, greedy selection technique, and hybridization approach. Exploration is therefore strengthened to identify new search regions and obtain the best solutions in early iterations. The memory component also enhanced AEHO’s global search ability, which helped it maintain high speed in early generations. Furthermore, the adaptive and memory elements embedded into EHO improved the exploitative potential of AEHO, which in turn increased its convergence rate relative to other algorithms. For CEC2022-f3, AEHO converges quickly in the early iterations and finishes with the best performance, while the SCHO algorithm performs competitively close to AEHO; a clear gap has opened between EHO and AEHO thanks to the presented modifications, which balance the exploration and exploitation stages and improve the quality of the solutions in AEHO. For CEC2022-f4, TTAO performed better than any other algorithm in the initial iterations, but RUN ultimately competed for the best fitness value. The performance of AEHO on this function is also good, and in some iterations it attains the best convergence speed, showing that it performs better than its rivals.
The AEHO algorithm outperformed all its competitors by a sizable margin on CEC2022-f5 and CEC2022-f6, owing to its superior search capabilities. The early iterations of CEC2022-f6 showed the best rate of convergence for AEHO. After 100 iterations, SCHO and AEHO performed similarly and better than the original EHO, demonstrating the effectiveness of both SCHO and the proposed AEHO method. AEHO also remains the best performer on the CEC2022-f9 and CEC2022-f10 functions compared with the other classical and modern algorithms. Across 30 distinct runs, the proposed AEHO method handled the complex CEC2022 benchmark functions with remarkable performance. These results demonstrate the notable boost in efficiency achieved by the hybridization approach, adaptive strategy, greedy selection strategy, and memory component in AEHO.
Boxplot analysis of AEHO on CEC2022
Figures 10 and 11 include the boxplot analysis findings for the CEC2022 test functions with 10 and 20 dimensions, respectively. The rectangular box bound for each approach reflects the lower and upper quartiles, and the whiskers on each boxplot display the lowest and highest values that the different methods achieved.
[See PDF for image]
Fig. 10
Boxplot charts of AEHO and other algorithms for 10-dimensional CEC2022 functions
[See PDF for image]
Fig. 11
Boxplot charts of AEHO and other algorithms for 20-dimensional CEC2022 functions
Figures 10 and 11 show that several competing algorithms produce boxplots comparable in compactness to those of the AEHO technique on functions such as CEC2022-f1, CEC2022-f2, CEC2022-f3, CEC2022-f5, CEC2022-f6, CEC2022-f9, and CEC2022-f10; in particular, AEHO and EEFO exhibit similar levels of stability. In other words, owing to the strategies incorporated into the EHO technique, AEHO outperforms the other algorithms and achieves the highest degree of performance on CEC2022-f1 and many other test functions. The memory component, adaptive strategy, and hybridization processes improve stability and performance by keeping the search concentrated around good solution values. By improving the quality of solutions and preserving the equilibrium between finding new solutions and exploiting existing ones, the memory component reduces large fluctuations in fitness values over iterations. The compact rectangular shapes of the AEHO, SCHO, and TS boxplots show exceptional effectiveness in reaching peak fitness values on some CEC2022 test functions.
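The five-number summary underlying each boxplot can be sketched as follows: the box spans the lower and upper quartiles around the median, and the whiskers here span the minimum and maximum run results (the values below are hypothetical).

```python
# Sketch of the five-number summary behind each boxplot in Figs. 10 and 11.
import statistics

def five_number_summary(values):
    q1, median, q3 = statistics.quantiles(values, n=4)  # quartile cut points
    return min(values), q1, median, q3, max(values)

runs = [2600.0, 2600.0, 2600.1, 2600.1, 2600.2, 2600.3]
lo, q1, med, q3, hi = five_number_summary(runs)
print(lo <= q1 <= med <= q3 <= hi)  # a compact box means q3 - q1 is small
```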
Computational time efficiency of AEHO on CEC2022 functions
In addition to the average and standard deviation values used in the above evaluations of the competing algorithms, computation time is another essential metric frequently used to demonstrate the efficacy of algorithms. In other words, computation time is a key factor in assessing whether the computational load of a new algorithm is reasonable and on par with that of competing algorithms. We used the MATLAB R2021a platform to implement the proposed AEHO method and the other rival methods. To ensure a fair comparison, all the algorithms were executed under identical conditions on a Windows 10 machine equipped with an Intel Core i7-5200 CPU running at 2.2 GHz and 8.0 GB of RAM. Table 12 shows the average computation times of the proposed AEHO method and all other competitors, over 30 different runs, while optimizing the CEC2022 benchmark test functions with 10 and 20 dimensions.
Table 12. Average computation time results (in seconds) for AEHO and other algorithms over 30 separate runs in solving CEC2022 functions of 10 and 20 variables with 1000 iterations and 100 search agents
Algorithm | Average time (CEC2022 with d = 10) | Average time (CEC2022 with d = 20) |
|---|---|---|
AEHO | 16.7336 | 26.9271 |
EHO | 13.9844 | 22.1355 |
TS | 100.9407 | 135.3282 |
SCHO | 5.5404 | 8.8891 |
EEFO | 8.4550 | 12.6688 |
TTAO | 30.6158 | 63.4769 |
HO | 9.9523 | 13.4735 |
OOA | 8.1940 | 17.3911 |
RUN | 29.3872 | 49.7861 |
ECO | 11.3363 | 18.5145 |
PSO | 4.5179 | 9.3040 |
PLO | 15.3476 | 21.5793 |
Table 12 compares the competing methods by the time each algorithm took to optimize the CEC2022 test functions with 10 and 20 variables, using 1000 iterations over 30 runs. According to this table, the execution time of the proposed AEHO algorithm is sometimes slightly longer than that of several other algorithms, such as PSO, OOA, and EEFO, but overall there is no significant increase in computation time. Notably, the proposed AEHO algorithm is faster than rivals such as TS, RUN, and TTAO. This shows that AEHO is a computationally effective optimization technique even on a machine with modest specifications like the one described above. Since it falls within the computation time range of the other algorithms, the computation time of the proposed AEHO algorithm is generally reasonable.
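The average computation times in Table 12 can be measured along the lines of the sketch below (with a stand-in workload, not AEHO itself): each algorithm is timed over repeated independent runs using a monotonic clock.

```python
# A minimal sketch of per-algorithm runtime measurement over repeated runs.
import time

def average_runtime(run_once, n_runs=5):
    total = 0.0
    for _ in range(n_runs):
        start = time.perf_counter()   # monotonic, high-resolution clock
        run_once()
        total += time.perf_counter() - start
    return total / n_runs

def dummy_run():                      # placeholder for one full optimizer run
    sum(i * i for i in range(50_000))

mean_seconds = average_runtime(dummy_run)
print(mean_seconds > 0.0)  # -> True
```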
Sensitive parameter analysis
The success of a meta-heuristic algorithm and its ability to maintain a proper balance between exploration and exploitation depend on the values of its key control parameters. All five of AEHO’s control parameters are examined in this section: the two coefficients of the first adaptive parameter, the two coefficients of the second adaptive parameter, and the coefficient of the third parameter. Using the Design of Experiments (DoE) technique, the optimal parameter set was identified by empirically testing AEHO on the CEC2022 benchmark functions utilized in this work. A suitable range was initially determined for each parameter after a few carefully conducted tests. The trends of the control parameter values were then examined to ascertain whether the optimal values lay within the range or whether more testing was required. The parameter values were adjusted several times until a satisfactory setting was identified. This study helps identify which parameters are reliable, which are sensitive to various inputs, and which have a substantial impact on AEHO performance. For this purpose, a comprehensive design was carried out using the 10-dimensional CEC2022 functions, and the resulting control parameter values are reported in the subsections below. Each test function was solved using up to 1000 iterations, 100 search agents, and 30 distinct runs. The results of the sensitivity analysis for the five control parameters and their respective values are as follows:
Control parameters (first coefficient pair): The proposed AEHO algorithm was evaluated with 1000 iterations and 100 search agents for different values of these two coefficients, while keeping the other parameters constant, to observe their impact on the proposed AEHO algorithm. The ability of these factors to achieve a favorable balance between exploration and exploitation is also crucial. Table 13 displays the mean accuracy values, which show how sensitive AEHO is to these settings on the 10-dimensional CEC2022 test functions. As the margins between the mean values in Table 13 show, AEHO is rather sensitive to these factors. AEHO behaves stably when the first coefficient falls between 1.0 and 2.0, although it produced better results at the lower end of this range, and the second coefficient has a clear optimal value. According to this study, AEHO yields its best results when the first and second coefficients are set to 1 and 0.1, respectively.
Control parameters (second coefficient pair): To examine the sensitivity of AEHO to this pair, AEHO was simulated over a range of values of these parameters while keeping the others unchanged. Table 14 illustrates how they affect the average accuracy values of AEHO on the CEC2022 test functions with d = 10. It is evident from Table 14 that AEHO is very sensitive to these coefficients, as there are notable differences in the results based on their values. According to this table, AEHO performed best when they are set to 2.0 and 0.1, respectively.
Control parameter (fifth coefficient): The proposed AEHO algorithm was simulated over a range of values for this coefficient, with all other parameters held constant. The search agents and iterations were set at 100 and 1000, respectively. Table 15 shows the average accuracy values of the proposed AEHO algorithm when solving the CEC2022 functions with d = 10 using different values of this parameter. Table 15 shows that it has a substantial effect on AEHO, as evidenced by the diverse results obtained with different values. The best results from AEHO were achieved when it was set to 2.0; the ratio of exploration to exploitation appears to be effective at this value. In summary, Tables 13, 14, and 15 demonstrate that different solutions were found for the same CEC2022 benchmark test functions when different parameter values were used, confirming that AEHO is generally responsive to these factors. The best values for the five control parameters are therefore 1.0 and 0.1 for the first pair, 2.0 and 0.1 for the second pair, and 2.0 for the fifth coefficient. All the optimization problems addressed in this work were solved using these values.
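The one-pair-at-a-time sweep behind Tables 13, 14, and 15 can be sketched as follows: one pair of control parameters is varied over a grid while the rest stay fixed, and the setting with the lowest mean fitness is retained. The objective below is a hypothetical stand-in, not the CEC2022 suite or the real AEHO loop.

```python
# Illustrative DoE-style grid sweep over two control parameters.
import itertools

def sweep(mean_fitness, grid_a, grid_b):
    best = None
    for a, b in itertools.product(grid_a, grid_b):
        score = mean_fitness(a, b)          # mean fitness over the 30 runs
        if best is None or score < best[0]:
            best = (score, a, b)
    return best

# Toy objective whose optimum mirrors the reported best first-pair setting:
mean_fitness = lambda a, b: (a - 1.0) ** 2 + (b - 0.1) ** 2
score, a, b = sweep(mean_fitness, [0.5, 1.0, 1.5, 2.0], [0.1, 0.5, 1.0])
print(a, b)  # -> 1.0 0.1
```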
Statistical test of AEHO on CEC2022 benchmark
Friedman’s test was performed on the CEC2022 benchmark with dimensions of 10 and 20 to assess the performance of the AEHO optimizer against the various algorithms. A significance threshold of 0.05 was established. The ranking results obtained by applying Friedman’s test to the average accuracy outcomes in Tables 10 and 11 are documented in Table 16.
Table 13. Average results of AEHO over 30 different runs on the CEC2022 benchmark functions with 10 variables for several values of the first coefficient pair
F | EHO | |||||||||
|---|---|---|---|---|---|---|---|---|---|---|
CEC2022-f1 | Ave | 4.9040E+02 | 3.0001E+02 | 3.0001E+02 | 1.1066E+04 | 3.2025E+02 | 2.9824E+04 | 2.2939E+03 | 3.0010E+02 | 9.4198E+02 |
CEC2022-f2 | Ave | 4.1615E+02 | 4.0273E+02 | 4.0687E+02 | 6.5598E+02 | 4.0793E+02 | 4.8181E+02 | 4.8609E+02 | 4.0013E+02 | 4.5171E+02 |
CEC2022-f3 | Ave | 6.0048E+02 | 6.0023E+02 | 6.0447E+02 | 6.3541E+02 | 6.1543E+02 | 6.4112E+02 | 6.2211E+02 | 6.0017E+02 | 6.1571E+02 |
CEC2022-f4 | Ave | 8.2192E+02 | 8.1050E+02 | 8.2367E+02 | 8.6229E+02 | 8.2778E+02 | 8.4277E+02 | 8.4705E+02 | 8.1282E+02 | 8.3369E+02 |
CEC2022-f5 | Ave | 9.0524E+02 | 9.0016E+02 | 9.5130E+02 | 1.5969E+03 | 9.1245E+02 | 1.4692E+03 | 1.0511E+03 | 9.0013E+02 | 9.8328E+02 |
CEC2022-f6 | Ave | 4.7201E+03 | 1.8400E+03 | 3.1901E+03 | 1.2000E+07 | 4.5301E+04 | 5.6801E+03 | 5.0601E+06 | 1.8139E+03 | 1.2779E+06 |
CEC2022-f7 | Ave | 2.0260E+03 | 2.0110E+03 | 2.0354E+03 | 2.1024E+03 | 2.0549E+03 | 2.0790E+03 | 2.0655E+03 | 2.0255E+03 | 2.0534E+03 |
CEC2022-f8 | Ave | 2.2244E+03 | 2.2118E+03 | 2.2210E+03 | 2.2502E+03 | 2.2372E+03 | 2.2420E+03 | 2.2350E+03 | 2.2186E+03 | 2.2310E+03 |
CEC2022-f9 | Ave | 2.5441E+03 | 2.4861E+03 | 2.5294E+03 | 2.6828E+03 | 2.4993E+03 | 2.6166E+03 | 2.5825E+03 | 2.4767E+03 | 2.5532E+03 |
CEC2022-f10 | Ave | 2.5521E+03 | 2.5413E+03 | 2.5757E+03 | 2.5704E+03 | 2.6973E+03 | 2.7594E+03 | 2.5182E+03 | 2.5006E+03 | 2.5065E+03 |
CEC2022-f11 | Ave | 2.7232E+03 | 2.6479E+03 | 2.6747E+03 | 3.0475E+03 | 2.8012E+03 | 2.8909E+03 | 2.8023E+03 | 2.6369E+03 | 3.0393E+03 |
CEC2022-f12 | Ave | 2.8655E+03 | 2.8568E+03 | 2.8642E+03 | 2.9450E+03 | 2.8602E+03 | 2.9067E+03 | 2.8722E+03 | 2.8554E+03 | 2.9447E+03 |
Table 14. Average results of AEHO over 30 different runs on the CEC2022 benchmark functions with 10 variables for different values of the second coefficient pair
F | EHO | |||||||||
|---|---|---|---|---|---|---|---|---|---|---|
CEC2022-f1 | Ave | 3.1901E+02 | 1.0977E+03 | 1.1066E+04 | 3.0063E+02 | 3.0069E+02 | 6.8482E+03 | 2.9823E+04 | 2.2938E+03 | 9.4198E+02 |
CEC2022-f2 | Ave | 4.0751E+02 | 4.2018E+02 | 6.5596E+02 | 4.0356E+02 | 4.0101E+02 | 4.2492E+02 | 4.8180E+02 | 4.8608E+02 | 4.5171E+02 |
CEC2022-f3 | Ave | 6.0287E+02 | 6.0512E+02 | 6.3540E+02 | 6.0127E+02 | 6.0141E+02 | 6.0408E+02 | 6.4111E+02 | 6.2210E+02 | 6.1571E+02 |
CEC2022-f4 | Ave | 8.1841E+02 | 8.1483E+02 | 8.6227E+02 | 8.1218E+02 | 8.1468E+02 | 8.3553E+02 | 8.4275E+02 | 8.4703E+02 | 8.3369E+02 |
CEC2022-f5 | Ave | 9.2897E+02 | 9.2923E+02 | 1.5969E+03 | 9.0203E+02 | 9.0219E+02 | 1.0696E+03 | 1.4692E+03 | 1.0511E+03 | 9.8328E+02 |
CEC2022-f6 | Ave | 3.8700E+03 | 1.8400E+03 | 1.2000E+07 | 1.8439E+03 | 1.8181E+03 | 4.9300E+03 | 5.6800E+03 | 5.0600E+06 | 1.2779E+06 |
CEC2022-f7 | Ave | 2.0263E+03 | 2.0234E+03 | 2.1024E+03 | 2.0152E+03 | 2.0302E+03 | 2.0319E+03 | 2.0790E+03 | 2.0655E+03 | 2.0534E+03 |
CEC2022-f8 | Ave | 2.2234E+03 | 2.2279E+03 | 2.2501E+03 | 2.2163E+03 | 2.2236E+03 | 2.2252E+03 | 2.2419E+03 | 2.2349E+03 | 2.2310E+03 |
CEC2022-f9 | Ave | 2.5081E+03 | 2.5412E+03 | 2.6827E+03 | 2.4912E+03 | 2.4823E+03 | 2.5365E+03 | 2.6165E+03 | 2.5824E+03 | 2.5532E+03 |
CEC2022-f10 | Ave | 2.5707E+03 | 2.5421E+03 | 2.5703E+03 | 2.5465E+03 | 2.5063E+03 | 2.5439E+03 | 2.7593E+03 | 2.5181E+03 | 2.5065E+03 |
CEC2022-f11 | Ave | 2.6850E+03 | 2.7021E+03 | 3.0474E+03 | 2.6534E+03 | 2.6429E+03 | 2.7985E+03 | 2.8908E+03 | 2.8022E+03 | 3.0393E+03 |
CEC2022-f12 | Ave | 2.8833E+03 | 2.8759E+03 | 2.9449E+03 | 2.8627E+03 | 2.8619E+03 | 2.8645E+03 | 2.9066E+03 | 2.8721E+03 | 2.9447E+03 |
The p-values determine whether the differences between each algorithm’s output and AEHO are statistically significant. The average accuracy findings of the Friedman test are in Table 16. The p-values found for the CEC2022 test functions with 10 and 20 dimensions are 6.38E-11 and 6.16E-11, respectively. According to Table 16, AEHO outperforms all other algorithms on most functions, with a p-value less than 0.05. This table demonstrates that AEHO consistently yields the lowest average values for many functions. The findings of AEHO are statistically distinct from those of its competitors, as shown in Table 16; as a result, AEHO performs better than its rivals, and its first-place rank among all algorithms is statistically noteworthy. Applying Friedman’s test to the outcomes of the competing algorithms on the CEC2022 functions with 10 dimensions gives the following ranking: AEHO, PLO, SCHO, TS, TTAO, EEFO, RUN, EHO, OOA, PSO, ECO, and HO. With 20 dimensions, the ranking is: AEHO, EEFO, PLO, TS, TTAO, RUN, EHO, OOA, SCHO, PSO, HO, and ECO. Holm’s statistical approach was then applied to the CEC2022 functions with 10 dimensions, and the statistical findings are displayed in Table 17; the outcomes of Holm’s test on CEC2022 with dimension 20 are also recorded in Table 17.
Table 15. Average results of AEHO over 30 different runs on the CEC2022 benchmark functions with 10 variables for different values of the fifth coefficient
F | EHO | |||||
|---|---|---|---|---|---|---|
CEC2022-f1 | Ave | 4.9039E+02 | 2.2938E+03 | 1.0977E+03 | 3.0000E+02 | 9.4198E+02 |
CEC2022-f2 | Ave | 4.1614E+02 | 4.8608E+02 | 4.2018E+02 | 4.0009E+02 | 4.5171E+02 |
CEC2022-f3 | Ave | 6.0047E+02 | 6.2210E+02 | 6.0512E+02 | 6.0002E+02 | 6.1571E+02 |
CEC2022-f4 | Ave | 8.2190E+02 | 8.4703E+02 | 8.1483E+02 | 8.1280E+02 | 8.3369E+02 |
CEC2022-f5 | Ave | 9.0522E+02 | 1.0511E+03 | 9.2923E+02 | 9.0011E+02 | 9.8328E+02 |
CEC2022-f6 | Ave | 4.7200E+03 | 5.0600E+06 | 1.8400E+03 | 1.8139E+03 | 1.2779E+06 |
CEC2022-f7 | Ave | 2.0260E+03 | 2.0655E+03 | 2.0234E+03 | 2.0255E+03 | 2.0534E+03 |
CEC2022-f8 | Ave | 2.2243E+03 | 2.2349E+03 | 2.2279E+03 | 2.2185E+03 | 2.2310E+03 |
CEC2022-f9 | Ave | 2.5440E+03 | 2.5824E+03 | 2.5412E+03 | 2.4766E+03 | 2.5532E+03 |
CEC2022-f10 | Ave | 2.5520E+03 | 2.5181E+03 | 2.5421E+03 | 2.5005E+03 | 2.5065E+03 |
CEC2022-f11 | Ave | 2.7231E+03 | 2.8022E+03 | 2.7021E+03 | 2.6368E+03 | 3.0393E+03 |
CEC2022-f12 | Ave | 2.8654E+03 | 2.8721E+03 | 2.8759E+03 | 2.8553E+03 | 2.9447E+03 |
Again, Table 17 clarifies that the AEHO findings differ significantly from those of the other methods; a p-value of less than 0.05 indicates that this difference is highly significant on most test functions. According to Table 17, Holm’s test procedure rejected the hypotheses whose p-values fall below the corresponding Holm-adjusted thresholds for both 10 and 20 dimensions. Tables 16 and 17 demonstrate that the AEHO algorithm performed well while optimizing the CEC2022 test set. Based on the statistical results in Tables 16 and 17, AEHO has very reasonable exploration and exploitation skills; the intricacy of the CEC2022 benchmark functions serves as evidence of this. Tables 16 and 17 also indicate that AEHO continues to outperform its competitors as the problem dimensionality grows.
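The Friedman-plus-Holm procedure used above can be sketched in a few lines. This is a minimal illustration, not the authors’ code: it computes Friedman mean ranks over a problems-by-algorithms error matrix, derives z-statistics for each algorithm against the best-ranked control, and applies Holm’s step-down thresholds α/m, α/(m−1), …, α/1 (matching the α/i column of Table 17). The function names and the synthetic data in the usage are assumptions for illustration.

```python
import numpy as np
from scipy.stats import friedmanchisquare, norm, rankdata

def friedman_pvalue(results):
    """Overall Friedman test across all algorithms (columns of `results`)."""
    stat, p = friedmanchisquare(*results.T)
    return p

def holm_vs_control(results, alpha=0.05):
    """Holm step-down post-hoc test with the best-ranked algorithm as control.

    results: (n_problems, k_algorithms) array of average errors (lower is better).
    Returns the Friedman mean ranks and a list of
    [algorithm_index, z, p, holm_threshold, rejected] rows, most significant first.
    """
    n, k = results.shape
    ranks = np.apply_along_axis(rankdata, 1, results)  # rank algorithms per problem
    mean_ranks = ranks.mean(axis=0)                    # Friedman average ranks
    control = int(np.argmin(mean_ranks))               # control = rank-1 algorithm
    se = np.sqrt(k * (k + 1) / (6.0 * n))              # std. error of a rank difference
    rows = []
    for j in range(k):
        if j == control:
            continue
        z = abs(mean_ranks[j] - mean_ranks[control]) / se
        p = 2.0 * (1.0 - norm.cdf(z))                  # two-sided normal p-value
        rows.append([j, z, p])
    rows.sort(key=lambda r: r[2])                      # most significant first
    m = len(rows)
    for i, row in enumerate(rows):
        threshold = alpha / (m - i)                    # Holm thresholds: alpha/m, alpha/(m-1), ...
        row += [threshold, row[2] < threshold]
    return mean_ranks, rows
```

As a sanity check, the z-statistic reproduces the table: for HO with mean ranks 10.91 versus 1.75 over 12 problems and 12 algorithms, z = (10.91 − 1.75)/√(12·13/(6·12)) ≈ 6.22, in line with the first row of Table 17.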
Applications of AEHO on engineering cases
This section provides evidence of the developed AEHO method’s efficacy and reliability in addressing real-world applications using five engineering design scenarios. Because these engineering cases entail several constraints, a constraint handling method must be used to optimize them.
Death penalty method
The generic formula for penalty functions, which are adopted by meta-heuristic algorithms based on mathematical programming approaches, is found in Eq. 17. It transforms the constrained numerical optimization problem into an unconstrained one (Bäck 1997).
17
where is the expanded objective function to be optimized, and is the penalty significance, which may be calculated as shown in Eq. 18.
18
where and are positive constants, which are referred to as “penalty factors”.
Table 16. The ranking outcomes acquired based on Friedman’s test on the CEC2022 benchmark functions with 10 and 20 dimensions
Algorithm | Rank (10 dimensions) | Rank (20 dimensions) |
|---|---|---|
AEHO | 1.75 | 1.50 |
EHO | 7.63 | 6.16 |
TS | 3.71 | 4.33 |
SCHO | 3.71 | 8.58 |
EEFO | 5.92 | 4.00 |
TTAO | 5.83 | 4.83 |
HO | 10.91 | 10.00 |
OOA | 7.96 | 8.58 |
RUN | 6.62 | 5.41 |
ECO | 10.83 | 10.75 |
PSO | 9.41 | 9.75 |
PLO | 3.71 | 4.08 |
Table 17. Holm’s test results of CEC2022 functions based on Friedman’s test findings
Average accuracy on CEC2022 functions with 10 dimensions (AEHO is the control algorithm) | |||||
|---|---|---|---|---|---|
i | Method | z | p-value | α/i | Hypothesis |
11 | HO | 6.22752368 | 4.7386E-10 | 0.00454545 | Rejected |
10 | ECO | 6.17090983 | 6.7898E-10 | 0.00500000 | Rejected |
9 | PSO | 5.20847435 | 1.9039E-07 | 0.00555555 | Rejected |
8 | OOA | 4.21773195 | 2.4677E-05 | 0.00625000 | Rejected |
7 | EHO | 3.99127654 | 6.5718E-05 | 0.00714285 | Rejected |
6 | RUN | 3.31191032 | 9.2661E-04 | 0.00833333 | Rejected |
5 | EEFO | 2.83069258 | 0.004644733 | 0.01000000 | Rejected |
4 | TTAO | 2.77407873 | 0.00553582 | 0.01250000 | Rejected |
3 | TS | 1.33042551 | 0.18337811 | 0.01666666 | Not rejected |
2 | SCHO | 1.33042551 | 0.18337811 | 0.02500000 | Not rejected |
1 | PLO | 1.33042551 | 0.18337811 | 0.05000000 | Not rejected |
Average accuracy on CEC2022 functions with 20 dimensions (AEHO is the control algorithm) | |||||
|---|---|---|---|---|---|
i | Method | z | p-value | α/i | Hypothesis |
11 | ECO | 6.28413753 | 3.2967E-10 | 0.00454545 | Rejected |
10 | HO | 5.77461287 | 7.7130E-09 | 0.00500000 | Rejected |
9 | PSO | 5.60477131 | 2.0853E-08 | 0.00555555 | Rejected |
8 | SCHO | 4.81217739 | 1.4929E-06 | 0.00625000 | Rejected |
7 | OOA | 4.81217739 | 1.4929E-06 | 0.00714285 | Rejected |
6 | EHO | 3.17037569 | 0.00152241 | 0.00833333 | Rejected |
5 | RUN | 2.66085103 | 0.00779434 | 0.01000000 | Rejected |
4 | TTAO | 2.26455406 | 0.02354005 | 0.01250000 | Rejected |
3 | TS | 1.92487095 | 0.05424550 | 0.01666666 | Not rejected |
2 | PLO | 1.75502940 | 0.07925427 | 0.02500000 | Not rejected |
1 | EEFO | 1.69841555 | 0.08942935 | 0.05000000 | Not rejected |
The objective is to reduce the fitness of infeasible solutions in order to increase the likelihood of selecting feasible ones. A penalty value is added to the solution’s fitness in Eq. 17, since low values are inherently favored in a minimization problem. The severity of the penalties must be decided carefully by tuning the penalty factors; these values are quite situation-specific, although they are relatively simple to construct (Runarsson and Yao 2000). AEHO was modified to incorporate a death penalty handling mechanism to address the constraints of the engineering design problems mentioned above. This was done to fairly compare rival algorithms with the proposed AEHO algorithm. The penalty function of this constraint handling strategy assigns the lowest fitness value to infeasible solutions or removes them entirely from the optimization process (Back 1991). The death penalty technique assumes an infinite penalty factor and initializes all solutions in the feasible region of the search space. Accordingly, feasible solutions consistently perform better than infeasible ones, and there is no need to adjust the parameters of the penalty function when using the death penalty approach. Another advantage of this strategy is that it does not require adding a penalty for constraint violation to the objective function; therefore, the objective function value and feasibility of each candidate design may be evaluated separately during the search process. Notably, the AEHO algorithm used the same parameter values as indicated in Table 5 to solve each of the following engineering problems. The following subsections outline the engineering design problems addressed using the proposed AEHO algorithm.
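The death-penalty rule just described can be sketched as a generic wrapper (an illustration under the standard g(x) ≤ 0 convention, not the paper’s implementation): any candidate violating a constraint receives an infinite fitness, so feasible solutions always win in a minimization setting.

```python
import math

def death_penalty(objective, constraints):
    """Wrap an objective with the death-penalty rule.

    objective:   callable x -> float, to be minimized.
    constraints: list of callables g with the convention g(x) <= 0 when feasible.
    The returned fitness is +inf for any infeasible x, so infeasible candidates
    can never be selected, and no penalty factors need tuning.
    """
    def fitness(x):
        if any(g(x) > 0.0 for g in constraints):
            return math.inf  # "kill" infeasible candidates outright
        return objective(x)
    return fitness
```

For example, minimizing a sphere function subject to x[0] ≥ 1 (i.e., g(x) = 1 − x[0] ≤ 0) simply returns the raw objective for feasible points and infinity otherwise.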
Tension/compression spring design problem
Another well-known engineering problem used to assess the feasibility of AEHO in optimizing traditional engineering applications is the design of a tension/compression spring with the structure shown in Fig. 12.
[See PDF for image]
Fig. 12
An illustration of the structure of a tension/compression spring
This optimization problem aims to minimize the weight of a tension/compression spring design. The constraints that apply to this engineering problem are minimum deflection, minimum surge frequency, and shear stress. The decision variables in this design case are the wire diameter (d), mean coil diameter (D), and number of active coils (N). These parameters were represented by a vector , where , and represent the variables d, D, and N, respectively. The mathematical formula for this design problem is as follows:
Optimize the function:
This problem is subject to the following constraints:
where , and .
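Since the formulas above did not survive extraction, the following sketch gives the standard formulation of this benchmark as widely published in the literature (with x = [d, D, N]); we assume this is the form intended here, and the function names are ours.

```python
def spring_weight(x):
    """Spring weight: f = (N + 2) * D * d^2, with x = [d, D, N]
    (wire diameter, mean coil diameter, number of active coils)."""
    d, D, N = x
    return (N + 2.0) * D * d ** 2

def spring_constraints(x):
    """Standard constraint set, each g_i(x) <= 0 when feasible:
    deflection, shear stress, surge frequency, and outer-diameter limit."""
    d, D, N = x
    return [
        1.0 - (D ** 3 * N) / (71785.0 * d ** 4),
        (4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4))
        + 1.0 / (5108.0 * d ** 2) - 1.0,
        1.0 - 140.45 * d / (D ** 2 * N),
        (d + D) / 1.5 - 1.0,
    ]
```

Evaluating at a commonly reported near-optimal design, d ≈ 0.051689, D ≈ 0.356718, N ≈ 11.289, yields a weight of about 0.012665 with all constraints satisfied (the first two are active, i.e., close to zero).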
Table 18 compares the costs and values of the design parameters for this design problem between the proposed AEHO algorithm and its rivals.
Table 18. Comparison of the optimization outcomes for the tension/compression spring issue from several meta-heuristic methods
Algorithm | Optimum variables | Optimum cost | ||
|---|---|---|---|---|
d | D | N | ||
AEHO | 0.052592133 | 0.39952173 | 8.6622537 | 0.012665233 |
EHO | 0.05168915 | 0.35671989 | 11.28884 | 0.012760828 |
TS | 0.057747977 | 0.51967107 | 6.103274 | 0.012698582 |
SCHO | 0.052236839 | 0.37074039 | 10.660672 | 0.012694899 |
EEFO | 0.051689814 | 0.35673585 | 11.287906 | 0.012666158 |
TTAO | 0.051690247 | 0.35674628 | 11.287293 | 0.014633389 |
HO | 0.052686427 | 0.38146505 | 10.0374 | 0.012748931 |
OOA | 0.066577615 | 0.64750602 | 8.3184044 | 0.012669478 |
RUN | 0.052592133 | 0.39952173 | 8.6622537 | 0.012667447 |
ECO | 0.053736725 | 0.40821675 | 8.8507752 | 0.012665322 |
PSO | 0.05172222 | 0.35751664 | 11.242574 | 0.012693359 |
PLO | 0.053736725 | 0.40821675 | 8.8507752 | 0.012790142 |
The statistical results from AEHO and other competitors for this design problem are presented in Table 19. The results in Table 18 demonstrate that the proposed AEHO algorithm can address this problem successfully with the best design at a cost of 0.012665233, which is marginally lower than the costs obtained by the other algorithms.
Table 19. Statistical findings for the tension/compression spring problem from a variety of meta-heuristic methods
Algorithm | Best | Ave | Worst | Std |
|---|---|---|---|---|
AEHO | 0.012665233 | 0.012665233 | 0.012665233 | 6.9475865E-12 |
EHO | 0.012665234 | 0.012760828 | 0.013379399 | 0.00019648512 |
TS | 0.012698582 | 0.012793477 | 0.013008418 | 6.5936E-02 |
SCHO | 0.012627237 | 0.012656814 | 0.013151583 | 0.00011657134 |
EEFO | 0.012665234 | 0.012666158 | 0.01266862 | 9.5575006E-07 |
TTAO | 0.012749135 | 0.014633389 | 0.017831085 | 0.0019205367 |
HO | 0.012666304 | 0.012748931 | 0.012849052 | 4.5255871E-05 |
OOA | 0.012382418 | 0.012386568 | 0.012423228 | 9.4009091E-06 |
RUN | 0.012665233 | 0.012667447 | 0.012706637 | 9.2298569E-06 |
ECO | 0.012665233 | 0.012665322 | 0.012665642 | 1.3635423E-07 |
PSO | 0.012665237 | 0.012693359 | 0.012852313 | 4.2213299E-05 |
PLO | 0.012790142 | 0.012864680 | 0.012880150 | 5.4555E-01 |
As the findings in Table 19 show, AEHO continues to provide superior statistical outcomes in terms of best, average, worst, and standard deviation when compared to rival optimization methods.
Speed reducer design problem
Another real-world example commonly used as a benchmark case for assessing optimization techniques is the speed reducer design provided in Gandomi and Yang (2011), with the schematic diagram depicted in Fig. 13. This design case is difficult to assess because it is tied to seven independent variables.
[See PDF for image]
Fig. 13
A speed reducer design schematic diagram
The weight reduction in this design case is subject to the following constraints (Mezura-Montes and Coello 2005): the bending stress of the gear teeth, the surface stress, the transverse deflections of the shafts, and the stresses in the shafts. To address this design case, the decision variables b, m, z, , and , as well as and , are used. These parameters denote, in order: the face width, the tooth module, the number of teeth in the pinion, the lengths of the first and second shafts between bearings, and the diameters of the first and second shafts. When this optimization problem was solved, they were represented as a vector . The mathematical formula for this problem may be described as follows:
Optimize this function:
This case is subject to the constraints given below:
where the ranges of the decision variables are given as , , , , , and , respectively.
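The widely published objective function for this benchmark, assuming the usual x = [b, m, z, l1, l2, d1, d2] encoding (variable names follow the common convention in the literature, not notation confirmed by this paper), can be sketched as:

```python
def speed_reducer_cost(x):
    """Weight of the speed reducer (standard benchmark formulation).

    x = [b, m, z, l1, l2, d1, d2]: face width, tooth module, number of
    pinion teeth, shaft lengths between bearings, and shaft diameters.
    """
    b, m, z, l1, l2, d1, d2 = x
    return (0.7854 * b * m ** 2 * (3.3333 * z ** 2 + 14.9334 * z - 43.0934)
            - 1.508 * b * (d1 ** 2 + d2 ** 2)       # shaft bore deduction
            + 7.4777 * (d1 ** 3 + d2 ** 3)          # shaft weight
            + 0.7854 * (l1 * d1 ** 2 + l2 * d2 ** 2))
```

At the commonly reported best-known design (3.5, 0.7, 17, 7.3, 7.715317, 3.350215, 5.286654), this evaluates to about 2994.47, matching the optimum scale reported in Table 20.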
Table 20 lists the best solutions produced by AEHO for the speed reducer design problem in comparison to those of other different algorithms.
Table 20. Comparison of optimization outcomes for the speed reducer problem from several meta-heuristic techniques
Algorithm | Optimum variables | Optimum cost | ||||||
|---|---|---|---|---|---|---|---|---|
b | m | z | l1 | l2 | d1 | d2
AEHO | 3.471804729 | 0.685570278 | 18.66706956 | 7.775502626 | 7.920449339 | 3.343630765 | 5.189697903 | 2994.471176 |
EHO | 3.500000003 | 0.7 | 17.00000001 | 7.300010677 | 7.715320139 | 3.350214686 | 5.286654469 | 3022.220451 |
TS | 3.579153573 | 0.741365423 | 25.82103159 | 8.11380002 | 8.17449833 | 3.760867125 | 5.435319243 | 3030.6814 |
SCHO | 3.500218522 | 0.7000292933 | 17.00001892 | 7.300895524 | 7.715925637 | 3.350292355 | 5.286659732 | 2994.73703 |
EEFO | 3.500708212 | 0.7000005047 | 17.00003609 | 7.346920098 | 7.740170734 | 3.350360446 | 5.286665413 | 2994.471589 |
TTAO | 3.504955315 | 0.7008027516 | 17.04070582 | 7.650798137 | 7.811110198 | 3.379430846 | 5.28669973 | 2997.118018 |
HO | 3.4062877 | 3.075454135 | 3.335409529 | 3.257757844 | 3.268942314 | 3.251204902 | 3.411923451 | 3450.271926 |
OOA | 3.399416708 | 0.7 | 17.6103543 | 7.607188736 | 8.008253075 | 3.620613482 | 5.366198207 | 2998.992762 |
RUN | 3.471804729 | 0.685570278 | 18.66706956 | 7.775502626 | 7.920449339 | 3.343630765 | 5.189697903 | 2997.363145 |
ECO | 3.547363573 | 0.7004907305 | 19.07333152 | 7.944725275 | 8.092826544 | 3.416400087 | 5.302644225 | 2995.028909 |
PSO | 3.500418267 | 0.7000097439 | 17.00057115 | 7.306335684 | 7.718451311 | 3.350548046 | 5.28671934 | 2996.709012 |
PLO | 3.547363573 | 0.7004907305 | 19.07333152 | 7.944725275 | 8.092826544 | 3.416400087 | 5.302644225 | 3004.6365 |
With a minimal cost of around 2994.471, the AEHO algorithm performed better than the other algorithms, as seen in Table 20. This shows that it can determine the optimal design for this problem. AEHO’s statistical results and those of other competing approaches for this problem are summarized in Table 21.
Table 21. Statistical results of many meta-heuristic methods for the speed reducer problem
Algorithm | Best | Ave | Worst | Std |
|---|---|---|---|---|
AEHO | 2994.471066 | 2994.471176 | 2994.473182 | 0.0004723311915 |
EHO | 3000.902003 | 3022.220451 | 3124.247542 | 30.20978988 |
TS | 3030.6814 | 3528.0799 | 4032.0466 | 391.8419 |
SCHO | 2985.488876 | 2985.752819 | 2989.261856 | 0.8381944025 |
EEFO | 2994.471066 | 2994.471589 | 2994.476318 | 0.001172531874 |
TTAO | 2995.336553 | 2997.118018 | 3003.282772 | 2.048603874 |
HO | 3035.307728 | 3450.271926 | 3781.23461 | 299.3035354 |
OOA | 2927.615629 | 2932.025254 | 2949.405905 | 6.132716096 |
RUN | 2994.474226 | 2997.363145 | 3007.46243 | 4.530242012 |
ECO | 2994.478344 | 2995.028909 | 2997.910253 | 0.7876966297 |
PSO | 2995.901231 | 2996.709012 | 3001.4501978 | 38.3375 |
PLO | 3004.6365 | 3013.0920 | 4046.2375 | 389.1820 |
Based on the results in Table 21, the AEHO algorithm obtains the best solutions among the compared methods, demonstrating that it outperforms the competing algorithms in terms of these statistical results.
Three-bar truss design problem
The objective of solving this traditional engineering problem is to construct the truss with three bars to lower its weight. The search space for this problem is heavily constrained by the two design variables (Sadollah et al. 2013). The schematic and structural specifications of this design are shown in Fig. 14.
[See PDF for image]
Fig. 14
An illustration of a three-bar truss design case (Sadollah et al. 2013)
The objective of this engineering design problem is to minimize the weight of the three-bar truss structure, with the objective function specified as follows:
. The following stress limits apply to this design problem:
where and , and are constants.
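For reference, the formulation of this benchmark as commonly given in the cited line of work (Sadollah et al. 2013) can be sketched as below; the constants l = 100 cm and P = σ = 2 kN/cm² are the usual literature values and are assumptions here, as are the function names.

```python
import math

L_BAR = 100.0   # bar length l (cm) -- assumed standard value
P = 2.0         # applied load (kN/cm^2) -- assumed standard value
SIGMA = 2.0     # allowable stress (kN/cm^2) -- assumed standard value

def truss_weight(x):
    """Truss weight: f = (2*sqrt(2)*A1 + A2) * l, with x = [A1, A2]
    the cross-sectional areas of the outer and middle bars."""
    A1, A2 = x
    return (2.0 * math.sqrt(2.0) * A1 + A2) * L_BAR

def truss_constraints(x):
    """Standard stress constraints, each g_i(x) <= 0 when feasible."""
    A1, A2 = x
    s2 = math.sqrt(2.0)
    denom = s2 * A1 ** 2 + 2.0 * A1 * A2
    return [
        (s2 * A1 + A2) / denom * P - SIGMA,   # stress in bar 1
        A2 / denom * P - SIGMA,               # stress in bar 2
        1.0 / (A1 + s2 * A2) * P - SIGMA,     # stress in bar 3
    ]
```

At the commonly reported near-optimal areas A1 ≈ 0.78867 and A2 ≈ 0.40825, the weight is about 263.895 and the first stress constraint is active (close to zero).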
The results of applying the proposed AEHO algorithm to optimize a design for the three-bar truss problem are compared to the meta-heuristics discussed before. A quick comparison of the designs and cost results generated by the AEHO method with those of the other optimization methods covered above is shown in Table 22 for the three-bar truss design problem.
Table 22. Comparative results for the three-bar truss design problem
Algorithm | Optimal values for variables | Optimum weight | |
|---|---|---|---|
AEHO | 80 | 1.7647058 | 1.339956361 |
EHO | 80 | 1.7647058 | 1.339964313 |
TS | 80 | 1.7647058 | 1.42783727 |
SCHO | 78.98 | 1.772797 | 1.340214803 |
EEFO | 80 | 1.7647058 | 1.339956836 |
TTAO | 80 | 1.7647058 | 1.343169106 |
HO | 79.99 | 1.7157969 | 1.339960621 |
OOA | 80 | 1.7647058 | 1.340021490 |
RUN | 80 | 1.1592419 | 1.339956361 |
ECO | 80 | 1.7644777 | 1.339957487 |
PSO | 80 | 1.7647056 | 1.339961575 |
PLO | 80 | 1.7647057 | 1.349329728 |
The results in Table 22 show that the proposed AEHO algorithm is effective in optimizing the three-bar truss design problem when compared to current meta-heuristic methods reported in the literature. Based on these design and cost results, AEHO can identify an optimal cost for the three-bar truss problem that is comparable with the cost solutions obtained by other algorithms reported in the literature. The statistical results obtained for this design problem are shown in Table 23 in terms of the best, worst, mean, and Std scores of AEHO and the other algorithms.
Table 23. Statistical findings for the three-bar truss design problem using several meta-heuristic methods
Algorithm | Best | Ave | Worst | Std |
|---|---|---|---|---|
AEHO | 1.339956 | 1.339956 | 1.339956 | 0.00000E+00 |
EHO | 1.339956 | 1.339964 | 1.339987 | 9.6871E-06 |
TS | 1.380203298 | 1.42783727 | 1.506165699 | 0.037338 |
SCHO | 1.340214803 | 1.341315930 | 1.341725197 | 2.6527E-02 |
EEFO | 1.339956367 | 1.339956836 | 1.33995813 | 5.41018E-07 |
TTAO | 1.340838198 | 1.343169106 | 1.35182462 | 0.002580 |
HO | 1.339956957 | 1.339960621 | 1.339970831 | 4.3543E-06 |
OOA | 1.340021490 | 1.341420021 | 1.341748331 | 8.1344E-01 |
RUN | 1.339956361 | 1.339956361 | 1.339956361 | 5.3415E-11 |
ECO | 1.339956361 | 1.339957487 | 1.339966742 | 2.6493E-06 |
PSO | 1.339956466 | 1.339961575 | 1.339989845 | 8.1238E-06 |
PLO | 1.343348147 | 1.349329728 | 1.364835768 | 0.005531 |
The proposed AEHO algorithm’s performance in relation to other competing algorithms is shown and analyzed in Table 23, which provides the best, average, worst, and standard deviation optimal cost results.
I-beam design problem
To demonstrate the proposed AEHO algorithm’s applicability in addressing real-world problems in comparison to other reputable meta-heuristics, it was used to solve the real I-beam engineering design problem, including four structural design variables (Mirjalili et al. 2017). The main objective of this task is to decrease the vertical deflection of the I-beam design shown in Fig. 15.
[See PDF for image]
Fig. 15
An illustration of an I-beam design (Mirjalili et al. 2017)
This design problem simultaneously fulfills the cross-sectional area and stress constraints under given loads. b, h, , and are the four structural variables pertinent to this problem. The beam’s length (L) is 5.200 cm, and its modulus of elasticity (E) is 523.104 kN/cm². The objective function stated for lowering the vertical deflection was found using the following formula:
The layout of this design case is defined as having a cross-sectional area of less than 300 cm².
where , and are the design space definitions for these decision factors.
In the event that the beam’s allowable bending stress is 56 kN/cm², the following stress constraint is applicable:
19
where the components and can be defined according to Eqs. 20 and 21, respectively.20
21
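Since the deflection formula above did not survive extraction, the following sketch uses the classical mid-span deflection of a simply supported beam under a central point load, δ = PL³/(48EI), with the second moment of area of an I-section. The load P and the specific L and E values here are assumptions for illustration, not values confirmed by the paper, and the function names are ours.

```python
def i_beam_inertia(b, h, tw, tf):
    """Second moment of area of an I-section about its strong axis:
    I = (b*h^3 - (b - tw)*(h - 2*tf)^3) / 12,
    with flange width b, section height h, web thickness tw, flange thickness tf."""
    return (b * h ** 3 - (b - tw) * (h - 2.0 * tf) ** 3) / 12.0

def vertical_deflection(b, h, tw, tf, P=600.0, L=520.0, E=523.104):
    """Mid-span deflection delta = P*L^3 / (48*E*I) of a simply supported
    beam; P (kN), L (cm), and E (kN/cm^2) are illustrative assumptions."""
    return P * L ** 3 / (48.0 * E * i_beam_inertia(b, h, tw, tf))

def cross_section_area(b, h, tw, tf):
    """Cross-sectional area, which the design requires to stay below 300 cm^2."""
    return 2.0 * b * tf + tw * (h - 2.0 * tf)
```

As expected physically, increasing the section height h raises I and therefore lowers the deflection, which is why the optimizer pushes the section toward the area limit.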
Table 24 compares the cost solutions and decision design parameters generated by the proposed AEHO algorithm and various meta-heuristic algorithms with respect to the nonlinear constrained design displayed in Fig. 15. These approaches followed the previously mentioned constraints and used the same set of design variables.
Table 24. Comparison of the optimization outcomes for the I-beam design problem using several meta-heuristic methods
Algorithm | Optimal values for variables | Optimum vertical deflection | |||
|---|---|---|---|---|---|
b | h | tw | tf
AEHO | 4.4897508 | 3.4979076 | 2.1504722 | 49.949061 | 0.006625958 |
EHO | 4.4942969 | 3.5056657 | 2.1656499 | 49.948359 | 0.006639862 |
TS | 4.4897789 | 3.4982093 | 2.1511266 | 49.949061 | 0.007701084 |
SCHO | 4.4897508 | 3.4979076 | 2.1504722 | 49.949061 | 0.006634875 |
EEFO | 4.4903881 | 3.496808 | 2.1502867 | 49.949061 | 0.006625958 |
TTAO | 4.489558 | 3.4981001 | 2.1504635 | 49.949061 | 0.006634875 |
HO | 4.4895014 | 3.4992412 | 2.1494974 | 16.735647 | 0.006742738 |
OOA | 4.4913491 | 3.4981665 | 2.1499018 | 49.296601 | 0.006625958 |
RUN | 4.4983675 | 3.4971092 | 2.1581777 | 11.719166 | 0.006625958 |
ECO | 4.4909127 | 3.4948447 | 2.1488352 | 49.949061 | 0.006625958 |
PSO | 4.6158984 | 3.5993563 | 2.208994 | 10.797049 | 0.006625958 |
PLO | 4.4867186 | 3.4981311 | 2.1503361 | 49.949061 | 0.006625972 |
It is evident from Table 24 that the proposed AEHO algorithm outperformed several similar meta-heuristics in optimizing this design problem, reaching an outstanding solution that may well be the global optimum. In terms of the minimum objective function value, it also fared better than several other competing algorithms. Table 25 shows the average statistical results for the best, worst, mean, and Std scores for the I-beam design problem, obtained over 30 different runs using the AEHO algorithm and other meta-heuristics.
Table 25. Statistical results of many meta-heuristic methods for the I-beam design problem
Algorithm | Best | Ave | Worst | Std |
|---|---|---|---|---|
AEHO | 0.006625958 | 0.006625958 | 0.006625958 | 1.5092E-12 |
EHO | 0.006625982 | 0.006639862 | 0.006848307 | 4.9223E-05 |
TS | 0.006850875 | 0.007701084 | 0.013403865 | 0.00145406 |
SCHO | 0.006634875 | 0.006634875 | 0.006634875 | 2.1929E-04 |
EEFO | 0.006625958 | 0.006625958 | 0.006625959 | 3.1713E-10 |
TTAO | 0.006626366 | 0.006628814 | 0.006651495 | 5.4230E-06 |
HO | 0.006625958 | 0.006742738 | 0.007123147 | 0.000177 |
OOA | 0.006634875 | 0.006634875 | 0.006634875 | 2.2010E-04 |
RUN | 0.006625958 | 0.006625958 | 0.006625959 | 2.6707E-10 |
ECO | 0.006625958 | 0.006625958 | 0.006625958 | 1.4382E-10 |
PSO | 0.006625958 | 0.006625958 | 0.006625958 | 1.1943E-10 |
PLO | 0.006625958 | 0.006625972 | 0.006626001 | 1.3501E-08 |
The proposed AEHO algorithm outperformed several competing techniques, as Table 25 demonstrates. The best, average, worst, and standard deviation results indicate a high success rate, showing that AEHO outperforms the other algorithms in solving the I-beam design problem.
Cantilever beam design problem
Although this problem is analogous to the previous one, it aims to reduce the weight of a cantilever beam composed of five parts, each of which has a hollow cross-section that thickens steadily (Mirjalili et al. 2017). The beam is firmly supported, as illustrated in Fig. 16, and an external vertical force acts on the free end of the cantilever.
[See PDF for image]
Fig. 16
An illustration of a cantilever beam design (Mirjalili et al. 2017)
The objective of this design problem is to minimize the weight of a cantilever beam while imposing a maximum limit on the vertical displacement of the free end. The design variables are the cross-sectional heights and widths of the five beam segments; the bound constraints on these variables do not become active because the upper and lower bounds are sufficiently wide. The objective cost function of this design problem can be stated as follows: , where the following optimization constraint applies to this design problem.
With , the variables were assumed to be in the interval for this design problem.
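The standard form of this benchmark in the cited line of work (Mirjalili et al. 2017) is commonly written as below; we assume that formulation here, and the function names are ours.

```python
def cantilever_weight(x):
    """Weight of the stepped cantilever beam: f = 0.0624 * (x1 + ... + x5),
    where x holds the section sizes of the five hollow segments."""
    return 0.0624 * sum(x)

def cantilever_constraint(x):
    """Vertical-displacement constraint in its usual normalized form,
    feasible when the returned value is <= 0:
    61/x1^3 + 37/x2^3 + 19/x3^3 + 7/x4^3 + 1/x5^3 <= 1."""
    x1, x2, x3, x4, x5 = x
    return (61.0 / x1 ** 3 + 37.0 / x2 ** 3 + 19.0 / x3 ** 3
            + 7.0 / x4 ** 3 + 1.0 / x5 ** 3) - 1.0
```

At the commonly reported near-optimal design (6.016, 5.309, 4.494, 3.502, 2.153), the weight is about 1.340 and the displacement constraint is active (close to zero).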
The optimization results of the proposed AEHO method and other similar competing meta-heuristics used to address this problem are shown in Table 26.
Table 26. Comparison of several meta-heuristic algorithms’ optimization results for the cantilever design problem
Algorithm | Optimal values for variables | Optimum weight | ||||
|---|---|---|---|---|---|---|
AEHO | 3.3291427 | 5.2533919 | 5.9781626 | 5.2757689 | 4.4660511 | 263.895843377 |
EHO | 3.4626823 | 5.2942146 | 5.9620209 | 5.2953199 | 4.4705734 | 263.895843377 |
TS | 3.3309689 | 5.2676107 | 5.977518 | 5.2755614 | 4.4660791 | 278.093519034 |
SCHO | 3.3291427 | 5.2533919 | 5.9781626 | 5.2757689 | 4.4660511 | 263.9782353 |
EEFO | 3.3295309 | 5.2534518 | 5.9787059 | 5.2759153 | 4.4666851 | 263.895843386 |
TTAO | 3.3295771 | 5.2533908 | 5.9776026 | 5.2763922 | 4.4658594 | 263.9782407 |
HO | 3.3183548 | 3.4125742 | 5.9772118 | 5.2767081 | 4.465803 | 263.90053555 |
OOA | 3.3937982 | 5.2822494 | 5.9770247 | 5.2756721 | 4.467641 | 263.895843377 |
RUN | 3.3244577 | 3.5773487 | 5.9562856 | 5.2986299 | 4.4746224 | 263.895843377 |
ECO | 3.3299261 | 5.2535205 | 5.9807915 | 5.2771485 | 4.4672069 | 263.895843377 |
PSO | 3.5504068 | 3.5773487 | 6.0546047 | 5.4792438 | 4.5915328 | 264.524883327 |
PLO | 3.3292011 | 5.2534295 | 5.9810867 | 5.2759611 | 4.4630349 | 263.983099362 |
Based on the cost weight values reported in Table 26, the proposed AEHO algorithm generated the optimal solution for the cantilever design problem, with an optimal cost of around 263.89584337. Compared to other competing algorithms, this result shows that AEHO is highly competitive, surpassing most of them. The statistical optimization results of the proposed AEHO method and the other optimization methods mentioned above are summarized in Table 27 in terms of the best, worst, mean, and Std scores over 30 different runs for the cantilever design problem.
Table 27. The cantilever design problem’s statistical results from several meta-heuristic algorithms
Algorithm | Best | Ave | Worst | Std |
|---|---|---|---|---|
AEHO | 263.895843377 | 263.895843377 | 263.895843377 | 1.1641E-10 |
EHO | 263.895843377 | 263.895843377 | 263.895843377 | 1.1943E-10 |
TS | 267.064696241 | 278.093519034 | 310.730115478 | 9.577753 |
SCHO | 263.9782353 | 263.9782371 | 263.9782353 | 2.2347E-04 |
EEFO | 263.895843377 | 263.895843386 | 263.895843462 | 1.9270E-08 |
TTAO | 263.8959 | 263.9100 | 263.9550 | 14.318236 |
HO | 263.895844725 | 263.90053555 | 263.919806741 | 0.007878 |
OOA | 263.9782407 | 263.9784742 | 263.9799490 | 3.4051E-03 |
RUN | 263.895843377 | 263.895843377 | 263.895843377 | 1.1787E-10 |
ECO | 263.895843377 | 263.895843377 | 263.895843377 | 1.1860E-10 |
PSO | 263.895843377 | 263.895843377 | 263.895843377 | 1.2056E-10 |
PLO | 263.905885334 | 263.983099362 | 264.203386358 | 0.085606 |
As the results in Table 27 show, the proposed AEHO method performed statistically better than many of the competing algorithms for this design problem, surpassing most of them and matching the best of the rest.
Winding process modeling
This section confirms the efficiency of the proposed AEHO optimizer when used to represent an actual industrial winding machine system. This industrial winding machine is commonly seen in real web conveyance systems (Bastogne et al. 1998; Rodan et al. 2017). Figure 17 shows the structural diagram of this process, which is highly nonlinear and presents challenges to research modeling as well as the control community (Nozari et al. 2012).
[See PDF for image]
Fig. 17
A schematic diagram showing the structure of the winding process (Nozari et al. 2012)
As presented in Fig. 17, the winding process under consideration consists of three reels, referred to as reel 1, reel 2, and reel 3. The three DC motors that control these reels are , , and , respectively. Reels 1 and 3 are coupled to DC motors that are operated by set-point currents at motor 1 and at motor 2. The strip tensions in the web between reels 1 and 2 () and between reels 2 and 3 () are measured by tension meters at these points. A dynamo tachometer is used to measure the angular velocities of reels 1, 2, and 3 (referred to as , , and , respectively). , , , , and are the chief input variables for this winding process, and and are its outputs. The dimensionality of this modeling problem is 5 for the linear estimates of and . For and , each model was developed with a training dataset containing 1250 data samples for every input parameter; the test dataset likewise contains 1250 samples for each input variable, as detailed in Nozari et al. (2012). Considering the process’s input and output parameters, the goal of this modeling case is to identify the salient characteristics of the suitable output responses. The main goal is to construct linear models of and using the linear estimates given below, where and denote the previous values of and at time instance , respectively, and , , , , , , , , , and represent the weights of the linear models generated for models and , respectively.
Equation 22 describes the mean absolute percentage error (MAPE) criterion, which serves as the fitness function for this modeling problem.
22
where y represents the real values of an actual experiment, n represents the number of experimental data values, and represents the estimated values created by the derived models.
Evaluation criteria
To assess the performance of models designed for the winding process, it is usually necessary to select appropriate performance indicators. The Pearson product-moment correlation coefficient (R) (Braik 2021), defined in Eq. 23, is one of the most widely used performance indicators for evaluating the effectiveness of models designed for industrial applications.
23
where represents the ith actual output from an actual experiment, represents the ith output value from the constructed model, and and are the mean values of the original data and the model data, respectively.
The percentile variance-accounted-for (VAF) (Babuska 1998), defined in Eq. 24, is another performance evaluation method frequently considered when evaluating the performance of developed models. Since this evaluation metric is simple to apply to actual manufacturing processes, it is often used in the literature to evaluate the quality and accuracy of predictive models.
24
where var(y) denotes the variance of the actual data and denotes the variance of the difference between the model data () and the real data (y).
Babuska (1998) proposed the VAF indicator to assess the performance of fuzzy models. This indicator provides a quantitative measure of the agreement between two signals: two identical signals have a VAF of 100%, while the VAF is lower for differing signals (Braik 2021; Braik et al. 2021).
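The three criteria above (MAPE as the fitness in Eq. 22, R in Eq. 23, and VAF in Eq. 24) can be sketched in a few lines; the exact MAPE normalization is assumed to be the standard 100/n · Σ|y − ŷ|/|y| form, and the function names are ours.

```python
import numpy as np

def mape(y, y_hat):
    """Mean absolute percentage error (Eq. 22): the fitness to minimize."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return 100.0 / len(y) * np.sum(np.abs((y - y_hat) / y))

def pearson_r(y, y_hat):
    """Pearson product-moment correlation coefficient (Eq. 23)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    yc, hc = y - y.mean(), y_hat - y_hat.mean()
    return np.sum(yc * hc) / np.sqrt(np.sum(yc ** 2) * np.sum(hc ** 2))

def vaf(y, y_hat):
    """Variance-accounted-for (Eq. 24): 100% for identical signals,
    lower as the model output diverges from the real data."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return (1.0 - np.var(y - y_hat) / np.var(y)) * 100.0
```

For identical signals these return MAPE = 0, R = 1, and VAF = 100%, consistent with the behavior described above.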
Simulation results
Figures 18 and 19 show how well AEHO keeps track of the output data for and , respectively, along with the actual responses for and for both training and test processes.
[See PDF for image]
Fig. 18
Responses for both the training and testing stages of the linear model developed by AEHO
[See PDF for image]
Fig. 19
Responses for both the training and testing stages of the linear model developed by AEHO
The simulation results in Figs. 18 and 19 demonstrate that the proposed AEHO algorithm is a viable and effective way to faithfully model the web tensions, and , of the winding process. In sum, Figs. 18 and 19 show that the predicted data obtained by AEHO closely match the real data.
Figures 20a and 20b display the correlation coefficient values for the AEHO- and AEHO- models in the training and test cases for the linear models and , respectively.
[See PDF for image]
Fig. 20
R computed using AEHO across training and testing data for the linear models ( and )
Consequently, the correlation coefficients derived from AEHO for the linear models show that consistent outcomes and excellent performance were consistently attained over the course of the sampling period.
Convergence results
The convergence curves shown in Fig. 21 represent the MAPE values of the responses estimated by the AEHO-based models and against the associated true responses for and , respectively. Models and were created with the maximum number of iterations and the number of search agents of the proposed AEHO optimizer set to 100 and 30, respectively.
[See PDF for image]
Fig. 21
Mean convergence curves for the linear models and on the basis of the proposed AEHO algorithm for 30 separate experiments
The convergence curves in Fig. 21 highlight how effective AEHO is at reaching the best fitness with the fewest errors. It is also found that the convergence curve approaches the global minimum error quite quickly.
Computational results
Tables 28 and 29 list the performance of the proposed AEHO algorithm in creating the and models in comparison to various rival meta-heuristics using the same modeling procedure. The optimal parameter values were obtained for each competing algorithm, and the results were averaged across 30 evaluation experiments. In Tables 28 and 29, the best results are highlighted in bold.
Table 28. Analysis of the performance of the proposed AEHO algorithm in creating linear models for across 30 separate runs compared to other competing algorithms
Algorithm | Weight parameter 1 | Weight parameter 2 | Weight parameter 3 | Weight parameter 4 | Weight parameter 5 | Optimal VAF (%) |
|---|---|---|---|---|---|---|
AEHO | 0.0045 | 0.0043 | 0.0577 | 0.0128 | 0.9877 | 99.5060 |
EHO | 0.0260 | 0.0133 | 0.0940 | 0.0122 | 2.0795 | 89.6824 |
TS | 0.0075 | 0.0191 | 0.0056 | 0.0920 | 0.0070 | 89.1081 |
SCHO | 0.0210 | 0.0164 | 0.0224 | 0.0905 | 0.1610 | 91.7964 |
EEFO | 0.0234 | 0.0564 | 0.0497 | 0.0098 | 0.0057 | 95.4383 |
TTAO | 0.0304 | 0.0670 | 0.0982 | 0.1582 | 0.0482 | 94.2190 |
HO | 0.0903 | 0.0513 | 0.0100 | 0.2183 | 0.0229 | 90.4617 |
OOA | 0.1435 | 0.0419 | 0.1275 | 0.0629 | 0.1872 | 88.8040 |
RUN | 0.1412 | 0.0411 | 0.0782 | 0.0969 | 0.1874 | 90.5585 |
ECO | 0.0596 | 0.0703 | 0.0610 | 0.1270 | 0.2592 | 87.6218 |
PSO | 0.0785 | 0.0444 | 0.0127 | 0.3314 | 0.0463 | 89.4589 |
PLO | 0.0690 | 0.0728 | 0.0356 | 0.0304 | 0.1328 | 90.4261 |
Table 29. Analysis of the performance of the proposed AEHO algorithm in creating the second linear model across 30 separate runs compared to other competing algorithms
Algorithm | Weight parameter 1 | Weight parameter 2 | Weight parameter 3 | Weight parameter 4 | Weight parameter 5 | Optimal VAF (%) |
|---|---|---|---|---|---|---|
AEHO | 0.0066 | 0.0136 | 0.0172 | 0.0309 | 2.1028 | 98.1401 |
EHO | 0.0037 | 0.0297 | 0.0237 | 0.0429 | 0.3423 | 96.6531 |
TS | 0.0078 | 0.0340 | 0.0186 | 0.0451 | 2.4156 | 92.3856 |
SCHO | 0.0060 | 0.0343 | 0.0255 | 0.0473 | 0.7624 | 91.6910 |
EEFO | 0.0045 | 0.0673 | 0.0165 | 0.0363 | 2.0984 | 88.4159 |
TTAO | 0.0063 | 0.0410 | 0.0648 | 0.0432 | 0.3624 | 87.7212 |
HO | 0.0076 | 0.0343 | 0.0240 | 0.0980 | 2.0819 | 87.4235 |
OOA | 0.0319 | 0.3189 | 0.2118 | 0.1204 | 0.1222 | 87.7212 |
RUN | 0.0165 | 0.0309 | 0.1081 | 0.2114 | 1.1103 | 85.4386 |
ECO | 0.0012 | 0.0110 | 0.1300 | 0.0343 | 0.2746 | 84.7439 |
PSO | 0.0186 | 0.0122 | 0.0241 | 0.0539 | 1.3024 | 83.4537 |
PLO | 0.0100 | 0.0997 | 0.1112 | 0.1091 | 0.2088 | 82.7590 |
The results in Tables 28 and 29 show that AEHO considerably outperforms the comparative algorithms, such as EHO, PSO, and PLO, in achieving optimal VAF rates for both models. Based on these results, it can be concluded that the AEHO algorithm is a promising candidate for the design of complex manufacturing systems.
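The VAF figure of merit in Tables 28 and 29 is conventionally computed as the percentage of output variance explained by the model; a minimal sketch under that assumption (function name ours):

```python
import numpy as np

def vaf(y_true, y_pred):
    """Variance Accounted For (%): 100 * (1 - var(error) / var(signal))."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    # The closer the residual variance is to zero relative to the
    # signal variance, the closer VAF gets to 100%.
    return float(100.0 * (1.0 - np.var(y_true - y_pred) / np.var(y_true)))

# A perfect model explains 100% of the output variance.
print(vaf([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # → 100.0
```

Under this reading, AEHO's VAF of 99.5060% for the first model means its residual variance is under 0.5% of the measured signal's variance, versus roughly 10% for the weakest competitors in Table 28.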
In summary, the proposed AEHO algorithm is a promising meta-heuristic optimization method, and its overall performance in solving various classical engineering design problems has proven its efficiency and reliability. The proposed AEHO algorithm has also demonstrated its dependability and efficacy in addressing an industrial process. This leads us to believe that AEHO is a credible optimization algorithm with an excellent chance of success in solving various modern real-world problems. Its advantages include high efficiency in terms of standard deviations and optimal cost results compared to many other popular algorithms, such as PSO and EHO.
Conclusion and future works
In this study, we have shown how to employ particle swarm optimization (PSO) to improve the elks’ memory and positions throughout the calving season of the traditional elk herd optimizer (EHO). This proposed variant of EHO is called the ameliorated EHO (AEHO). Further, the mathematical model describing the calving season in EHO was implemented using stochastic parameters and new efficient adaptive functions across the iterative loops of AEHO. Using the pbest notion of PSO, a memory component was introduced into EHO to exploit the most promising search areas. Besides, the gbest and pbest notions of PSO are applied to improve the optimal positions of EHO’s search agents. A greedy selection approach uses the search agents’ fitness values before and after updates as a guide to retaining the best solutions. These improvements to EHO were proposed to strengthen its exploration and exploitation and to achieve an appropriate balance between these two features. The performance of the AEHO algorithm was evaluated on the complex benchmark test functions known as CEC2014 and CEC2022. Comparison of AEHO with the EHO and PSO algorithms and other well-known meta-heuristics showed that AEHO is a robust optimization algorithm that surpasses many promising meta-heuristics. Even though AEHO has revealed promising outcomes and fulfilling solutions for the benchmark problems under study, there are computational costs associated with the proposed algorithm. The time complexity, discussed in Subsection 4.5, is its biggest weakness and may be important when dealing with extremely complex or high-dimensional problems. Therefore, more work needs to be done to reduce the running time without sacrificing the algorithm’s ability to handle real-world problems. Furthermore, AEHO could find certain solutions that are close to the global optimum or fall within a local optimum, as was the case in some CEC2022 functions with 20 dimensions.
Therefore, additional efforts to enhance AEHO’s performance could be considered to overcome these limitations.
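As a schematic illustration only (not the authors' implementation), the pbest memory and greedy selection described above can be sketched for a single search agent as follows; all names are ours, and `fitness` stands for any cost function being minimized:

```python
import numpy as np

def greedy_pbest_update(position, new_position, pbest, fitness):
    """Greedy selection plus PSO-style personal-best memory for one agent.

    The candidate position is kept only if it improves fitness (greedy
    selection), and the agent's personal best (pbest) is refreshed
    whenever the retained position beats it.
    """
    # Greedy selection: accept the update only when it lowers the cost.
    if fitness(new_position) < fitness(position):
        position = new_position
    # pbest memory: remember the best position this agent has visited.
    if fitness(position) < fitness(pbest):
        pbest = position.copy()
    return position, pbest

# Toy usage with the sphere function as the cost to minimize.
sphere = lambda x: float(np.sum(x**2))
pos, pb = greedy_pbest_update(np.array([2.0, 2.0]),   # current position
                              np.array([1.0, 1.0]),   # candidate update
                              np.array([3.0, 3.0]),   # previous pbest
                              sphere)
print(pos, pb)  # the improving candidate [1, 1] is kept and becomes pbest
```

The global best (gbest) would simply be the best pbest across all agents after each iteration; rejecting worsening moves in this way is what prevents the update step from discarding already good solutions.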
The outcomes show that AEHO consistently beats its rivals on a wide range of optimization functions with many dimensions and levels of complexity. In practical engineering design and industrial process cases, AEHO consistently outperformed several other meta-heuristics in most of the problems examined, achieving very reasonable outcomes. The presented results confirm that AEHO is a powerful optimization technique, since it can effectively balance exploration and exploitation. Future work will focus on adapting, applying, and evaluating multi-objective variants of AEHO on large-scale real-world problems and high-dimensional problems in continuous and binary search spaces. Given the effectiveness of AEHO in obtaining optimum or near-optimum solutions, combining AEHO with promising meta-heuristics could be a possible avenue for further research to increase performance in tackling challenging problems in a range of real-life fields. The application of AEHO may also be expanded to more practical problems in domains such as feature selection, video navigation, and job scheduling.
Acknowledgements
The authors gratefully acknowledge the support provided by the Deanship of Research and Graduate Studies (DRG) at Ajman University, Ajman, UAE, under Grant No. 2024-IRG-ENIT-25.
Author contributions
Mohammed Azmi Al-Betar conceptualized the study, led the development of the AEHO algorithm, and supervised the research project. Malik Braik contributed to the algorithmic design and implemented the hybridization with PSO, including the memory component and adaptive functions. Qusai Yousef Shambour carried out the experimental evaluations, including benchmark testing and engineering problem applications. Ghazi Al-Naymat was responsible for data analysis, statistical validation, and the comparative performance assessment. Thantrir Porntaveetus contributed to the literature review, manuscript drafting, and the refinement of the algorithmic framework. All authors reviewed and approved the final manuscript.
Data availability
No datasets were generated or analysed during the current study.
Declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
Abdel-Salam M, Alomari SA, Yang J, Lee S, Saleem K, Smerat A, Snasel V, Abualigah L (2025) Harnessing dynamic turbulent dynamics in parrot optimization algorithm for complex high-dimensional engineering problems. Comput Methods Appl Mech Eng 440:117908
Abdollahzadeh B, Khodadadi N, Barshandeh S, Trojovskỳ P, Gharehchopogh FS, El-kenawy E-SM, Abualigah L, Mirjalili S (2024) Puma optimizer (po): a novel metaheuristic optimization algorithm and its application in machine learning. Clust Comput 1–49
Abualigah, L; Diabat, A; Mirjalili, S; Elaziz, MA; Gandomi, AH. The arithmetic optimization algorithm. Comput Methods Appl Mech Eng; 2021; 376, 4199299 [DOI: https://dx.doi.org/10.1016/j.cma.2020.113609] 113609.
Abualigah, L; Izci, D; Jabari, M; Ekinci, S; Saleem, K; Migdady, H; Smerat, A. Adaptive gbest-guided atom search optimization for designing stable digital iir filters. Circ Syst Signal Proc; 2025; 44,
Agushaka, JO; Ezugwu, AE; Abualigah, L. Dwarf mongoose optimization algorithm. Comput Methods Appl Mech Eng; 2022; 391, 4372742 [DOI: https://dx.doi.org/10.1016/j.cma.2022.114570] 114570.
Ahmadianfar, I; Heidari, AA; Gandomi, AH; Chu, X; Chen, H. Run beyond the metaphor: an efficient optimization algorithm based on Runge Kutta method. Expert Syst Appl; 2021; 181, [DOI: https://dx.doi.org/10.1016/j.eswa.2021.115079] 115079.
Alatas, B. Acroa: artificial chemical reaction optimization algorithm for global optimization. Expert Syst Appl; 2011; 38,
Alavi, A; Dolatabadi, M; Mashhadi, J; Farsangi, EN. Simultaneous optimization approach for combined control-structural design versus the conventional sequential optimization method. Struct Multidiscip Optim; 2021; 63,
Al-Betar, MA; Awadallah, MA; Braik, MS; Makhadmeh, S; Doush, IA. Elk herd optimizer: a novel nature-inspired metaheuristic algorithm. Artif Intell Rev; 2024; 57,
Amiri, MH; Hashjin, NM; Montazeri, M; Mirjalili, S; Khodadadi, N. Hippopotamus optimization algorithm: a novel nature-inspired optimization algorithm. Sci Rep; 2024; 14,
Askarzadeh, A. A novel metaheuristic method for solving constrained engineering optimization problems: crow search algorithm. Comput Struct; 2016; 169, pp. 1-12. [DOI: https://dx.doi.org/10.1016/j.compstruc.2016.03.001]
Atashpaz-Gargari, E. Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition. IEEE Congress Evolut Comput; 2007; pp. 4661-4667.
Aye, CM; Wansaseub, K; Kumar, S; Tejani, GG; Bureerat, S; Yildiz, AR; Pholdee, N. Airfoil shape optimisation using a multi-fidelity surrogate-assisted metaheuristic with a new multi-objective infill sampling technique. CMES-Comput Model Eng Sci; 2023; [DOI: https://dx.doi.org/10.32604/cmes.2023.028632]
Braik M, Ryalat MH, Al-Zoubi H (2022) A novel meta-heuristic algorithm for solving numerical optimization problems: Ali Baba and the forty thieves. Neural Comput Appl 34:409–455
Babuska R (1998) Fuzzy modeling and identification toolbox. Delft University of Technology, The Netherlands. http://lcewww.et.tudelft.nl/bubuska
Back T (1991) A survey of evolution strategies. In: Proceedings of the Fourth International Conference on Genetic Algorithms
Back, T. Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms; 1996; Oxford, Oxford University Press: [DOI: https://dx.doi.org/10.1093/oso/9780195099713.001.0001]
Bäck T, Fogel DB, Michalewicz Z (1997) Handbook of evolutionary computation. Oxford University Press
Bai, J; Li, Y; Zheng, M; Khatir, S; Benaissa, B; Abualigah, L; Abdel Wahab, M. A sinh cosh optimizer. Knowledge-Based Syst; 2023; 282, [DOI: https://dx.doi.org/10.1016/j.knosys.2023.111081] 111081.
Bastogne, T; Noura, H; Sibille, P; Richard, A. Multivariable identification of a winding process by subspace methods for tension control. Control Eng Pract; 1998; 6,
Bertsekas, D. Newton’s method for reinforcement learning and model predictive control. Results Control Optim; 2022; 7, [DOI: https://dx.doi.org/10.1016/j.rico.2022.100121] 100121.
Braik, M. A hybrid multi-gene genetic programming with capuchin search algorithm for modeling a nonlinear challenge problem: modeling industrial winding process, case study. Neural Process Lett; 2021; 53,
Braik, MS. Chameleon swarm algorithm: a bio-inspired optimizer for solving engineering design problems. Expert Syst Appl; 2021; 174, [DOI: https://dx.doi.org/10.1016/j.eswa.2021.114685] 114685.
Braik, M. Enhanced ali baba and the forty thieves algorithm for feature selection. Neural Comput Appl; 2023; 35,
Braik, M; Al-Hiary, H. Rüppell’s fox optimizer: a novel meta-heuristic approach for solving global optimization problems. Clust Comput; 2025; 28,
Braik, M; Al-Zoubi, H; Al-Hiary, H. Artificial neural networks training via bio-inspired optimisation algorithms: modelling industrial winding process, case study. Soft Comput; 2021; 25,
Braik, M; Sheta, A; Al-Hiary, H. A novel meta-heuristic search algorithm for solving optimization problems: capuchin search algorithm. Neural Comput Appl; 2021; 33,
Braik, M; Hammouri, A; Atwan, J; Al-Betar, MA; Awadallah, MA. White shark optimizer: a novel bio-inspired meta-heuristic algorithm for global optimization problems. Knowl-Based Syst; 2022; 243, [DOI: https://dx.doi.org/10.1016/j.knosys.2022.108457] 108457.
Braik, MS; Hammouri, AI; Awadallah, MA; Al-Betar, MA; Khtatneh, K. An improved hybrid chameleon swarm algorithm for feature selection in medical diagnosis. Biomed Signal Process Control; 2023; 85, [DOI: https://dx.doi.org/10.1016/j.bspc.2023.105073] 105073.
Braik M, Al-Hiary H (2025) A novel meta-heuristic optimization algorithm inspired by water uptake and transport in plants. Neural Comput Appl 1–82
Bujok P, Kolenovsky P (2022) Eigen crossover in cooperative model of evolutionary algorithms applied to cec 2022 single objective numerical optimisation. In 2022 IEEE Congress on Evolutionary Computation (CEC), pages 1–8. IEEE,
Camacho-Villalón, CL; Dorigo, M; Stützle, T. Exposing the grey wolf, moth-flame, whale, firefly, bat, and antlion algorithms: six misleading optimization techniques inspired by bestial metaphors. Int Trans Oper Res; 2023; 30,
Cao, B; Zhao, J; Lv, Z; Yang, P. Diversified personalized recommendation optimization based on mobile data. IEEE Trans Intell Transp Syst; 2020; 22,
Comert, SE; Yazgan, HR. A new approach based on hybrid ant colony optimization-artificial bee colony algorithm for multi-objective electric vehicle routing problems. Eng Appl Artif Intell; 2023; 123, [DOI: https://dx.doi.org/10.1016/j.engappai.2023.106375] 106375.
Daliri, A; Alimoradi, M; Zabihimayvan, M; Sadeghi, R. World hyper-heuristic: a novel reinforcement learning approach for dynamic exploration and exploitation. Expert Syst Appl; 2024; 244, [DOI: https://dx.doi.org/10.1016/j.eswa.2023.122931] 122931.
Dehghani, M; Trojovskỳ, P. Osprey optimization algorithm: a new bio-inspired metaheuristic algorithm for solving engineering optimization problems. Front Mech Eng; 2023; 8, 1126450. [DOI: https://dx.doi.org/10.3389/fmech.2022.1126450]
Deng, L; Liu, S. Snow ablation optimizer: a novel metaheuristic technique for numerical optimization and engineering design. Expert Syst Appl; 2023; 225, [DOI: https://dx.doi.org/10.1016/j.eswa.2023.120069] 120069.
Dorigo, M; Maniezzo, V; Colorni, A. Ant system: optimization by a colony of cooperating agents. IEEE Trans Syst Man Cybern Part B (Cybernetics); 1996; 26,
Faramarzi, A; Heidarinejad, M; Stephens, B; Mirjalili, S. Equilibrium optimizer: a novel optimization algorithm. Knowl-Based Syst; 2020; 191, [DOI: https://dx.doi.org/10.1016/j.knosys.2019.105190] 105190.
Gandomi AH, Yang X-S (2011) Benchmark problems in structural optimization. In Computational optimization, methods and algorithms, pages 259–281. Springer,
Geem, ZW; Kim, JH; Loganathan, GV. A new heuristic optimization algorithm: harmony search. SIMULATION; 2001; 76,
Gendreau, M; Potvin, JY et al. Handbook of Metaheuristics; 2010; Cham, Springer: [DOI: https://dx.doi.org/10.1007/978-1-4419-1665-5]
Ghasemi, M; Zare, M; Zahedi, A; Akbari, M-A; Mirjalili, S; Abualigah, L. Geyser inspired algorithm: a new geological-inspired meta-heuristic for real-parameter and constrained engineering optimization. J Bionic Eng; 2024; 21,
Ghoneim, RS; Arabasy, M. Ai-driven optimization of indoor environmental quality and energy consumption in smart buildings: a bio-inspired algorithmic approach. J Asian Architecture Building Eng; 2025; 10,
Givi, H; Hubalovska, M. Skill optimization algorithm: a new human-based metaheuristic technique. Comput Mater Continua; 2023; 74,
Hamadneh, T; Batiha, B; Gharib, GM; Montazeri, Z; Dehghani, M; Aribowo, W; Noori, HM; Jawad, RK; Ahmed, MA; Ibraheem, IK et al. Builder optimization algorithm: an effective human-inspired metaheuristic approach for solving optimization problems. Int J Intel Eng Syst; 2025; 18,
Hamadneh T, Batiha B, Gharib GM, Montazeri Z, Dehghani M, Aribowo W, Noori HM, Jawad RK, Ibraheem IK et al (2025) Revolution optimization algorithm: a new human-based metaheuristic algorithm for solving optimization problems. Int J Intel Eng Syst 18(2)
Hashim, FA; Houssein, EH; Mabrouk, MS; Al-Atabany, W; Mirjalili, S. Henry gas solubility optimization: a novel physics-based algorithm. Futur Gener Comput Syst; 2019; 101, pp. 646-667. [DOI: https://dx.doi.org/10.1016/j.future.2019.07.015]
Holland, JH et al. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence; 1992; Cambridge, MIT press: [DOI: https://dx.doi.org/10.7551/mitpress/1090.001.0001]
Houssein, EH; Saad, MR; Hashim, FA; Shaban, H; Hassaballah, M. Lévy flight distribution: a new metaheuristic algorithm for solving engineering optimization problems. Eng Appl Artif Intell; 2020; 94, [DOI: https://dx.doi.org/10.1016/j.engappai.2020.103731] 103731.
Houssein, EH; Oliva, D; Samee, NA; Mahmoud, NF; Emam, MM. Liver cancer algorithm: a novel bio-inspired optimizer. Comput Biol Med; 2023; 165, [DOI: https://dx.doi.org/10.1016/j.compbiomed.2023.107389] 107389.
Izci D, Hashim FA, Mostafa RR, Ekinci S, Smerat A, Migdady H, Abualigah L (2025) Efficient optimization of engineering problems with a particular focus on high-order iir modeling for system identification using modified dandelion optimizer. Optim Control Appl Methods
Jiaze, T; Chen, H; Wang, M; Gandomi, AH. The colony predation algorithm. J Bionic Eng; 2021; 18, pp. 674-710. [DOI: https://dx.doi.org/10.1007/s42235-021-0050-y]
Karaboga, D; Basturk, B. A powerful and efficient algorithm for numerical function optimization: artificial bee colony (abc) algorithm. J Global Optim; 2007; 39,
Kashan AH (2009) League championship algorithm: a new algorithm for numerical function optimization. 2009 international conference of soft computing and pattern recognition 43–48 (IEEE)
Kaveh A, Zolghadr A (2016) A novel meta-heuristic algorithm: tug of war optimization
Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of ICNN'95 - International Conference on Neural Networks, vol 4, pp 1942–1948. IEEE
Kirkpatrick, S; Gelatt, CD; Vecchi, MP. Optimization by simulated annealing. Science; 1983; 220,
Kumar, S; Panagant, N; Tejani, GG; Pholdee, N; Bureerat, S; Mashru, N; Patel, P. A two-archive multi-objective multi-verse optimizer for truss design. Knowl-Based Syst; 2023; 270, [DOI: https://dx.doi.org/10.1016/j.knosys.2023.110529] 110529.
Kumar, S; Tejani, GG; Mehta, P; Sait, SM; Yildiz, AR. Optimization of truss structures using multi-objective cheetah optimizer. Mech Based Des Struct Mach; 2024; 10,
Lam, AYS; Li, VOK. Chemical reaction optimization: a tutorial. Memetic Comput; 2012; 4, pp. 3-17. [DOI: https://dx.doi.org/10.1007/s12293-012-0075-1]
Li, S; Chen, H; Wang, M; Heidari, AA; Mirjalili, S. Slime mould algorithm: a new method for stochastic optimization. Futur Gener Comput Syst; 2020; 111, pp. 300-323. [DOI: https://dx.doi.org/10.1016/j.future.2020.03.055]
Lian, J; Zhu, T; Ma, L; Xincan, W; Heidari, AA; Chen, Y; Chen, H; Hui, G. The educational competition optimizer. Int J Syst Sci; 2024; 55,
Liang JJ, Qu BY, Suganthan PN (2013) Problem definitions and evaluation criteria for the cec 2014 special session and competition on single objective real-parameter numerical optimization. Computational Intelligence Laboratory Zhengzhou University Zhengzhou China and Technical Report Nanyang Technological University Singapore 635(2):2014
Liang J, Yue C, Li G, Qu B, Suganthan PN, Yu K (2020) Problem definitions and evaluation criteria for the cec 2021 on multimodal multiobjective path planning optimization,
Mezura-Montes E, Coello CAC (2005) Useful infeasible solutions in engineering optimization with evolutionary algorithms. In Mexican International Conference on Artificial Intelligence, pages 652–662. Springer,
Mirjalili, S. Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm. Knowl-Based Syst; 2015; 89, pp. 228-249. [DOI: https://dx.doi.org/10.1016/j.knosys.2015.07.006]
Mirjalili, S. Sca: a sine cosine algorithm for solving optimization problems. Knowl-Based Syst; 2016; 96, pp. 120-133. [DOI: https://dx.doi.org/10.1016/j.knosys.2015.12.022]
Mirjalili, S; Lewis, A. The whale optimization algorithm. Adv Eng Softw; 2016; 95, pp. 51-67. [DOI: https://dx.doi.org/10.1016/j.advengsoft.2016.01.008]
Mirjalili, S; Mirjalili, SM; Hatamlou, A. Multi-verse optimizer: a nature-inspired algorithm for global optimization. Neural Comput Appl; 2016; 27,
Mirjalili, S; Gandomi, AH; Mirjalili, SZ; Saremi, S; Faris, H; Mirjalili, SM. Salp swarm algorithm: a bio-inspired optimizer for engineering design problems. Adv Eng Softw; 2017; 114, pp. 163-191. [DOI: https://dx.doi.org/10.1016/j.advengsoft.2017.07.002]
Mirrashid, M; Naderpour, H. Transit search: an optimization algorithm based on exoplanet exploration. Results Control Optim; 2022; 7, [DOI: https://dx.doi.org/10.1016/j.rico.2022.100127] 100127.
Moosavian, N; Roodsari, BK. Soccer league competition algorithm: a novel meta-heuristic algorithm for optimal design of water distribution networks. Swarm Evol Comput; 2014; 17, pp. 14-24. [DOI: https://dx.doi.org/10.1016/j.swevo.2014.02.002]
Mora-Gutiérrez, RA; Ramírez-Rodríguez, J; Rincón-García, EA. An optimization algorithm inspired by musical composition. Artif Intell Rev; 2014; 41, pp. 301-315. [DOI: https://dx.doi.org/10.1007/s10462-011-9309-8]
Motevali, MM; Shanghooshabad, AM; Aram, RZ; Keshavarz, H. Who: a new evolutionary algorithm bio-inspired by wildebeests with a case study on bank customer segmentation. Int J Pattern Recognit Artif Intell; 2019; 33,
Nonut, A; Kanokmedhakul, Y; Bureerat, S; Kumar, S; Tejani, GG; Artrit, P; Yıldız, AR; Pholdee, N. A small fixed-wing uav system identification using metaheuristics. Cogent Eng; 2022; 9,
Nozari, HA; Banadaki, HD; Mokhtare, M; Vahed, SH. Intelligent non-linear modelling of an industrial winding process using recurrent local linear neuro-fuzzy networks. J Zhejiang Univ Sci C; 2012; 13,
Pan, W-T. A new fruit fly optimization algorithm: taking the financial distress model as an example. Knowl-Based Syst; 2012; 26, pp. 69-74. [DOI: https://dx.doi.org/10.1016/j.knosys.2011.07.001]
Panagant, N; Kumar, S; Tejani, GG; Pholdee, N; Bureerat, S. Many-objective meta-heuristic methods for solving constrained truss optimisation problems: a comparative analysis. MethodsX; 2023; 10, [DOI: https://dx.doi.org/10.1016/j.mex.2023.102181] 102181.
Qais, MH; Hasanien, HM; Turky, RA; Alghuwainem, S; Tostado-Véliz, M; Jurado, F. Circle search algorithm: a geometry-based metaheuristic optimization algorithm. Mathematics; 2022; 10,
Rashedi, E; Nezamabadi-Pour, H; Saryazdi, S. Gsa: a gravitational search algorithm. Inf Sci; 2009; 179,
Ray, T; Liew, K-M. Society and civilization: an optimization algorithm based on the simulation of social behavior. IEEE Trans Evol Comput; 2003; 7,
Rezk, H; Olabi, AG; Wilberforce, T; Sayed, ET. Metaheuristic optimization algorithms for real-world electrical and civil engineering application: a review. Results Eng; 2024; 23, [DOI: https://dx.doi.org/10.1016/j.rineng.2024.102437] 102437.
Rodan, A; Sheta, AF; Faris, H. Bidirectional reservoir networks trained using svm+ privileged information for manufacturing process modeling. Soft Comput; 2017; 21,
Runarsson, TP; Yao, X. Stochastic ranking for constrained evolutionary optimization. IEEE Trans Evol Comput; 2000; 4,
Sadollah, A; Bahreininejad, A; Eskandar, H; Hamdi, M. Mine blast algorithm: a new population based algorithm for solving constrained engineering optimization problems. Appl Soft Comput; 2013; 13,
Sharma, P; Raju, S. Metaheuristic optimization algorithms: a comprehensive overview and classification of benchmark test functions. Soft Comput; 2024; 28,
Shirgir, S; Farahmand-Tabar, S; Aghabeigi, P. Optimum design of real-size reinforced concrete bridge via charged system search algorithm trained by nelder-mead simplex. Expert Syst Appl; 2024; 238, [DOI: https://dx.doi.org/10.1016/j.eswa.2023.121815] 121815.
Simon, D. Biogeography-based optimization. IEEE Trans Evol Comput; 2008; 12,
Sohrabi, M; Fathollahi-Fard, AM; Gromov, VA. Genetic engineering algorithm (gea): an efficient metaheuristic algorithm for solving combinatorial optimization problems. Autom Remote Control; 2024; 85,
Sowmya, R; Premkumar, M; Jangir, P. Newton-raphson-based optimizer: a new population-based metaheuristic algorithm for continuous optimization problems. Eng Appl Artif Intell; 2024; 128, [DOI: https://dx.doi.org/10.1016/j.engappai.2023.107532] 107532.
Storn, R; Price, K. Differential evolution-a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim; 1997; 11, pp. 341-359.1479553 [DOI: https://dx.doi.org/10.1023/A:1008202821328]
Tanyildizi, E; Demir, G. Golden sine algorithm: a novel math-inspired algorithm. Adv Electrical Comput Eng; 2017; 17,
Tejani, GG; Bhensdadia, VH; Bureerat, S. Examination of three meta-heuristic algorithms for optimal design of planar steel frames. Adv Comput Design; 2016; 1,
Tejani, GG; Savsani, VJ; Patel, VK; Bureerat, S. Topology, shape, and size optimization of truss structures using modified teaching-learning based optimization. Adv Comput Design; 2017; 2,
Kaveh, A; Dadras, A. A novel meta-heuristic optimization algorithm: thermal exchange optimization. Adv Eng Softw; 2017; 110, pp. 69-84. [DOI: https://dx.doi.org/10.1016/j.advengsoft.2017.03.014]
Vagaská A, Gombár M (2021) Mathematical optimization and application of nonlinear programming. Algorithms as a Basis of Modern Applied Mathematics 461–486
Wang, X. Frigatebird optimizer: a novel metaheuristic algorithm. Phys Scr; 2024; 99,
Wang, X. Draco lizard optimizer: a novel metaheuristic algorithm for global optimization problems. Evol Intel; 2025; 18,
Wang, X. Fishing cat optimizer: a novel metaheuristic technique. Eng Comput; 2025; 42,
Wolpert, DH; Macready, WG. No free lunch theorems for optimization. IEEE Trans Evol Comput; 1997; 1,
Xiao Y, Cui H, Hussien AG, Hashim FA (2024) MSAO: a multi-strategy boosted snow ablation optimizer for global optimization and real-world engineering applications. Adv Eng Inform 61:102464
Xiao, Y; Cui, H; Khurma, RA; Castillo, PA. Artificial lemming algorithm: a novel bionic meta-heuristic technique for solving real-world engineering optimization problems. Artif Intell Rev; 2025; 58,
Xu Y, Cui Z, Zeng J (2010) Social emotional optimization algorithm for nonlinear constrained optimization problems. In International Conference on Swarm, Evolutionary, and Memetic Computing, pages 583–590. Springer,
Xue, J; Shen, B. A novel swarm intelligence optimization approach: sparrow search algorithm. Syst Sci Control Eng; 2020; 8,
Yang X-S. (2009) Firefly algorithms for multimodal optimization. In International symposium on stochastic algorithms, pages 169–178. Springer,
Yang X-S, Deb S (2009) Cuckoo search via Lévy flights. In: 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC 2009), pp 210–214. IEEE
Yang X-S (2010) A new metaheuristic bat-inspired algorithm. In Nature inspired cooperative strategies for optimization (NICSO 2010), pages 65–74. Springer,
Youfa, F; Liu, D; Chen, J; He, L. Secretary bird optimization algorithm: a new metaheuristic for solving global optimization problems. Artif Intell Rev; 2024; 57,
Yuan, C; Zhao, D; Heidari, AA; Liu, L; Chen, Y; Chen, H. Polar lights optimizer: algorithm and applications in image segmentation and feature selection. Neurocomputing; 2024; 607, [DOI: https://dx.doi.org/10.1016/j.neucom.2024.128427] 128427.
Zhang LM, Dahlmann C, Zhang Y (2009) Human-inspired algorithms for continuous function optimization. 2009 IEEE international conference on intelligent computing and intelligent systems 1:318–321 (IEEE)
Zhao, W; Wang, L; Zhang, Z. Atom search optimization and its application to solve a hydrogeologic parameter estimation problem. Knowl-Based Syst; 2019; 163, pp. 283-304. [DOI: https://dx.doi.org/10.1016/j.knosys.2018.08.030]
Zhao, S; Zhang, T; Cai, L; Yang, R. Triangulation topology aggregation optimizer: a novel mathematics-based meta-heuristic algorithm for continuous optimization and engineering applications. Expert Syst Appl; 2024; 238, [DOI: https://dx.doi.org/10.1016/j.eswa.2023.121744] 121744.
Zhao, W; Wang, L; Zhang, Z; Fan, H; Zhang, J; Mirjalili, S; Khodadadi, N; Cao, Q. Electric eel foraging optimization: a new bio-inspired optimizer for engineering applications. Expert Syst Appl; 2024; 238, [DOI: https://dx.doi.org/10.1016/j.eswa.2023.122200] 122200.
Zhu, D; Wang, S; Zhou, C; Yan, S; Xue, J. Human memory optimization algorithm: a memory-inspired optimizer for global optimization problems. Expert Syst Appl; 2024; 237, [DOI: https://dx.doi.org/10.1016/j.eswa.2023.121597] 121597.
Zouache, D; Abualigah, L; Boumaza, F. A guided epsilon-dominance arithmetic optimization algorithm for effective multi-objective optimization in engineering design problems. Multimedia Tools Appl; 2024; 83,
© The Author(s) 2025. This work is published under http://creativecommons.org/licenses/by-nc-nd/4.0/ (the “License”).