The input, hidden, and output layers form the hierarchical framework of feedforward neural networks (FNNs), which are characterized by unidirectional information flow and feedback-free connections. The network offers strong scalability and adaptability, high parallel-computation and training efficiency, an uncluttered structure, and easy implementation. The blood-sucking leech optimization (BSLO) is inspired by the foraging patterns of blood-sucking leeches in rice paddies; it combines exploration, exploitation, a switching mechanism for directional leeches, a search mechanism for directionless leeches, and a re-tracking mechanism to perform global coarse exploration and local refined exploitation and to locate the optimal solution. To expedite solution efficiency and reinforce exploitation precision, this paper proposes an enhanced BSLO with the simplex method (SBSLO) to train the FNNs. The objective is to quantify the discrepancy between the expected output and the actual output, assess the training efficacy and classification accuracy on prediction samples, and determine the optimal connection weights and bias thresholds. The simplex method not only strengthens directional exploitation precision and bolsters population diversity to mitigate premature convergence and facilitate escape from local optima, but also advances constraint-handling capability and provides noteworthy robustness and generalization to reinforce the convergence procedure and elevate solution quality. The stability and dependability of the SBSLO are validated on seventeen sample datasets, and the SBSLO is compared with KOA, NRBO, HLOA, IAO, WO, PKO, EGO, HEOA, APO, FLO, PO and BSLO. The experimental results demonstrate that the SBSLO amalgamates the collective cooperative exploration of the BSLO with the refined directional exploitation of the simplex method to leverage complementary advantages, alleviate local search stagnation, boost training efficiency and prediction precision, strengthen stability and robustness, and improve convergence speed and solution quality.
The core advantages of feedforward neural networks (FNNs) derive from their simplified simulation of and powerful mathematical expressiveness for the biological neuron connection mode, one-way propagation of the data flow, layered abstraction, and mathematical interpretability. The essence of training FNNs with swarm intelligence algorithms, which are characterized by population diversity, collective cooperation, information dissemination, adaptive discovery, and gradient-free optimization, is to attain the desirable solution and refine the network parameters (connection weights and bias thresholds) in a high-dimensional parameter space. This facilitates the input–output mapping of the network in approximating the actual objective function, maintains robust data-flow fitting, and alleviates the limitations of local optima, gradient vanishing or explosion, and hyperparameter sensitivity that affect the traditional gradient descent method. The FNNs exhibit several key characteristics: simplicity and versatility of structure, flexibility and scalability of modular design, efficiency and real-time performance of parallel computing, strong function-approximation ability and reliable training stability, broad applicability and a lightweight nature, as well as theoretical interpretability and mathematical completeness. Swarm intelligence algorithms that have been utilized to train the FNNs include the Kepler optimization algorithm (KOA)1, Newton–Raphson-based optimization (NRBO)2, the horned lizard optimization algorithm (HLOA)3, information acquisition optimization (IAO)4, walrus optimization (WO)5, pied kingfisher optimization (PKO)6, eel and grouper optimization (EGO)7, the human evolutionary optimization algorithm (HEOA)8, arctic puffin optimization (APO)9, frilled lizard optimization (FLO)10, parrot optimization (PO)11, and blood-sucking leech optimization (BSLO)12.
Yang et al. designed adaptive FNNs to realize precise trajectory tracking and robust anti-interference for a magnetic levitation platform; this method revealed instructive superiority and scalability in addressing the constraints of linear compensation, dynamic response, and model adaptability, retaining system stability and dependability, and accomplishing ultra-precision positioning13. Li et al. adopted a genetic digital background calibration algorithm with feature selection to train a time-delay neural network; this method integrated a global screening mechanism and a feature subset to compensate for capacitance mismatch, comparator offset, and delay error, realized feature dimensionality reduction and model lightweighting, and improved the pipeline calibration accuracy14. Schreuder et al. deployed population-based Bayesian hyper-heuristics to train the FNNs; this method conveyed fortified generalization and adaptability to balance global coarse exploration and local precise exploitation, avoid premature convergence and search stagnation, handle data heterogeneity, improve training efficiency, and achieve the optimal weight and bias parameters15. Ravichandran et al. crafted an unsupervised learning approach to train brain-inspired FNNs; this method incorporated neurological columns, competitive normalization, Hebbian synaptic plasticity, structural adaptability, sparse activation, and sparse patchy connections to realize network architecture optimization, cross-modal fusion, and a satisfactory solution16. Yang et al. designed a deep Monte Carlo framework with FNNs to provide an efficient and accurate intelligent solution for plane steel frame design; this method offered strong flexibility and practicality, enhancing design efficiency, reducing material costs, and operating under various conditions, which exhibits the potential for cross-domain migration and promotion17. Lasheen et al. deployed a least absolute shrinkage and selection operator with FNNs to conduct online stability evaluation of isolated microgrids; this method highlighted essential stability and feasibility to foster solution accuracy, expedite training efficiency, ensure stable network operation, and identify dominant microgrid models18. Ebid et al. explored an association-based pruning method to train the FNNs and diminish network capacity by eliminating redundant neurons based on activation correlation; this method exhibited strong consistency and scalability to achieve the collaborative optimization of correlation pruning and weight compensation, reduce the sensitivity of network parameters, obtain accurate redundancy identification, weaken the computational complexity, and realize efficient compression with performance maintenance19. Sarkodie et al. exploited an FNN to predict physico-chemical parameters of the Barakese reservoir; this method relied on model structure optimization, data-driven features, and multi-source data fusion to meet the high-index evaluation of the reservoir, realize high-precision, real-time, and interpretable prediction of parameters, reduce exploration costs, and achieve superior convergence accuracy20. Lai et al.
implemented quantum-activated FNNs to achieve arbitrary quadratic unconstrained binary optimization; this method utilized a quantum nodal mechanism to enable the network to evaluate the synergistic effect of multiple solutions, and activated the gradient descent and random sampling strategies of the FNNs to determine the optimal solution, avoid gradient disappearance and explosion, improve category separability, and enhance the optimization accuracy and efficiency21. Mehrkash et al. presented a finite element model updating approach to train multi-layer artificial FNNs; this method combined measured multi-modal data and network topology selection to enhance the estimation accuracy, minimize the parameter prediction error, improve the training efficiency, overcome noise sensitivity, suppress outliers, and strengthen stability and robustness22. Ali et al. crafted a compact date-seeds milling unit and used it to train the FNNs; this method integrated the evaluation indexes of power rate, particle size uniformity, energy consumption, and equipment stability with the operating conditions of grinding speed, feed rate, grinding time, and screen aperture to obtain multi-objective parameter prediction and interaction, construct complex nonlinear relationships, and strengthen generalization and robustness23. Sun et al. constructed an electro-hydraulic driving algorithm to train the FNNs; this method maintained noticeable robustness and reliability in facilitating the drive switching bias thresholds, strengthening the model positioning accuracy, mitigating excessive energy consumption, cultivating automatic compensation of nonlinear characteristics, facilitating real-time monitoring, and bolstering the training efficiency and accuracy24. Nascimento et al. addressed joint channel equalization and signal identification for a generalized frequency-division multiplexing system by training split-complex FNNs; this method clarified instructive superiority and equilibrium in preserving phase information, maintaining the integrity of the complex signal, and diminishing the symbol error rate25. Hedayati-Dezfooli et al. integrated soft computing, fuzzy evaluation, and the Taguchi method to explore the injection molding of propellers and achieve the optimal index configuration. Additionally, they employed an artificial neural network to assess the shrinkage rate and sink marks and confirm the optimal parameters. This method demonstrated strong adaptability and practicality in achieving high computational accuracy and rapid convergence efficiency26. Kostyrin et al. established a mathematical model based on Markov stochastic processes to model dental service management. This method offered a novel perspective and strong reliability in achieving patient treatment decision optimization, resource scheduling and fault correction, medical equipment maintenance and renewal, and dental service quality evaluation and improvement27. Widians et al. constructed a hybrid ant colony and grey wolf optimization to resolve global optimization problems. This method incorporated pheromone fusion with a wolf hierarchy mechanism, dynamic weight allocation, and a local perturbation strategy to balance exploration and exploitation, traverse the solution space, enhance convergence accuracy, and strengthen adaptability and versatility28. Kaya et al. deployed swarm-intelligence-based optimization algorithms for maximum power point tracking to train the FNNs.
These methods not only retained the strong fitting and generalization of neural networks, but also leveraged the advantages of global exploration and local exploitation of swarm intelligence, ultimately achieving higher accuracy and faster tracking response, obtaining optimized weights and bias parameters, and providing a generalizable engineering paradigm29. Song et al. implemented composite neural-learning-based adaptive actuator failure compensation control for the full-state-constrained autonomous surface vehicle. This method constructed a serial-parallel estimation model to integrate the nonlinear approximation ability of neural networks with the robustness of adaptive control, improve the accuracy and efficiency of failure compensation, strictly guarantee the full-state constraints, and enhance feasibility and superiority30. Xie et al. explored a PID-fuzzy switching-based strategy to achieve heading control of a remotely operated vehicle. This method employed multimodal PID adaptation scenarios, fuzzy logic dynamic balancing, and low-complexity engineering to achieve comprehensive optimization of heading control in complex scenarios, enhance the accuracy and efficiency of the remotely operated vehicle, and reduce operation and maintenance costs31.
The BSLO is motivated by the prey-grabbing habits of blood-sucking leeches in rice paddies and emulates the exploration, exploitation, and switching mechanism of directional leeches, the search mechanism of directionless leeches, and the re-tracking mechanism to identify the optimal solution. The no-free-lunch (NFL) theorem asserts that no single search algorithm can adequately address every optimization problem. The BSLO has limitations in terms of sluggish convergence efficiency, inadequate calculation precision, and susceptibility to search stagnation. Therefore, the BSLO with simplex method (SBSLO) is proposed to train the FNNs, which incorporates the directional and directionless foraging procedures of the BSLO and the reflection, expansion, and contraction of the simplex method to attain complementary advantages and quantify sample training efficiency and prediction classification precision.
The SBSLO specifically outperforms other hybrid approaches; its advantages stem from the qualitatively new directional exploitation introduced by the simplex method and are summarized as follows: (1) The SBSLO achieves conflict-free coordination between global exploration and local exploitation and overcomes performance bottlenecks. The BSLO simulates the leech’s behavior of sensing, adsorbing, and moving based on a concentration gradient, which relies on population diversity to achieve global solution-space coverage and to avoid getting stuck in local optima with a single solution. The simplex method is a direct optimization algorithm that does not rely on random search and does not require derivatives; it employs the deterministic operations of reflection, expansion, and contraction to finely explore the high-quality solution domain identified by the BSLO, enhance the accuracy of solutions, and significantly reduce the error of the optimal solution. (2) The simplex method possesses geometric spatial perception ability: it not only constructs geometric shapes to determine the search direction, but also utilizes the distances between vertices and the differences in objective function values to judge the exploitation direction of the optimal solution, providing clear mathematical logic for the local search. (3) The memoryless characteristic of the simplex method enables each iteration to rely solely on the objective function values of the current vertices, without the need to store historical data, enabling rapid adaptation to dynamic optimization problems. (4) The mathematical compatibility between the BSLO and the simplex method is excellent. The simplex operations can act directly on the individuals of the BSLO population, eliminating the need for additional interface conversion, avoiding information loss during operational integration, achieving collaborative work, and leveraging the greater advantages. Table 1 conveys the fundamental differences between the SBSLO and other hybrid approaches.
Table 1. The fundamental difference between the SBSLO and other hybrid approaches.
Comparative dimension | BSLO-simplex method | BSLO-GA | BSLO-PSO | BSLO-DE |
|---|---|---|---|---|
Core search mechanism | Global exploration of BSLO, deterministic local refinement of simplex method | Random crossover/mutation operation, simulate biological evolution, heuristic search | Particle velocity update, simulate birds flock foraging, direction correction | Differential mutation, driven by individual differences, and the selection operation |
Operational logic | Space–time separation, early exploration, later refinement, without conflict | Parallel overlapping, simultaneously perform crossover/mutation and BSLO, prone to conflict | Rely on historical optimality, guided by individual/global optimality, prone to lag | Rely on differential vectors, driven by inter-individual differences, without global perception |
Parameter sensitivity | Low, the impact of population size is relatively small | High, crossover/mutation probability determines optimization performance | High, inertia weight/acceleration coefficient affects convergence | High, scaling factor/crossover probability affects accuracy |
Determinacy of solution | Strong determinacy of local search, fixed rules in the simplex method | Strong randomness throughout the entire process, crossover/mutation without a fixed direction | Strong local randomness, uncontrollable speed perturbation | Strong local randomness, random generation of differential vectors |
Utilization mode of the solution space | Global exploration coverage, localized exploitation focus, no overlap or conflict | Overlapping operations between global exploration and local exploitation; crossover/mutation may destroy local solutions | Relying on the particle pursuit mechanism, prone to falling into a local optimal neighborhood | Relying on differential vectors, insufficient coverage in high-dimensional space |
Robustness to noise | Strong, simplex method smooths noise through multi-solution comparison | Weak, random operation amplifies noise impact | Medium, particle velocity is susceptible to noise interference | Medium, differential variation is sensitive to noise |
Adaptation scenario | Black box, non-smooth, noise, high-dimensional problem | Low-dimensional, smooth, noise-free problem | Low-to-medium dimensionality, weak noise, continuous problem | Low-to-medium dimensionality, continuous and smooth, noise-free problem |
The essential contributions are outlined as follows: (1) The enhanced blood-sucking leech optimization with simplex method (SBSLO) is presented to train the FNNs. (2) The simplex method, with directional coarse-grained exploration and fine-grained exploitation, increases population diversity, strengthens constraint-processing efficiency, suppresses premature convergence, expands the search scope, ensures the feasibility of candidate solutions, improves stability and scalability, accelerates convergence, and enhances solution quality. (3) The SBSLO is compared with newly published, widely referenced, and highly competitive algorithms. (4) The SBSLO is scrutinized on the FNNs by deploying simulation experiments and assessing the results. (5) The SBSLO exhibits remarkable practicality and superiority in attaining complementary benefits and in determining the connection weights and bias thresholds of neural networks, and it outperforms the other algorithms. The SBSLO strengthens training efficiency and sample prediction accuracy, bolsters stability and scalability, expedites convergence, and elevates solution quality.
This article is organized into the following sections. Section “Mathematical model of feedforward neural networks” reveals the FNNs. Section “Blood-sucking leech optimization (BSLO)” articulates the BSLO. Section “Blood-sucking leech optimization with simplex method (SBSLO)” details the SBSLO. Section “SBSLO-based feedforward neural networks” illustrates the SBSLO-based FNNs. Section “Experimental results and analysis of SBSLO for resolving FNNs” portrays the comparative experiments and result analysis. Section “SBSLO for resolving real-world engineering designs” describes how the SBSLO tackles the engineering designs. Section “Conclusion and future research” summarizes the conclusion, limitations, and future research.
Mathematical model of feedforward neural networks
Feedforward neural networks (FNNs) are artificial neural networks characterized by unidirectional information flow and feedback-free connections, and the core architecture comprises the input layer, the hidden layer, and the output layer. The input layer receives the unprocessed raw feature data and passes it, through full connections, to all neurons of the first hidden layer; it has no activation function, computation capability, or mathematical transformation of its own, and the feature dimension of the input data determines its neuron count. The hidden layer is a computational layer situated between the input layer and the output layer; characterized by an activation function, it applies a nonlinear transformation that converts the original features into abstract, learned practical features. The output layer is the final layer of the network and typically utilizes linear or probabilistic activation functions to transform the practical features into the final prediction results; its neuron count is determined by the task, its activation function is strongly bound to the task, and its output is the direct object of the loss function calculation. Neurons inside a layer are unconnected, whereas neurons across layers are either entirely or partly linked, and the output computation relies only on the current input. The FNNs have some advantages, including a simple structure, easy implementation, efficient calculation, high training efficiency, flexible parameter adjustment, strong scalability, a mature theory, and stable convergence. Figure 1 illustrates a three-layer FNN.
Fig. 1 [Images not available. See PDF.]
Three-layer FNNs.
In FNNs, the input layer’s node size is , the hidden layer’s node size is , and the output layer’s node size is . There is a unidirectional connection between nodes. Each hidden layer of neurons weights and biases the input values to obtain a weighted sum. The weight accumulation value of the input layer is established as:
1
where constitutes the connection weight between the input layer node and hidden layer node, constitutes the input data, constitutes the deviation threshold of the hidden layer node.The nonlinear activation function is utilized to calculate the output value of each hidden layer, which is crucial for neural networks to fit complex nonlinear relationships and learn complex patterns. The hidden layer node is established as:
2
After calculating the output value of the hidden layer, it is multiplied with the corresponding weights between the hidden layer and the output layer and input to the output layer. The transfer function is utilized to calculate the values of the output layer, which are established as:
3
4
where constitutes the connection weight between the hidden layer node and output layer node, constitutes the deviation threshold of the output layer node. The purpose of training feedforward neural networks is to regulate the connection weights and deviation thresholds of each layer node until the termination condition is reached, and obtain the ideal output from the given input. The proposed algorithm employs group collaborative search and diversity preservation mechanisms to optimize the connection weights and bias thresholds of neural networks, thereby enhancing training efficiency and network performance, improving robustness and generalization ability, accelerating convergence speed, and improving solution quality. The vector constitutes the bias and weight, which is established as:5
where constitutes the input node size, constitutes the connection weight, constitutes the deviation of the hidden node.The purpose of training FNN is to achieve the highest classification, approximation, or prediction accuracy for training and testing samples. The calculation difference between the expected output and the actual output of the network calculates performance. The mean squared error is established as:
6
where constitutes the expected output value of the input unit of the training sample, constitutes the actual output value of the input unit of the training sample.FNN requires training on more sample datasets and evaluating all training samples. The average value of MSE for all training samples is established as:
7
where constitutes the training sample scale, constitutes the output layer’s node size.In summary, the mean squared error is regarded as the fitness value of SBSLO. The objective function of training FNN is established as:
8
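As a concrete illustration of Eqs. (1)–(8), the following minimal Python sketch decodes a flat candidate vector into the weights and biases of a three-layer FNN and returns the averaged mean squared error used as the fitness value. The sigmoid activation, the vector layout, and all function and variable names are assumptions for illustration, not the authors' implementation (the experiments in this paper were run in MATLAB).

```python
import numpy as np

def fnn_mse_fitness(v, X, D, n_in, n_hid, n_out):
    """Decode a flat candidate vector v into FNN weights/biases and return the
    averaged MSE over the training samples.  X: (q, n_in) inputs, D: (q, n_out)
    expected outputs.  Layout of v is an assumption."""
    i = 0
    W1 = v[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = v[i:i + n_hid];                             i += n_hid
    W2 = v[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = v[i:i + n_out]

    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    H = sigmoid(X @ W1 - b1)        # hidden-layer outputs, cf. Eqs. (1)-(2)
    O = sigmoid(H @ W2 - b2)        # output-layer outputs, cf. Eqs. (3)-(4)
    return np.mean(np.sum((O - D) ** 2, axis=1))   # averaged MSE, cf. Eqs. (6)-(7)
```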
To fairly validate the practicality and reproducibility of the SBSLO and the comparative algorithms, the training protocol fixes the population scale at 30, the maximum number of iterations at 500, and the number of independent runs at 30, which ensures that different experimenters stop training at the same point and avoids result bias due to differences in training duration.
Blood-sucking leech optimization (BSLO)
The leeches are categorized into two varieties: directional and directionless leeches. The directional leeches swim toward prey when they encounter the circular wave stimuli generated by humans. The other, directionless leeches indiscriminately explore the search area. After biting a human for a while, the leeches are discarded back into the rice paddies by the human and subsequently seek out humans again. Figure 2 illustrates the foraging behavior of the blood-sucking leeches.
9
10
where constitutes the directional leeches scale, constitutes the directionless leeches, constitutes a function for rounding down, constitutes the current iteration, constitutes the maximum iteration, . Consequently, the majority of leeches may initially orient themselves towards people due to their foraging actions. The quantity of leeches escalates with the rise of iterations, as an increasing proportion of them locate humans.

Fig. 2 [Images not available. See PDF.]
The foraging behavior of blood-sucking leeches.
Population initialization
The matrix of the randomly initialized population is established as:
11
where constitutes the blood-sucking leeches, constitutes the position of the search agent, constitutes the population size, and constitutes the problem dimension. The BSLO initializes a set of randomly distributed candidate solutions, which are established as:
12
where , constitutes the upper bound, constitutes the lower bound.

Exploration of directional leeches
When leeches detect water wave stimuli, leeches may maneuver towards humans at a tiny angle. The leeches may explore in the area far away from humans, which is established as:
13
where constitutes the current leech position, constitutes the iterated leech position, constitutes the random leech position, , , , constitutes the optimal position, constitutes the tiny disturbance coefficient that increases exploration diversity.
14
where , constitutes the random perturbation vector.
15
16
where , , constitutes the perceived distance.
17
18
where constitutes the flight distribution function, , .
19
20
21
where , . Hence, leeches may first explore the solution area with large steps and then exploit the potential area with small steps, thereby aiding in the hunt for humans.

Exploitation of directional leeches
The leeches progressively approach humans and are subjected to increasingly strong stimuli, thus accessing the potential areas of humans. The leeches may exploit the region close to humans, which is established as:
22
where , , leeches find the potential region.
23
24
where , constitutes the perceived distance.

Switching mechanism of directional leeches
Exploration and exploitation necessitate a switching mechanism. The perceived distance is intended to emulate the distance that leeches sense from humans. Most directional leeches first explore humans. Consequently, most leeches perceive the distance from humans at the outset, so mostly high levels of . Few values of are small at the beginning since some leeches are closer to humans after initialization. With increasing iterations, more and more leeches can finally find or approach the optimal solution. Therefore, is gradually close to zero. When , leeches perceive that they are far away from humans, BSLO enters the exploration phase. When , leeches perceive that they are closer to humans, BSLO enters the exploitation phase. The directional leeches mainly perform exploration at the beginning and exploitation in the later iterations, and exploitation exists in the entire iterations.
Search mechanism of directionless leeches
After the reception of stimuli, leeches erroneously interpret information and travel in the incorrect direction. With expanding iterations, the proximity of leeches to humans intensifies, leeches progressively approach zero. Consequently, directionless leeches may meander aimlessly in their vicinity or in proximity to humans, which is established as:
25
26
where constitutes the flight distribution function.

Re-tracking mechanism
After participating in exploration and exploitation numerous times , some leeches productively locate humans and extract blood. When humans spreading seeds in rice paddies experience discomfort, they indiscriminately cast the leeches clinging to their feet back into the fields. This procedure transpires periodically ; the optimum solution and the fitness value of are compared to ascertain whether they are equivalent. Subsequently, these discarded leeches may resume their quest for humans.
27
where , , and constitute the fitness values of and , constitutes a redistributed position when , which facilitates the evasion of local optimum solutions.

Algorithm 1 portrays the pseudocode of the BSLO.
Blood-sucking leech optimization with simplex method (SBSLO)
The simplex method is a reliable and flexible direct search method grounded in the mathematics of convex polyhedra; its essence is to systematically explore vertices or poles of the potential solution area and progressively converge to an appropriate solution32. The simplex method enhances global search efficiency and the escape from local optima, strengthens directional search and population diversity, balances global detection and local mining, improves constraint-processing efficiency and ensures the feasibility of candidate solutions, optimizes initial solutions and improves mutation or perturbation strategies, improves mining accuracy and accelerates convergence, and avoids search stagnation while improving solution quality. Figure 3 illustrates the schematic diagram of the simplex method.
Fig. 3 [Images not available. See PDF.]
Schematic diagram of simplex method.
Stage 1: Initial simplex construction based on BSLO
The simplex method provides deterministic directional exploitation for the BSLO by constructing a convex-polytope vertex iteration model. The core objective is to continuously steer the search direction towards the minimum point of the objective function by eliminating the worst vertex and generating a better vertex. The simplex itself is a convex polytope formed by linearly independent vertices in -dimensional space. The quality of its vertices directly determines the initial efficiency of the directional search, and high-quality vertices need to be screened through BSLO global exploration.
The BSLO simulates the water-wave stimulation perception and intelligent foraging behavior of leeches, generates candidate solutions denoted as , and calculates the objective function value of each candidate solution. High-quality vertices are then filtered and sorted in ascending order so that the simplex covers an -dimensional local region and provides sufficient space for directional search.
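A minimal sketch of Stage 1 follows, assuming the candidates come directly from the BSLO population and the objective function is the FNN fitness of Eq. (8); the function name and array layout are hypothetical.

```python
import numpy as np

def initial_simplex(candidates, f):
    """Stage 1 (sketch): keep the best D+1 BSLO candidates, sorted in ascending
    order of objective value, as the initial simplex vertices.
    candidates: (N, D) array of leech positions, f: objective function.
    If fewer than D+1 candidates are available, all of them are used."""
    N, D = candidates.shape
    values = np.apply_along_axis(f, 1, candidates)
    order = np.argsort(values)                 # ascending: best vertex first
    vertices = candidates[order[:D + 1]]
    return vertices, values[order[:D + 1]]
```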
Stage 2: Mathematical derivation of directional search (reflection/expansion/contraction)
Simplex method employs a closed-loop operation of vertex center calculation, directional exploitation derivation and new vertex generation to refine directional search.
(1) Centroid calculation: establishment of directional basis.
The optimal vertex , the suboptimal vertex , the central vertex is established as:
28
where constitutes the “center of gravity” of the current high-quality solution region, and all subsequent directional adjustments are made with as the origin to avoid directional drift caused by the absence of a basis in the original BSLO, which aggregates the positional information of high-quality regions and provides a basis for directional search.

(2) Reflection: core direction correction.
Reflection is the core step in adjusting the direction of the simplex method: symmetrically reflecting the alternative vertex through forces the search direction to stay away from the worst areas and move towards the high-quality areas.
Step 1 Derivation of reflection direction.
The vector difference between the alternative vertex and the central vertex is , which points to the worst area; the reflection direction therefore needs to be opposite to .
29
where this vector directly points to the high-quality regional center to determine the optimization direction that is different from the random direction of BSLO.

Step 2 Reflective vertex generation.
Introduce the reflection coefficient ( constitutes symmetric reflection). The reflection vertex extends times the length of along direction .
30
Step 3 Determination of directional validity.
Based on the relationship between and the existing vertex objective function value, determine whether the reflection direction is valid. Compare the fitness values of the optimal vertex , reflection vertex and vertex .
If : high-quality reflection direction, enter expansion operation (strengthen this direction).
If : The reflection direction is reasonable. Replace with and proceed to the next iteration.
If : Deviation in reflection direction, proceed to contraction operation (correct direction).
(3) Expansion: strengthening in high-quality directions.
When the reflection vertex is superior to the optimal vertex , the reflection direction is a high-quality direction, which increases the step size to accelerate convergence.
Step 1 Derivation of the expansion direction vector.
The expansion direction extends along the reflection direction, via the vector , which points the same way and strengthens the high-quality direction.
Step 2 Expansion vertex generation.
Introduce expansion coefficient ( ). The expansion vertex extends times the length of along direction .
31
Step 3 Direction preservation rule.
If , the expansion direction is more optimal; replace with . Otherwise, replace with , to avoid excessive expansion that could lead to a deviation in direction.
(4) Contraction: correction of deviation direction.
When the reflection vertex is worse, it is necessary to reduce the step size and amend the direction to avoid deviating from the high-quality region. This is derived in two scenarios:
Step 1 If (Direction slightly deviated). Shrink towards the direction, introduce compression coefficient ( ). The compression vertex is established as:
32
If (Significant deviation in direction). Shrink towards the direction to achieve inward contraction and return to the high-quality region. Replace with .
Step 2 If , introduce the contraction coefficient ( ). The contraction vertex is established as:
33
If , replace with . Otherwise, replace with .
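The reflection, expansion, and contraction rules above can be condensed into a single directional-search step. The sketch below follows the standard Nelder–Mead formulation with the common default coefficients (alpha = 1, gamma = 2, beta = 0.5), which are assumptions since the paper's coefficient values are not shown in the extracted text; it keeps a single contraction branch and is a simplified illustration, not the authors' exact procedure.

```python
import numpy as np

def simplex_step(vertices, values, f, alpha=1.0, gamma=2.0, beta=0.5):
    """One directional-search step (sketch): reflect the worst vertex through the
    centroid of the remaining vertices, then expand or contract as needed."""
    order = np.argsort(values)
    vertices, values = vertices[order], values[order]
    x_worst = vertices[-1]
    x_c = vertices[:-1].mean(axis=0)                 # centroid, cf. Eq. (28)

    x_r = x_c + alpha * (x_c - x_worst)              # reflection vertex, cf. Eq. (30)
    f_r = f(x_r)
    if f_r < values[0]:                              # high-quality direction: expand
        x_e = x_c + gamma * (x_r - x_c)              # expansion vertex, cf. Eq. (31)
        f_e = f(x_e)
        new, f_new = (x_e, f_e) if f_e < f_r else (x_r, f_r)
    elif f_r < values[-2]:                           # reasonable direction: accept reflection
        new, f_new = x_r, f_r
    else:                                            # direction deviated: contract
        x_k = x_c + beta * (x_worst - x_c)           # contraction vertex, cf. Eqs. (32)-(33)
        f_k = f(x_k)
        new, f_new = (x_k, f_k) if f_k < values[-1] else (x_worst, values[-1])

    vertices[-1], values[-1] = new, f_new            # replace the worst vertex
    return vertices, values
```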
Stage 3: Convergence criterion and direction search termination condition
To ensure that the directional search achieves the preset accuracy, the convergence condition is defined as the dispersion of the objective function values at the vertices of the simplex method being less than the threshold .
34
If the convergence condition is met, the current directional search has approached a local optimal solution and triggers BSLO global re-exploration. If the convergence condition is not met, the directional optimization iteration of vertex evaluation in terms of reflection/expansion/contraction is repeated until convergence is achieved.
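The dispersion test of Eq. (34) is not reproduced in the extracted text; assuming it follows the standard simplex stopping rule, it can be written as

```latex
\sqrt{\frac{1}{n+1}\sum_{i=1}^{n+1}\bigl(f(x_i)-\bar{f}\,\bigr)^{2}} < \varepsilon ,
\qquad
\bar{f}=\frac{1}{n+1}\sum_{i=1}^{n+1} f(x_i),
```

where $x_1,\ldots,x_{n+1}$ are the simplex vertices and $\varepsilon$ is the preset threshold.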
Algorithm 2 portrays the pseudocode of the simplex method.
Algorithm 3 portrays the pseudocode of the SBSLO.
SBSLO-based feedforward neural networks
Algorithm 4 portrays the pseudocode of the SBSLO-based FNNs. Figure 4 illustrates the flowchart of the SBSLO for FNNs.
Fig. 4 [Images not available. See PDF.]
Flowchart of SBSLO for FNNs.
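To make the coupling in Algorithm 4 and Figure 4 concrete, the following high-level Python sketch reuses the helpers sketched earlier (fnn_mse_fitness, initial_simplex, simplex_step) and replaces the full leech position updates with a crude placeholder; the bounds, the number of simplex refinements per generation, and all names are assumptions rather than the authors' code.

```python
import numpy as np

def bslo_update(leeches, f, t, T):
    """Crude stand-in for the BSLO position updates (directional/directionless
    moves, switching and re-tracking are NOT reproduced here): a shrinking
    random walk toward the current best leech."""
    best = min(leeches, key=f)
    step = (1.0 - t / T) * np.random.uniform(-1.0, 1.0, leeches.shape)
    return leeches + step * (best - leeches)

def train_fnn_sbslo(X, D_out, n_in, n_hid, n_out, pop=30, iters=500, lb=-10.0, ub=10.0):
    """High-level sketch of SBSLO-based FNN training (cf. Algorithm 4):
    BSLO supplies global exploration, the simplex step refines the best candidates."""
    dim = n_in * n_hid + n_hid + n_hid * n_out + n_out     # length of the vector in Eq. (5)
    f = lambda v: fnn_mse_fitness(v, X, D_out, n_in, n_hid, n_out)

    leeches = lb + (ub - lb) * np.random.rand(pop, dim)     # random initialization, cf. Eq. (12)
    for t in range(iters):
        leeches = bslo_update(leeches, f, t, iters)         # global exploration (placeholder)
        vertices, values = initial_simplex(leeches, f)      # Stage 1
        for _ in range(20):                                 # Stage 2: a few refinement steps (arbitrary)
            vertices, values = simplex_step(vertices, values, f)
        worst = np.argsort([f(v) for v in leeches])[-len(vertices):]
        leeches[worst] = vertices                            # re-inject refined vertices
    best = min(leeches, key=f)
    return best, f(best)
```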
Computational complexity
Time complexity: The SBSLO involves the following operational steps: initialization, exploration and exploitation of directional leeches, the search mechanism of directionless leeches, the re-tracking mechanism, fitness value evaluation, and stagnation judgment. In SBSLO, constitutes the population size, constitutes the maximum iteration, constitutes the problem dimension, constitutes the directional leeches, constitutes the directionless leeches, . For population initialization, the time complexity is . For the exploration and exploitation of directional leeches, the position update of all directional leeches needs to be calculated in each iteration, and the time complexity is . For the search mechanism of directionless leeches, the random search strategy requires traversing all directionless leeches, and the time complexity is . For the re-tracking mechanism, the removed leeches are periodically redistributed, and the time complexity is . For the fitness value evaluation, the time complexity is , where constitutes the computational cost of one search agent. For the stagnation judgment, the time complexity is . The overall time complexity is .
Space complexity: it measures how the storage space consumed during algorithm execution grows with the scale of the input problem, and it serves as a pivotal metric for assessing the feasibility and efficiency of algorithms, balancing memory efficiency and operational efficiency. In SBSLO, constitutes the population size and constitutes the problem dimension. The overall space complexity is .
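Aggregating the component costs listed above, with $N$ the population size, $T$ the maximum iteration, $D$ the problem dimension, and $C$ the cost of one fitness evaluation, the totals plausibly take the usual form (an assumption, since the original expressions are not shown in the extracted text):

```latex
O(ND) \;+\; O\bigl(T\,N\,(D + C)\bigr) \;=\; O\bigl(T\,N\,(D + C)\bigr)
\quad\text{(time)},\qquad
O(ND)\quad\text{(space)}.
```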
Experimental results and analysis of SBSLO for resolving FNNs
Experimental setup
The experimental configuration stipulates a 64-bit Windows 11 OS, a 12th Gen Intel(R) Core(TM) i9-12900HX 2.30 GHz CPU, 4 TB storage, a 16 GB discrete graphics card, and 16 GB RAM. All comparison approaches are implemented in MATLAB R2022b.
Test datasets
The 17 sample datasets are sourced from the Machine Learning Repository of the University of California, Irvine (UCI). All 17 datasets are classic, highly cited datasets with clear preprocessing schemes, which offer reproducibility, comparability, and low processing costs. Their core advantages are openness, standardization, and annotation integrity, and their selection revolves around the dual adaptation of FNN training requirements and the SBSLO optimization objective. The datasets span five significant aspects, namely task type, feature dimension, feature type, data quality, and sample size, to comprehensively verify the algorithm performance in terms of weight optimization efficiency, generalization ability, and anti-interference ability, and to ensure the scientific rigor and experimental credibility of the selection logic, relying on the openness of the UCI datasets. The inclusion of each dataset corresponds to an explicit validation objective: the low-dimensional noiseless datasets validate the basic optimization ability, the medium-dimensional noisy datasets validate the anti-interference ability, the small-sample datasets validate the ability to suppress overfitting, and the mixed-feature datasets validate the preprocessing adaptability. The ultimate goal is to establish a performance validation system without blind spots and to avoid one-sided experimental conclusions caused by a single dataset. These datasets provide scientific and rigorous data support for demonstrating the performance of SBSLO-trained feedforward neural networks and ensure the engineering practicality and generalizability of the research results. Table 2 conveys the comprehensive explanation of the datasets.
Table 2. The comprehensive explanation of the datasets.
Datasets | Attribute | Class | Training | Testing | Input | Hidden | Output |
|---|---|---|---|---|---|---|---|
Blood | 4 | 2 | 493 | 255 | 4 | 9 | 2 |
Scale | 4 | 3 | 412 | 213 | 4 | 9 | 3 |
Survival | 3 | 2 | 202 | 104 | 3 | 7 | 2 |
Liver | 6 | 2 | 227 | 118 | 6 | 13 | 2 |
Seeds | 7 | 3 | 139 | 71 | 7 | 15 | 3 |
Wine | 13 | 3 | 117 | 61 | 13 | 27 | 3 |
Iris | 4 | 3 | 99 | 51 | 4 | 9 | 3 |
Statlog | 13 | 2 | 178 | 92 | 13 | 27 | 2 |
XOR | 3 | 2 | 4 | 4 | 3 | 7 | 2 |
Balloon | 4 | 2 | 10 | 10 | 4 | 9 | 2 |
Cancer | 9 | 2 | 599 | 100 | 9 | 19 | 2 |
Diabetes | 8 | 2 | 507 | 261 | 8 | 17 | 2 |
Gene | 57 | 2 | 70 | 36 | 57 | 115 | 2 |
Parkinson | 22 | 2 | 129 | 66 | 22 | 45 | 2 |
Splice | 60 | 2 | 660 | 340 | 60 | 121 | 2 |
WDBC | 30 | 2 | 394 | 165 | 30 | 61 | 2 |
Zoo | 16 | 7 | 67 | 34 | 16 | 33 | 7 |
Parameter construction
To verify the adaptability and applicability, the SBSLO is compared with the KOA, NRBO, HLOA, IAO, WO, PKO, EGO, HEOA, APO, FLO, PO and BSLO. The sensitivity analysis of parameter tuning is summarized as follows: (1) The SBSLO exhibits strong fault tolerance and mechanism complementarity to reduce parameter sensitivity, offset parameter deviations, and eliminate stringent parameter accuracy requirements. The SBSLO achieves the mutual compensation between the global extensive exploration of the BSLO and the local refined exploration of the simplex method. Suppose the global exploration parameters are slightly insufficient. In that case, the reflection, expansion, and contraction operations of the simplex method can fill the gap promptly, and quickly correct the search direction. If the local fine-tuning parameters exhibit minor deviations, the multiple rounds of global iterations in the BSLO can provide correction space and re-explore high-quality regions. This design fundamentally reduces the reliance on extreme parameter accuracy. (2) The core parameters of the simplex method (reflection coefficient , expansion coefficient , compression coefficient , contraction coefficient ) are determined as representative optimal empirical values and default standard configurations through rigorous mathematical theory derivation and extensive experimental verification, and exhibit strong universality and robustness. Altering these recognized and mature default parameters will undermine the mathematical completeness and convergence guarantees. The representative empirical parameters of the SBSLO are derived from the original articles, and the effectiveness and reliability have been confirmed.
KOA: aleatory numbers , , , , , , , , , constant numbers , .
NRBO: aleatory numbers , , , , , , , , , binary number , deciding factor .
HLOA: hue circle angle , binary number , constant numbers , , , , , aleatory numbers , , .
IAO: aleatory numbers , , , , , , , , , .
WO: aleatory numbers , , , , , , , distress coefficient , constant number , , .
PKO: aleatory number , beating factor , constant numbers , .
EGO: aleatory numbers , , , , , coefficient vector , .
HEOA: aleatory numbers , , constant numbers , .
APO: aleatory number , , synergistic factor , constant number .
FLO: aleatory numbers , .
PO: aleatory number , , constant number .
BSLO: aleatory numbers , , , , , , ratio , constant numbers , , , , , .
SBSLO: aleatory numbers , , , , , , ratio , constant numbers , , , , , , , , , .
Results and analysis
To fairly verify the superiority and feasibility of the comparative algorithms, the population scale is set to 30, the maximum number of iterations to 500, and the number of independent runs to 30. The optimal value (Best), worst value (Worst), mean value (Mean), standard deviation (Std), median value (Median), classification accuracy, and ranking based on accuracy constitute a comprehensive set of evaluation metrics that reveal the generalization and adaptability of the SBSLO.
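As an illustration of how these statistics can be assembled from the 30 independent runs, a brief Python sketch follows; it is not the authors' evaluation script, and the use of the sample standard deviation is an assumption.

```python
import numpy as np

def summarize_runs(errors, accuracies):
    """Compute the reported metrics from per-run final MSEs and test accuracies
    (e.g. 30 independent runs of one algorithm on one dataset)."""
    errors = np.asarray(errors, dtype=float)
    return {
        "Best":     errors.min(),
        "Worst":    errors.max(),
        "Mean":     errors.mean(),
        "Std":      errors.std(ddof=1),      # sample standard deviation (assumption)
        "Median":   np.median(errors),
        "Accuracy": float(np.mean(accuracies)),
    }
```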
Table 3 conveys the statistical results of the numerous sample datasets. Numerous evolutionary algorithms are applied to train the FNNs, with the purpose of utilizing group collaborative exploration and refined directional exploitation to obtain the optimal connection weights and bias thresholds. This approach measures the deviation between the expected output and the actual output to evaluate sample training efficiency and prediction classification accuracy. The optimal value measures whether the algorithm can obtain the optimal and suboptimal solutions, and verifies the detection accuracy and the convergence upper limit. The worst value measures the worst performance of the algorithm in extreme situations, and verifies the anti-interference ability and the stability lower limit. The mean value measures whether the algorithm approaches the optimal solution after multiple rounds of experiments, and verifies the comprehensive convergence and normal consistency. The standard deviation measures the repeatability, convergence stability, and fluctuation of the calculation results, and verifies the discrepancy degree and parameter sensitivity. The median value measures the central tendency of the algorithm, verifying the stability of the distribution and resisting the impact of outliers. The classification accuracy measures the accuracy and adaptability of the algorithm in classification tasks, and verifies the practicality and feature discriminability. The ranking, derived from the classification accuracy, reflects the generalization ability and verifies stability and reliability. Through extensive statistical experiments, the SBSLO exhibits strong universality and generalization, enabling global coverage and local fine-tuning, thereby avoiding search stagnation and premature convergence and achieving good training efficiency and prediction accuracy. For Scale, Survival, Statlog, Diabetes, Gene and Splice, the optimal values, worst values, mean values, median values, classification accuracies, and rankings of the SBSLO are significantly superior to those of the KOA, NRBO, HLOA, IAO, WO, PKO, EGO, HEOA, APO, FLO, PO and BSLO. The SBSLO exhibits notable flexibility and universality in facilitating the synergistic benefits of the directional and directionless foraging behavior of the BSLO and the reflection, expansion, and contraction of the simplex method, strengthening directional coarse-grained exploration and fine-grained exploitation, obtaining comprehensive evaluation metrics, and filtering out the best connection weights and bias thresholds. The standard deviations of the SBSLO for Scale, Statlog and Gene are superior to those of KOA, NRBO, HLOA, IAO, WO, HEOA, PO and BSLO, but inferior to those of PKO, EGO, APO and FLO. The SBSLO has strong reliability and security in achieving group collaborative foraging, accurately locating potential high-quality areas, maintaining population diversity and stability, avoiding search stagnation, determining suitable weight configurations, and enhancing the classification accuracy. The standard deviations of the SBSLO for Survival, Diabetes and Splice are superior to those of KOA and APO, but inferior to those of NRBO, HLOA, IAO, WO, PKO, EGO, HEOA, FLO, PO and BSLO. The SBSLO has strong robustness and applicability in dynamically covering the search scope, providing deterministic direction guidance, and improving sample training efficiency and prediction classification accuracy.
For Blood, Liver, XOR and Balloon, the optimal values, mean values, median values, classification accuracies, and rankings of the SBSLO are markedly superior to those of the KOA, NRBO, HLOA, IAO, WO, PKO, EGO, HEOA, APO, FLO, PO and BSLO, but the worst values and standard deviations are relatively inferior to those of some other algorithms. The SBSLO utilizes group cooperative foraging, pheromone sharing, and adaptive adjustment to achieve global exploration coverage and local fine-tuning exploitation, thereby reducing the divergence and oscillation of the search route, enhancing solution quality, and improving convergence accuracy. For Seeds, Wine, Iris, Cancer, Parkinson, WDBC and Zoo, most evaluation metrics of the SBSLO are superior to those of the KOA, NRBO, HLOA, IAO, WO, EGO, APO, FLO, PO and BSLO in terms of the optimal values, worst values, mean values, median values, classification accuracies, and rankings. Most standard deviations are markedly superior to those of the NRBO, HLOA, IAO, WO, EGO, HEOA, FLO, PO and BSLO, but inferior to those of the KOA, PKO and APO. The directional leeches utilize the reflection and expansion of the simplex method to strengthen fine-grained exploitation, ensure the feasibility of candidate solutions, and enhance solution quality. The directionless leeches utilize the contraction of the simplex method to strengthen coarse-grained exploration, increase population diversity, enhance stability and scalability, expand the search scope, and accelerate convergence. The complementary combination of the vertex replacement strategy of the simplex method and the re-tracking strategy of the BSLO is utilized to suppress premature convergence, improve constraint-processing efficiency, and reduce parameter sensitivity. The experimental results reveal that the SBSLO not only has strong generalization and scalability, achieving a collaborative search of coarse exploration and refined exploitation that avoids search stagnation, improves training efficiency and prediction classification accuracy, and maintains the diversity of solutions, but also has strong feasibility and superiority in accelerating convergence, improving solution quality, and strengthening the connection weights and bias thresholds of neural networks.
Table 3. Statistical results of numerous sample datasets.
Datasets | Result | KOA | NRBO | HLOA | IAO | WO | PKO | EGO | HEOA | APO | FLO | PO | BSLO | SBSLO |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Blood | Best | 0.306919 | 0.307759 | 0.307728 | 0.308608 | 0.306240 | 0.305754 | 0.357897 | 0.355018 | 0.300685 | 0.363712 | 0.314904 | 0.306409 | 0.296110 |
Worst | 0.407708 | 0.358238 | 0.337763 | 0.338929 | 0.350259 | 0.319730 | 0.379410 | 0.364984 | 0.367917 | 0.468798 | 0.357366 | 0.353241 | 0.325101 | |
Mean | 0.325025 | 0.326138 | 0.319888 | 0.316222 | 0.316444 | 0.312039 | 0.366617 | 0.361698 | 0.322148 | 0.403325 | 0.334971 | 0.317710 | 0.306125 | |
Std | 0.025750 | 0.010645 | 0.007278 | 0.006438 | 0.012605 | 0.002993 | 0.005282 | 0.002720 | 0.016585 | 0.030177 | 0.010807 | 0.010851 | 0.006322 | |
Median | 0.315217 | 0.323944 | 0.319585 | 0.313982 | 0.311975 | 0.311859 | 0.365416 | 0.362830 | 0.319912 | 0.398249 | 0.334838 | 0.314451 | 0.306458 | |
Accuracy | 77.37 | 75.28 | 76.43 | 76.60 | 76.84 | 78.02 | 76.41 | 76.43 | 76.33 | 76.47 | 74.82 | 76.95 | 78.09 | |
Rank | 3 | 11 | 8 | 6 | 5 | 2 | 9 | 8 | 10 | 7 | 12 | 4 | 1 | |
Scale | Best | 0.122173 | 0.198805 | 0.167066 | 0.201826 | 0.149359 | 0.148391 | 0.469844 | 0.376710 | 0.148056 | 0.562832 | 0.181771 | 0.150570 | 0.065953 |
Worst | 0.521845 | 0.375545 | 0.478223 | 0.438349 | 0.484441 | 0.220874 | 0.583232 | 0.570513 | 0.286408 | 0.648909 | 0.450194 | 0.236136 | 0.214992 | |
Mean | 0.239093 | 0.258003 | 0.241956 | 0.310748 | 0.193721 | 0.186417 | 0.543077 | 0.530644 | 0.195822 | 0.601018 | 0.263706 | 0.173047 | 0.134280 | |
Std | 0.096848 | 0.042156 | 0.060928 | 0.057368 | 0.070672 | 0.021305 | 0.023340 | 0.046398 | 0.027112 | 0.019808 | 0.060560 | 0.019860 | 0.033166 | |
Median | 0.206311 | 0.252786 | 0.223185 | 0.299939 | 0.169524 | 0.184035 | 0.547572 | 0.541552 | 0.194175 | 0.596213 | 0.257356 | 0.167468 | 0.129006 | |
Accuracy | 84.19 | 78.13 | 80.68 | 72.70 | 84.38 | 87.57 | 23.91 | 30.14 | 86.96 | 2.942 | 78.90 | 86.90 | 87.76 | |
Rank | 6 | 9 | 7 | 10 | 5 | 2 | 12 | 11 | 3 | 13 | 8 | 4 | 1 | |
Survival | Best | 0.326246 | 0.372921 | 0.359869 | 0.352147 | 0.339392 | 0.334458 | 0.417463 | 0.415670 | 0.306866 | 0.424117 | 0.371647 | 0.329323 | 0.290066 |
Worst | 0.435269 | 0.400984 | 0.409263 | 0.396166 | 0.422104 | 0.374140 | 0.435910 | 0.426723 | 0.383977 | 0.466056 | 0.418599 | 0.427273 | 0.363000 | |
Mean | 0.361749 | 0.383871 | 0.379751 | 0.374283 | 0.368616 | 0.355516 | 0.424929 | 0.422714 | 0.346840 | 0.438994 | 0.393042 | 0.360866 | 0.317577 | |
Std | 0.023272 | 0.007693 | 0.010514 | 0.011202 | 0.018016 | 0.010577 | 0.004398 | 0.003681 | 0.022423 | 0.010599 | 0.010133 | 0.017526 | 0.019958 | |
Median | 0.357065 | 0.383310 | 0.377548 | 0.375021 | 0.366739 | 0.357234 | 0.424459 | 0.424990 | 0.349731 | 0.437539 | 0.390823 | 0.357681 | 0.311900 | |
Accuracy | 79.58 | 78.14 | 78.78 | 77.66 | 79.45 | 79.83 | 81.66 | 81.73 | 79.55 | 81.73 | 77.85 | 78.55 | 82.69 | |
Rank | 5 | 10 | 8 | 12 | 7 | 4 | 3 | 2 | 6 | 2 | 11 | 9 | 1 | |
Liver | Best | 0.370710 | 0.423973 | 0.400763 | 0.429565 | 0.403516 | 0.395847 | 0.478293 | 0.480895 | 0.360846 | 0.482895 | 0.424664 | 0.390890 | 0.328653 |
Worst | 0.442883 | 0.470975 | 0.473061 | 0.463333 | 0.450673 | 0.429084 | 0.490657 | 0.485243 | 0.452048 | 0.496549 | 0.477793 | 0.449054 | 0.467698 | |
Mean | 0.408908 | 0.452261 | 0.450385 | 0.450221 | 0.428448 | 0.416637 | 0.484608 | 0.483548 | 0.403347 | 0.487517 | 0.460616 | 0.422117 | 0.408802 | |
Std | 0.019517 | 0.010833 | 0.019802 | 0.009412 | 0.012609 | 0.008397 | 0.002354 | 0.001197 | 0.024419 | 0.003396 | 0.014959 | 0.017017 | 0.029808 | |
Median | 0.410841 | 0.456496 | 0.456337 | 0.451898 | 0.429073 | 0.418984 | 0.484563 | 0.483593 | 0.401630 | 0.486491 | 0.463778 | 0.424038 | 0.409239 | |
Accuracy | 63.89 | 50.96 | 52.42 | 50.39 | 59.66 | 63.55 | 56.12 | 56.72 | 58.31 | 56.77 | 51.63 | 59.57 | 73.72 | |
Rank | 2 | 12 | 10 | 13 | 4 | 3 | 9 | 8 | 6 | 7 | 11 | 5 | 1 | |
Seeds | Best | 0.050360 | 0.095776 | 0.057741 | 0.085424 | 0.043386 | 0.050360 | 0.151079 | 0.316140 | 0.043165 | 0.640530 | 0.092261 | 0.044390 | 0.000848 |
Worst | 0.424460 | 0.410019 | 0.670504 | 0.433713 | 0.480488 | 0.143885 | 0.451926 | 0.665942 | 0.179856 | 0.667256 | 0.247185 | 0.316547 | 0.163337 | |
Mean | 0.122302 | 0.168498 | 0.239972 | 0.225266 | 0.143410 | 0.096228 | 0.296078 | 0.496589 | 0.088983 | 0.662332 | 0.149242 | 0.081555 | 0.083647 | |
Std | 0.065338 | 0.084541 | 0.179080 | 0.105633 | 0.119369 | 0.021444 | 0.096998 | 0.088969 | 0.029960 | 0.007207 | 0.036168 | 0.048099 | 0.047216 | |
Median | 0.115108 | 0.150821 | 0.158654 | 0.187165 | 0.092500 | 0.093525 | 0.298561 | 0.479300 | 0.093525 | 0.665484 | 0.142346 | 0.069623 | 0.089902 | |
Accuracy | 78.87 | 75.86 | 69.38 | 69.81 | 77.55 | 81.50 | 64.83 | 34.31 | 83.09 | 0.00 | 76.52 | 84.22 | 83.14 | |
Rank | 5 | 8 | 10 | 9 | 6 | 4 | 11 | 12 | 3 | 13 | 7 | 1 | 2 | |
Wine | Best | 0.059829 | 0.069165 | 0.101395 | 0.149343 | 0.024277 | 0.025641 | 0.179487 | 0.345515 | 0.025641 | 0.640477 | 0.096721 | 0.019815 | 4.87E-22 |
Worst | 0.324786 | 0.461538 | 0.903704 | 0.436769 | 0.609368 | 0.153846 | 0.650243 | 0.683577 | 0.162393 | 0.666805 | 0.473054 | 0.538462 | 0.411542 | |
Mean | 0.137037 | 0.162019 | 0.490338 | 0.326271 | 0.142596 | 0.085196 | 0.436342 | 0.582172 | 0.074074 | 0.659589 | 0.240361 | 0.105933 | 0.076371 | |
Std | 0.066147 | 0.078709 | 0.301634 | 0.072647 | 0.150209 | 0.033842 | 0.136330 | 0.082167 | 0.034797 | 0.006872 | 0.095754 | 0.095642 | 0.078473 | |
Median | 0.128205 | 0.140696 | 0.405027 | 0.313243 | 0.088403 | 0.081197 | 0.435897 | 0.593371 | 0.068376 | 0.662765 | 0.212160 | 0.083537 | 0.068950 | |
Accuracy | 80.43 | 80.05 | 45.46 | 61.20 | 81.53 | 86.06 | 46.72 | 12.56 | 84.86 | 0.16 | 73.55 | 84.48 | 85.40 | |
Rank | 6 | 7 | 11 | 9 | 5 | 1 | 10 | 12 | 3 | 13 | 8 | 4 | 2 | |
Iris | Best | 0.020202 | 0.052557 | 0.020202 | 0.032888 | 0.022299 | 9.92E-66 | 0.131283 | 0.357071 | 0.020202 | 0.609848 | 0.059404 | 0.015323 | 0 |
Worst | 0.323232 | 0.307871 | 0.636364 | 0.493595 | 0.476269 | 0.101010 | 0.590303 | 0.628058 | 0.191919 | 0.661376 | 0.406153 | 0.195098 | 0.295545 | |
Mean | 0.092579 | 0.152550 | 0.195415 | 0.172561 | 0.092903 | 0.048047 | 0.451046 | 0.516834 | 0.050000 | 0.635916 | 0.170530 | 0.052704 | 0.051609 | |
Std | 0.075221 | 0.077461 | 0.137627 | 0.099649 | 0.096354 | 0.024502 | 0.129129 | 0.056328 | 0.036765 | 0.010765 | 0.095114 | 0.040391 | 0.076481 | |
Median | 0.080614 | 0.118548 | 0.206810 | 0.139864 | 0.061711 | 0.040404 | 0.492965 | 0.524148 | 0.040404 | 0.636011 | 0.145341 | 0.038265 | 0.023847 | |
Accuracy | 82.28 | 68.16 | 47.18 | 61.89 | 74.24 | 83.92 | 36.40 | 25.68 | 82.61 | 0.00 | 71.37 | 86.01 | 79.15 | |
Rank | 4 | 8 | 10 | 9 | 6 | 2 | 11 | 12 | 3 | 13 | 7 | 1 | 5 | |
Statlog | Best | 0.133841 | 0.221363 | 0.228425 | 0.269237 | 0.192629 | 0.146253 | 0.433137 | 0.319483 | 0.185393 | 0.439962 | 0.250815 | 0.177245 | 0.093505 |
Worst | 0.320225 | 0.415730 | 0.664292 | 0.404053 | 0.455451 | 0.292135 | 0.485156 | 0.494835 | 0.410112 | 0.499548 | 0.437822 | 0.375390 | 0.241963 | |
Mean | 0.249593 | 0.274429 | 0.342874 | 0.322244 | 0.276126 | 0.216233 | 0.462282 | 0.421036 | 0.275333 | 0.489699 | 0.310843 | 0.224163 | 0.199091 | |
Std | 0.049158 | 0.041170 | 0.139839 | 0.031968 | 0.070818 | 0.030551 | 0.016307 | 0.055433 | 0.049743 | 0.013856 | 0.046225 | 0.037378 | 0.031282 | |
Median | 0.261236 | 0.261230 | 0.291067 | 0.322963 | 0.252696 | 0.213203 | 0.464577 | 0.418480 | 0.269663 | 0.495369 | 0.294430 | 0.219067 | 0.205676 | |
Accuracy | 76.41 | 75.76 | 68.04 | 66.41 | 75.83 | 79.05 | 46.70 | 58.00 | 75.14 | 56.01 | 70.14 | 76.77 | 79.71 | |
Rank | 4 | 6 | 9 | 10 | 5 | 2 | 13 | 11 | 7 | 12 | 8 | 3 | 1 | |
XOR | Best | 0 | 0 | 0 | 0 | 0 | 0 | 0.250000 | 0.349561 | 0 | 0.497349 | 0 | 0 | 0 |
Worst | 0.250000 | 0.500000 | 0.500000 | 0.500000 | 0.444954 | 0.000332 | 0.500000 | 0.500000 | 0.443664 | 0.500000 | 0.256270 | 0.500000 | 0.500000 | |
Mean | 0.008333 | 0.195360 | 0.225000 | 0.458280 | 0.113841 | 1.13E-05 | 0.463406 | 0.494618 | 0.056467 | 0.499860 | 0.094908 | 0.349356 | 0.025573 | |
Std | 0.045644 | 0.211726 | 0.189714 | 0.115264 | 0.160344 | 6.05E-05 | 0.080666 | 0.027418 | 0.119436 | 0.000513 | 0.108720 | 0.221734 | 0.099581 | |
Median | 0 | 0.172452 | 0.250000 | 0.500000 | 0.000138 | 3.14E-45 | 0.500000 | 0.500000 | 0 | 0.500000 | 0.035600 | 0.500000 | 0 | |
Accuracy | 27.50 | 26.66 | 20.83 | 26.66 | 20.00 | 30.83 | 20.00 | 34.16 | 25.00 | 22.50 | 23.33 | 27.50 | 35.83 | |
Rank | 4 | 5 | 9 | 5 | 10 | 3 | 10 | 2 | 6 | 8 | 7 | 4 | 1 | |
Balloon | Best | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003719 | 0 | 0.406860 | 0 | 0 | 0 |
Worst | 0 | 0.400000 | 0.400000 | 0.200000 | 0.200000 | 4.02E-74 | 0.346009 | 0.354524 | 0 | 0.495586 | 4.66E-05 | 2.71E-22 | 7.4E-258 | |
Mean | 0 | 0.013333 | 0.070000 | 0.023919 | 0.006667 | 1.34E-75 | 0.091130 | 0.215608 | 0 | 0.467820 | 1.63E-06 | 9.04E-24 | 2.5E-259 | |
Std | 0 | 0.073030 | 0.098786 | 0.049564 | 0.036515 | 7.35E-75 | 0.113376 | 0.082492 | 0 | 0.018944 | 8.50E-06 | 4.95E-23 | 0 | |
Median | 0 | 0 | 0 | 0 | 0 | 0 | 0.000761 | 0.226240 | 0 | 0.470532 | 0 | 0 | 0 | |
Accuracy | 75.00 | 61.00 | 50.66 | 47.33 | 81.66 | 72.00 | 58.33 | 65.33 | 79.33 | 57.66 | 78.33 | 53.66 | 82.33 | |
Rank | 5 | 8 | 12 | 13 | 2 | 6 | 9 | 7 | 3 | 10 | 4 | 11 | 1 | |
Cancer | Best | 0.046745 | 0.050084 | 0.050893 | 0.050431 | 0.043406 | 0.040067 | 0.068447 | 0.085835 | 0.045075 | 0.252521 | 0.052301 | 0.037724 | 0.024152 |
Worst | 0.083472 | 0.260269 | 0.273790 | 0.268688 | 0.104227 | 0.061554 | 0.131886 | 0.308709 | 0.075125 | 0.493245 | 0.093823 | 0.059465 | 0.076138 | |
Mean | 0.057880 | 0.070977 | 0.194424 | 0.085063 | 0.062420 | 0.051340 | 0.097729 | 0.188977 | 0.053868 | 0.385911 | 0.068009 | 0.049734 | 0.051715 | |
Std | 0.008094 | 0.037432 | 0.093203 | 0.050988 | 0.013477 | 0.005759 | 0.015538 | 0.074450 | 0.006261 | 0.045994 | 0.009528 | 0.006397 | 0.013502 | |
Median | 0.056761 | 0.063323 | 0.260364 | 0.069524 | 0.058840 | 0.051643 | 0.095159 | 0.156341 | 0.052588 | 0.385464 | 0.069390 | 0.049666 | 0.053745 | |
Accuracy | 95.83 | 93.70 | 31.96 | 83.76 | 92.83 | 95.86 | 83.96 | 45.70 | 94.36 | 2.03 | 95.16 | 97.13 | 96.60 | |
Rank | 4 | 7 | 12 | 10 | 8 | 3 | 9 | 11 | 6 | 13 | 5 | 1 | 2 | |
Diabetes | Best | 0.305684 | 0.337313 | 0.328270 | 0.343713 | 0.311305 | 0.317056 | 0.421004 | 0.381792 | 0.304619 | 0.452752 | 0.338400 | 0.315984 | 0.252869 |
Worst | 0.581354 | 0.372056 | 0.416112 | 0.420925 | 0.417009 | 0.362678 | 0.466748 | 0.464534 | 0.467456 | 0.489134 | 0.439358 | 0.436625 | 0.354880 | |
Mean | 0.344084 | 0.352242 | 0.357736 | 0.370739 | 0.328117 | 0.328298 | 0.449404 | 0.445151 | 0.404638 | 0.468587 | 0.371658 | 0.339434 | 0.306757 | |
Std | 0.056814 | 0.009454 | 0.020930 | 0.015871 | 0.018722 | 0.009198 | 0.012104 | 0.019289 | 0.031851 | 0.010008 | 0.022846 | 0.025052 | 0.029398 | |
Median | 0.326621 | 0.350884 | 0.351824 | 0.367978 | 0.324479 | 0.326636 | 0.450370 | 0.449991 | 0.404339 | 0.466422 | 0.366563 | 0.332176 | 0.318441 | |
Accuracy | 75.93 | 73.16 | 71.01 | 68.27 | 75.40 | 76.71 | 64.55 | 65.92 | 77.04 | 67.71 | 68.97 | 73.94 | 77.65 | |
Rank | 4 | 7 | 8 | 10 | 5 | 3 | 13 | 12 | 2 | 11 | 9 | 6 | 1 | |
Gene | Best | 0.119905 | 0.146922 | 0.252044 | 0.271045 | 0.131832 | 0.142963 | 0.285714 | 0.315405 | 0.114286 | 0.352259 | 0.238993 | 0.121271 | 0.075119 |
Worst | 0.342857 | 0.428571 | 0.800000 | 0.374255 | 0.400000 | 0.342857 | 0.389488 | 0.374970 | 0.314286 | 0.494921 | 0.406527 | 0.394585 | 0.258665 | |
Mean | 0.247511 | 0.285751 | 0.589850 | 0.315152 | 0.234197 | 0.236401 | 0.367658 | 0.356914 | 0.232381 | 0.408443 | 0.335944 | 0.238231 | 0.151449 | |
Std | 0.065169 | 0.081189 | 0.206347 | 0.024042 | 0.083302 | 0.048660 | 0.019402 | 0.015618 | 0.054873 | 0.039621 | 0.038133 | 0.091085 | 0.048639 | |
Median | 0.242857 | 0.307143 | 0.711813 | 0.318046 | 0.214286 | 0.242857 | 0.371719 | 0.364246 | 0.242857 | 0.392697 | 0.342857 | 0.202635 | 0.150667 | |
Accuracy | 3.98 | 4.90 | 1.11 | 2.96 | 6.20 | 3.42 | 0.27 | 0.00 | 4.44 | 0.92 | 1.48 | 6.66 | 8.15 | |
Rank | 6 | 4 | 10 | 8 | 3 | 7 | 12 | 13 | 5 | 11 | 9 | 2 | 1 | |
Parkinson | Best | 0.062016 | 0.079172 | 0.108527 | 0.055547 | 0.093023 | 0.039180 | 0.116279 | 0.175734 | 0.054264 | 0.289685 | 0.097875 | 0.033444 | 0.056681 |
Worst | 0.255814 | 0.255814 | 0.358447 | 0.248209 | 0.248062 | 0.139535 | 0.279688 | 0.303939 | 0.162791 | 0.475160 | 0.254467 | 0.277360 | 0.224806 | |
Mean | 0.109392 | 0.141125 | 0.217777 | 0.134453 | 0.143755 | 0.093356 | 0.217959 | 0.232924 | 0.108786 | 0.362976 | 0.166484 | 0.129439 | 0.122750 | |
Std | 0.034864 | 0.046017 | 0.078344 | 0.044557 | 0.053121 | 0.023882 | 0.045492 | 0.034938 | 0.027949 | 0.049355 | 0.046478 | 0.061126 | 0.040720 | |
Median | 0.100775 | 0.128370 | 0.224188 | 0.133470 | 0.124031 | 0.093025 | 0.230260 | 0.219541 | 0.100775 | 0.361527 | 0.152105 | 0.108336 | 0.122480 | |
Accuracy | 68.23 | 65.40 | 58.33 | 62.77 | 64.84 | 68.68 | 62.37 | 63.33 | 66.96 | 63.28 | 64.49 | 66.06 | 65.40 | |
Rank | 2 | 5 | 12 | 10 | 6 | 1 | 11 | 8 | 3 | 9 | 7 | 4 | 5 | |
Splice | Best | 0.334863 | 0.297849 | 0.426188 | 0.444814 | 0.280097 | 0.248251 | 0.483034 | 0.461467 | 0.301515 | 0.494656 | 0.381152 | 0.277079 | 0.147617 |
Worst | 0.559091 | 0.548485 | 0.498365 | 0.492596 | 0.484964 | 0.559922 | 0.499550 | 0.702810 | 0.540909 | 0.499967 | 0.640856 | 0.499996 | 0.348771 | |
Mean | 0.467498 | 0.370661 | 0.470193 | 0.469933 | 0.371105 | 0.420815 | 0.496129 | 0.499163 | 0.401061 | 0.498936 | 0.487044 | 0.397206 | 0.262526 | |
Std | 0.058136 | 0.041270 | 0.020807 | 0.011456 | 0.052538 | 0.086101 | 0.004446 | 0.039960 | 0.055747 | 0.001201 | 0.059240 | 0.077616 | 0.047773 | |
Median | 0.479545 | 0.371924 | 0.473517 | 0.470407 | 0.359489 | 0.429414 | 0.497933 | 0.497156 | 0.386364 | 0.499198 | 0.476825 | 0.360891 | 0.259916 | |
Accuracy | 62.41 | 59.78 | 39.41 | 37.75 | 59.78 | 64.48 | 31.42 | 40.26 | 67.19 | 50.19 | 41.45 | 47.35 | 72.41 | |
Rank | 4 | 5 | 10 | 11 | 5 | 3 | 12 | 9 | 2 | 6 | 8 | 7 | 1 | |
WDBC | Best | 0.030137 | 0.063929 | 0.071213 | 0.075605 | 0.058376 | 0.032740 | 0.106599 | 0.136779 | 0.025381 | 0.430310 | 0.085602 | 0.044910 | 0.043897 |
Worst | 0.109845 | 0.204975 | 0.504434 | 0.210196 | 0.342640 | 0.086274 | 0.394006 | 0.441366 | 0.119289 | 0.497308 | 0.205423 | 0.348691 | 0.172168 | |
Mean | 0.048787 | 0.113345 | 0.205103 | 0.143235 | 0.128683 | 0.052556 | 0.217626 | 0.255709 | 0.067347 | 0.481478 | 0.138657 | 0.113775 | 0.099162 | |
Std | 0.019705 | 0.035277 | 0.133664 | 0.037272 | 0.071446 | 0.013410 | 0.075509 | 0.064569 | 0.022135 | 0.016319 | 0.030423 | 0.074674 | 0.028790 | |
Median | 0.044251 | 0.103445 | 0.144764 | 0.143054 | 0.112944 | 0.051841 | 0.194162 | 0.251795 | 0.064721 | 0.489237 | 0.139172 | 0.091930 | 0.097667 | |
Accuracy | 92.70 | 87.61 | 83.87 | 86.38 | 86.92 | 92.36 | 81.23 | 85.31 | 89.21 | 72.78 | 85.61 | 86.51 | 89.10 | |
Rank | 1 | 5 | 11 | 8 | 6 | 2 | 12 | 10 | 3 | 13 | 9 | 7 | 4 | |
Zoo | Best | 0.059701 | 0.113080 | 0.388060 | 0.313520 | 0.046755 | 0.119403 | 0.238806 | 0.476778 | 0.044776 | 0.786485 | 0.205846 | 0.059940 | 0 |
Worst | 0.388060 | 0.746269 | 1.000000 | 0.564858 | 0.552239 | 0.358209 | 0.600450 | 1.036154 | 0.343284 | 0.862250 | 0.611940 | 0.830684 | 0.567164 | |
Mean | 0.223383 | 0.278571 | 0.715790 | 0.447402 | 0.231915 | 0.227363 | 0.422746 | 0.757719 | 0.156716 | 0.840905 | 0.344955 | 0.321605 | 0.218566 | |
Std | 0.090955 | 0.117621 | 0.200050 | 0.070271 | 0.121018 | 0.068753 | 0.092066 | 0.142214 | 0.084771 | 0.020266 | 0.080680 | 0.242479 | 0.177875 | |
Median | 0.223881 | 0.253090 | 0.656716 | 0.456387 | 0.216919 | 0.223881 | 0.414907 | 0.819649 | 0.134328 | 0.847685 | 0.343079 | 0.216659 | 0.201493 | |
Accuracy | 61.66 | 55.98 | 18.82 | 38.13 | 60.49 | 61.86 | 44.80 | 8.92 | 66.66 | 0.00 | 51.86 | 15.99 | 61.76 | |
Rank | 4 | 6 | 10 | 9 | 5 | 2 | 8 | 12 | 1 | 13 | 7 | 11 | 3 |
Convergence analysis
Figure 5 illustrates the convergence trajectories of the SBSLO and the other comparative methods on the distinct sample datasets. The convergence trajectories quantify the convergence speed and measure the convergence quality, revealing fluctuations and oscillations in the solution, evaluating the sensitivity to initial conditions, distinguishing between search strategies, and indicating both the number of iterations needed to approach an appropriate solution and the precision of the final solution. For Blood, Scale, Survival, Statlog, Diabetes, Gene and Splice, the optimal values, worst values, mean values, median values, classification accuracies, and rankings of the SBSLO are superior to those of the KOA, NRBO, HLOA, IAO, WO, PKO, EGO, HEOA, APO, FLO, PO and BSLO. The SBSLO exhibits strong scalability and anti-interference capability, thereby accelerating convergence and enhancing evaluation accuracy. The SBSLO utilizes the simplex method to further refine the current solution and improve convergence accuracy, equilibrating discovery and extraction. For Liver, XOR and Balloon, the evaluation metrics of the SBSLO are markedly superior to those of the KOA, NRBO, HLOA, IAO, WO, PKO, EGO, HEOA, APO, FLO, PO and BSLO in terms of the optimal values, mean values, median values, classification accuracies, and rankings. The SBSLO shows strong overall convergence and consistent behaviour. The SBSLO rapidly locks onto the potential optimal region through a steep decrease of the convergence curve in the initial stage, and then converges smoothly to the lowest point in the later stage, accurately acquiring the global optimal solution. The SBSLO is an efficient and stable algorithm that utilizes a dynamic balance mechanism and collaborative individual search to optimize continuously and avoid premature convergence. For Seeds, Wine, Iris, Cancer, Parkinson, WDBC and Zoo, most of the optimal values, worst values, mean values, median values, classification accuracies and rankings of the SBSLO are markedly superior to those of the KOA, NRBO, HLOA, IAO, WO, PKO, EGO, HEOA, APO, FLO, PO and BSLO. The SBSLO demonstrates strong feasibility and superiority, offering faster convergence speed and higher solution quality. The simplex method intervenes to slow down the decline rate of the fitness values, continuously improving convergence accuracy, forcing the fitness values to decrease towards the optimal solution, strengthening constraint-processing efficiency, and ensuring the feasibility of candidate solutions. The SBSLO integrates the directional and directionless foraging mechanisms of the BSLO with the reflection, expansion, and contraction of the simplex method to achieve complementary advantages between coarse-grained exploration and fine-grained exploitation. This approach expands the search scope and increases population diversity, improving sample training efficiency and prediction classification accuracy. The SBSLO exhibits substantial adaptability and scalability, which expedite convergence speed, promote solution quality, enhance robustness and reliability, and yield smooth, non-oscillatory convergence curves, thereby strengthening training effectiveness.
Fig. 5 [Images not available. See PDF.]
Convergence trajectories of the SBSLO and other comparative methods via distinct sample datasets.
Boxplot analysis
Figure 6 illustrates the boxplots of the SBSLO and the other comparative methods on the distinct sample datasets. The boxplots intuitively quantify the dispersion of the results on each dataset and objectively reflect the anti-interference ability, allowing convergence accuracy and stability to be evaluated, potential issues to be revealed, the effectiveness and feasibility of the search strategy to be verified, and parameter tuning and structural optimization to be guided. For Blood, the standard deviation of the SBSLO is clearly better than that of the KOA, NRBO, HLOA, IAO, WO, APO, FLO, PO and BSLO, but slightly worse than that of the PKO, EGO and HEOA; the SBSLO utilizes the geometric transformations of reflection, expansion and contraction of the simplex method to cover the solution space, realize refined search, maintain solution diversity, reduce random interference, and ensure a stably low output deviation. For Scale, Seeds, Statlog and Gene, most standard deviations of the SBSLO are markedly superior to those of the KOA, NRBO, HLOA, IAO, WO, HEOA, APO, PO and BSLO, but slightly inferior to those of the PKO, EGO and FLO; the SBSLO shows strong robustness and stability, good training efficiency and convergence accuracy, and extensive applicability and anti-interference. For Wine, Iris, XOR, Cancer, Parkinson and WDBC, most standard deviations of the SBSLO are markedly superior to those of the NRBO, HLOA, IAO, WO, EGO, PO and BSLO, but slightly inferior to those of the KOA, PKO, HEOA, APO and FLO; the SBSLO utilizes the global comprehensive exploration of the re-tracking strategy and the local refined exploitation of the simplex method to avoid gradient vanishing or explosion, realize complementary advantages and collaborative search, inhibit premature convergence and search stagnation, and ensure reliability and stability. For Survival, Liver, Diabetes, Splice and Zoo, the SBSLO has relatively worse standard deviations than the other algorithms; on these sample datasets it is prone to local optima and early convergence, which increases the standard deviations. For Balloon, the standard deviation of the SBSLO is consistent with that of the KOA and APO, and is markedly superior to those of the NRBO, HLOA, IAO, WO, PKO, EGO, HEOA, FLO, PO and BSLO. The SBSLO combines the directional and directionless leech foraging mechanisms of the BSLO with the reflection, expansion and contraction of the simplex method to achieve a dynamic balance between global detection and local mining, improve sample training efficiency and prediction classification accuracy, strengthen convergence speed and solution quality, obtain an ultra-low standard deviation and good generalization ability, and enhance robustness and stability, which provides a universal and efficient optimization scheme for training FNNs.
Fig. 6 [Images not available. See PDF.]
Boxplots of the SBSLO and other comparative methods via distinct sample datasets.
The computational time of SBSLO and other comparative algorithms
Table 4 conveys the computational time of the different comparative algorithms for a single independent run. For each algorithm, the population size is 30, the maximum number of iterations is 500, and the number of independent runs is 30. The computational times are: KOA 3.4598 min, NRBO 12.0634 min, HLOA 5.1725 min, IAO 13.0536 min, WO 3.5797 min, PKO 5.6958 min, EGO 34.2377 min, HEOA 5.2245 min, APO 11.2676 min, FLO 5.7812 min, PO 7.3661 min, BSLO 4.1124 min and SBSLO 6.3726 min. The computational time of the SBSLO is superior to that of the NRBO, IAO, EGO, APO and PO, but inferior to that of the KOA, HLOA, WO, PKO, HEOA, FLO and BSLO; the additional time relative to the BSLO reflects the overhead of the simplex operations. The experimental results reveal that the SBSLO incorporates the BSLO, with its directional and directionless foraging procedures, and the simplex method, with its reflection, expansion and contraction, to attain complementary advantages, strengthen stability and robustness, foster convergence speed and improve solution quality.
Table 4. The computational time of different comparative algorithms for running independently once.
Algorithm | KOA | NRBO | HLOA | IAO | WO | PKO | EGO | HEOA | APO | FLO | PO | BSLO | SBSLO |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Time (minutes) | 3.4598 | 12.0634 | 5.1725 | 13.0536 | 3.5797 | 5.6958 | 34.2377 | 5.2245 | 11.2676 | 5.7812 | 7.3661 | 4.1124 | 6.3726 |
Wilcoxon rank-sum test
This article experimentally verifies the accuracy improvement of the SBSLO and provides preliminary empirical support for its performance advantages. The Wilcoxon rank-sum test is an essential non-parametric statistical method (requiring no assumption that the data follow a specific distribution, such as a normal distribution). Its essence is to assign ranks to the sample data, compare the rank-sum difference between two sets of data, and estimate whether there is a significant difference in their distributions. The Wilcoxon rank-sum test is adopted to judge whether the difference in accuracy is statistically significant, avoid one-sided conclusions caused by random errors or sample bias, and enhance the credibility and rigor of the conclusions33. Table 5 conveys the p-values of the Wilcoxon rank-sum test. If p < 0.05, it constitutes a noteworthy difference; if p ≥ 0.05, it constitutes a non-noteworthy difference. N/A denotes "not applicable". The experimental results demonstrate that the Wilcoxon rank-sum differences between the SBSLO and the other algorithms are noteworthy in most cases: the probability that the accuracy improvement is caused by the advantage of the SBSLO rather than by accidental factors exceeds 95%. The introduction of the simplex method is therefore genuinely practical; the observed performance improvement is not an unintentional artifact, and blind exaggeration of the conclusions is avoided.
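As an illustration of how the test in Table 5 can be computed (the accuracy values below are hypothetical, not the reported results), SciPy's rank-sum implementation may be used:

```python
# Minimal sketch: Wilcoxon rank-sum test on two hypothetical sets of per-run
# classification accuracies (these numbers are illustrative, not the paper's data).
from scipy.stats import ranksums

sbslo_acc = [85.4, 84.9, 86.1, 85.7, 84.5, 85.9, 86.3, 85.0]  # hypothetical SBSLO runs
bslo_acc = [84.5, 83.8, 84.9, 84.2, 83.6, 84.7, 85.1, 84.0]   # hypothetical BSLO runs

stat, p_value = ranksums(sbslo_acc, bslo_acc)
alpha = 0.05  # significance level corresponding to the 95% confidence discussed above
print(f"statistic = {stat:.3f}, p-value = {p_value:.4g}")
print("noteworthy difference" if p_value < alpha else "non-noteworthy difference")
```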
Table 5. Results of the p-value Wilcoxon rank-sum test.
Datasets | KOA | NRBO | HLOA | IAO | WO | PKO | EGO | HEOA | APO | FLO | PO | BSLO |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
Blood | 6.53E−07 | 4.20E−10 | 8.48E−09 | 9.83E−08 | 2.00E−05 | 7.74E−06 | 3.02E−11 | 3.02E−11 | 1.34E−05 | 3.02E−11 | 6.70E−11 | 8.20E−07 |
Scale | 2.19E−08 | 4.50E−11 | 1.78E−10 | 3.34E−11 | 2.68E−06 | 1.25E−07 | 3.02E−11 | 3.02E−11 | 2.01E−08 | 3.02E−11 | 6.70E−11 | 1.09E−05 |
Survival | 9.26E−09 | 3.02E−11 | 3.69E−11 | 6.07E−11 | 4.20E−10 | 1.01E−08 | 3.02E−11 | 3.02E−11 | 7.74E−06 | 3.02E−11 | 3.02E−11 | 2.92E−09 |
Liver | 6.95E−03 | 5.97E−09 | 2.38E−07 | 7.77E−09 | 8.56E−04 | 2.06E−02 | 3.02E−11 | 3.02E−11 | 1.90E−02 | 3.02E−11 | 8.89E−10 | 4.51E−02 |
Seeds | 1.65E−02 | 3.83E−06 | 3.09E−06 | 3.08E−08 | 1.29E−02 | 3.78E−02 | 5.46E−11 | 3.02E−11 | 8.18E−01 | 3.02E−11 | 1.67E−06 | 3.18E−02 |
Wine | 1.88E−04 | 1.49E−06 | 4.19E−10 | 3.16E−10 | 2.14E−02 | 1.31E−02 | 1.20E−10 | 3.69E−11 | 6.51E−01 | 3.02E−11 | 3.82E−09 | 6.78E−02 |
Iris | 3.51E−04 | 8.84E−07 | 7.04E−07 | 5.60E−07 | 8.12E−04 | 4.67E−02 | 9.91E−11 | 3.02E−11 | 3.31E−02 | 3.02E−11 | 3.52E−07 | 4.35E−02 |
Statlog | 9.44E−06 | 5.49E−11 | 3.69E−11 | 3.02E−11 | 9.83E−08 | 1.08E−02 | 3.02E−11 | 3.02E−11 | 1.09E−08 | 3.02E−11 | 3.02E−11 | 1.07E−02 |
XOR | 5.45E−02 | 1.93E−03 | 1.51E−05 | 2.30E−10 | 9.85E−08 | 2.75E−04 | 1.60E−11 | 4.07E−11 | 5.86E−01 | 1.82E−11 | 1.52E−08 | 6.90E−09 |
Balloon | 3.33E−02 | N/A | 2.02E−04 | 9.80E−03 | 1.60E−02 | N/A | 6.03E−06 | 1.72E−12 | 3.33E−02 | 1.72E−12 | 7.82E−03 | N/A |
Cancer | 7.00E−02 | 3.77E−04 | 2.59E−08 | 4.12E−06 | 8.68E−03 | 5.39E−01 | 4.46E−11 | 3.02E−11 | 7.95E−01 | 3.02E−11 | 5.46E−06 | 2.77E−02 |
Diabetes | 4.22E−03 | 2.67E−09 | 2.03E−09 | 6.07E−11 | 2.62E−03 | 2.39E−04 | 3.02E−11 | 3.02E−11 | 3.47E−10 | 3.02E−11 | 9.92E−11 | 1.86E−06 |
Gene | 3.48E−07 | 2.01E−08 | 3.65E−11 | 3.02E−11 | 6.74E−05 | 2.76E−07 | 3.02E−11 | 3.02E−11 | 1.84E−06 | 3.02E−11 | 4.07E−11 | 1.11E−04 |
Parkinson | 7.38E−02 | 1.25E−02 | 3.57E−06 | 2.33E−03 | 2.63E−02 | 2.04E−03 | 7.74E−09 | 2.87E−10 | 1.70E−02 | 3.02E−11 | 2.25E−04 | 7.73E−01 |
Splice | 3.69E−11 | 1.96E−10 | 3.02E−11 | 3.02E−11 | 1.17E−09 | 2.60E−08 | 3.02E−11 | 3.02E−11 | 9.91E−11 | 3.02E−11 | 3.02E−11 | 1.86E−09 |
WDBC | 2.19E−08 | 1.76E−02 | 2.15E−06 | 2.28E−05 | 1.88E−02 | 6.52E−09 | 5.56E−10 | 4.08E−11 | 3.15E−05 | 3.02E−11 | 1.64E−05 | 6.10E−01 |
Zoo | 5.63E−03 | 9.04E−02 | 3.24E−10 | 4.10E−06 | 4.59E−02 | 4.19E−02 | 2.42E−05 | 1.20E−10 | 4.20E−02 | 3.01E−11 | 1.44E−03 | 1.00E−02 |
Scalability and computational complexity of the SBSLO for large-scale deep networks
We elaborate on the scalability of the SBSLO and the relationship between the simplex step scale and the computational complexity when tackling large-scale deep networks (tens of thousands of parameters) from three aspects: algorithm performance adaptability, the simplex adjustment mechanism, and calculation cost control, which are summarized as follows: (1) Algorithm performance adaptability. The SBSLO utilizes the group collaborative foraging behavior of the BSLO and the vertex iteration update of the simplex method to balance exploration and exploitation. Performance on large-scale deep networks depends on how well the SBSLO, with the directional leeches' exploration and exploitation, the directionless leeches' random search, and the re-tracking mechanism, matches the network's pain points of high-dimensional parameters, non-convexity, and efficiency requirements. The SBSLO simulates the behavioral characteristics of leech population partitioning, exploration, and information sharing, so tens of thousands of parameters can be divided into subgroups based on the network's functional modules. It utilizes a global optimal position synchronization mechanism to achieve cross-module collaboration and adapt to the redundancy of a high-dimensional parameter space. The SBSLO relies on simplex vertex performance ranking (comparing the model loss values of different parameter combinations, retaining the best vertices, and replacing the worst vertices), which enables high-frequency iteration of high-sensitivity parameters and low-frequency handling of low-sensitivity redundant parameters, avoiding the intermediate gradient storage overhead of chain differentiation in large-scale networks. (2) Simplex adjustment mechanism. The simplex step scale, i.e., the moving distance of a parameter iteration, does not increase monotonically with the parameter size; it is jointly controlled by the objective of the optimization stage, the parameter sensitivity, and the functionality of the network layer. The SBSLO can dynamically adjust the dominant step size, moderately enlarging it during the exploration phase and steadily shrinking it during the exploitation phase. In the early stage of large-scale deep network optimization, the goal is to locate the global optimal region, which requires expanding the step size to cover the high-dimensional parameter space. In the local fine-tuning stage, the goal is to approximate the exact optimal solution; the step size is then determined mainly by the smoothness of the loss function and is independent of the parameter size. The SBSLO can utilize parameter-driven step-size differentiation: for high-sensitivity parameters, the step size is kept relatively small and independent of the scale, whereas for low-sensitivity or redundant parameters the step size can be moderately relaxed subject to coupling constraints. (3) Calculation cost control. The core computational cost of the SBSLO is the performance evaluation of the simplex vertex models, which calculates the loss value for each parameter combination. Large-scale networks need to synchronize the optimal parameters of each subpopulation through a global controller, and the communication cost increases with the number of subpopulations.
We adopt dynamic subpopulation partitioning and distributed parallel computing, vertex evaluation pruning, and an early termination mechanism, as well as sparse parameter storage and memory reuse, to significantly reduce the dimensionality that SBSLO needs to process directly.
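As a rough sketch of the simplex vertex ranking and reflection step that dominates this computational cost, the fragment below ranks candidate parameter vectors by loss, keeps the best, and replaces the worst by reflecting it through the centroid. It is an illustrative Python example with a hypothetical quadratic loss standing in for the FNN training error, not the authors' implementation; the full simplex method also applies expansion and contraction.

```python
# Illustrative sketch of the simplex vertex ranking and reflection step discussed
# above; the quadratic loss is a hypothetical stand-in for the FNN training error.
import numpy as np

def loss(w):
    return float(np.mean(w ** 2))  # hypothetical surrogate for the network loss

def simplex_reflection_step(vertices, alpha=1.0):
    """vertices: (k, d) array of candidate parameter vectors (one vertex per row)."""
    order = np.argsort([loss(v) for v in vertices])  # rank vertices by fitness
    vertices = vertices[order]                       # best first, worst last
    centroid = vertices[:-1].mean(axis=0)            # centroid of all but the worst vertex
    reflected = centroid + alpha * (centroid - vertices[-1])
    if loss(reflected) < loss(vertices[-1]):         # replace the worst vertex only if improved
        vertices[-1] = reflected
    return vertices

rng = np.random.default_rng(0)
verts = rng.normal(size=(6, 20_000))                 # 6 vertices over ~20,000 parameters
for _ in range(20):
    verts = simplex_reflection_step(verts)
print(f"best loss after 20 reflections: {loss(verts[0]):.4f}")
```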
Robustness and stability evaluation of SBSLO on datasets with imbalanced classes or noisy data
The robustness and stability evaluation of SBSLO on datasets with imbalanced classes or noisy data are summarized as follows: (1) The performance evaluation of SBSLO on datasets with imbalanced classes mainly focuses on the recognition ability of minority class samples, and the compatibility between oversampling and undersampling. The SBSLO exhibits a reasonable and intelligent mechanism for adjusting sample weights based on minority class samples, which can automatically endow high connection weights, emphasize the sample features, and elevate recognition accuracy. The SBSLO not only effectively employs oversampling techniques to synthesize minority class samples, expand the sample size of minority classes, and enrich training feature information, but also employs undersampling techniques to accurately screen and retain representative and high-value samples in the majority class, avoid substantial information loss, maintain the learning effect of the model. (2) The performance evaluation of SBSLO on datasets with noisy data mainly focuses on the recognition and processing mechanism of noisy samples, and the anti-interference ability of the model. The SBSLO exhibits an effective and reliable mechanism for identifying noisy samples based on sample distribution or model prediction uncertainty, which can directly reduce or remove the weight of noisy samples, weaken the interference of noise on model learning, and strengthen the stability of the model on noisy data. The SBSLO not only employs regularization technology to constrain model complexity, prevent overfitting of noisy data, and consolidate generalized features, but also employs an integrated learning strategy to combine the prediction results of multiple models, reduce the impact of noise on a single model, upgrade the anti-interference ability of the model on data sets, and ensure the recognition accuracy of model prediction. (3) The robustness and stability evaluation of SBSLO on datasets mainly focuses on the performance maintenance in complex environments. The SBSLO can simultaneously leverage the mechanism advantages of handling class imbalance (such as adjusting sample weights reasonably) and dealing with data noise (such as identifying and processing noisy samples), which can maintain a certain degree of detection accuracy and recall, accurately identify minority samples without excessive noise interference, and demonstrate good robustness and stability.
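As a minimal sketch of the sample-weighting idea in point (1) (an assumed illustration, not the authors' implementation), the fitness evaluated by the optimizer could be a class-weighted error in which minority-class samples receive larger weights:

```python
# Illustrative sketch: class-weighted mean squared error that up-weights
# minority-class samples; the labels and predictor below are hypothetical.
import numpy as np

def class_weighted_mse(y_true, y_pred):
    """y_true: integer class labels; y_pred: predicted scores of the same shape."""
    classes, counts = np.unique(y_true, return_counts=True)
    weights = {c: len(y_true) / (len(classes) * n) for c, n in zip(classes, counts)}
    sample_w = np.array([weights[c] for c in y_true])
    return float(np.mean(sample_w * (y_true - y_pred) ** 2))

y_true = np.array([0] * 90 + [1] * 10)   # 9:1 imbalanced labels
y_pred = np.zeros(100)                   # a trivial "always predict the majority" model
print(round(class_weighted_mse(y_true, y_pred), 3))  # minority errors dominate the fitness
```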
SBSLO for resolving real-world engineering designs
To strengthen the practical contribution and expand the application scope, the SBSLO is applied to resolve six real-world engineering design problems: the cantilever beam6, three-bar truss34, tubular column35, piston lever7, tension/compression spring36, and car side impact37.
Cantilever beam
The ultimate motivation is to minimize the cumulative weight as illustrated in Fig. 7, which highlights five assessment metrics: the beam widths x1, x2, x3, x4 and x5. The mathematical framework is established as:
Fig. 7 [Images not available. See PDF.]
Cantilever beam.
Consider (35), Minimize (36), Subject to (37), Variable range (38) [Equations not available. See PDF.]
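For reference, a widely used formulation of the cantilever beam benchmark is reproduced below; the constants and notation are assumptions and may differ slightly from the authors' Eqs. (35)–(38):

```latex
\[ \min_{\mathbf{x}}\; f(\mathbf{x}) = 0.0624\,(x_1 + x_2 + x_3 + x_4 + x_5) \]
\[ \text{s.t.}\quad \frac{61}{x_1^{3}} + \frac{37}{x_2^{3}} + \frac{19}{x_3^{3}} + \frac{7}{x_4^{3}} + \frac{1}{x_5^{3}} - 1 \le 0, \qquad 0.01 \le x_i \le 100,\; i = 1,\dots,5 \]
```

Under this formulation, the SBSLO variables in Table 6 give 0.0624 × 21.476 ≈ 1.340, consistent with the reported optimum weight.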
Table 6 conveys the statistical results of the cantilever beam. The SBSLO exhibits strong adaptability and versatility, enabling it to reap complementary advantages, strengthen directional exploration precision, bolster population diversity, mitigate premature convergence, facilitate escape from local optima, and expedite solution efficiency. The extraction fitness and assessment metrics of the SBSLO surpass those of the other algorithms. The SBSLO identifies the optimum weight's fitness of 1.33997 with the assessment metrics (x1, x2, x3, x4, x5) = (6.01863, 5.30752, 4.49497, 3.50146, 2.15371).
Table 6. Statistical results of cantilever beam.
Algorithm | x1 | x2 | x3 | x4 | x5 | Optimum weight |
|---|---|---|---|---|---|---|
BWO38 | 6.2094 | 6.2094 | 6.2094 | 6.2094 | 6.2094 | 1.9373625 |
GPEA39 | 6.014808 | 5.306724 | 4.493232 | 3.505168 | 2.153781 | 1.339982 |
IAS39 | 5.9914 | 5.3085 | 4.5119 | 3.5021 | 2.1601 | 1.34 |
FLA1 | 5.5907 | 5.5357 | 4.3654 | 3.83 | 2.3748 | 1.353859 |
COA1 | 6.5562 | 5.412 | 4.516 | 3.168 | 2.0082 | 1.351601 |
WOA1 | 5.7240 | 5.5860 | 4.6935 | 3.3631 | 2.1942 | 1.345389 |
FFA40 | 4.9987 | 4.9995 | 4.9937 | 5.7251 | 4.9983 | 1.6005 |
SCA40 | 6.2001 | 5.6914 | 4.3141 | 3.5473 | 1.9471 | 1.3506 |
GCRA41 | 5.389588 | 5.020725 | 4.361692 | 3.212994 | 2.040863 | 1.3705 |
ASO42 | 6.0378 | 5.3076 | 4.4870 | 3.4991 | 2.1425 | 1.34 |
ChOA42 | 5.9364 | 5.2961 | 4.4700 | 3.4297 | 2.1106 | 1.3424 |
GJO42 | 6.0054 | 5.3041 | 4.5090 | 3.4991 | 2.1562 | 1.34 |
RSO42 | 7.2404 | 4.6475 | 3.9739 | 9.1907 | 1.8512 | 1.6788 |
SOA42 | 5.9589 | 5.3466 | 4.4790 | 3.5236 | 2.1685 | 1.3401 |
STOA42 | 6.0236 | 5.3252 | 4.5124 | 3.5176 | 2.0994 | 1.3402 |
TSA42 | 5.9426 | 5.3743 | 4.4655 | 3.5156 | 2.1830 | 1.3404 |
CPO42 | 6.0233 | 5.3196 | 4.4780 | 3.5097 | 2.1436 | 1.34 |
HEOA8 | 6.12 | 5.28 | 4.46 | 3.51 | 2.12 | 1.34 |
EO8 | 6.02 | 5.31 | 4.49 | 3.51 | 2.15 | 1.34 |
PKO6 | 6.099039338 | 5.336923937 | 4.408008379 | 3.441886599 | 2.197658491 | 1.340571445 |
SBSLO | 6.01863 | 5.30752 | 4.49497 | 3.50146 | 2.15371 | 1.33997 |
Three-bar truss
The ultimate motivation is to minimize the cumulative weight as illustrated in Fig. 8, which highlights two assessment metrics: the cross-sectional areas x1 and x2. The mathematical framework is established as:
Fig. 8 [Images not available. See PDF.]
Three-bar truss.
Consider (39), Minimize (40), Subject to (41)–(44), Variable range (45) [Equations not available. See PDF.]
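For reference, a commonly cited formulation of the three-bar truss benchmark is given below; the constants are assumptions and may differ from the authors' Eqs. (39)–(45):

```latex
\[ \min_{x_1, x_2}\; f = \bigl(2\sqrt{2}\,x_1 + x_2\bigr)\, l, \qquad l = 100~\mathrm{cm} \]
\[ \text{s.t.}\quad \frac{\sqrt{2}\,x_1 + x_2}{\sqrt{2}\,x_1^{2} + 2 x_1 x_2}\,P - \sigma \le 0, \qquad \frac{x_2}{\sqrt{2}\,x_1^{2} + 2 x_1 x_2}\,P - \sigma \le 0, \qquad \frac{1}{x_1 + \sqrt{2}\,x_2}\,P - \sigma \le 0 \]
\[ P = 2~\mathrm{kN/cm^2}, \qquad \sigma = 2~\mathrm{kN/cm^2}, \qquad 0 \le x_1, x_2 \le 1 \]
```

With x1 ≈ 0.7887 and x2 ≈ 0.4082, this objective evaluates to about 263.896, matching the best weights reported in Table 7.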
Table 7 conveys the statistical results of the three-bar truss. The SBSLO exhibits practicality and reliability, advancing constraint processing capability, emphasizing noteworthy robustness and generalization, reinforcing the convergence procedure, elevating solution quality, forestalling premature convergence and cultivating desirable solutions. The extraction fitness and assessment metrics of the SBSLO surpass those of the other algorithms. The SBSLO identifies the optimum weight's fitness of 263.895723 with the assessment metrics (x1, x2) = (0.789462, 0.410824).
Table 7. Statistical results of three-bar truss.
Algorithm | x1 | x2 | Optimum weight |
|---|---|---|---|
KOA1 | 0.788675 | 0.408248 | 263.895843 |
COA1 | 0.788057 | 0.410073 | 263.903379 |
RUN1 | 0.788793 | 0.407916 | 263.895854 |
SMA1 | 0.788541 | 0.408627 | 263.895857 |
DO1 | 0.788643 | 0.408339 | 263.895844 |
POA1 | 0.788675 | 0.408248 | 263.895843 |
SELO43 | 0.7878 | 0.4108 | 263.8964 |
HBO43 | 0.7887 | 0.4082 | 263.8959 |
LFD43 | 0.7879 | 0.4106 | 263.8963 |
SETO43 | 0.7886 | 0.4083 | 263.8958 |
BKA44 | 0.788675 | 0.408248 | 263.895843 |
SHO44 | 0.788898 | 0.40762 | 263.895881 |
AEFA45 | 0.78848449 | 0.408787881 | 263.9195433 |
PSA45 | 0.789351215 | 0.406573046 | 263.8958824 |
BSLO12 | 0.78867930 | 0.40823651 | 263.8958434 |
FOX12 | 0.78870269 | 0.4081704 | 263.8958523 |
ARSCA42 | 0.7887 | 0.4081 | 263.8958 |
CPO42 | 0.7885 | 0.4088 | 263.8959 |
PKO6 | 0.7886870838 | 0.4082144942 | 263.8958435 |
SFOA34 | 0.78868 | 0.40825 | 263.89584 |
SBSLO | 0.789462 | 0.410824 | 263.895723 |
Tubular column
The ultimate motivation is to minimize the cumulative cost as illustrated in Fig. 9, which highlights two assessment metrics: the average diameter d and the tube thickness t. The mathematical framework is established as:
Fig. 9 [Images not available. See PDF.]
Tubular column.
Consider (46), Minimize (47), Subject to (48)–(54), Variable range (55) [Equations not available. See PDF.]
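For reference, a common statement of the tubular column benchmark is shown below; it is an assumption and the authors' Eqs. (46)–(55) may include additional bound-type constraints:

```latex
\[ \min_{d,\,t}\; f(d, t) = 9.8\,d\,t + 2\,d \]
\[ \text{s.t.}\quad \frac{P}{\pi\,d\,t\,\sigma_y} - 1 \le 0, \qquad \frac{8\,P\,L^{2}}{\pi^{3}\,E\,d\,t\,(d^{2} + t^{2})} - 1 \le 0 \]
\[ P = 2500~\mathrm{kgf}, \quad \sigma_y = 500~\mathrm{kgf/cm^2}, \quad E = 0.85\times 10^{6}~\mathrm{kgf/cm^2}, \quad L = 250~\mathrm{cm}, \quad 2 \le d \le 14, \quad 0.2 \le t \le 0.8 \]
```

Both constraints are active near d ≈ 5.451 and t ≈ 0.292, which is consistent with the best costs of about 26.5 reported in Table 8.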
Table 8 conveys the statistical results of the tubular column. The SBSLO incorporates the directional and directionless foraging procedures of the BSLO and the reflection, expansion and contraction of the simplex method to increase population diversity, strengthen constraint processing efficiency, suppress premature convergence, expand the search scope, ensure the feasibility of candidate solutions, improve stability and scalability, accelerate convergence speed, and enhance solution quality. The extraction fitness and assessment metrics of the SBSLO surpass those of the other algorithms. The SBSLO identifies the optimum cost's fitness of 26.499495 with the assessment metrics (d, t) = (5.451104, 0.291927).
Table 8. Statistical results of tubular column.
Algorithm | d | t | Optimum cost |
|---|---|---|---|
KH46 | 5.451278 | 0.291957 | 26.5314 |
BOA46 | 5.448426 | 0.292463 | 26.512782 |
HFBOA46 | 5.451157 | 0.291966 | 26.499503 |
EM47 | 5.452383 | 0.29190 | 26.53380 |
HEM47 | 5.451083 | 0.29199 | 26.53227 |
KOA1 | 5.4512 | 0.2920 | 26.499497 |
FLA1 | 5.4801 | 0.2905 | 26.563266 |
COA1 | 5.4511 | 0.2920 | 26.501823 |
GTO1 | 5.4512 | 0.2920 | 26.499497 |
RUN1 | 5.4512 | 0.2920 | 26.499497 |
GWO1 | 5.4511 | 0.2920 | 26.499770 |
SMA1 | 5.4512 | 0.2920 | 26.499538 |
DO1 | 5.4512 | 0.2920 | 26.499497 |
POA1 | 5.4512 | 0.2920 | 26.499497 |
PDO48 | 6.182678144 | 0.2 | 24.615 |
DMOA48 | 6.182683216 | 0.2 | 24.615 |
AOA48 | 5.45115623 | 0.29196547 | 24.616 |
SSA48 | 5.45115623 | 0.29196547 | 24.615 |
GSA35 | 5.451163397 | 0.291965509 | 26.531364472 |
TTAO35 | 5.452181 | 0.291626 | 26.51816147 |
SBSLO | 5.451104 | 0.291927 | 26.499495 |
Piston lever
The ultimate motivation is to minimize the cumulative oil volume as illustrated in Fig. 10, which highlights four assessment metrics: the piston lever dimensions x1, x2, x3 and x4. The mathematical framework is established as:
Fig. 10 [Images not available. See PDF.]
Piston lever.
Consider (56), Minimize (57), Subject to (58)–(66), Variable range (67) [Equations not available. See PDF.]
Table 9 conveys the statistical results of the piston lever. The SBSLO utilizes group cooperative foraging, pheromone sharing and adaptive adjustment to achieve global exploration coverage and local fine-tuning exploitation, reduce the divergence and oscillation of the search route, enhance solution quality and improve convergence accuracy. The extraction fitness and assessment metrics of the SBSLO surpass those of the other algorithms. The SBSLO identifies the optimum fitness of 8.409875 with the assessment metrics (x1, x2, x3, x4) = (0.050, 2.041, 119.98, 4.084).
Table 9. Statistical results of piston lever.
Algorithm | x1 | x2 | x3 | x4 | Optimum weight |
|---|---|---|---|---|---|
CS49 | 0.050 | 2.043 | 120 | 4.085 | 8.427 |
SNS50 | 0.050 | 2.042 | 120 | 4.083 | 8.412698349 |
CSO38 | 0.050 | 2.399 | 85.68 | 4.0804 | 13.7094866557362 |
WAO38 | 0.099 | 2.057 | 118.4 | 4.112 | 9.05943208079399 |
SSA38 | 0.050 | 2.073 | 116.32 | 4.145 | 8.80243253777633 |
GSA38 | 497.49 | 500 | 60.041 | 2.215 | 168.094363238712 |
BWO38 | 12.364 | 12.801 | 172.02 | 3.074 | 95.9980864948937 |
AOS51 | 0.05 | 2.042112482 | 119.951727 | 4.084004492 | 8.419142742 |
GTO52 | 0.05 | 2.052859 | 119.6392 | 4.089713 | 8.41270 |
MFO52 | 0.05 | 2.041514 | 120 | 4.083365 | 8.412698 |
WOA52 | 0.051874 | 2.045915 | 119.9579 | 4.085849 | 8.449975 |
ISA48 | N/A | N/A | N/A | N/A | 8.4 |
CGO48 | N/A | N/A | N/A | N/A | 8.41281381 |
MGA48 | N/A | N/A | N/A | N/A | 8.41340665 |
TTAO35 | 0.05 | 2.041514 | 4.083027 | 120 | 8.412698323 |
EGO7 | 1.979653079 | 3.652740666 | 2.031507236 | 526.379188 | 8.41269886 |
MVO7 | 0.05 | 2.046900355 | 4.095582502 | 119.92924 | 8.57509432 |
ALO7 | 0.05 | 2.051360067 | 4.102693186 | 118.821159 | 8.53445096 |
CS-EO7 | 0.05 | 2.041514 | 4.083027 | 120 | 8.412698 |
SBSLO | 0.050 | 2.041 | 119.98 | 4.084 | 8.409875 |
Tension/compression spring
The ultimate motivation is to minimize the cumulative weight as illustrated in Fig. 11, which highlights three assessment metrics: the wire diameter (d), the mean coil diameter (D) and the number of active coils (N). The mathematical framework is established as:
Fig. 11 [Images not available. See PDF.]
Tension/compression spring.
Consider (68), Minimize (69), Subject to (70)–(73), Variable range (74) [Equations not available. See PDF.]
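For reference, the tension/compression spring formulation commonly used in the literature is shown below; it is assumed to match the authors' Eqs. (68)–(74) up to notation:

```latex
\[ \min_{d,\,D,\,N}\; f = (N + 2)\,D\,d^{2} \]
\[ \text{s.t.}\quad 1 - \frac{D^{3} N}{71785\,d^{4}} \le 0, \qquad \frac{4D^{2} - dD}{12566\,(D d^{3} - d^{4})} + \frac{1}{5108\,d^{2}} - 1 \le 0 \]
\[ 1 - \frac{140.45\,d}{D^{2} N} \le 0, \qquad \frac{D + d}{1.5} - 1 \le 0, \qquad 0.05 \le d \le 2, \quad 0.25 \le D \le 1.3, \quad 2 \le N \le 15 \]
```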
Table 10 conveys the statistical results of the tension/compression spring. The SBSLO emphasizes formidable flexibility and sustainability, achieving a collaborative search of coarse exploration and refined exploitation to avoid search stagnation, maintain the diversity of solutions, strengthen feasibility and superiority, and accelerate convergence speed. The extraction fitness and assessment metrics of the SBSLO surpass those of the other algorithms. The SBSLO identifies the optimum weight's fitness of 0.0126546 with the assessment metrics (d, D, N) = (0.051089, 0.342909, 12.0896).
Table 10. Statistical results of tension/compression spring.
Algorithm | d | D | N | Optimum weight |
|---|---|---|---|---|
BKA44 | 0.051173 | 0.344426 | 12.047782 | 0.01267027 |
SHO44 | 0.0508 | 0.334800 | 11.702 | 0.012681 |
TTAO35 | 0.051674 | 0.051674 | 11.31044 | 0.012665 |
WO5 | 0.05 | 0.3115 | 14.8923 | 0.012665 |
RCGO53 | 0.052866 | 0.385549 | 9.787463 | 0.012701 |
AGWO53 | 0.051082 | 0.34226 | 12.19919 | 0.012681 |
MTBO54 | 0.05173 | 0.35771 | 11.2312 | 0.012665 |
OOBO55 | 0.05107 | 0.34288 | 12.08809 | 0.012655 |
AEFA45 | 0.0572892 | 0.5068545 | 9.8167697 | 0.0196575 |
PSA45 | 0.05 | 0.3174241 | 14.0323211 | 0.0127226 |
BSLO12 | 0.051669 | 0.356226 | 11.31788 | 0.0126652 |
FOX12 | 0.051983 | 0.363808 | 10.88657 | 0.0126686 |
FLO10 | 0.0516891 | 0.3567177 | 11.288966 | 0.0126652 |
LFD56 | 0.0517 | 0.3575 | 11.2442 | 0.0127 |
BBOA56 | 0.051344 | 0.334881 | 12.6223 | 0.012667 |
GRO56 | 0.0517082206 | 0.35717883 | 11.2619852 | 0.012665 |
MadDE36 | 0.0518 | 0.358 | 11.2 | 0.0127 |
RIME36 | 0.0693 | 0.940 | 2 | 0.0181 |
SBOA36 | 0.0517 | 0.357 | 11.3 | 0.0127 |
SFOA34 | 0.0517 | 0.35686 | 11.28038 | 0.01267 |
SBSLO | 0.051089 | 0.342909 | 12.0896 | 0.0126546 |
Car side impact
The ultimate motivation is to minimize the cumulative weight as illustrated in Fig. 12, which highlights eleven assessment metrics: the B-pillar inner width x1, B-pillar reinforcement x2, floor side inner x3, cross member x4, door beam x5, door beltline reinforcement x6, roof rail x7, B-pillar inner x8, floor side inner x9, barrier height x10 and crashing location x11. The mathematical framework is established as:
Fig. 12 [Images not available. See PDF.]
Car side impact.
Consider (75), Minimize (76), Subject to (77)–(86), Variable range (87) [Equations not available. See PDF.]
Table 11 conveys the statistical results of the car side impact. The SBSLO combines the directional and directionless leech foraging mechanisms of the BSLO with the reflection, expansion and contraction of the simplex method to achieve a dynamic balance between global exploration and local exploitation, expedite convergence speed, promote solution quality, and enhance robustness and reliability. The extraction fitness and assessment metrics of the SBSLO surpass those of the other algorithms. The SBSLO identifies the optimum weight's fitness of 22.84294 with the assessment metrics (x1, …, x11) = (0.5, 1.11674, 0.5, 1.30207, 0.5, 1.5, 0.5, 0.345, 0.192, −19.67521, −0.00432).
Table 11. Statistical results of car side impact.
Algorithm | x1 | x2 | x3 | x4 | x5 | x6 |
| x7 | x8 | x9 | x10 | x11 | Optimum weight |
|---|---|---|---|---|---|---|---|
(Each algorithm occupies two consecutive rows: the first lists x1–x6 and the second lists x7–x11 followed by the optimum weight.)
HHO57 | 0.5 | 1.15627 | 0.5 | 1.27133 | 0.5 | 1.4777 | |
0.5 | 0.345 | 0.192 | − 14.592 | − 2.4898 | 22.98537 | ||
BOA57 | 0.8246 | 1.03224 | 0.54007 | 1.35639 | 0.6377 | 1.26889 | |
0.5854 | 0.192 | 0.345 | − 5.7333 | 0.4352 | 25.06573 | ||
HGSO57 | 0.5 | 1.22375 | 0.5 | 1.27111 | 0.5 | 1.31085 | |
0.5 | 0.345 | 0.345 | − 4.3235 | 2.93676 | 23.43457 | ||
DOA58 | 0.5081 | 1.2021 | 0.5318 | 1.3052 | 0.5719 | 1.4954 | |
0.5557 | 0.303 | 0.2585 | − 24.8171 | 3.4047 | 23.9682 | ||
DCS58 | 0.5772 | 1.2586 | 0.5195 | 1.2002 | 0.5463 | 1.258 | |
0.5073 | 0.278 | 0.2669 | 2.0888 | 5.4035 | 23.9995 | ||
COA58 | 0.5 | 1.2791 | 0.5 | 1.2739 | 1.2828 | 0.5 | |
0.5 | 0.2954 | 0.192 | 3.557 | 19.0792 | 25.2083 | ||
MSA58 | 0.5151 | 1.2684 | 0.5545 | 1.3737 | 0.5261 | 1.3484 | |
0.7156 | 0.2869 | 0.2167 | − 7.2394 | 11.7869 | 25.2334 | ||
HLOA58 | 0.5 | 1.0669 | 0.8016 | 1.0704 | 0.504 | 1.4873 | |
0.5 | 0.192 | 0.192 | − 29.9786 | 3.2119 | 23.6956 | ||
AROA58 | 0.5 | 1.5 | 0.5 | 1.2928 | 0.5 | 0.5 | |
ETO37 | 0.5 | 0.192 | 0.3195 | 8.8265 | 23.0874 | 25.3642 | |
0.50282 | 1.2414 | 0.51604 | 1.2201 | 0.60334 | 1.3878 | ||
0.5 | 0.74832 | 0.06747 | 2.2526 | − 7.2818 | 23.2574 | ||
SCHO37 | 0.5 | 1.10286 | 0.87088 | 0.88643 | 0.52609 | 1.49992 | |
0.5 | 0.03508 | 0.19439 | − 30 | − 0.5913 | 23.7209 | ||
AOA37 | 0.5 | 1.2279 | 0.5 | 1.4332 | 0.5 | 1.5 | |
0.5 | 0.61018 | 0.21619 | 0.00126 | − 0.0765 | 24.1125 | ||
HGS37 | 0.5 | 1.10612 | 1.11044 | 0.5 | 0.5 | 1.5 | |
GJO37 | 0.5 | 4.4E− 09 | 0.00000 | − 30 | − 6.0E− 09 | 23.8188 | |
0.5 | 1.20309 | 0.50327 | 1.28778 | 0.51053 | 1.5 | ||
0.5 | 0.00000 | 9.5E− 05 | − 22.115 | − 0.0536 | 23.4052 | ||
ROA59 | 1.098334 901 | 0.957459058 | 1.112521155 | 1.043356648 | 0.730817433 | 1.009550656 | |
0.51561597 | 0.345 | 0.345 | 0.053235933 | 0.042350889 | 28.40584747 | ||
MSROA59 | 0.5 | 1.228404689 | 0.5 | 1.212576244 | 0.5 | 0.982707235 | |
0.5 | 0.345 | 0.345 | 0.205169794 | 2.462754222 | 23.23089984 | ||
SCSO59 | 0.502366774 | 1.23533939 | 0.5 | 1.223008761 | 0.515267967 | 1.39187245 | |
0.50003369 | 0.340647775 | 0.211950171 | 1.374158706 | − 7.77399175 | 23.35787723 | ||
SHO59 | 1.5 | 1.267885192 | 1.5 | 0.768783364 | 1.11811662 | 0.74785158 | |
0.56089667 | 0.345 | 0.345 | 2.050521688 | 3.263049114 | 34.86111849 | ||
SOA59 | 0.500139239 | 1.254868587 | 0.5 | 1.205871077 | 0.739233716 | 0.772309974 | |
0.5 | 0.316999014 | 0.30308334 | 0.749660043 | 2.039711514 | 23.8070425 | ||
SFOA34 | 0.5 | 1.234 | 0.5 | 1.187 | 0.875 | 0.892 | |
0.4 | 0.345 | 0.192 | 1.5 | 0.572 | 23.5616 | ||
SBSLO | 0.5 | 1.11674 | 0.5 | 1.30207 | 0.5 | 1.5 | |
0.5 | 0.345 | 0.192 | − 19.67521 | − 0.00432 | 22.84294 | ||
Conclusion and future research
In this paper, an enhanced BSLO with the simplex method (SBSLO) is proposed to train the FNNs; the purpose is not only to utilize group collaborative exploration and refined directional exploitation to obtain the optimal connection weights and bias thresholds, but also to measure the deviation between the expected output and the actual output to evaluate sample training efficiency and prediction classification accuracy. The simplex method can strengthen directional coarse-grained exploration and fine-grained exploitation, enhance stability and scalability, increase population diversity, improve constraint processing efficiency, ensure the feasibility of candidate solutions, suppress premature convergence, expand the search scope, reduce parameter sensitivity, accelerate convergence speed, and enhance solution quality. The BSLO is motivated by the prey-grabbing habits of blood-sucking leeches in rice paddies and emulates the exploration, exploitation and switching mechanisms of directional leeches, the search mechanism of directionless leeches, and the re-tracking mechanism to accomplish global coarse discovery and local elaborated extraction, thereby ascertaining the desirable solution. The SBSLO integrates the directional and directionless foraging behavior of the BSLO with the reflection, expansion, and contraction of the simplex method to achieve synergistic advantages, improve sample training efficiency, refresh the leeches' positions, reinforce the convergence procedure, and elevate solution quality. The experimental results demonstrate that the convergence speed, solution quality, sample training efficiency, prediction classification accuracy, stability and reliability of the SBSLO are superior to those of the KOA, NRBO, HLOA, IAO, WO, PKO, EGO, HEOA, APO, FLO, PO and BSLO. The SBSLO exhibits remarkable practicality and superiority in strengthening the connection weights and bias thresholds of neural networks, thereby enhancing robustness and generalization, and it is an efficient and reliable approach for training FNNs.
The limitations of the SBSLO are outlined as follows: (1) The scalability bottleneck on deep neural networks (NNs) mainly concerns insufficient model architecture and parallelization efficiency, poor robustness on small-sample and extremely imbalanced data, sensitivity to and dependency on data scale, hardware dependency and insufficient real-time performance, and the contradiction between model complexity and computing resources. (2) The SBSLO utilizes the non-linear fitting and temporal feature extraction of NNs to highlight the influence weights of critical variables and elevate computational efficiency. The SBSLO not only employs short-term, high-frequency time series forecasting with memory networks and gated recurrent units to tackle electricity load or traffic flow prediction, but also employs long-term, low-frequency time series forecasting with attention mechanism networks and temporal convolutional networks to capture economic indicators or environmental detection. (3) The SBSLO utilizes NNs to transform temporal data into high-dimensional features and capture nonlinear temporal patterns. The SBSLO not only employs univariate time series classification with one-dimensional convolutional networks to tackle equipment fault type recognition and human behavior classification, but also employs multivariate time series classification with bidirectional long short-term memory networks to capture financial risk identification and medical condition classification. (4) The SBSLO may encounter potential challenges in terms of computational complexity, local optimum traps, parameter sensitivity, high-dimensional spatial adaptability, population diversity maintenance, mathematical theory support, and convergence proof, which may delay convergence efficiency and reduce calculation accuracy. (5) For nonlinear, high-dimensional and dynamic multi-objective constrained application scenarios, the SBSLO may suffer from adaptation defects of the feasible region, search oscillation at the constraint boundary, the curse of dimensionality and objective conflicts, a collapse of computational efficiency, and an imbalance between exploration and exploitation. (6) The SBSLO exhibits robust stability and feasibility in training the FNNs and obtains superior training efficiency, prediction accuracy, convergence speed and solution quality; however, its practicality and universality still need to be verified through further application research.
Future research directions will mainly focus on modifying the SBSLO to adapt to complex data, integrating new technologies and methods, and expanding the application areas. (1) The SBSLO will integrate a deep-learning attention mechanism and a sample weight adjustment strategy to account for feature complexity and sample correlation, accurately reflect the importance of samples during training, refine the noise-sample recognition method, elevate recognition accuracy, automatically emphasize important samples, ignore noisy samples, and strengthen the adaptability and stability to complex data. (2) The SBSLO will adopt an adversarial training mechanism to enhance noise resistance, and employ generative adversarial networks to create minority class samples and ameliorate class imbalance. The SBSLO will integrate transfer learning to leverage noise-free data from other fields and enhance generality and robustness on complex datasets. (3) The SBSLO will be widely applied in practical fields with imbalanced classes and noisy data, such as financial fraud detection and industrial fault diagnosis, which will further validate and refine the algorithm's performance, accumulate rich practical experience, and furnish forceful support for solving complex data issues. (4) We will formally analyze the convergence guarantees of the SBSLO through rigorous mathematical derivation and convexity assumptions. We will focus on defining the premises, deriving the convergence conclusions, and clarifying the logical chain of rates and bounds to conduct the argumentation theoretically and avoid additional experimental verification steps. (5) We will combine reinforcement learning to dynamically adjust multi-objective weights and avoid search stagnation, utilize quantum entanglement and quantum-state vertex encoding to map simplex vertices into quantum superposition states, embed a multi-agent collaborative control mechanism to improve multi-modal global detection, and couple a digital-twin driving mechanism with meta-learning parameter tuning to overcome the hyperparameter dimension disaster. (6) Relying on the platform of the Anhui Provincial Engineering Research Center for Intelligent Equipment of Forest Undergrowth Crops, we will use multi-dimensional sensor fusion and environmental perception to obtain real-time data on the understory environment and crop growth, and achieve data transmission and cloud analysis. We will develop intelligent operation equipment based on understory navigation and obstacle avoidance, precise operation execution, and low-power, lightweight design to achieve precise sowing, fertilizing, spraying, and harvesting. We will utilize machine learning and deep learning, edge computing and cloud computing, human–machine collaboration, and remote control to achieve intelligent decision-making and automatic control of the operational equipment, improving adaptability and reliability and reducing the failure rate.
Acknowledgements
This work was partially funded by the Natural Science Key Research Project of Anhui Educational Committee under Grant No. 2024AH051989, Horizontal topic: Research on path planning technology of smart agriculture and forestry harvesting robots based on evolutionary algorithms under Grant No. 0045024064, Start-up Fund for Distinguished Scholars of West Anhui University under Grant No. WGKQ2022052, School-level Quality Engineering (Teaching and Research Project) under Grant No. wxxy2023079, and School-level Quality Engineering (School-enterprise Cooperation Development Curriculum Resource Construction) under Grant No. wxxy2022101. The authors would like to thank the editor and anonymous reviewers for their helpful comments and suggestions.
Author contributions
Anqi Jin: Investigation, Software, Resources, Supervision, Validation, Formal analysis, Project administration, Visualization, Writing – original draft, Writing – review & editing. Jinzhong Zhang: Conceptualization, Methodology, Software, Validation, Data curation, Formal analysis, Visualization, Writing – original draft and Funding acquisition.
Funding
This work was partially funded by the Natural Science Key Research Project of Anhui Educational Committee under Grant No. 2024AH051989, Horizontal topic: Research on path planning technology of smart agriculture and forestry harvesting robots based on evolutionary algorithms under Grant No. 0045024064, Start-up Fund for Distinguished Scholars of West Anhui University under Grant No. WGKQ2022052, School-level Quality Engineering (Teaching and Research Project) under Grant No. wxxy2023079, and School-level Quality Engineering (School-enterprise Cooperation Development Curriculum Resource Construction) under Grant No. wxxy2022101. The authors would like to thank the editor and anonymous reviewers for their helpful comments and suggestions.
Data availability
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. All data generated or analyzed during this study are included directly in the text of this submitted manuscript. There are no additional external files with datasets.
Declarations
Competing interests
The authors declare no competing interests.
Abbreviations
BSLO: Blood-sucking leech optimization
SBSLO: Blood-sucking leech optimization with simplex method
KOA: Kepler optimization algorithm
NRBO: Newton-Raphson-based optimization
HLOA: Horned lizard defense tactics
IAO: Information acquisition optimization
WO: Walrus optimization
PKO: Pied kingfisher optimization
EGO: Eel and grouper optimization
HEOA: Human evolutionary optimization algorithm
APO: Arctic puffin optimization
FLO: Frilled lizard optimization
PO: Parrot optimization
FNNs: Feedforward neural networks
NFL: No-free-lunch
Best: Optimal value
Worst: Worst value
Mean: Mean value
Std: Standard deviation
Median: Median value
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
1. Abdel-Basset, M; Mohamed, R; Azeem, SAA; Jameel, M; Abouhawwash, M. Kepler optimization algorithm: A new metaheuristic algorithm inspired by Kepler’s laws of planetary motion. Knowl. Based Syst.; 2023; 268, [DOI: https://dx.doi.org/10.1016/j.knosys.2023.110454] 110454.
2. Sowmya, R; Premkumar, M; Jangir, P. Newton-Raphson-based optimizer: A new population-based metaheuristic algorithm for continuous optimization problems. Eng. Appl. Artif. Intell.; 2024; 128, [DOI: https://dx.doi.org/10.1016/j.engappai.2023.107532] 107532.
3. Peraza-Vázquez, H; Peña-Delgado, A; Merino-Treviño, M; Morales-Cepeda, AB; Sinha, N. A novel metaheuristic inspired by horned lizard defense tactics. Artif. Intell. Rev.; 2024; 57, 59. [DOI: https://dx.doi.org/10.1007/s10462-023-10653-7]
4. Wu, X; Li, S; Jiang, X; Zhou, Y. Information acquisition optimizer: a new efficient algorithm for solving numerical and constrained engineering optimization problems. J. Supercomput.; 2024; 80, pp. 25736-25791. [DOI: https://dx.doi.org/10.1007/s11227-024-06384-3]
5. Han, M et al. Walrus optimizer: A novel nature-inspired metaheuristic algorithm. Expert Syst. Appl.; 2024; 239, [DOI: https://dx.doi.org/10.1016/j.eswa.2023.122413] 122413.
6. Bouaouda, A; Hashim, FA; Sayouti, Y; Hussien, AG. Pied kingfisher optimizer: a new bio-inspired algorithm for solving numerical optimization and industrial engineering problems. Neural Comput. Appl.; 2024; 36, pp. 15455-15513.
7. Mohammadzadeh, A; Mirjalili, S. Eel and grouper optimizer: a nature-inspired optimization algorithm. Clust. Comput.; 2024; 27, pp. 12745-12786. [DOI: https://dx.doi.org/10.1007/s10586-024-04545-w]
8. Lian, J; Hui, G. Human evolutionary optimization algorithm. Expert Syst. Appl.; 2024; 241, [DOI: https://dx.doi.org/10.1016/j.eswa.2023.122638] 122638.
9. Wang, W; Tian, W; Xu, D; Zang, H. Arctic puffin optimization: A bio-inspired metaheuristic algorithm for solving engineering design optimization. Adv. Eng. Softw.; 2024; 195, [DOI: https://dx.doi.org/10.1016/j.advengsoft.2024.103694] 103694.
10. Falahah, I. A. et al. Frilled lizard optimization: A novel bio-inspired optimizer for solving engineering applications. Comput. Mater. Contin.79 (2024).
11. Lian, J et al. Parrot optimizer: Algorithm and applications to medical problems. Comput. Biol. Med.; 2024; 172, [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/38452469][DOI: https://dx.doi.org/10.1016/j.compbiomed.2024.108064] 108064.
12. Bai, J et al. Blood-sucking leech optimizer. Adv. Eng. Softw.; 2024; 195, [DOI: https://dx.doi.org/10.1016/j.advengsoft.2024.103696] 103696.
13. Yang, J. et al. Feedforward-adaptive neural network control for ultra-precision positioning in magnetic levitation motion stages. Precis. Eng. (2025).
14. Li, T et al. A digital background calibration method for SAR ADC based on dual-layer feedforward neural network. Microelectron. J.; 2025; 159, [DOI: https://dx.doi.org/10.1016/j.mejo.2025.106645] 106645.
15. Schreuder, A; Bosman, AS; Engelbrecht, AP; Cleghorn, CW. Training feedforward neural networks with Bayesian hyper-heuristics. Inf. Sci.; 2025; 686, [DOI: https://dx.doi.org/10.1016/j.ins.2024.121363] 121363.
16. Ravichandran, N; Lansner, A; Herman, P. Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks. Neurocomputing; 2025; 626, [DOI: https://dx.doi.org/10.1016/j.neucom.2025.129440] 129440.
17. Yang, Z; Yang, Q; Lu, W-Z. DeepMonte-Frame: an intelligent workflow for planar steel frame design based on Monte Carlo Tree search and feedforward neural networks. Adv. Eng. Inform.; 2025; 66, [DOI: https://dx.doi.org/10.1016/j.aei.2025.103510] 103510.
18. Lasheen, A; Sindi, HF; Zeineldin, HH; Morgan, MY. Online stability assessment for isolated microgrid via LASSO based neural network algorithm. Energy Convers. Manag. X; 2025; 25, 100849.
19. Ebid, SE; El-Tantawy, S; Shawky, D; Abdel-Malek, HL. Correlation-based pruning algorithm with weight compensation for feedforward neural networks. Neural Comput. Appl.; 2025; 37, pp. 6351-6367. [DOI: https://dx.doi.org/10.1007/s00521-024-10932-6]
20. Sarkodie, K. et al. Predicting physico-chemical parameters of barekese reservoir using feedforward neural network. Sci. Afr. e02779 (2025).
21. Lai, C-T; Blank, C; Schmelcher, P; Mukherjee, R. Towards arbitrary QUBO optimization: analysis of classical and quantum-activated feedforward neural networks. Mach. Learn. Sci. Technol.; 2025; 6, [DOI: https://dx.doi.org/10.1088/2632-2153/addb97] 025049.
22. Mehrkash, M; Santini-Bell, E. Robustness analysis of multi-layer feedforward artificial neural networks for finite element model updating. Appl. Soft Comput.; 2025; 171, [DOI: https://dx.doi.org/10.1016/j.asoc.2025.112799] 112799.
23. Ali, KAM et al. Performance evaluation and prediction of optimal operational conditions for a compact date seeds milling unit using feedforward neural networks. Sci. Rep.; 2025; 15, 4764. [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/39922878][PubMedCentral: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11807147][DOI: https://dx.doi.org/10.1038/s41598-025-87508-4]
24. Sun, M et al. The driving strategy optimization for an electro-hydraulic valve based on finite element model and feedforward neural network. J. Braz. Soc. Mech. Sci. Eng.; 2025; 47, pp. 1-17. [DOI: https://dx.doi.org/10.1007/s40430-025-05588-9]
25. Nascimento, M; Müller, C; Mayer, KS. Split-complex feedforward neural network for GFDM joint channel equalization and signal detection. Signal Process.; 2025; 233, [DOI: https://dx.doi.org/10.1016/j.sigpro.2025.109956] 109956.
26. Hedayati-Dezfooli, M et al. Optimizing Injection molding for propellers with soft computing, fuzzy evaluation, and Taguchi method. Emerg Sci J; 2024; 8, pp. 2101-2119. [DOI: https://dx.doi.org/10.28991/ESJ-2024-08-05-025]
27. Kostyrin, EV; Perekhodov, SN; Bagdasaryan, GG; Arutyunov, SD. A novel view on dental service management optimization using Markov processes. J Human Earth Future; 2024; 5, pp. 366-386. [DOI: https://dx.doi.org/10.28991/HEF-2024-05-03-05]
28. Widians, JA; Wardoyo, R; Hartati, S. A hybrid ant colony and grey wolf optimization algorithm for exploitation-exploration balance. Emerg. Sci. J.; 2024; 8, pp. 1642-1654. [DOI: https://dx.doi.org/10.28991/ESJ-2024-08-04-023]
29. Kaya, E et al. Training of feed-forward neural networks by using optimization algorithms based on swarm-intelligent for maximum power point tracking. Biomimetics; 2023; 8, 402. [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/37754153][PubMedCentral: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10526777][DOI: https://dx.doi.org/10.3390/biomimetics8050402]
30. Song, S; Jiang, Y; Song, X; Stojanovic, V. Composite neural learning-based adaptive actuator failure compensation control for full-state constrained autonomous surface vehicle. Neural Comput. Appl.; 2025; 37, pp. 6369-6381. [DOI: https://dx.doi.org/10.1007/s00521-024-10651-y]
31. Xie, B et al. PID-fuzzy switching-based strategy to heading control for remote operated vehicle. Neural Comput. Appl.; 2024; 37, pp. 16131-16147. [DOI: https://dx.doi.org/10.1007/s00521-024-10911-x]
32. Duan, Z et al. Fast magnetic field compensation algorithm based on the Nelder-Mead simplex method for dual-beam optically pumped magnetometers. Measurement; 2025; 243, 116376. [DOI: https://dx.doi.org/10.1016/j.measurement.2024.116376]
33. Zhang, J; Liu, W; Zhang, G; Zhang, T. Quantum encoding whale optimization algorithm for global optimization and adaptive infinite impulse response system identification. Artif. Intell. Rev.; 2025; 58, 158. [DOI: https://dx.doi.org/10.1007/s10462-025-11120-1]
34. Zhong, C et al. Starfish optimization algorithm (SFOA): a bio-inspired metaheuristic algorithm for global optimization compared with 100 optimizers. Neural Comput. Appl.; 2025; 37, pp. 3641-3683. [DOI: https://dx.doi.org/10.1007/s00521-024-10694-1]
35. Zhao, S; Zhang, T; Cai, L; Yang, R. Triangulation topology aggregation optimizer: A novel mathematics-based meta-heuristic algorithm for continuous optimization and engineering applications. Expert Syst. Appl.; 2024; 238, 121744. [DOI: https://dx.doi.org/10.1016/j.eswa.2023.121744]
36. Fu, Y; Liu, D; Chen, J; He, L. Secretary bird optimization algorithm: a new metaheuristic for solving global optimization problems. Artif. Intell. Rev.; 2024; 57, 123. [DOI: https://dx.doi.org/10.1007/s10462-024-10729-y]
37. Luan, TM; Khatir, S; Tran, MT; De Baets, B; Cuong-Le, T. Exponential-trigonometric optimization algorithm for solving complicated engineering problems. Comput. Methods Appl. Mech. Eng.; 2024; 432, 117411. [DOI: https://dx.doi.org/10.1016/j.cma.2024.117411]
38. Seyyedabbasi, A; Kiani, F. Sand Cat swarm optimization: A nature-inspired algorithm to solve global optimization problems. Eng. Comput.; 2023; 39, pp. 2627-2651. [DOI: https://dx.doi.org/10.1007/s00366-022-01604-x]
39. Naruei, I; Keynia, F; Sabbagh Molahosseini, A. Hunter–prey optimization: Algorithm and applications. Soft Comput.; 2022; 26, pp. 1279-1314. [DOI: https://dx.doi.org/10.1007/s00500-021-06401-0]
40. Rizk-Allah, RM; Hassanien, AE. A movable damped wave algorithm for solving global optimization problems. Evol. Intell.; 2019; 12, pp. 49-72. [DOI: https://dx.doi.org/10.1007/s12065-018-0187-8]
41. Agushaka, J. O. et al. Greater cane rat algorithm (GCRA): A nature-inspired metaheuristic for optimization problems. Heliyon 10 (2024).
42. Guo, Z; Liu, G; Jiang, F. Chinese Pangolin Optimizer: A novel bio-inspired metaheuristic for solving optimization problems. J. Supercomput.; 2025; 81, 517. [DOI: https://dx.doi.org/10.1007/s11227-025-07004-4]
43. Emami, H. Stock exchange trading optimization algorithm: a human-inspired method for global optimization. J. Supercomput.; 2022; 78, pp. 2125-2174. [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34188358][DOI: https://dx.doi.org/10.1007/s11227-021-03943-w]
44. Wang, J; Wang, W; Hu, X; Qiu, L; Zang, H. Black-winged kite algorithm: a nature-inspired meta-heuristic for solving benchmark functions and engineering problems. Artif. Intell. Rev.; 2024; 57, 98. [DOI: https://dx.doi.org/10.1007/s10462-024-10723-4]
45. Qais, MH; Hasanien, HM; Alghuwainem, S; Loo, KH. Propagation search algorithm: a physics-based optimizer for engineering applications. Mathematics; 2023; 11, 4224. [DOI: https://dx.doi.org/10.3390/math11204224]
46. Zhang, M; Wang, D; Yang, J. Hybrid-flash butterfly optimization algorithm with logistic mapping for solving the engineering constrained optimization problems. Entropy; 2022; 24, 525. [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35455188][PubMedCentral: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9028546][DOI: https://dx.doi.org/10.3390/e24040525]
47. Rocha, AMA; Fernandes, EM. Hybridizing the electromagnetism-like algorithm with descent search for solving engineering design problems. Int. J. Comput. Math.; 2009; 86, pp. 1932-1946. [DOI: https://dx.doi.org/10.1080/00207160902971533]
48. Ezugwu, AE; Agushaka, JO; Abualigah, L; Mirjalili, S; Gandomi, AH. Prairie dog optimization algorithm. Neural Comput. Appl.; 2022; 34, pp. 20017-20065. [DOI: https://dx.doi.org/10.1007/s00521-022-07530-9]
49. Rechenberg, I. Evolutionsstrategien. In Simulationsmethoden in der Medizin und Biologie: Workshop, Hannover, 29. Sept.–1. Okt. 1977, 83–114 (Springer, 1978).
50. Bayzidi, H; Talatahari, S; Saraee, M; Lamarche, C-P. Social network search for solving engineering optimization problems. Comput. Intell. Neurosci.; 2021; 2021, 8548639. [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34630556][PubMedCentral: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8497131][DOI: https://dx.doi.org/10.1155/2021/8548639]
51. Azizi, M; Talatahari, S; Giaralis, A. Optimization of engineering design problems using atomic orbital search algorithm. IEEE Access; 2021; 9, pp. 102497-102519. [DOI: https://dx.doi.org/10.1109/ACCESS.2021.3096726]
52. Sadeeq, HT; Abdulazeez, AM. Giant trevally optimizer (GTO): A novel metaheuristic algorithm for global optimization and challenging engineering problems. IEEE Access; 2022; 10, pp. 121615-121640. [DOI: https://dx.doi.org/10.1109/ACCESS.2022.3223388]
53. Jia, H; Rao, H; Wen, C; Mirjalili, S. Crayfish optimization algorithm. Artif. Intell. Rev.; 2023; 56, pp. 1919-1979. [DOI: https://dx.doi.org/10.1007/s10462-023-10567-4]
54. Faridmehr, I; Nehdi, ML; Davoudkhani, IF; Poolad, A. Mountaineering team-based optimization: A novel human-based metaheuristic algorithm. Mathematics; 2023; 11, 1273. [DOI: https://dx.doi.org/10.3390/math11051273]
55. Dehghani, M; Trojovská, E; Trojovskỳ, P; Malik, OP. OOBO: a new metaheuristic algorithm for solving optimization problems. Biomimetics; 2023; 8, 468. [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/37887599][PubMedCentral: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10604662][DOI: https://dx.doi.org/10.3390/biomimetics8060468]
56. Zolfi, K. Gold rush optimizer: A new population-based metaheuristic algorithm. Oper. Res. Decis.; 2023; 33, pp. 113-150.
57. Zamani, H; Nadimi-Shahraki, MH; Gandomi, AH. Starling murmuration optimizer: A novel bio-inspired algorithm for global and engineering optimization. Comput. Methods Appl. Mech. Eng.; 2022; 392, 114616. [DOI: https://dx.doi.org/10.1016/j.cma.2022.114616]
58. Lang, Y; Gao, Y. Dream optimization algorithm (DOA): A novel metaheuristic optimization algorithm inspired by human dreams and its applications to real-world engineering problems. Comput. Methods Appl. Mech. Eng.; 2025; 436, 117718. [DOI: https://dx.doi.org/10.1016/j.cma.2024.117718]
59. Jia, H et al. Multi-strategy remora optimization algorithm for solving multi-extremum problems. J. Comput. Des. Eng.; 2023; 10, pp. 1315-1349.
© The Author(s) 2025. This work is published under http://creativecommons.org/licenses/by-nc-nd/4.0/ (the "License").