Abstract
Curvature lines are special and important curves on surfaces, and constructing developable surfaces that interpolate curvature lines is of great significance in engineering applications. In this paper, the shape optimization of the generalized cubic Ball developable surface interpolated on a curvature line is studied by using an improved reptile search algorithm. Firstly, based on the curvature line given by a generalized cubic Ball curve with adjustable shape, this paper presents the construction method of the SGC-Ball developable surface interpolated on the curve. Secondly, a feedback mechanism, adaptive parameters and a mutation strategy are introduced into the reptile search algorithm, and the resulting feedback mechanism-driven improved reptile search algorithm effectively improves the solving precision. On the IEEE Congress on Evolutionary Computation (CEC) 2014, 2017 and 2019 test suites and four engineering design problems, the feedback mechanism-driven improved reptile search algorithm is compared with other representative methods, and the results indicate that its solution performance is competitive. Finally, taking the minimum energy as the evaluation index, the shape optimization model of the SGC-Ball interpolation developable surface is established. The developable surface with the minimum energy is obtained with the help of the feedback mechanism-driven improved reptile search algorithm, and comparison experiments verify the superiority of the algorithm for the shape optimization problem.
Introduction
A developable surface can be unfolded into a plane without stretching or tearing, so it is often applied in manufacturing with non-stretchable materials such as leather, paper and metal plates, and is common in automobile and ship shape design, leather shoes and hats, origami and architectural design [1–3]. Due to these excellent properties, developable surfaces are widely used in the manufacturing field, and research on their construction, splicing and optimization is a hot issue in Computer-Aided Geometric Design (CAGD).
In terms of construction methods for developable surfaces, the point geometry representation [4–6] makes a ruled surface developable by imposing certain constraints. Aumann [4] constructed developable surfaces based on cubic and quartic Bézier curves. On this basis, Maekawa et al. [5] studied the design method of B-spline developable surfaces, and Ref. [6] constructed Bézier developable surfaces. However, such surfaces cannot be expressed explicitly, and they are difficult to compute because of the developability constraints imposed on the ruled surface. The line/plane geometric representation [7–9] is another method, which uses the duality between points and planes. Refs. [7, 8] give methods for constructing cubic Bézier and B-spline developable surfaces by using this duality, and Pottmann et al. [9] constructed NURBS developable surfaces with the same idea. A developable surface based on line/plane geometry has an explicit expression, but its shape is difficult to control. Therefore, developable surfaces with shape parameters have come into public view. For example, Zhou et al. [10] constructed a C-Bézier developable surface with one shape parameter, which has certain shape adjustability. Hu et al. [11] proposed an H-Bézier developable surface with three local shape parameters, which improves the local adjustment ability of the surface. In addition, since the Ball basis functions not only share some properties of the Bernstein basis functions but also are fast to evaluate, the generalized cubic Ball basis functions with four shape parameters were proposed in Ref. [12], and the corresponding developable surfaces were constructed, which have good global and local shape adjustment ability.
In addition, developable surfaces can be established by interpolating special curves. The curvature line is a special and important curve, which is common in surface analysis and can be used in geometric design, shape judgment and surface drawing [13, 14]. For example, Zhang et al. [15] put forward a method for computing curvature lines of implicit surfaces, and Li et al. [16] studied parametric surfaces with given curves as curvature lines. With regard to constructing developable surfaces by interpolating curvature lines, Ref. [17] provides a method for designing developable surfaces with given curvature lines, and Ref. [18] constructs H-Bézier developable surfaces by interpolating curvature lines. However, there are few studies on constructing Ball developable surfaces that interpolate curvature lines. Therefore, this paper uses the shape-adjustable generalized cubic Ball (SGC-Ball) curve with good performance proposed in Ref. [12] to construct SGC-Ball developable surfaces interpolated on curvature lines.
Since the SGC-Ball curve in Ref. [12] has good shape adjustability, the SGC-Ball interpolation developable surface constructed on it is also shape-adjustable. The key problem is how to ensure that the shape of the developable surface is optimal. Thus, we use the overall energy value [19–23] to measure the quality of the developable surface shape: the smaller the overall energy, the better the shape. The shape optimization model of the SGC-Ball interpolation developable surface is then built by combining the bounds of the shape parameters with the overall energy.
However, the construction of the SGC-Ball interpolation developable surface involves the Frénet frame, cross products and other operations, and the objective function of its shape optimization model contains derivative operations, which makes the model complicated and difficult to solve with commonly used classical methods. At present, intelligent algorithms have become a mainstream trend due to their simple calculation and strong adaptability [24–26]. Intelligent algorithms are inspired by the laws of nature and are used to solve various problems; they mainly include evolution-based [27], physics-based [28–30], human behavior-based [31, 32] and population-based algorithms. Among them, population-based algorithms mainly imitate the collective behavior of different creatures, for instance, the Particle Swarm Optimization (PSO) algorithm [33] and Ant Colony Optimization (ACO) [34], whose inspiration sources are the cooperative foraging behavior of bird flocks and ant colonies, respectively. In recent years, various algorithms simulating the biological habits of nature have been put forward. Yang et al. [35] proposed the Bat Algorithm (BA), imitating the behavior of bats that use sonar for detection and positioning. Gandomi et al. [36] proposed the Cuckoo Search (CS) algorithm according to the breeding characteristics of cuckoos. Refs. [37–39] proposed the Whale Optimization Algorithm (WOA), the Salp Swarm Algorithm (SSA) and the Grey Wolf Optimizer (GWO), respectively. Heidari et al. [40] presented the Harris Hawks Optimization (HHO) algorithm, which imitates the predation behavior of Harris hawks. Abualigah et al. [41] proposed the Reptile Search Algorithm (RSA), which comes from the hunting behavior of crocodiles. In addition, there are the Slime Mold Algorithm (SMA) [42], Chameleon Swarm Algorithm (CSA) [43], Wild Horse Optimizer (WHO) [44], Aquila Optimizer (AO) [45], Rat Swarm Optimization (RSO) algorithm [46], Jellyfish Search (JS) algorithm [47], Sine Cosine Algorithm (SCA) [48], Barnacles Mating Optimizer (BMO) [49], Lévy Flight Distribution (LFD) [50], Archimedes Optimization Algorithm (AOA) [51], Tunicate Swarm Algorithm (TSA) [52], Hunger Games Search (HGS) [53], Honey Badger Algorithm (HBA) [54], Prairie Dog Optimization (PDO) algorithm [55], Dwarf Mongoose Optimization (DMO) algorithm [56], Ebola Optimization Search Algorithm (EOSA) [57] and other emerging swarm intelligence algorithms.
Among them, RSA [41] was proposed by Abualigah et al. in 2022. The special feature of this algorithm lies in the use of four new mechanisms to update individual positions; it has strong searching ability and performs well on complex practical problems. In Ref. [58], RSA was applied to predict geotechnical parameters, and good prediction results were achieved. HRSA, which combines the original RSA with the Remora Optimization Algorithm (ROA) and handles their search processes through a novel transition method, has shown significant efficacy on various clustering problems [59]. ACO and RSA were integrated to select important Feature Subsets (FS) for loss prediction; based on the results and statistical analysis, ACO-RSA is an effective and superior method compared with other competing algorithms on most datasets [60]. A method for identifying engineering design parameters based on an improved optimizer, IRSA, has been proposed using the traditional RSA and Mutation Techniques (MT); the two search methods are used to find the optimal parameter value for a given problem and are driven by a new mean shift mechanism, which adjusts the search by switching between RSA and MT to avoid the main weaknesses of the original RSA, namely premature convergence and the imbalance between search methods [61]. In order to overcome the shortcomings of RSA, an improved RSA (mRSA) combining RSA with the RUN algorithm was proposed and proven to be a more successful multi-level threshold segmentation method [62]. A wrapper-based FS scheme called CRSA, which combines Chaotic Mapping (CM) with a binary reptile search algorithm, was proposed to solve various FS problems [63]. A Reptile Search Algorithm based on Lévy flight and an Interactive Crossover strategy (LICRSA) has been proposed and applied to engineering problems [64]. However, RSA still easily falls into local optima and suffers from limited solution accuracy when solving complex problems, so there is room for improvement. Therefore, in order to further overcome these limitations, this paper proposes an improved RSA, the Feedback mechanism-driven improved Reptile Search Algorithm (FRSA for short), which introduces a feedback mechanism, adaptive parameters and a mutation strategy into RSA. At the same time, the proposed algorithm is used to obtain the optimal shape of the SGC-Ball interpolation developable surface constructed above. The important contributions are listed below:
Taking the given SGC-Ball curve as the curvature line, the concrete steps of constructing the SGC-Ball developable surface interpolated on the curvature line are given, together with an example of the SGC-Ball interpolation developable surface.
The feedback mechanism based on the optimal solution and the adaptive parameter strategy are added in the exploration stage of RSA, and the mutation operation is exerted in the exploitation stage. These strategies have the following effects:
Implementing the optimal solution's feedback mechanism in the early exploration stage enlarges the search scope of RSA and enhances the randomness of the candidate solutions, while adding adaptive parameters in the late exploration stage enhances the adaptability to asymmetric search spaces.
Introducing the mutation strategy in the exploitation stage increases the diversity of individuals and enhances RSA's global searching capability and solution precision to some extent.
With the help of FRSA, combining the shape parameters and the minimum overall energy of the developable surface, the shape optimization model of the SGC-Ball interpolation developable surface is established and solved. Meanwhile, specific comparative examples are given.
The rest of the paper is organized as follows: Sect. 2 constructs the SGC-Ball developable surface interpolated on a specific curvature line and gives its construction steps. Section 3 develops the improved FRSA and describes the specific process. Section 4 reports the results of FRSA on CEC2014, CEC2017 and CEC2019 and verifies that FRSA has better solution performance. Section 5 gives specific examples and analysis of solving the shape optimization model of the SGC-Ball interpolation developable surface using FRSA; four numerical experiments demonstrate the feasibility and superiority of FRSA for the shape parameter problem. Section 6 summarizes the paper.
SGC-Ball Developable Surface Interpolated on Curvature Lines
Preliminaries
The main goal of this section is to construct a developable surface interpolated on a specific curvature line. The preliminary knowledge about ruled surfaces, developable surfaces and curvature lines of surfaces is first introduced to clearly deduce the expression of the developable surface.
Definition 2.1
Define the parametric equation of the ruled surface as
$\mathbf{R}(u,v)=\boldsymbol{\alpha}(u)+v\,\boldsymbol{\beta}(u),$  (1)
where $\boldsymbol{\alpha}(u)$ and $\boldsymbol{\beta}(u)$ represent the directrix and the unit direction vector of the ruled surface, respectively.
Definition 2.2
For any generatrix on a ruled surface, if all points on this generatrix share a common tangent plane, the ruled surface is called a developable surface.
Definition 2.3
For a parametric surface $\mathbf{r}(u,v)$ satisfying the following condition:
$\mathbf{r}_u\times\mathbf{r}_v\neq\mathbf{0},$  (2)
the surface is a regular surface. In Eq. (2), $\mathbf{r}_u$ and $\mathbf{r}_v$ are the first partial derivatives of $\mathbf{r}$ with respect to $u$ and $v$. The unit normal vector can then be expressed by the following formula:
$\mathbf{n}=\dfrac{\mathbf{r}_u\times\mathbf{r}_v}{\lVert\mathbf{r}_u\times\mathbf{r}_v\rVert}.$  (3)
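For readers who wish to verify Eqs. (2) and (3) numerically, the following minimal Python sketch approximates the partial derivatives of a parametric surface by finite differences and evaluates the cross-product formula for the unit normal; the cylinder test surface and the helper name unit_normal are hypothetical and used only for illustration.

```python
import numpy as np

def unit_normal(r, u, v, h=1e-6):
    """Approximate the unit normal n = (r_u x r_v) / |r_u x r_v| of a parametric
    surface r(u, v) using central finite differences for the partial derivatives."""
    r_u = (r(u + h, v) - r(u - h, v)) / (2 * h)   # first partial derivative with respect to u
    r_v = (r(u, v + h) - r(u, v - h)) / (2 * h)   # first partial derivative with respect to v
    cross = np.cross(r_u, r_v)
    norm = np.linalg.norm(cross)
    if norm < 1e-12:                              # Eq. (2) violated: the surface is not regular here
        raise ValueError("surface is not regular at this parameter value")
    return cross / norm

# hypothetical test surface (a cylinder patch), used only for illustration
cylinder = lambda u, v: np.array([np.cos(u), np.sin(u), v])
print(unit_normal(cylinder, 0.3, 0.5))            # points radially outward from the cylinder axis
```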
Definition 2.4
A curve on a surface is called a curvature line of the surface if its tangent direction at every point is a principal direction.
Theorem 2.1
The necessary and sufficient condition for the ruled surface $\mathbf{R}(u,v)=\boldsymbol{\alpha}(u)+v\,\boldsymbol{\beta}(u)$ to be developable is
$\bigl(\boldsymbol{\alpha}'(u),\ \boldsymbol{\beta}(u),\ \boldsymbol{\beta}'(u)\bigr)=0,$  (4)
i.e., the mixed product of $\boldsymbol{\alpha}'(u)$, $\boldsymbol{\beta}(u)$ and $\boldsymbol{\beta}'(u)$ vanishes.
Lemma 2.1
A curve on a surface is a line of curvature if and only if the normal surface along this curve is a developable surface.
Definition 2.5
The definition of the SGC-Ball curve [12] is as follows:
5
6
where the global and local shape parameters are recorded. In addition, the corresponding basis functions of the SGC-Ball curve are marked as in Eq. (6).
The Construction of SGC-Ball Interpolation Developable Surfaces
Given an SGC-Ball curve in space and regarding it as a curvature line, a developable surface interpolated on this curvature line can be constructed. The detailed construction process is given below:
Combined with the given control points, define a spatial generalized cubic Ball curve:
7
First, the normal surface of the given curve in Eq. (7) is defined as follows:
8
9
In Eq. (9), the angle formed with the normal vector and the unit principal/auxiliary normal vectors at a point of the curve appear; their specific expressions are given in Eqs. (10) and (11).
10
11
According to the definition of a ruled surface, the surface defined in Eq. (8) is a ruled surface. Based on Theorem 2.1, we have
12
Equation (12) can be expressed as the following equation through simple calculation
13
where the unit tangent vector that forms the Frénet frame with the principal normal and auxiliary normal vectors appears, together with their first derivatives; the detailed expressions are given in Eqs. (14) and (15).
14
15
Let
16
17
Equations (16) and (17) represent the curvature and torsion of the curve at a certain point P, respectively.
Based on the method in Ref. [18] and the operation formula of mixed product between vectors, the following formula can be deduced from Eq. (13):
18
Therefore, the normal surface is developable only if the angle satisfies the condition in Eq. (18). Then, for an SGC-Ball curve with given control points, take an angle satisfying Eq. (18) and define the following equation:
19
Based on Eq. (19), the expression of the developable surface can be constructed as in Eq. (20).
20
Theorem 2.2
The surface shown in Eq. (20) satisfies: (i) it is developable; (ii) it interpolates the given curve, and the curve is a curvature line on the developable surface. See Appendix A for the proof.
For any given spatial SGC-Ball curve, the specific construction process of the SGC-Ball developable surface that interpolates the curve and takes it as a curvature line can now be given. The final expression of the developable surface is shown in Eq. (20). The construction can be summarized in the following succinct steps:
Step 1. For a spatial SGC-Ball curve with given control points, give its normal surface expression as shown in Eq. (8);
Step 2. Find the angle value that meets the condition of Eq. (18) and substitute it into Eq. (9). In this case, the normal surface shown in Eq. (8) is a developable surface;
Step 3. Substitute the result obtained from Eq. (9) into Eq. (19), and then substitute it into Eq. (20) to construct the developable surface interpolated on the curve (a numerical sketch of these steps is given below).
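Because the closed-form expressions of Eqs. (7)–(20) are specific to the SGC-Ball basis of Ref. [12] and are not reproduced above, the following Python sketch only illustrates Steps 1–3 numerically under stated assumptions: a cubic Bézier curve stands in for the SGC-Ball curve of Eq. (7), the Frénet frame is approximated by finite differences, the ruling is taken in the normal plane at an angle theta (a stand-in for Eq. (9)), and developability is checked through the mixed-product condition of Theorem 2.1 rather than through Eq. (18). All helper names are hypothetical.

```python
import numpy as np

def frenet_frame(c, u, h=1e-5):
    """Unit tangent T, principal normal N and binormal B of a space curve c(u),
    approximated by central finite differences."""
    d1 = (c(u + h) - c(u - h)) / (2 * h)               # c'(u)
    d2 = (c(u + h) - 2 * c(u) + c(u - h)) / h ** 2     # c''(u)
    T = d1 / np.linalg.norm(d1)
    B = np.cross(d1, d2)
    B = B / np.linalg.norm(B)
    N = np.cross(B, T)
    return T, N, B

def ruling(c, u, theta):
    """Ruling direction lying in the normal plane of the curve, at angle theta
    measured from the principal normal -- a stand-in for Eq. (9)."""
    _, N, B = frenet_frame(c, u)
    return np.cos(theta) * N + np.sin(theta) * B

def normal_surface(c, theta, u, v):
    """Steps 1 and 3: the ruled (normal) surface S(u, v) = c(u) + v * e(u) of Eqs. (8)/(20)."""
    return c(u) + v * ruling(c, u, theta)

def developability_residual(c, theta, u, h=1e-4):
    """Step 2: Theorem 2.1 requires the mixed product (c'(u), e(u), e'(u)) to vanish;
    Eq. (18) characterizes the admissible angle theta in closed form."""
    d1 = (c(u + h) - c(u - h)) / (2 * h)
    e = ruling(c, u, theta)
    de = (ruling(c, u + h, theta) - ruling(c, u - h, theta)) / (2 * h)
    return float(np.dot(d1, np.cross(e, de)))

# hypothetical stand-in for the SGC-Ball curve of Eq. (7): a cubic Bezier curve
P = np.array([[0, 0, 0], [1, 2, 1], [2, -1, 2], [3, 1, 0]], dtype=float)
curve = lambda u: ((1 - u) ** 3 * P[0] + 3 * u * (1 - u) ** 2 * P[1]
                   + 3 * u ** 2 * (1 - u) * P[2] + u ** 3 * P[3])

print(developability_residual(curve, theta=0.4, u=0.5))   # generally nonzero for an arbitrary theta
print(normal_surface(curve, theta=0.4, u=0.5, v=0.2))     # a point on the ruled surface
```

In this reading, the admissible angle is the one that makes the residual vanish along the whole curve, which is exactly what the condition in Eq. (18) characterizes in closed form.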
A series of generalized cubic Ball developable surfaces interpolated on specific curvature lines can be constructed by using the above steps. Since the developable surface constructed in this paper interpolates the curvature line, and the curvature line itself is a generalized cubic Ball curve with adjustable shape, the obtained developable surface has good shape adjustability. Specific examples are given below.
Example 1.
21
Equation (21) gives the control points of the spatial SGC-Ball curve. Taking this SGC-Ball curve as the curvature line, the developable surface interpolated on the curvature line can be constructed. The developable surfaces formed by adjusting the four shape parameters are shown in Figs. 1, 2, 3 and 4. The red solid lines are the control polygon composed of the four control points, and the red star lines on the drawn developable surfaces are the curvature lines (i.e., the interpolated SGC-Ball curves).
Fig. 1 Developable surfaces obtained by adjusting a shape parameter [figure not shown]
Fig. 2 Developable surfaces obtained by adjusting a shape parameter [figure not shown]
Fig. 3 Developable surfaces obtained by adjusting a shape parameter [figure not shown]
Fig. 4 Developable surfaces obtained by adjusting a shape parameter [figure not shown]
With the other shape parameters kept invariant, Fig. 1 shows that the developable surface expands (or shrinks) along the whole control polygon as the adjusted parameter grows larger (or smaller). According to Fig. 2, with the remaining values fixed, as the corresponding shape parameter grows larger (or smaller), the whole developable surface shrinks (or expands) toward the interior of the surface along the direction of control point P1. For given values of the other parameters, Fig. 3 shows that with the increase (or decrease) of the corresponding shape parameter, the developable surface shrinks (or expands) toward the interior of the surface along the direction of control points P1 and P2. According to Fig. 4, with the remaining parameters fixed, the developable surface shrinks (or expands) toward the interior of the surface along the direction of control point P2 as the corresponding parameter grows larger (or smaller).
Feedback Mechanism-driven Mutation Reptile Search Algorithm
Reptile Search Algorithm
The inspiration of the Reptile Search Algorithm (RSA) [41] comes from the encirclement, the hunting mechanism and the social behavior of crocodiles in nature. RSA mainly consists of three stages: population initialization, the exploration phase and the exploitation phase.
Initialization Phase
The initial population matrix W is given in Eq. (22), where n is defined as the number of individuals.
22
23
where the jth dimension of the ith crocodile is generated between the lower and upper bounds, and rand is an arbitrary number from 0 to 1.
Exploration Phase
The exploration phase mainly acts in two ways: high walking and belly walking. These two ways of encircling prey correspond to the first half of the iterations of RSA. The position update of RSA is shown in Eq. (24).
24
25
26
27
In Eq. (24), the jth component of the best solution obtained so far and a random location of the ith solution are used, and iter and MAXiter are the current and the maximum iteration number. Eq. (25) defines the hunting operator. β is a sensitive parameter that controls the exploration precision of the encircling phase during the whole iteration process, and its value is 0.1. The reduce function in Eq. (26) is used to narrow the search area, and the indices involved are arbitrary integers between 1 and n. The evolutionary sense ES in Eq. (27) is a probability ratio that decreases randomly. The percentage difference between the optimal solution and the current solution can be calculated by Eq. (28).
28
where α = 0.1 controls the exploration precision of cooperative hunting in the iterative process, and Eq. (29) gives the average value.
29
Exploitation Phase
The second half of the iterations of RSA mainly simulates two modes of crocodile hunting, hunting coordination and hunting cooperation, as formulated in Eq. (30).
30
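To make the three stages more concrete, the following Python sketch implements one per-dimension RSA update according to our reading of the original paper [41]; the exact coefficient placement of Eqs. (24)–(30) is not reproduced above, so the update lines below should be taken as an illustrative approximation of the published rules rather than as the paper's exact equations.

```python
import numpy as np

def rsa_update(X, best, t, T, LB, UB, alpha=0.1, beta=0.1, eps=1e-10):
    """One RSA iteration over the population X (n x d), following our reading of [41]:
    high walking, belly walking, hunting coordination and hunting cooperation."""
    n, d = X.shape
    Xnew = X.copy()
    ES = 2.0 * np.random.randint(-1, 2) * (1.0 - t / T)       # evolutionary sense: three values, incl. 0
    for i in range(n):
        for j in range(d):
            M = X[i].mean()                                   # mean position of the i-th solution, Eq. (29)
            P = alpha + (X[i, j] - M) / (best[j] * (UB[j] - LB[j]) + eps)   # percentage difference, Eq. (28)
            eta = best[j] * P                                 # hunting operator, Eq. (25)
            R = (best[j] - X[np.random.randint(n), j]) / (best[j] + eps)    # reduce function, Eq. (26)
            rnd = np.random.rand()
            if t <= T / 4:                                    # high walking (encircling)
                Xnew[i, j] = best[j] * (-eta) * beta - R * rnd
            elif t <= T / 2:                                  # belly walking (encircling)
                Xnew[i, j] = best[j] * X[np.random.randint(n), j] * ES * rnd
            elif t <= 3 * T / 4:                              # hunting coordination
                Xnew[i, j] = best[j] * P * rnd
            else:                                             # hunting cooperation
                Xnew[i, j] = best[j] - eta * eps - R * rnd
    return np.clip(Xnew, LB, UB)
```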
Improved Reptile Search Algorithm
RSA still easily falls into local optima and suffers from limited solution accuracy when solving complex problems, so there is room for improvement. Therefore, in order to further overcome these limitations, this paper proposes an improved RSA, the feedback mechanism-driven improved Reptile Search Algorithm (FRSA for short), which introduces a feedback mechanism, adaptive parameters and a mutation strategy into RSA.
Firstly, in terms of population initialization, chaotic initialization is a common strategy for improving the initial population [65, 66]; it boosts diversity by using chaotic numbers instead of random values during initialization. In this section, the chaotic numbers shown in Eq. (31) are used to replace the random numbers.
31
Therefore, the population initialization shown in Eq. (23) can be changed to Eq. (32).
32
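Since Eq. (31) is not reproduced here, the following sketch assumes a logistic map, a common choice in chaotic initialization [65, 66], to show how the chaotic numbers replace the uniform random numbers of Eq. (23) in Eq. (32); the map and its parameters are assumptions for illustration only.

```python
import numpy as np

def chaotic_init(n, d, lb, ub, z0=0.7, mu=4.0):
    """Chaotic population initialization in the spirit of Eq. (32): a chaotic number
    replaces rand in W(i, j) = LB + chaotic * (UB - LB).  The logistic map
    z <- mu * z * (1 - z) used here is an assumption standing in for Eq. (31)."""
    z = z0
    W = np.empty((n, d))
    for i in range(n):
        for j in range(d):
            z = mu * z * (1.0 - z)                 # next chaotic number in (0, 1)
            W[i, j] = lb[j] + z * (ub[j] - lb[j])
    return W

# example: 5 individuals in a 3-dimensional box [0, 10]^3
pop = chaotic_init(5, 3, lb=np.zeros(3), ub=np.full(3, 10.0))
```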
In addition, the original RSA takes the optimal solution in the population as a reference and moves towards it at the beginning of the exploration phase, which is beneficial to convergence but limits the exploration scope. Moreover, the parameter ES in the second exploration stage has only three possible values, including 0, while the search ranges of many problems are not symmetric. Aiming at the problem that the convergence accuracy of RSA is not high enough, the capability of escaping from local optima should also be strengthened in the exploitation phase. Therefore, this section introduces the following three strategies into RSA.
Feedback Mechanism Based on Optimal Solution
In the first quarter of the exploration stage, the original update of the individual around the optimal location is changed to an update of the individual position around a random individual. This random individual is based on the feedback between the optimal solution and a random solution, which not only effectively maintains population diversity but also avoids blindness and accelerates the rate of convergence. The mathematical expression of the process is described by Eq. (33).
33
where, is the best solution’s position. Therefore, the position update at the first quarter of exploration stage is as follows:34
where h is the hunting operator, which can be calculated by Eq. (25), and the calculation formula of R is shown in Eq. (26).
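Eqs. (33) and (34) are not reproduced above, so the snippet below only encodes the stated idea: the reference point of the first-quarter update is a random individual fed back from the best solution and a randomly chosen solution. The convex-combination rule is a hypothetical stand-in for Eq. (33); the actual position update of Eq. (34) then moves candidates around this reference using the hunting operator h and the reduce function R.

```python
import numpy as np

def feedback_reference(best, X):
    """Hypothetical stand-in for Eq. (33): a reference individual fed back from the best
    solution and a randomly chosen solution (here a random convex combination of the two)."""
    xr = X[np.random.randint(len(X))]       # randomly chosen solution from the population
    w = np.random.rand(len(best))           # per-dimension feedback weights in [0, 1]
    return w * best + (1.0 - w) * xr
```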
Adaptive Parameter Strategy
Since the parameter ES in Eq. (27) has only three possible values and contains 0, the algorithm may fall into local optima on different problems. Therefore, the parameter ES is updated through Eq. (35) with a probability of 50% to adjust the position of the candidate solution:
35
Updating ES with Eq. (35) ensures that the overall value range of the parameter ES remains unchanged and reduces the probability of an individual position becoming 0, which better meets the solution requirements of problems with asymmetric search spaces.
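Eq. (35) is likewise not reproduced, so the snippet below only encodes the stated requirements, i.e., with 50% probability ES is redrawn so that its overall range is preserved while the value 0 becomes practically impossible; the continuous uniform draw is an assumed realization, not the paper's formula.

```python
import numpy as np

def adaptive_ES(t, T):
    """Assumed stand-in for Eq. (35): with 50% probability ES is redrawn continuously
    over its original range (so the value 0 is almost never produced), otherwise the
    original three-valued ES of Eq. (27) is kept."""
    base = 2.0 * (1.0 - t / T)                         # magnitude decreasing with iterations
    if np.random.rand() < 0.5:
        return base * np.random.uniform(-1.0, 1.0)     # continuous value, rarely exactly 0
    return base * np.random.randint(-1, 2)             # original {-1, 0, 1} behaviour
```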
Mutation Strategy
The mutation operation is merged into the last quarter of the iterations to enlarge the search scope in the later stage. Mutation can generate new individuals. The mutation operator of DE and Gaussian local mutation are commonly used mutation strategies [67], and both are introduced into RSA in this section. First of all, the fitness value of each candidate determines whether mutation is carried out, that is,
36
where
37
38
In Eq. (37), the objective function value of the candidate solution is used.
The mutation process of DE is as follows:
39
where χ is the scale factor of the mutation, the best solution is used as the base vector, and two arbitrary solutions are selected. The specific calculation formula of Gaussian local mutation is given below:
40
Each of the two mutation strategies has a 50% probability of being implemented. Through mutation, new individuals help the population avoid falling into a local trap.
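As one concrete reading of Eqs. (36)–(40), the snippet below applies either a DE-style mutation around the best solution with scale factor χ or a Gaussian local mutation, each with 50% probability; the argument order inside the DE rule and the unit Gaussian variance are assumptions.

```python
import numpy as np

def mutate(x, best, X, chi=0.5):
    """Apply one of the two mutation strategies with equal probability:
    DE-style mutation (Eq. (39)) or Gaussian local mutation (Eq. (40))."""
    if np.random.rand() < 0.5:
        r1, r2 = np.random.choice(len(X), 2, replace=False)
        return best + chi * (X[r1] - X[r2])                           # DE mutation around the best solution
    return x * (1.0 + np.random.normal(0.0, 1.0, size=x.shape))      # Gaussian local perturbation
```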
The Specific Steps of The Feedback Mechanism-driven Mutation Reptile Search Algorithm
The feedback mechanism based on the optimal solution, the adaptive parameter strategy and the mutation strategy are added into RSA to strengthen convergence accuracy and expand the search range of the population; the improved RSA is called FRSA. The following seven steps summarize the detailed process of FRSA.
Step 1. Define the initial parameters of FRSA, and generate the chaotic initial population according to Eq. (32);
Step 2. Compute the fitness values of the initial candidates, and record FBest and Best based on the results;
Step 3. If the maximum number of iterations has not been reached, renew ES, R and E0 according to their formulas;
Step 4. Judge the current iteration number: in the first quarter of the iterations, carry out Eq. (34); in the second quarter, carry out Eq. (24(b)); in the third quarter, carry out Eq. (30(a)); otherwise, carry out Eq. (36);
Step 5. Judge whether the individual position exceeds the bounds: if it exceeds the upper bound, replace it with UPb; if it falls below the lower bound, replace it with LOb;
Step 6. Calculate the fitness value and compare it with that of the previous candidate solution, and select the individual with the better result to enter the next iteration. At the same time, compare it with FBest; if it is superior to FBest, record it as the new FBest and update the individual position as Best;
Step 7. If the maximum number of iterations has not been reached, go back to Step 3; otherwise, output FBest and Best.
Combining these steps, the pseudo-code of FRSA is organized below.
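The pseudo-code itself appears as an algorithm listing in the original paper; the Python-style outline below merely restates Steps 1–7, assuming that the helper routines sketched earlier in this section (chaotic_init, feedback_reference, rsa_update and mutate) are in scope, and is therefore a structural sketch rather than the authors' exact listing.

```python
import numpy as np

def frsa(fitness, lb, ub, n=60, d=30, T=500):
    """Outline of FRSA following Steps 1-7; chaotic_init, feedback_reference,
    rsa_update and mutate are assumed from the earlier sketches."""
    X = chaotic_init(n, d, lb, ub)                         # Step 1: chaotic initialization, Eq. (32)
    fit = np.array([fitness(x) for x in X])                # Step 2: evaluate and record the best
    b = int(fit.argmin())
    FBest, Best = float(fit[b]), X[b].copy()
    for t in range(1, T + 1):                              # Step 3: parameters are renewed inside the updates
        if t <= T / 4:                                     # Step 4a: feedback stage, Eq. (34)
            refs = np.array([feedback_reference(Best, X) for _ in range(n)])
            Xn = refs + np.random.rand(n, 1) * (X - refs)  # move candidates around the feedback reference
        elif t <= 3 * T / 4:                               # Steps 4b/4c: RSA rules, Eqs. (24(b)) and (30(a))
            Xn = rsa_update(X, Best, t, T, lb, ub)
        else:                                              # Step 4d: mutation stage, Eq. (36)
            Xn = np.array([mutate(x, Best, X) for x in X])
        Xn = np.clip(Xn, lb, ub)                           # Step 5: bound handling (UPb / LOb)
        fn = np.array([fitness(x) for x in Xn])            # Step 6: greedy selection
        better = fn < fit
        X[better], fit[better] = Xn[better], fn[better]
        if fit.min() < FBest:
            FBest, Best = float(fit.min()), X[int(fit.argmin())].copy()
    return FBest, Best                                     # Step 7: output FBest and Best

# usage example on a simple sphere function (for illustration only):
# FBest, Best = frsa(lambda x: float(np.sum(x ** 2)), np.full(30, -100.0), np.full(30, 100.0))
```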
Considering the Time Complexity of FRSA
The estimation of FRSA's time complexity depends on the number of variables d, the number of individuals n and the maximum number of iterations MAXiter. Because the chaotic initialization and the adaptive parameters do not increase the complexity, the time complexity of FRSA mainly consists of that of RSA, the mutation strategy and the feedback mechanism, where the feedback mechanism and the mutation strategy replace the first-quarter and last-quarter update strategies of RSA, respectively. Therefore, the time complexity of FRSA is the same as that of RSA, namely:
$O(\mathrm{FRSA})=O(\mathrm{RSA})=O\bigl(n\times \mathrm{MAXiter}\times d\bigr).$  (41)
Numerical Experiment and Analysis
The CEC2014, CEC2017 and CEC2019 test sets contain various types of functions and can fully test and measure the performance of FRSA. For a fairer comparison of the algorithms, all experiments were conducted on the same computer, with "Windows 10 (64 bit)", an "Intel(R) Core(TM) i5-8265U CPU @ 1.60 GHz (1.80 GHz)" and "Matlab 2018a".
Experiment and Analysis on the CEC2014 Test Set
Parameter Analysis
Different parameters have different effects on the performance of the algorithm. We test the influence of the parameters α and β in FRSA on the solution results. It can be seen from Ref. [41] that the ranges of α and β are [0.1, 0.9]. In this section, each of the two parameters is sampled at equal intervals as 0.1, 0.5 and 0.9, giving nine combinations of α and β. The performance of each parameter combination is tested on CEC2014. The evaluation indicators and Rank of each combination on the 30 test functions are shown in Table 1, where Rank for each test function is the rank of the mean value and the best average values are specially marked. Figure 5 shows the histogram of the rank sum of the parameter combinations over all test functions.
Table 1. Parameter sensitivity of FRSA
F | Index | α = 0.1, β = 0.1 | α = 0.5, β = 0.1 | α = 0.9, β = 0.1 | α = 0.1, β = 0.5 | α = 0.5, β = 0.5 | α = 0.9, β = 0.5 | α = 0.1, β = 0.9 | α = 0.5, β = 0.9 | α = 0.9, β = 0.9 |
|---|---|---|---|---|---|---|---|---|---|---|
F1 | Ave | 1.237E+05 | 9.845E+04 | 1.341E+05 | 8.319E+04 | 1.085E+05 | 9.736E+04 | 1.253E+05 | 1.701E+05 | 5.180E+05 |
Std | 6.570E+04 | 8.520E+04 | 1.825E+05 | 6.128E+04 | 8.627E+04 | 6.003E+04 | 1.670E+05 | 2.471E+05 | 1.525E+06 | |
Rank | 5 | 3 | 7 | 1 | 4 | 2 | 6 | 8 | 9 | |
F2 | Ave | 5.011E+03 | 3.229E+03 | 4.684E+03 | 3.906E+03 | 3.000E+03 | 2.924E+03 | 4.084E+03 | 5.149E+03 | 3.077E+03 |
Std | 3.645E+03 | 2.481E+03 | 3.195E+03 | 3.088E+03 | 2.306E+03 | 3.051E+03 | 3.927E+03 | 4.343E+03 | 3.611E+03 | |
Rank | 8 | 4 | 7 | 5 | 2 | 1 | 6 | 9 | 3 | |
F3 | Ave | 4.215E+03 | 2.837E+03 | 1.709E+03 | 4.474E+03 | 1.764E+03 | 1.436E+03 | 2.551E+03 | 1.456E+03 | 1.300E+03 |
Std | 3.332E+03 | 2.712E+03 | 1.120E+03 | 3.529E+03 | 1.236E+03 | 6.434E+02 | 2.042E+03 | 4.853E+02 | 3.476E+02 | |
Rank | 8 | 7 | 4 | 9 | 5 | 2 | 6 | 3 | 1 | |
F4 | Ave | 4.256E+02 | 4.155E+02 | 4.174E+02 | 4.201E+02 | 4.243E+02 | 4.376E+02 | 4.141E+02 | 4.256E+02 | 4.282E+02 |
Std | 1.475E+01 | 1.999E+01 | 2.704E+01 | 1.680E+01 | 2.899E+01 | 3.053E+01 | 2.291E+01 | 2.845E+01 | 2.767E+01 | |
Rank | 6 | 2 | 3 | 4 | 5 | 9 | 1 | 7 | 8 | |
F5 | Ave | 520.4630 | 520.4249 | 520.5082 | 520.4595 | 520.4482 | 520.4485 | 520.4188 | 520.4389 | 520.4747 |
Std | 1.202E−01 | 8.480E−02 | 1.131E−01 | 1.390E−01 | 8.892E−02 | 1.190E−01 | 1.320E−01 | 9.430E−02 | 1.038E−01 | |
Rank | 7 | 2 | 9 | 6 | 4 | 5 | 1 | 3 | 8 | |
F6 | Ave | 602.1417 | 602.1420 | 602.3667 | 602.3964 | 601.7120 | 602.3493 | 601.9782 | 602.4584 | 603.9017 |
Std | 1.385E+00 | 1.211E+00 | 9.054E−01 | 1.592E+00 | 8.110E−01 | 1.601E+00 | 1.075E+00 | 1.281E+00 | 1.658E+00 | |
Rank | 3 | 4 | 6 | 7 | 1 | 5 | 2 | 8 | 9 | |
F7 | Ave | 700.2437 | 700.1472 | 700.2133 | 700.1926 | 700.1819 | 700.1713 | 700.1530 | 700.2183 | 700.3770 |
Std | 1.912E−01 | 8.043E−02 | 2.154E−01 | 1.440E−01 | 1.442E−01 | 1.010E−01 | 9.675E−02 | 1.736E−01 | 4.878E−01 | |
Rank | 8 | 1 | 6 | 5 | 4 | 3 | 2 | 7 | 9 | |
F8 | Ave | 8.073E+02 | 8.061E+02 | 8.071E+02 | 8.055E+02 | 8.076E+02 | 8.089E+02 | 8.077E+02 | 8.077E+02 | 8.134E+02 |
Std | 4.534E+00 | 2.536E+00 | 3.110E+00 | 2.712E+00 | 4.628E+00 | 4.648E+00 | 4.075E+00 | 3.678E+00 | 6.491E+00 | |
Rank | 4 | 2 | 3 | 1 | 5 | 8 | 6 | 6 | 9 | |
F9 | Ave | 9.119E+02 | 9.137E+02 | 9.154E+02 | 9.129E+02 | 9.164E+02 | 9.210E+02 | 9.151E+02 | 9.214E+02 | 9.302E+02 |
Std | 6.054E+00 | 9.731E+00 | 5.104E+00 | 7.035E+00 | 6.829E+00 | 9.224E+00 | 6.991E+00 | 9.908E+00 | 1.259E+01 | |
Rank | 1 | 3 | 5 | 2 | 6 | 7 | 4 | 8 | 9 | |
F10 | Ave | 1.089E+03 | 1.158E+03 | 1.180E+03 | 1.106E+03 | 1.294E+03 | 1.479E+03 | 1.154E+03 | 1.300E+03 | 1.751E+03 |
Std | 7.755E+01 | 1.799E+02 | 2.611E+02 | 7.950E+01 | 1.942E+02 | 3.253E+02 | 9.794E+01 | 2.410E+02 | 5.265E+02 | |
Rank | 1 | 4 | 5 | 2 | 6 | 8 | 3 | 7 | 9 | |
F11 | Ave | 2.074E+03 | 2.003E+03 | 1.989E+03 | 2.210E+03 | 1.950E+03 | 2.029E+03 | 2.106E+03 | 2.100E+03 | 1.930E+03 |
Std | 3.537E+02 | 4.509E+02 | 3.456E+02 | 3.614E+02 | 3.442E+02 | 3.187E+02 | 4.393E+02 | 3.988E+02 | 3.579E+02 | |
Rank | 6 | 4 | 3 | 9 | 2 | 5 | 8 | 7 | 1 | |
F12 | Ave | 1201.2340 | 1201.3781 | 1201.2551 | 1201.2698 | 1201.1892 | 1201.1945 | 1201.2895 | 1201.3838 | 1201.1741 |
Std | 5.019E−01 | 3.306E−01 | 4.687E−01 | 3.431E−01 | 4.059E−01 | 3.984E−01 | 4.448E−01 | 3.348E−01 | 3.000E−01 | |
Rank | 4 | 8 | 5 | 6 | 2 | 3 | 7 | 9 | 1 | |
F13 | Ave | 1300.2572 | 1300.2551 | 1300.2871 | 1300.2679 | 1300.4186 | 1300.4249 | 1300.3025 | 1300.3834 | 1300.5299 |
Std | 6.586E−02 | 5.578E−02 | 5.989E−02 | 6.311E−02 | 1.417E−01 | 1.069E−01 | 7.787E−02 | 1.072E−01 | 2.004E−01 | |
Rank | 2 | 1 | 4 | 3 | 7 | 8 | 5 | 6 | 9 | |
F14 | Ave | 1400.2729 | 1400.2763 | 1400.2661 | 1400.2716 | 1400.3409 | 1400.4037 | 1400.3024 | 1400.3987 | 1400.6037 |
Std | 1.031E−01 | 6.362E−02 | 4.710E−02 | 6.556E−02 | 9.584E−02 | 1.819E−01 | 4.891E−02 | 1.439E−01 | 4.533E−01 | |
Rank | 3 | 4 | 1 | 2 | 6 | 8 | 5 | 7 | 9 | |
F15 | Ave | 1502.1695 | 1502.1546 | 1501.8740 | 1501.6778 | 1501.3636 | 1501.8115 | 1501.6407 | 1502.0571 | 1502.6966 |
Std | 9.487E−01 | 9.292E−01 | 9.631E−01 | 6.766E−01 | 5.849E−01 | 9.864E−01 | 7.965E−01 | 1.209E+00 | 1.721E+00 | |
Rank | 8 | 7 | 5 | 3 | 1 | 4 | 2 | 6 | 9 | |
F16 | Ave | 1603.2196 | 1603.1063 | 1603.0528 | 1603.0333 | 1603.2312 | 1603.0408 | 1603.0678 | 1603.2863 | 1603.2237 |
Std | 2.354E−01 | 3.533E−01 | 3.058E−01 | 3.914E−01 | 3.654E−01 | 2.275E−01 | 2.711E−01 | 2.617E−01 | 2.589E−01 | |
Rank | 6 | 5 | 3 | 1 | 8 | 2 | 4 | 9 | 7 | |
F17 | Ave | 7.209E+04 | 1.804E+04 | 1.940E+04 | 2.293E+04 | 1.373E+04 | 4.242E+04 | 1.485E+04 | 2.966E+04 | 4.306E+04 |
Std | 1.140E+05 | 1.983E+04 | 2.152E+04 | 3.102E+04 | 1.743E+04 | 5.158E+04 | 1.401E+04 | 3.318E+04 | 4.657E+04 | |
Rank | 9 | 3 | 4 | 5 | 1 | 7 | 2 | 6 | 8 | |
F18 | Ave | 1.750E+04 | 1.011E+04 | 1.020E+04 | 1.102E+04 | 1.081E+04 | 1.248E+04 | 8.364E+03 | 1.152E+04 | 1.259E+04 |
Std | 1.326E+04 | 6.478E+03 | 4.475E+03 | 7.948E+03 | 2.533E+03 | 2.159E+03 | 4.944E+03 | 3.181E+03 | 4.003E+03 | |
Rank | 9 | 2 | 3 | 5 | 4 | 7 | 1 | 6 | 8 | |
F19 | Ave | 1.902E+03 | 1.901E+03 | 1.902E+03 | 1.901E+03 | 1.902E+03 | 1.902E+03 | 1.902E+03 | 1.902E+03 | 1.902E+03 |
Std | 1.056E+00 | 5.500E−01 | 4.072E−01 | 6.314E−01 | 7.004E−01 | 1.124E+00 | 6.095E−01 | 6.804E−01 | 1.427E+00 | |
Rank | 3 | 1 | 3 | 1 | 3 | 3 | 3 | 3 | 3 | |
F20 | Ave | 1.156E+04 | 7.954E+03 | 5.595E+03 | 9.299E+03 | 5.991E+03 | 7.407E+03 | 6.445E+03 | 6.194E+03 | 6.724E+03 |
Std | 1.019E+04 | 4.997E+03 | 3.106E+03 | 7.738E+03 | 2.412E+03 | 1.714E+03 | 4.445E+03 | 2.412E+03 | 2.556E+03 | |
Rank | 9 | 7 | 1 | 8 | 2 | 6 | 4 | 3 | 5 | |
F21 | Ave | 9.177E+03 | 8.627E+03 | 8.690E+03 | 9.775E+03 | 1.042E+04 | 1.055E+04 | 9.441E+03 | 1.029E+04 | 7.194E+03 |
Std | 7.057E+03 | 6.056E+03 | 3.936E+03 | 8.799E+03 | 6.919E+03 | 8.393E+03 | 7.483E+03 | 8.962E+03 | 6.306E+03 | |
Rank | 4 | 2 | 3 | 6 | 8 | 9 | 5 | 7 | 1 | |
F22 | Ave | 2.242E+03 | 2.219E+03 | 2.227E+03 | 2.219E+03 | 2.218E+03 | 2.220E+03 | 2.217E+03 | 2.221E+03 | 2.265E+03 |
Std | 5.921E+01 | 7.742E+00 | 2.823E+01 | 7.095E+00 | 9.931E+00 | 1.432E+01 | 8.745E+00 | 1.280E+01 | 6.049E+01 | |
Rank | 8 | 3 | 7 | 3 | 2 | 5 | 1 | 6 | 9 | |
F23 | Ave | 2500 | 2500 | 2500 | 2500 | 2500 | 2500 | 2500 | 2500 | 2500 |
Std | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | |
F24 | Ave | 2.528E+03 | 2.521E+03 | 2.520E+03 | 2.520E+03 | 2.520E+03 | 2.528E+03 | 2.519E+03 | 2.523E+03 | 2.587E+03 |
Std | 2.538E+01 | 1.944E+01 | 7.536E+00 | 8.843E+00 | 7.191E+00 | 1.015E+01 | 6.935E+00 | 5.684E+00 | 2.619E+01 | |
Rank | 7 | 5 | 2 | 2 | 2 | 7 | 1 | 6 | 9 | |
F25 | Ave | 2.690E+03 | 2.694E+03 | 2.696E+03 | 2700 | 2.695E+03 | 2.693E+03 | 2.691E+03 | 2700 | 2.698E+03 |
Std | 2.397E+01 | 1.849E+01 | 1.418E+01 | 0 | 1.424E+01 | 1.558E+01 | 2.252E+01 | 0 | 7.751E+00 | |
Rank | 1 | 4 | 6 | 8 | 5 | 3 | 2 | 8 | 7 | |
F26 | Ave | 2.700E+03 | 2.700E+03 | 2.700E+03 | 2700 | 2.700E+03 | 2.700E+03 | 2.700E+03 | 2700 | 2.700E+03 |
Std | 6.467E−02 | 6.566E−02 | 7.345E−02 | 6.529E−02 | 1.009E−01 | 1.367E−01 | 8.333E−02 | 1.492E−01 | 1.868E−01 | |
Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | |
F27 | Ave | 2.900E+03 | 2.881E+03 | 2.890E+03 | 2900 | 2.842E+03 | 2.842E+03 | 2.890E+03 | 2.852E+03 | 2.890E+03 |
Std | 0.000E+00 | 5.964E+01 | 4.363E+01 | 0 | 9.126E+01 | 9.116E+01 | 4.353E+01 | 8.483E+01 | 4.269E+01 | |
Rank | 8 | 4 | 5 | 8 | 1 | 1 | 5 | 3 | 5 | |
F28 | Ave | 3000 | 3000 | 3000 | 3000 | 3000 | 3000 | 3000 | 3000 | 3000 |
Std | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | |
F29 | Ave | 3100 | 3100 | 3100 | 3100 | 3100 | 3100 | 3100 | 3100 | 3100 |
Std | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | |
F30 | Ave | 3200 | 3200 | 3200 | 3200 | 3200 | 3200 | 3200 | 3200 | 3200 |
Std | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | |
Mean Rank | 3.9667 | 2.5333 | 3.2000 | 3.4667 | 2.5667 | 3.6333 | 2.6667 | 4.1667 | 5.4000 | |
Result | 7 | 1 | 4 | 5 | 2 | 6 | 3 | 8 | 9 | |
Bold represents the optimal values of the evaluation indicators
Fig. 5 The histogram of the rank sum of each parameter combination [figure not shown]
In Table 1, the average rank obtained by the second parameter combination is the lowest, with a value of 2.5333, followed by that of the fifth parameter combination, with a value of 2.5667. The worst average ranks belong to the 1st, 8th and 9th parameter combinations. On functions F12, F16, F23, F26, F29 and F30, the values obtained by the different combinations differ little, indicating that the two parameters have basically no influence on these six test functions. The second parameter combination acquires the optimal result on function F7, the fifth combination obtains the optimal results on F6, F15 and F17, the seventh combination holds the best results on F4, F18, F22 and F24, and the third combination achieves the optimal value on function F20. The penultimate row of the table and the rank sums in Fig. 5 indicate that the result of the combination α = 0.5 and β = 0.1 is not much different from those of combinations 5 and 7. The second parameter combination is therefore selected for the subsequent experimental comparison.
The Ablation Experiments of FRSA
In order to verify the effectiveness of the different strategies of FRSA, FRSA is compared with its six incomplete variants and RSA. The incomplete variants with a single strategy, namely the feedback mechanism, the adaptive parameter and the mutation strategy, correspond to FRSA1, FRSA2 and FRSA3, while the combined-strategy variants FRSA4 (feedback mechanism and mutation strategy), FRSA5 (feedback mechanism and adaptive parameter) and FRSA6 (adaptive parameter and mutation strategy) are selected to evaluate their impact on convergence speed and accuracy. Due to space constraints, this paper only gives the convergence curves of some test functions in Fig. 6.
Fig. 6 Convergence curves of the incomplete algorithms on 10-dimensional CEC2014 [figure not shown]
Figure 6 indicates that FRSA is better than its incomplete variants and RSA. Although the convergence rate of FRSA on functions F4, F6 and F17 is lower than that of FRSA5, FRSA has better convergence precision. In general, every improvement strategy of FRSA is effective, and the incomplete variants all improve RSA to different degrees in both exploration and exploitation.
Comparison Experiments of FRSA and Other Intelligent Algorithms
The performance of FRSA is tested on the 30-dimensional CEC2014 set. Each algorithm is run 20 times, and the evaluation indicators (Ave, Std), the Rank over multiple runs, the average rank and the final ranking are reported. The population size is n = 60, and the comparison algorithms are GA [27], WOA [38], SCA [48], BMO [49], LFD [50], AOA [51], TSA [52], AO [45], RSO [46], HHO [40] and RSA.
Table 2 lists the average (Ave), standard deviation (Std) and Rank obtained by each algorithm on the CEC2014 test functions in 30-dimensional space, where the best average fitness value is highlighted in bold. The average rank of FRSA is 1.7667, which is superior to that of the other methods. Most methods obtain similar average results on functions F12, F13 and F16, with no significant difference. For the other 27 test functions, FRSA achieves the smallest values on 22; in particular, on functions F17–F30, FRSA achieves good solutions on several complex functions. In addition, GA, BMO and one further comparison algorithm obtain smaller values on 3, 1 and 1 functions, respectively. Therefore, based on the average rank, FRSA has the better overall effect.
Table 2. Comparison results of all algorithms (30-dimensional CEC2014 test set)
F | Index | GA | AOA | RSO | BMO | AO | LFD | WOA | TSA | HHO | SCA | RSA | FRSA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
F1 | Ave | 6.16E+08 | 6.40E+08 | 9.47E+08 | 1.42E+08 | 1.07E+08 | 1.69E+08 | 1.41E+08 | 3.80E+08 | 5.53E+07 | 3.73E+08 | 1.46E+09 | 1.18E+07 |
Std | 1.62E+08 | 1.15E+08 | 1.81E+08 | 6.31E+07 | 3.99E+07 | 6.58E+07 | 6.69E+07 | 1.85E+08 | 1.76E+07 | 9.71E+07 | 4.20E+08 | 8.23E+06 | |
Rank | 9 | 10 | 11 | 5 | 3 | 6 | 4 | 8 | 2 | 7 | 12 | 1 | |
F2 | Ave | 6.89E+09 | 6.60E+10 | 5.84E+10 | 4.25E+09 | 1.92E+09 | 3.64E+09 | 2.02E+09 | 2.86E+10 | 9.35E+07 | 2.60E+10 | 6.92E+10 | 1.56E+04 |
Std | 8.67E+08 | 6.48E+09 | 4.51E+09 | 3.09E+09 | 9.05E+08 | 1.66E+09 | 7.68E+08 | 5.77E+09 | 5.41E+07 | 3.65E+09 | 1.09E+10 | 1.14E+04 | |
Rank | 7 | 11 | 10 | 6 | 3 | 5 | 4 | 9 | 2 | 8 | 12 | 1 | |
F3 | Ave | 5.85E+05 | 5.40E+04 | 7.14E+04 | 2.81E+04 | 7.02E+04 | 1.37E+05 | 9.83E+04 | 5.21E+04 | 3.33E+04 | 6.41E+04 | 2.06E+05 | 2.86E+04 |
Std | 5.52E+05 | 6.70E+03 | 5.26E+03 | 1.21E+04 | 7.20E+03 | 2.35E+04 | 4.79E+04 | 9.81E+03 | 8.05E+03 | 1.68E+04 | 6.75E+04 | 8.59E+03 | |
Rank | 12 | 5 | 8 | 1 | 7 | 10 | 9 | 4 | 3 | 6 | 11 | 2 | |
F4 | Ave | 2.31E+03 | 1.15E+04 | 5.14E+03 | 8.34E+02 | 7.92E+02 | 1.33E+03 | 8.71E+02 | 3.06E+03 | 6.65E+02 | 2.37E+03 | 1.51E+04 | 5.32E+02 |
Std | 2.94E+02 | 1.96E+03 | 1.48E+03 | 1.61E+02 | 1.31E+02 | 5.86E+02 | 1.41E+02 | 2.12E+03 | 7.27E+01 | 8.82E+02 | 2.71E+03 | 2.15E+01 | |
Rank | 7 | 11 | 10 | 4 | 3 | 6 | 5 | 9 | 2 | 8 | 12 | 1 | |
F5 | Ave | 520.1817 | 520.9924 | 521.0336 | 520.2943 | 520.9906 | 520.0000 | 520.7881 | 521.0431 | 520.6955 | 521.0370 | 520.9509 | 521.0079 |
Std | 7.78E−02 | 7.53E−02 | 4.81E−02 | 1.89E−01 | 7.99E−02 | 2.75E−04 | 1.15E−01 | 6.62E−02 | 1.22E−01 | 4.79E−02 | 1.41E−01 | 6.19E−02 | |
Rank | 2 | 8 | 10 | 3 | 7 | 1 | 5 | 12 | 4 | 11 | 6 | 9 | |
F6 | Ave | 623.4397 | 637.6241 | 637.6981 | 627.5952 | 629.5445 | 636.9638 | 636.9382 | 631.1241 | 634.5941 | 638.1607 | 644.6722 | 614.5911 |
Std | 2.34E+00 | 2.05E+00 | 2.78E+00 | 3.29E+00 | 3.84E+00 | 2.61E+00 | 4.21E+00 | 2.84E+00 | 3.06E+00 | 1.81E+00 | 2.46E+00 | 2.49E+00 | |
Rank | 2 | 9 | 10 | 3 | 4 | 8 | 7 | 5 | 6 | 11 | 12 | 1 | |
F7 | Ave | 7.20E+02 | 1.24E+03 | 1.34E+03 | 7.35E+02 | 7.16E+02 | 7.60E+02 | 7.12E+02 | 9.39E+02 | 7.02E+02 | 9.12E+02 | 1.39E+03 | 7.00E+02 |
Std | 7.16E+00 | 5.79E+01 | 3.74E+01 | 2.29E+01 | 6.27E+00 | 3.10E+01 | 4.16E+00 | 9.13E+01 | 4.32E−01 | 3.34E+01 | 8.70E+01 | 1.22E−02 | |
Rank | 5 | 10 | 11 | 6 | 4 | 7 | 3 | 9 | 2 | 8 | 12 | 1 | |
F8 | Ave | 8.55E+02 | 1.07E+03 | 1.11E+03 | 9.47E+02 | 9.34E+02 | 9.60E+02 | 1.02E+03 | 1.06E+03 | 9.42E+02 | 1.08E+03 | 1.17E+03 | 8.50E+02 |
Std | 1.07E+01 | 1.57E+01 | 2.27E+01 | 1.83E+01 | 2.01E+01 | 4.78E+01 | 4.42E+01 | 3.37E+01 | 2.02E+01 | 2.19E+01 | 3.83E+01 | 3.02E+01 | |
Rank | 2 | 9 | 11 | 5 | 3 | 6 | 7 | 8 | 4 | 10 | 12 | 1 | |
F9 | Ave | 1.04E+03 | 1.20E+03 | 1.19E+03 | 1.10E+03 | 1.09E+03 | 1.09E+03 | 1.16E+03 | 1.22E+03 | 1.09E+03 | 1.21E+03 | 1.35E+03 | 1.11E+03 |
Std | 2.00E+01 | 1.50E+01 | 2.22E+01 | 1.19E+01 | 2.86E+01 | 3.07E+01 | 4.81E+01 | 4.45E+01 | 1.95E+01 | 2.33E+01 | 6.36E+01 | 3.90E+01 | |
Rank | 1 | 9 | 8 | 5 | 2 | 2 | 7 | 11 | 2 | 10 | 12 | 6 | |
F10 | Ave | 2.24E+03 | 6.80E+03 | 7.85E+03 | 4.20E+03 | 4.50E+03 | 5.25E+03 | 6.01E+03 | 6.32E+03 | 4.06E+03 | 7.66E+03 | 8.54E+03 | 5.98E+03 |
Std | 2.72E+02 | 5.99E+02 | 5.73E+02 | 5.78E+02 | 6.77E+02 | 1.12E+03 | 8.67E+02 | 7.76E+02 | 9.18E+02 | 5.15E+02 | 3.83E+02 | 8.60E+02 | |
Rank | 1 | 9 | 11 | 3 | 4 | 5 | 7 | 8 | 2 | 10 | 12 | 6 | |
F11 | Ave | 4.61E+03 | 8.26E+03 | 8.01E+03 | 5.43E+03 | 5.83E+03 | 5.94E+03 | 6.92E+03 | 7.28E+03 | 5.78E+03 | 8.82E+03 | 9.26E+03 | 8.50E+03 |
Std | 3.97E+02 | 6.03E+02 | 8.71E+02 | 6.06E+02 | 5.78E+02 | 1.17E+03 | 6.81E+02 | 7.35E+02 | 8.02E+02 | 2.53E+02 | 4.03E+02 | 3.85E+02 | |
Rank | 1 | 9 | 8 | 2 | 4 | 5 | 6 | 7 | 3 | 11 | 12 | 10 | |
F12 | Ave | 1200.458 | 1202.744 | 1202.930 | 1201.104 | 1201.941 | 1201.732 | 1202.220 | 1202.976 | 1202.090 | 1203.061 | 1201.997 | 1203.168 |
Std | 1.35E−01 | 4.91E−01 | 6.14E−01 | 5.35E−01 | 6.07E−01 | 5.03E−01 | 5.78E−01 | 4.93E−01 | 5.47E−01 | 4.61E−01 | 9.24E−01 | 5.31E−01 | |
Rank | 1 | 9 | 9 | 2 | 4 | 3 | 7 | 10 | 6 | 11 | 5 | 12 | |
F13 | Ave | 1300.330 | 1307.387 | 1306.230 | 1300.713 | 1300.749 | 1301.793 | 1300.560 | 1304.057 | 1300.510 | 1303.778 | 1308.047 | 1300.540 |
Std | 6.24E−02 | 5.15E−01 | 4.49E−01 | 2.59E−01 | 3.29E−01 | 9.94E−01 | 1.18E−01 | 5.82E−01 | 1.16E−01 | 3.78E−01 | 8.64E−01 | 7.51E−02 | |
Rank | 1 | 11 | 10 | 5 | 6 | 7 | 4 | 9 | 2 | 8 | 12 | 3 | |
F14 | Ave | 1400.951 | 1616.777 | 1639.597 | 1406.981 | 1405.028 | 1424.047 | 1401.953 | 1483.221 | 1400.276 | 1473.699 | 1659.028 | 1400.357 |
Std | 3.21E+00 | 2.30E+01 | 1.94E+01 | 7.23E+00 | 6.93E+00 | 1.15E+01 | 3.69E+00 | 2.29E+01 | 6.90E−02 | 2.12E+01 | 3.28E+01 | 5.25E−02 | |
Rank | 1 | 10 | 11 | 5 | 5 | 7 | 1 | 9 | 1 | 8 | 12 | 1 | |
F15 | Ave | 1.04E+07 | 6.16E+04 | 1.72E+05 | 1.93E+03 | 1.56E+03 | 1.62E+03 | 1.70E+03 | 1.94E+04 | 1.55E+03 | 1.93E+04 | 1.86E+05 | 1.51E+03 |
Std | 7.55E+06 | 3.63E+04 | 7.00E+04 | 3.50E+02 | 3.27E+01 | 4.91E+01 | 1.54E+02 | 2.76E+04 | 1.04E+01 | 1.35E+04 | 8.76E+04 | 5.52E+00 | |
Rank | 12 | 9 | 10 | 6 | 3 | 4 | 5 | 8 | 2 | 7 | 11 | 1 | |
F16 | Ave | 1611.813 | 1613.055 | 1612.845 | 1612.904 | 1612.646 | 1612.855 | 1613.053 | 1612.913 | 1612.430 | 1613.217 | 1613.406 | 1613.057 |
Std | 5.17E−01 | 3.31E−01 | 4.29E−01 | 2.27E−01 | 4.92E−01 | 3.63E−01 | 4.16E−01 | 2.87E−01 | 4.64E−01 | 2.68E−01 | 3.37E−01 | 2.76E−01 | |
Rank | 1 | 9 | 4 | 6 | 3 | 5 | 8 | 7 | 2 | 11 | 12 | 10 | |
F17 | Ave | 1.67E+08 | 6.62E+07 | 5.51E+07 | 4.86E+06 | 5.49E+06 | 1.90E+07 | 1.81E+07 | 9.76E+06 | 6.82E+06 | 1.36E+07 | 1.64E+08 | 1.77E+06 |
Std | 6.68E+07 | 3.13E+07 | 2.46E+07 | 4.45E+06 | 3.45E+06 | 1.64E+07 | 1.39E+07 | 1.36E+07 | 5.73E+06 | 5.69E+06 | 5.18E+07 | 1.06E+06 | |
Rank | 12 | 10 | 9 | 2 | 3 | 8 | 7 | 5 | 4 | 6 | 11 | 1 | |
F18 | Ave | 9.73E+08 | 3.26E+09 | 1.00E+09 | 7.03E+03 | 4.19E+05 | 4.37E+04 | 4.90E+05 | 4.14E+08 | 3.52E+05 | 3.09E+08 | 5.02E+09 | 3.74E+03 |
Std | 3.86E+08 | 9.68E+08 | 9.08E+08 | 4.80E+03 | 3.14E+05 | 1.20E+05 | 5.99E+05 | 8.62E+08 | 5.20E+05 | 1.43E+08 | 2.34E+09 | 1.76E+03 | |
Rank | 9 | 11 | 10 | 2 | 5 | 3 | 6 | 8 | 4 | 7 | 12 | 1 | |
F19 | Ave | 2056.288 | 2263.119 | 2136.766 | 1939.479 | 1949.248 | 2066.012 | 1991.064 | 2065.617 | 1975.592 | 2045.412 | 2450.200 | 1911.239 |
Std | 3.21E+01 | 5.74E+01 | 3.74E+01 | 2.06E+01 | 2.82E+01 | 4.82E+01 | 5.01E+01 | 7.63E+01 | 4.44E+01 | 4.18E+01 | 1.24E+02 | 1.57E+01 | |
Rank | 7 | 11 | 10 | 2 | 3 | 8 | 5 | 8 | 4 | 6 | 12 | 1 | |
F20 | Ave | 1.05E+07 | 4.73E+04 | 1.36E+05 | 6.66E+04 | 1.07E+05 | 1.36E+05 | 7.72E+04 | 4.60E+04 | 3.86E+04 | 5.67E+04 | 3.56E+06 | 2.88E+04 |
Std | 8.61E+06 | 1.84E+04 | 4.98E+04 | 5.11E+04 | 7.76E+04 | 1.20E+05 | 5.58E+04 | 3.46E+04 | 1.89E+04 | 3.56E+04 | 3.05E+06 | 1.67E+04 | |
Rank | 12 | 4 | 9 | 6 | 8 | 9 | 7 | 3 | 2 | 5 | 11 | 1 | |
F21 | Ave | 9.70E+07 | 2.48E+07 | 1.27E+07 | 8.35E+05 | 1.97E+06 | 5.87E+06 | 7.70E+06 | 3.87E+06 | 7.20E+05 | 3.08E+06 | 1.01E+08 | 5.59E+05 |
Std | 4.78E+07 | 1.09E+07 | 4.48E+06 | 5.79E+05 | 1.49E+06 | 7.29E+06 | 6.71E+06 | 5.56E+06 | 5.68E+05 | 1.42E+06 | 7.28E+07 | 3.92E+05 | |
Rank | 11 | 10 | 9 | 3 | 4 | 7 | 8 | 6 | 2 | 5 | 12 | 1 | |
F22 | Ave | 4.52E+04 | 5117.806 | 2952.945 | 2845.496 | 2952.945 | 3235.684 | 3135.555 | 3250.193 | 3024.443 | 3238.140 | 1.19E+04 | 2623.296 |
Std | 5.74E+04 | 1.37E+03 | 2.98E+02 | 2.53E+02 | 2.89E+02 | 2.68E+02 | 2.24E+02 | 5.92E+02 | 2.32E+02 | 1.86E+02 | 1.20E+04 | 1.91E+02 | |
Rank | 12 | 10 | 9 | 2 | 3 | 6 | 5 | 8 | 4 | 6 | 11 | 1 | |
F23 | Ave | 2653.478 | 2602.178 | 2500.282 | 2500.000 | 2500.282 | 2500.002 | 2685.292 | 2696.331 | 2500.000 | 2707.103 | 2500.000 | 2500.000 |
Std | 1.33E+01 | 2.10E+02 | 8.97E+01 | 0 | 2.86E−01 | 4.62E−04 | 2.27E+01 | 7.30E+01 | 0 | 2.19E+01 | 0 | 0 | |
Rank | 9 | 8 | 7 | 1 | 5 | 5 | 10 | 11 | 1 | 12 | 1 | 1 | |
F24 | Ave | 2631.294 | 2600.000 | 2600.000 | 2600.000 | 2600.000 | 2600.161 | 2609.962 | 2609.975 | 2600.000 | 2618.948 | 2600.000 | 2600.000 |
Std | 2.40E+00 | 6.33E−04 | 0 | 0 | 0 | 1.21E−01 | 7.33E+00 | 1.43E+01 | 1.89E−04 | 1.23E+01 | 0 | 0 | |
Rank | 12 | 6 | 1 | 1 | 1 | 6 | 9 | 9 | 6 | 11 | 1 | 1 | |
F25 | Ave | 2712.083 | 2700.000 | 2700.000 | 2700.000 | 2700.000 | 2700.000 | 2722.048 | 2727.624 | 2700.000 | 2742.145 | 2700.000 | 2700.000 |
Std | 1.76E+00 | 2.33E-13 | 0 | 0 | 0 | 6.40E−06 | 2.16E+01 | 8.27E+00 | 0 | 1.18E+01 | 0 | 0 | |
Rank | 9 | 7 | 1 | 1 | 1 | 7 | 10 | 11 | 1 | 12 | 1 | 1 | |
F26 | Ave | 2765.295 | 2783.499 | 2755.242 | 2755.274 | 2755.242 | 2703.802 | 2725.309 | 2760.821 | 2750.252 | 2703.556 | 2774.633 | 2700.565 |
Std | 4.88E+01 | 3.40E+01 | 2.08E+01 | 5.07E+01 | 5.08E+01 | 2.05E+00 | 7.12E+01 | 6.87E+01 | 5.10E+01 | 4.22E−01 | 4.01E+01 | 1.11E−01 | |
Rank | 10 | 12 | 4 | 7 | 7 | 1 | 5 | 7 | 6 | 1 | 10 | 1 | |
F27 | Ave | 3569.392 | 3840.987 | 3356.470 | 2900.000 | 2905.136 | 2996.946 | 3885.501 | 3766.006 | 2900.000 | 3706.025 | 2900.000 | 2900.000 |
Std | 2.46E+02 | 3.29E+02 | 5.11E+02 | 0 | 4.66E+00 | 3.05E+02 | 3.50E+02 | 3.74E+02 | 0 | 3.31E+02 | 0 | 0 | |
Rank | 8 | 11 | 7 | 1 | 5 | 6 | 12 | 10 | 1 | 9 | 1 | 1 | |
F28 | Ave | 6311.489 | 6341.009 | 5388.270 | 3000.000 | 3000.982 | 5126.206 | 5479.707 | 7013.898 | 3000.000 | 5504.414 | 3000.000 | 3000.000 |
Std | 9.17E+02 | 1.26E+03 | 1.47E+03 | 0 | 1.47E+00 | 2.11E+03 | 6.15E+02 | 9.90E+02 | 0 | 4.47E+02 | 0 | 0 | |
Rank | 10 | 11 | 7 | 1 | 5 | 6 | 8 | 12 | 1 | 9 | 1 | 1 | |
F29 | Ave | 7.19E+07 | 2.41E+07 | 3.89E+07 | 3.10E+03 | 1.19E+07 | 3.18E+05 | 1.12E+07 | 5.60E+07 | 1.12E+06 | 2.94E+07 | 3100.000 | 3100.000 |
Std | 3.83E+07 | 1.06E+08 | 4.54E+07 | 0 | 1.27E+07 | 2.60E+05 | 8.30E+06 | 2.70E+07 | 4.98E+06 | 1.17E+07 | 0 | 0 | |
Rank | 12 | 8 | 10 | 1 | 7 | 4 | 6 | 11 | 5 | 9 | 1 | 1 | |
F30 | Ave | 7.20E+06 | 1.47E+06 | 7.65E+05 | 3200.000 | 1.91E+05 | 3.13E+05 | 2.69E+05 | 4.69E+05 | 7.33E+04 | 4.84E+05 | 3200.000 | 3200.000 |
Std | 3.29E+06 | 9.78E+05 | 8.30E+05 | 0 | 9.46E+04 | 3.20E+05 | 1.56E+05 | 4.64E+05 | 1.22E+05 | 1.67E+05 | 0 | 0 | |
Rank | 12 | 11 | 10 | 1 | 5 | 7 | 6 | 8 | 4 | 9 | 1 | 1 | |
+ / = / − | 3/1/26 | 1/3/26 | 1/7/22 | 5/9/16 | 3/3/24 | 4/1/25 | 0/4/26 | 0/6/24 | 3/10/17 | 0/3/27 | 0/7/23 | / | |
Mean Rank | 6.9667 | 8.5667 | 7.9000 | 2.8667 | 3.8000 | 5.2333 | 5.8667 | 7.3000 | 2.7667 | 7.2000 | 8.1333 | 1.7667 | |
Result | 7 | 12 | 10 | 3 | 4 | 5 | 6 | 9 | 2 | 8 | 11 | 1 | |
Bold represents the optimal values of the evaluation indicators
The Wilcoxon rank-sum test statistically evaluates the significance of the differences between algorithms and is used to analyze the importance of the evaluation results. If the p-value is less than 0.05, it means that FRSA is significantly different from the compared algorithm. According to the "+ / = / −" data in the last rows of the table, BMO, the best of the comparison algorithms, is superior to FRSA on 5 functions but inferior to FRSA on 16 functions. Secondly, for the HHO algorithm, although its results on 10 test functions are equivalent to those of FRSA, it is superior/inferior to FRSA on 3/17 functions. For the other algorithms, FRSA outperforms them on at least 22 test functions; especially compared with the SCA algorithm, FRSA has better solution results. Therefore, from the overall view of the 30 functions, FRSA has a better effect in solving the 30-dimensional CEC2014 set.
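For reference, the Wilcoxon rank-sum test used here can be reproduced with SciPy; the sketch below compares two hypothetical vectors of 20 run results and applies the 0.05 significance threshold mentioned above.

```python
import numpy as np
from scipy.stats import ranksums

# hypothetical best-fitness values from 20 independent runs of FRSA and one competitor
frsa_runs = np.random.normal(1.18e7, 8.2e6, size=20)
other_runs = np.random.normal(6.16e8, 1.6e8, size=20)

stat, p_value = ranksums(frsa_runs, other_runs)   # two-sided Wilcoxon rank-sum test
print(f"p = {p_value:.3e}:", "significant" if p_value < 0.05 else "not significant")
```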
Figure 7 shows the boxplots of each algorithm on some functions of the 30-dimensional CEC2014 set, and Fig. 8 shows some convergence curves. The boxes of FRSA are relatively small, which indicates that the results of multiple runs of FRSA are close; at the same time, the boxplots of FRSA are lower, which shows that its solution accuracy is high. In addition, on function F19 in Fig. 7, although FRSA has an outlier, its overall position is lower. According to the convergence curves in Fig. 8, the convergence results of all curves are in accordance with Table 2, among which FRSA has high solution accuracy.
Fig. 7 Boxplots on 30-dimensional CEC2014 [figure not shown]
Fig. 8 Convergence curves on 30-dimensional CEC2014 [figure not shown]
Table 3. Wilcoxon rank sum test values and Comparison results of each comparison algorithm on 10-dimensional CEC2014 test set
F | GA | PSO | WOA | BMO | HHO | LFD | AOA | TSA | AO | HGS | RSA | FRSA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
F1 | 6.796E−08 | 4.407E−01 | 6.796E−08 | 6.796E−08 | 6.796E−08 | 6.796E−08 | 6.796E−08 | 9.173E−08 | 6.796E−08 | 2.563E−07 | 6.796E−08 | / |
F2 | 6.796E−08 | 1.075E−01 | 6.796E−08 | 4.679E−02 | 6.796E−08 | 3.939E−07 | 6.796E−08 | 6.796E−08 | 6.796E−08 | 4.601E−04 | 6.796E−08 | / |
F3 | 6.796E−08 | 1.719E−01 | 7.898E−08 | 3.369E−01 | 2.222E−04 | 1.065E−07 | 2.745E−04 | 1.576E−06 | 5.874E−06 | 1.017E−01 | 1.235E−07 | / |
F4 | 6.796E−08 | 7.338E−01 | 5.255E−05 | 1.794E−02 | 3.293E−05 | 3.939E−07 | 6.796E−08 | 1.576E−06 | 6.015E−07 | 1.333E−01 | 6.796E−08 | / |
F5 | 6.796E−08 | 1.264E−01 | 6.610E−05 | 5.166E−06 | 5.874E−06 | 6.796E−08 | 3.851E−02 | 5.609E−01 | 6.359E−01 | 6.796E−08 | 6.168E−01 | / |
F6 | 6.796E−08 | 6.750E−01 | 6.796E−08 | 1.159E−04 | 6.796E−08 | 6.796E−08 | 6.796E−08 | 7.898E−08 | 1.576E−06 | 5.166E−06 | 6.796E−08 | / |
F7 | 6.796E−08 | 5.978E−01 | 6.796E−08 | 3.966E−03 | 6.796E−08 | 6.796E−08 | 6.796E−08 | 6.796E−08 | 6.796E−08 | 2.748E−02 | 6.796E−08 | / |
F8 | 9.786E−03 | 2.734E−04 | 9.173E−08 | 3.499E−06 | 6.796E−08 | 6.796E−08 | 6.796E−08 | 6.796E−08 | 7.898E−08 | 3.987E−06 | 6.796E−08 | / |
F9 | 3.966E−03 | 9.461E−01 | 1.376E−06 | 1.047E−06 | 4.540E−06 | 1.415E−05 | 7.898E−08 | 1.065E−07 | 2.041E−05 | 1.264E−01 | 5.727E−08 | / |
F10 | 1.794E−04 | 1.415E−05 | 6.796E−08 | 6.796E−08 | 9.127E−07 | 6.796E−08 | 7.898E−08 | 6.796E−08 | 1.065E−07 | 9.892E−01 | 6.467E−08 | / |
F11 | 6.359E−01 | 1.227E−03 | 1.404E−01 | 3.369E−01 | 2.616E−01 | 8.585E−02 | 3.939E−07 | 4.703E−03 | 1.000 | 1.794E−02 | 1.918E−07 | / |
F12 | 9.127E−07 | 1.431E−07 | 1.264E−01 | 2.062E−06 | 8.355E−03 | 1.159E−04 | 7.353E−01 | 9.246E−01 | 5.629E−04 | 1.918E−07 | 9.892E−01 | / |
F13 | 4.986E−02 | 1.075E−01 | 4.155E−04 | 7.353E−01 | 2.563E−07 | 3.705E−05 | 6.796E−08 | 9.127E−07 | 3.416E−07 | 2.748E−02 | 6.796E−08 | /
F14 | 1.159E−04 | 1.136E−01 | 2.503E−01 | 4.388E−02 | 6.949E−01 | 6.040E−03 | 6.796E−08 | 1.803E−06 | 3.639E−03 | 5.560E−03 | 6.796E−08 | / |
F15 | 6.796E−08 | 4.407E−01 | 1.918E−07 | 4.112E−02 | 6.796E−08 | 6.796E−08 | 6.796E−08 | 1.065E−07 | 9.173E−08 | 9.676E−01 | 6.796E−08 | / |
F16 | 3.057E−03 | 4.643E−02 | 7.948E−07 | 5.166E−06 | 3.966E−03 | 3.987E−06 | 7.948E−07 | 3.750E−04 | 1.037E−04 | 2.073E−02 | 6.796E−08 | / |
F17 | 6.796E−08 | 8.357E−04 | 8.355E−03 | 1.657E−07 | 1.349E−03 | 7.579E−04 | 1.481E−03 | 7.406E−05 | 3.648E−01 | 9.278E−05 | 6.796E−08 | / |
F18 | 6.796E−08 | 7.557E−01 | 4.388E−02 | 1.478E−01 | 5.979E−01 | 3.369E−01 | 1.719E−01 | 7.972E−01 | 1.199E−01 | 3.793E−01 | 6.917E−07 | / |
F19 | 1.918E−07 | 2.073E−02 | 6.796E−08 | 5.255E−05 | 6.796E−08 | 6.796E−08 | 6.796E−08 | 6.796E−08 | 6.796E−08 | 5.874E−06 | 6.796E−08 | / |
F20 | 6.796E−08 | 2.184E−01 | 2.139E−03 | 9.676E−01 | 2.184E−01 | 6.787E−02 | 6.787E−02 | 4.679E−02 | 4.679E−02 | 5.075E−01 | 1.918E−07 | / |
F21 | 6.796E−08 | 1.404E−01 | 1.431E−07 | 2.745E−04 | 8.357E−04 | 1.199E−01 | 3.069E−06 | 3.372E−02 | 3.104E−01 | 2.977E−01 | 6.796E−08 | / |
F22 | 6.796E−08 | 1.047E−06 | 7.898E−08 | 5.652E−02 | 6.796E−08 | 6.796E−08 | 6.796E−08 | 6.796E−08 | 6.796E−08 | 3.793E−01 | 6.796E−08 | / |
F23 | 8.007E−09 | 3.305E−09 | 8.007E−09 | NaN | NaN | 8.007E−09 | 8.007E−09 | 8.007E−09 | 1.054E−07 | NaN | NaN | / |
F24 | 5.227E−07 | 2.944E−02 | 6.917E−07 | 4.636E−08 | 1.538E−07 | 4.539E−07 | 5.601E−07 | 3.416E−07 | 8.068E−07 | 9.385E−07 | 4.636E−08 | / |
F25 | 9.435E−03 | 6.196E−01 | 6.601E−01 | 1.626E−01 | 1.626E−01 | 2.978E−04 | 2.538E−04 | 1.893E−03 | 1.157E−01 | 4.998E−01 | 1.626E−01 | / |
F26 | 8.181E−01 | 1.997E−04 | 8.355E−03 | 1.075E−01 | 1.116E−03 | 5.629E−04 | 6.796E−08 | 5.874E−06 | 2.218E−07 | 6.220E−04 | 6.796E−08 | / |
F27 | 2.779E−03 | 6.523E−05 | 3.787E−06 | 3.421E−01 | 1.000 | 8.078E−01 | 3.787E−06 | 2.384E−07 | 2.779E−03 | 5.737E−01 | 3.421E−01 | / |
F28 | 8.007E−09 | 8.007E−09 | 8.007E−09 | NaN | NaN | 8.007E−09 | 8.007E−09 | 8.007E−09 | 8.007E−09 | NaN | NaN | / |
F29 | 8.007E−09 | 7.992E−09 | 7.992E−09 | NaN | 2.992E−08 | 8.007E−09 | 8.007E−09 | 8.007E−09 | 8.007E−09 | 2.566E−05 | NaN | / |
F30 | 8.007E−09 | 8.007E−09 | 8.007E−09 | 4.016E−02 | 1.054E−07 | 8.007E−09 | 8.007E−09 | 8.007E−09 | 8.007E−09 | 1.105E−06 | NaN | / |
+ / = / − | 0/2/28 | 2/15/13 | 0/4/26 | 0/12/18 | 0/8/22 | 0/5/25 | 0/3/27 | 0/3/27 | 1/6/23 | 2/13/15 | 0/8/22 | / |
Rank | 7.5667 | 3.2667 | 7.3333 | 3.8333 | 4.4333 | 5.7667 | 6.7667 | 8.0667 | 3.9667 | 2.6333 | 8.7667 | 1.4667 |
Result | 10 | 3 | 9 | 4 | 6 | 7 | 8 | 11 | 5 | 2 | 12 | 1 |
Bold represents the optimal values of the evaluation indicators
At the same time, the test was also carried out in the case of 10 dimensions, and the specific results and Wilcoxon rank sum test values are summarized in Table 3.
According to the Wilcoxon rank-sum test values and the statistical analysis in Table 3, the overall solution results of FRSA are superior over the 30 functions, ranking first, while PSO, BMO, LFD, AOA, AO and HGS [53] obtained smaller values on 4, 1, 1, 1 and 1 functions, respectively. Therefore, the results obtained by FRSA are far superior to those obtained by the other comparison algorithms. In addition, on functions F5, F12, F13 and F16, the average value of FRSA is the same as that of some comparison algorithms, with only a discrepancy in the standard deviation; it can therefore be considered that all comparison algorithms achieve good solution results on these functions. The other methods are superior/equal/inferior to FRSA on 0/2/28, 2/15/13, 0/4/26, 0/12/18, 0/8/22, 0/5/25, 0/3/27, 0/3/27, 1/6/23, 2/13/15 and 0/8/22 functions, respectively. That is to say, PSO in the comparison algorithms is better than FRSA on 2 functions, but worse than FRSA on 13 functions.
In addition, Fig. 9 shows boxplots of each algorithm on some functions, and Fig. 10 shows the corresponding iterative convergence curves. The boxes corresponding to FRSA are small and located lower, which indicates that the solution results of FRSA are stable and relatively accurate. The convergence speed of FRSA is slower than that of some comparison algorithms, but its convergence curves keep decreasing even in the late iterations, which indicates that FRSA has a good ability to escape from premature convergence.
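Boxplots such as those in Fig. 9 are straightforward to regenerate from the per-run results; the short matplotlib sketch below uses synthetic placeholder data rather than the actual experimental values.

```python
import numpy as np
import matplotlib.pyplot as plt

# synthetic per-run best-fitness values standing in for the real results on one function
rng = np.random.default_rng(1)
results = {"FRSA": rng.normal(1.00, 0.05, 30),
           "RSA":  rng.normal(1.40, 0.20, 30),
           "PSO":  rng.normal(1.20, 0.10, 30)}

plt.boxplot(list(results.values()), labels=list(results.keys()))
plt.ylabel("Best fitness over 30 runs")
plt.title("Boxplot comparison on one test function (synthetic data)")
plt.show()
```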
[See PDF for image]
Fig. 9
Boxplots of all algorithms on CEC2014 (dim = 10)
[See PDF for image]
Fig. 10
Convergence curves of all algorithms on CEC2014 (dim = 10)
In recent years, various new algorithms and improved algorithms have emerged one after another, with dozens or even hundreds of improved variants of a single original algorithm. It is worth mentioning that new algorithms, which are inspired by newly observed natural phenomena or behaviours, are not necessarily better than all older algorithms; an improved variant of an old algorithm is not necessarily inferior to an improved variant of a new algorithm, nor is an improved variant of a new algorithm necessarily superior to the older ones. Therefore, within a certain range, several improved variants of previously proposed heuristic algorithms were randomly selected and compared with the improved algorithm in this paper, namely CGSA [68], ALALO [69], GQPSO [70], ISCA [71] and QWOA [72]. Due to space constraints, this paper only gives the convergence curves on part of the test set, see Fig. 11. Although the convergence rate of FRSA is not as fast as that of the other improved algorithms on some functions, its accuracy is still superior, which is consistent with the above analysis. Overall, the convergence curves show that FRSA has good convergence speed and high convergence accuracy.
[See PDF for image]
Fig. 11
Convergence curves of different improved algorithms on part of the CEC2014 test set (dim = 10)
Experiment and Analysis on The CEC2017 Test Set
The performance of FRSA and other methods is further tested on the CEC2017 test functions. Here, the following comparison algorithms are selected: SSA [39], GA [27], WOA [38], BMO [49] and HHO [40] as widely used algorithms; AOA [51], TSA [52], AO [45], RSO [46] and LFD [50] as novel algorithms; as well as the original RSA. n is taken as 40 in this section.
Table 4 displays the solution results under the same evaluation indicators as above; the minimum average function values are recorded, and the best average fitness value is highlighted in bold. FRSA's comprehensive ranking, 1.9310, is the best among all methods. The penultimate row shows that the solving effect of RSA, TSA, RSO and AOA is poor, with average ranks greater than 9, whereas SSA, BMO and AO perform relatively well, with average ranks less than 5. FRSA obtains the smallest values on 23 functions, far more than any comparison algorithm. Therefore, FRSA performs better in solving the 30-dimensional CEC2017 functions.
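The "Mean rank" row of such tables is obtained by ranking the algorithms on every function by their average fitness and then averaging the per-function ranks. A brief sketch is given below, assuming the averages are stored in a NumPy matrix; the tie-handling choice (average ranks) is an assumption for illustration.

```python
import numpy as np
from scipy.stats import rankdata

def mean_ranks(avg_fitness):
    """avg_fitness: (n_functions, n_algorithms) matrix of average fitness values
    on minimization problems.  Each row is ranked (1 = best, ties receive average
    ranks) and the per-algorithm mean over all functions is returned, i.e. the
    'Mean rank' row of the comparison tables."""
    ranks = np.vstack([rankdata(row, method="average") for row in avg_fitness])
    return ranks.mean(axis=0)

# hypothetical example: 3 functions, 4 algorithms (smaller mean rank = better overall)
table = np.array([[2.0, 1.5, 3.0, 1.00],
                  [5.0, 4.0, 6.0, 3.00],
                  [0.2, 0.1, 0.3, 0.05]])
print(mean_ranks(table))
```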
Table 4. Comparison results on CEC2017 with 30-dimension
F | Index | GA | SSA | WOA | BMO | HHO | LFD | AOA | TSA | AO | RSO | RSA | FRSA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
F1 | Ave | 5.990E + 9 | 5.189E + 3 | 3.145E + 9 | 2.384E + 9 | 1.422E + 8 | 7.224E + 9 | 5.01E + 10 | 1.84E + 10 | 2.56E + 9 | 3.73E + 10 | 6.08E + 10 | 4.513E + 3 |
Std | 1.210E + 9 | 5.935E + 3 | 1.243E + 9 | 1.561E + 9 | 6.577E + 7 | 3.264E + 9 | 6.467E + 9 | 6.081E + 9 | 7.767E + 8 | 3.667E + 9 | 8.776E + 9 | 5.824E + 3 | |
Rank | 7 | 2 | 6 | 4 | 3 | 8 | 11 | 9 | 5 | 10 | 12 | 1 | |
F3 | Ave | 9.487E + 7 | 6.409E + 4 | 2.679E + 5 | 7.770E + 4 | 5.234E + 4 | 8.955E + 4 | 5.087E + 4 | 5.493E + 4 | 6.651E + 4 | 7.174E + 4 | 1.096E + 5 | 7.473E + 4 |
Std | 3.225E + 8 | 2.414E + 4 | 7.313E + 4 | 5.382E + 3 | 6.959E + 3 | 1.396E + 4 | 1.086E + 4 | 1.373E + 4 | 8.033E + 3 | 8.145E + 3 | 2.980E + 4 | 1.261E + 4 | |
Rank | 12 | 4 | 11 | 8 | 2 | 9 | 1 | 3 | 5 | 6 | 10 | 7 | |
F4 | Ave | 3.593E + 3 | 5.201E + 2 | 1.141E + 3 | 6.379E + 2 | 6.863E + 2 | 1.902E + 3 | 1.014E + 4 | 4.228E + 3 | 8.449E + 2 | 1.268E + 4 | 1.926E + 4 | 5.146E + 2 |
Std | 1.173E + 3 | 3.914E + 1 | 2.247E + 2 | 4.470E + 1 | 1.144E + 2 | 1.705E + 3 | 1.494E + 3 | 2.161E + 3 | 1.531E + 2 | 3.146E + 3 | 4.417E + 3 | 2.271E + 1 | |
Rank | 8 | 2 | 6 | 3 | 4 | 7 | 10 | 9 | 5 | 11 | 12 | 1 | |
F5 | Ave | 6.749E + 2 | 6.760E + 2 | 8.562E + 2 | 7.237E + 2 | 7.588E + 2 | 7.404E + 2 | 8.866E + 2 | 8.655E + 2 | 7.224E + 2 | 8.710E + 2 | 1.030E + 3 | 6.592E + 2 |
Std | 2.214E + 1 | 4.283E + 1 | 6.063E + 1 | 3.790E + 1 | 2.764E + 1 | 4.044E + 1 | 2.156E + 1 | 6.293E + 1 | 3.413E + 1 | 2.942E + 1 | 8.145E + 1 | 4.494E + 1 | |
Rank | 2 | 3 | 8 | 5 | 7 | 6 | 11 | 9 | 4 | 10 | 12 | 1 | |
F6 | Ave | 6.280E + 2 | 6.524E + 2 | 6.803E + 2 | 6.507E + 2 | 6.670E + 2 | 6.676E + 2 | 6.784E + 2 | 6.810E + 2 | 6.543E + 2 | 6.891E + 2 | 7.011E + 2 | 6.023E + 2 |
Std | 5.532E+0 | 1.085E + 1 | 1.232E + 1 | 1.069E + 1 | 6.570E+0 | 1.119E + 1 | 6.071E+0 | 1.353E + 1 | 7.610E+0 | 6.599E+0 | 1.581E + 1 | 1.209E+0 | |
Rank | 2 | 4 | 9 | 3 | 6 | 7 | 8 | 10 | 5 | 11 | 12 | 1 | |
F7 | Ave | 9.998E + 2 | 9.088E + 2 | 1.308E + 3 | 1.120E + 3 | 1.317E + 3 | 1.239E + 3 | 1.385E + 3 | 1.262E + 3 | 1.131E + 3 | 1.305E + 3 | 1.661E + 3 | 8.862E + 2 |
Std | 4.647E + 1 | 5.227E + 1 | 8.673E + 1 | 9.842E + 1 | 6.643E + 1 | 7.301E + 1 | 4.661E + 1 | 8.968E + 1 | 5.543E + 1 | 4.857E + 1 | 9.331E-13 | 7.072E + 1 | |
Rank | 3 | 2 | 9 | 4 | 10 | 6 | 11 | 7 | 5 | 8 | 12 | 1 | |
F8 | Ave | 9.346E + 2 | 9.626E + 2 | 1.066E + 3 | 9.773E + 2 | 9.802E + 2 | 9.669E + 2 | 1.104E + 3 | 1.122E + 3 | 9.845E + 2 | 1.098E + 3 | 1.263E + 3 | 9.724E + 2 |
Std | 1.892E + 1 | 3.609E + 1 | 5.592E + 1 | 2.188E + 1 | 2.301E + 1 | 2.672E + 1 | 1.422E + 1 | 4.403E + 1 | 2.545E + 1 | 2.727E + 1 | 7.098E + 1 | 6.718E + 1 | |
Rank | 1 | 2 | 8 | 5 | 6 | 3 | 9 | 11 | 7 | 10 | 12 | 4 | |
F9 | Ave | 7.505E + 3 | 5.826E + 3 | 1.297E + 4 | 5.286E + 3 | 8.245E + 3 | 6.622E + 3 | 8.526E + 3 | 1.506E + 4 | 6.768E + 3 | 8.928E + 3 | 2.426E + 4 | 1.593E + 3 |
Std | 1.380E + 3 | 1.273E + 3 | 5.224E + 3 | 5.678E + 2 | 5.887E + 2 | 2.078E + 3 | 9.228E + 2 | 4.472E + 3 | 1.149E + 3 | 1.467E + 3 | 8.698E + 3 | 1.049E + 3 | |
Rank | 6 | 3 | 10 | 2 | 7 | 4 | 8 | 11 | 5 | 9 | 12 | 1 | |
F10 | Ave | 5.108E + 3 | 5.400E + 3 | 7.120E + 3 | 5.407E + 3 | 5.883E + 3 | 6.501E + 3 | 8.517E + 3 | 7.161E + 3 | 6.189E + 3 | 8.709E + 3 | 8.957E + 3 | 8.621E + 3 |
Std | 4.464E + 2 | 9.393E + 2 | 9.348E + 2 | 2.723E + 2 | 9.328E + 2 | 9.597E + 2 | 4.988E + 2 | 4.764E + 2 | 7.864E + 2 | 5.197E + 2 | 9.761E + 2 | 3.663E + 2 | |
Rank | 1 | 2 | 7 | 3 | 4 | 6 | 9 | 8 | 5 | 11 | 12 | 10 | |
F11 | Ave | 7.446E + 4 | 1.383E + 3 | 8.270E + 3 | 2.369E + 3 | 1.473E + 3 | 7.337E + 3 | 5.198E + 3 | 4.815E + 3 | 4.090E + 3 | 5.850E + 3 | 3.052E + 4 | 1.281E + 3 |
Std | 7.554E + 4 | 7.776E + 1 | 3.211E + 3 | 7.657E + 2 | 2.437E + 2 | 1.845E + 3 | 1.412E + 3 | 1.296E + 3 | 1.095E + 3 | 1.300E + 3 | 1.198E + 4 | 4.723E + 1 | |
Rank | 12 | 2 | 10 | 4 | 3 | 9 | 7 | 6 | 5 | 8 | 11 | 1 | |
F12 | Ave | 1.611E + 9 | 2.919E + 7 | 4.335E + 8 | 2.511E + 7 | 6.858E + 7 | 9.808E + 8 | 1.225E + 10 | 3.663E + 9 | 1.873E + 8 | 1.196E + 10 | 1.741E + 10 | 1.562E + 6 |
Std | 4.533E + 8 | 1.932E + 7 | 3.594E + 8 | 2.365E + 7 | 5.356E + 7 | 1.196E + 9 | 2.028E + 9 | 2.641E + 9 | 1.056E + 8 | 1.659E + 9 | 3.976E + 9 | 1.179E + 6 | |
Rank | 8 | 3 | 6 | 2 | 4 | 7 | 11 | 9 | 5 | 10 | 12 | 1 | |
F13 | Ave | 3.202E + 9 | 1.141E + 5 | 5.904E + 6 | 7.337E + 4 | 9.559E + 6 | 2.620E + 7 | 3.598E + 9 | 2.866E + 9 | 4.317E + 6 | 1.101E + 10 | 1.662E + 10 | 1.094E + 4 |
Std | 1.170E + 9 | 6.791E + 4 | 5.227E + 6 | 6.552E + 4 | 3.797E + 7 | 7.597E + 7 | 2.064E + 9 | 3.239E + 9 | 5.491E + 6 | 2.757E + 9 | 6.773E + 9 | 9.982E + 3 | |
Rank | 9 | 3 | 5 | 2 | 6 | 7 | 10 | 8 | 4 | 11 | 12 | 1 | |
F14 | Ave | 4.953E + 7 | 1.800E + 5 | 2.419E + 6 | 6.462E + 5 | 1.269E + 6 | 8.195E + 5 | 1.094E + 6 | 1.810E + 6 | 1.123E + 6 | 3.426E + 6 | 3.065E + 7 | 8.330E + 4 |
Std | 4.058E + 7 | 4.925E + 5 | 2.793E + 6 | 5.859E + 5 | 1.113E + 6 | 1.018E + 6 | 7.586E + 5 | 1.963E + 6 | 1.142E + 6 | 2.420E + 6 | 3.517E + 7 | 4.878E + 4 | |
Rank | 12 | 2 | 9 | 3 | 8 | 4 | 5 | 7 | 6 | 10 | 11 | 1 | |
F15 | Ave | 7.425E + 8 | 7.536E + 4 | 5.774E + 6 | 6.641E + 3 | 1.203E + 5 | 9.772E + 5 | 2.327E + 8 | 1.482E + 8 | 1.535E + 5 | 1.549E + 8 | 1.893E + 9 | 4.297E + 3 |
Std | 3.782E + 8 | 3.397E + 4 | 1.096E + 7 | 4.996E + 3 | 5.973E + 4 | 4.070E + 6 | 2.057E + 8 | 2.291E + 8 | 1.246E + 5 | 3.164E + 8 | 8.440E + 8 | 2.647E + 3 | |
Rank | 12 | 3 | 7 | 2 | 4 | 6 | 11 | 8 | 5 | 9 | 10 | 1 | |
F16 | Ave | 4.570E + 3 | 2.919E + 3 | 4.224E + 3 | 2.856E + 3 | 3.519E + 3 | 3.900E + 3 | 5.745E + 3 | 3.490E + 3 | 3.379E + 3 | 4.402E + 3 | 7.199E + 3 | 2.831E + 3 |
Std | 1.474E + 3 | 3.082E + 2 | 4.567E + 2 | 3.565E + 2 | 3.973E + 2 | 5.751E + 2 | 7.118E + 2 | 3.789E + 2 | 4.994E + 2 | 4.674E + 2 | 1.221E + 3 | 4.541E + 2 | |
Rank | 10 | 3 | 8 | 2 | 6 | 7 | 11 | 5 | 4 | 9 | 12 | 1 | |
F17 | Ave | 1.913E + 4 | 2.332E + 3 | 2.666E + 3 | 2.235E + 3 | 2.806E + 3 | 2.830E + 3 | 4.360E + 3 | 2.513E + 3 | 2.358E + 3 | 3.161E + 3 | 1.317E + 4 | 2.193E + 3 |
Std | 2.205E + 4 | 2.252E + 2 | 3.064E + 2 | 3.704E + 2 | 2.883E + 2 | 2.889E + 2 | 2.902E + 3 | 2.609E + 2 | 2.243E + 2 | 1.995E + 2 | 1.699E + 4 | 2.234E + 2 | |
Rank | 12 | 3 | 6 | 2 | 7 | 8 | 10 | 5 | 4 | 9 | 11 | 1 | |
F18 | Ave | 2.315E + 8 | 1.831E + 6 | 1.063E + 7 | 2.080E + 6 | 2.948E + 6 | 1.270E + 7 | 1.286E + 7 | 4.545E + 6 | 5.702E + 6 | 1.046E + 7 | 2.001E + 8 | 3.115E + 6 |
Std | 1.227E + 8 | 1.605E + 6 | 1.666E + 7 | 3.509E + 6 | 3.060E + 6 | 1.739E + 7 | 8.637E + 6 | 6.660E + 6 | 5.257E + 6 | 9.408E + 6 | 1.963E + 8 | 3.734E + 6 | |
Rank | 12 | 1 | 8 | 2 | 3 | 9 | 10 | 5 | 6 | 7 | 11 | 4 | |
F19 | Ave | 9.389E + 8 | 4.871E + 6 | 1.395E + 7 | 9.417E + 3 | 1.038E + 6 | 5.020E + 6 | 4.079E + 8 | 1.686E + 8 | 2.374E + 6 | 6.368E + 8 | 1.872E + 9 | 5.743E + 3 |
Std | 4.187E + 8 | 3.248E + 6 | 1.369E + 7 | 6.119E + 3 | 6.758E + 5 | 6.070E + 6 | 2.513E + 8 | 1.893E + 8 | 1.728E + 6 | 3.235E + 8 | 1.198E + 9 | 6.797E + 3 | |
Rank | 11 | 5 | 7 | 2 | 3 | 6 | 9 | 8 | 4 | 10 | 12 | 1 | |
F20 | Ave | 2.621E + 3 | 2.596E + 3 | 2.947E + 3 | 2.798E + 3 | 2.765E + 3 | 2.810E + 3 | 2.851E + 3 | 2.889E + 3 | 2.686E + 3 | 2.973E + 3 | 3.480E + 3 | 2.570E + 3 |
Std | 1.855E + 2 | 2.051E + 2 | 1.370E + 2 | 2.273E + 2 | 2.060E + 2 | 2.292E + 2 | 1.397E + 2 | 1.915E + 2 | 1.837E + 2 | 1.878E + 2 | 2.397E + 2 | 2.116E + 2 | |
Rank | 3 | 2 | 10 | 6 | 5 | 7 | 8 | 9 | 4 | 11 | 12 | 1 | |
F21 | Ave | 2.472E + 3 | 2.450E + 3 | 2.607E + 3 | 2.477E + 3 | 2.579E + 3 | 2.579E + 3 | 2.647E + 3 | 2.648E + 3 | 2.516E + 3 | 2.709E + 3 | 2.872E + 3 | 2.424E + 3 |
Std | 2.886E + 1 | 3.342E + 1 | 5.423E + 1 | 4.266E + 1 | 5.039E + 1 | 6.235E + 1 | 3.479E + 1 | 5.587E + 1 | 3.395E + 1 | 4.445E + 1 | 7.139E + 1 | 5.400E + 1 | |
Rank | 3 | 2 | 8 | 4 | 6 | 7 | 9 | 10 | 5 | 11 | 12 | 1 | |
F22 | Ave | 4.210E + 3 | 4.061E + 3 | 7.094E + 3 | 3.144E + 3 | 7.041E + 3 | 6.790E + 3 | 9.093E + 3 | 8.715E + 3 | 4.429E + 3 | 8.797E + 3 | 9.830E + 3 | 6.338E + 3 |
Std | 1.864E + 3 | 2.240E + 3 | 2.379E + 3 | 1.539E + 3 | 1.811E + 3 | 1.690E + 3 | 1.556E + 3 | 1.140E + 3 | 2.070E + 3 | 1.261E + 3 | 8.594E + 2 | 3.758E + 3 | |
Rank | 3 | 2 | 8 | 1 | 7 | 6 | 11 | 9 | 4 | 10 | 12 | 5 | |
F23 | Ave | 2.933E + 3 | 2.804E + 3 | 3.099E + 3 | 2.851E + 3 | 3.220E + 3 | 3.230E + 3 | 3.603E + 3 | 3.281E + 3 | 3.016E + 3 | 3.255E + 3 | 3.774E + 3 | 2.784E + 3 |
Std | 6.006E + 1 | 5.390E + 1 | 1.164E + 2 | 5.284E + 1 | 1.146E + 2 | 1.189E + 2 | 1.562E + 2 | 1.514E + 2 | 5.744E + 1 | 8.058E + 1 | 1.440E + 2 | 4.525E + 1 | |
Rank | 4 | 2 | 6 | 3 | 7 | 8 | 11 | 10 | 5 | 9 | 12 | 1 | |
F24 | Ave | 3.280E + 3 | 2.956E + 3 | 3.252E + 3 | 2.980E + 3 | 3.446E + 3 | 3.433E + 3 | 3.869E + 3 | 3.396E + 3 | 3.117E + 3 | 3.472E + 3 | 3.808E + 3 | 2.986E + 3 |
Std | 9.703E + 1 | 4.762E + 1 | 8.098E + 1 | 3.782E + 1 | 1.197E + 2 | 1.406E + 2 | 1.147E + 2 | 9.500E + 1 | 6.644E + 1 | 1.034E + 2 | 2.889E + 2 | 5.242E + 1 | |
Rank | 6 | 1 | 5 | 2 | 10 | 8 | 12 | 7 | 4 | 9 | 11 | 3 | |
F25 | Ave | 3.161E + 3 | 2.937E + 3 | 3.167E + 3 | 3.071E + 3 | 2.985E + 3 | 3.146E + 3 | 4.481E + 3 | 3.366E + 3 | 3.041E + 3 | 4.849E + 3 | 6.228E + 3 | 2.902E + 3 |
Std | 5.908E + 1 | 1.580E + 1 | 5.917E + 1 | 8.132E + 1 | 2.900E + 1 | 9.799E + 1 | 3.770E + 2 | 1.749E + 2 | 4.635E + 1 | 4.399E + 2 | 7.493E + 2 | 1.753E + 1 | |
Rank | 7 | 2 | 8 | 5 | 3 | 6 | 10 | 9 | 4 | 11 | 12 | 1 | |
F26 | Ave | 6.586E + 3 | 4.969E + 3 | 8.536E + 3 | 5.799E + 3 | 7.902E + 3 | 8.808E + 3 | 1.041E + 4 | 9.270E + 3 | 6.657E + 3 | 8.920E + 3 | 1.300E + 4 | 4.521E + 3 |
Std | 1.202E + 3 | 8.750E + 2 | 9.524E + 2 | 1.497E + 3 | 1.010E + 3 | 7.744E + 2 | 6.111E + 2 | 9.768E + 2 | 1.488E + 3 | 6.560E + 2 | 1.637E + 3 | 5.486E + 2 | |
Rank | 4 | 2 | 7 | 3 | 6 | 8 | 11 | 10 | 5 | 9 | 12 | 1 | |
F27 | Ave | 3.464E + 3 | 3.280E + 3 | 3.457E + 3 | 3.250E + 3 | 3.551E + 3 | 3.604E + 3 | 3.904E + 3 | 3.674E + 3 | 3.427E + 3 | 4.041E + 3 | 5.242E + 3 | 3.230E + 3 |
Std | 5.006E + 1 | 4.523E + 1 | 1.546E + 2 | 1.916E + 1 | 1.257E + 2 | 3.130E + 2 | 6.530E + 2 | 1.744E + 2 | 6.217E + 1 | 3.280E + 2 | 5.456E + 2 | 1.127E + 1 | |
Rank | 5 | 3 | 6 | 2 | 7 | 8 | 10 | 9 | 4 | 11 | 12 | 1 | |
F28 | Ave | 3.675E + 3 | 3.300E + 3 | 3.682E + 3 | 3.496E + 3 | 3.421E + 3 | 4.173E + 3 | 5.328E + 3 | 4.520E + 3 | 3.629E + 3 | 6.392E + 3 | 7.815E + 3 | 3.260E + 3 |
Std | 1.382E + 2 | 4.901E + 1 | 1.139E + 2 | 1.112E + 2 | 6.839E + 1 | 4.911E + 2 | 1.576E + 3 | 4.611E + 2 | 1.259E + 2 | 5.334E + 2 | 7.171E + 2 | 2.793E + 1 | |
Rank | 7 | 2 | 6 | 4 | 3 | 8 | 10 | 9 | 5 | 11 | 12 | 1 | |
F29 | Ave | 5.133E + 4 | 4.305E + 3 | 5.351E + 3 | 4.163E + 3 | 5.031E + 3 | 5.005E + 3 | 6.799E + 3 | 5.021E + 3 | 4.820E + 3 | 5.304E + 3 | 1.294E + 4 | 3.772E + 3 |
Std | 5.513E + 4 | 4.080E + 2 | 4.729E + 2 | 2.701E + 2 | 4.881E + 2 | 6.257E + 2 | 1.251E + 3 | 3.986E + 2 | 4.134E + 2 | 5.362E + 2 | 5.456E + 3 | 2.080E + 2 | |
Rank | 8 | 3 | 10 | 2 | 7 | 5 | 11 | 6 | 4 | 9 | 12 | 1 | |
F30 | Ave | 6.382E + 8 | 8.499E + 6 | 8.538E + 7 | 3.223E + 5 | 1.032E + 7 | 1.335E + 8 | 1.729E + 9 | 1.495E + 8 | 2.685E + 7 | 8.644E + 8 | 2.497E + 9 | 1.134E + 4 |
Std | 2.630E + 8 | 5.594E + 6 | 8.833E + 7 | 4.360E + 5 | 7.017E + 6 | 3.096E + 8 | 7.321E + 8 | 4.281E + 8 | 2.415E + 7 | 5.931E + 8 | 1.642E + 9 | 4.070E + 3 | |
Rank | 9 | 3 | 6 | 2 | 4 | 7 | 11 | 8 | 5 | 10 | 12 | 1 | |
+ / = / − | 2/3/24 | 3/13/13 | 1/1/27 | 2/7/20 | 2/3/24 | 1/2/26 | 1/2/26 | 2/2/25 | 2/4/23 | 0/3/26 | 0/0/29 | / | |
Mean rank | 6.8621 | 2.5172 | 7.5862 | 3.1724 | 5.4483 | 6.7931 | 9.5172 | 10.8621 | 4.7586 | 9.6552 | 11.6897 | 1.9310 | |
Result | 7 | 2 | 8 | 3 | 5 | 6 | 9 | 11 | 4 | 10 | 12 | 1 | |
Bold represents the optimal values of the evaluation indicators
The end of Table 4 reports the numbers of significant differences, denoted (+ / = / −), where "+", "=" and "−" indicate that the comparison algorithm is significantly better than, statistically equivalent to, and significantly worse than FRSA, respectively. According to Table 4, the best comparison algorithm is SSA, which is ahead of FRSA on 3 functions and inferior to FRSA on 13 functions. FRSA performs poorly on function F10, where its results are worse than those of 7 comparison algorithms; the algorithm can be further improved in this regard in the future. Taking the 29 test functions as a whole, the performance of FRSA is outstanding.
Partial boxplots and convergence curves on CEC2017 are given in Figs. 12 and 13 to analyse the performance more clearly. The boxplots are consistent with Table 4. According to Fig. 13, FRSA is still converging in the later iterations and reaches a high solution accuracy.
[See PDF for image]
Fig. 12
Boxplots of all algorithms on 30-dimensional CEC2017
[See PDF for image]
Fig. 13
Convergence curves of all algorithms on 30-dimensional CEC2017
Experiment and Analysis on The CEC2019 Test Set
Finally, the performance of FRSA is tested on CEC2019 and compared with GA [27], WOA [38], TSA [52], AO [45], AOA [51], HHO [40], HBA [54], RSO [46], LFD [50] and SCA [48].
Table 5 gives the experimental results, where the best average fitness value is highlighted in bold. FRSA ranks second with an average rank of 2.4, slightly inferior to HBA. However, in terms of the number of best values, FRSA obtains the smallest values on 6 functions, HBA on 3 functions and AOA on 1 function, so FRSA has a clear advantage in this respect. By analysing the ranking of FRSA on each function, it can be seen that FRSA ranks poorly on F2 and F3, which can be considered in subsequent improvements of the algorithm. In addition, the performance of FRSA, HBA, AO and HHO is relatively good, with average ranks below 6, while WOA, SCA, TSA and the original RSA perform relatively poorly, with average ranks above 8. Therefore, on the whole, FRSA effectively improves the calculation accuracy and is competitive with all comparison algorithms.
Table 5. Comparison results of all algorithms on CEC2019
F | Result | GA | WOA | SCA | HHO | LFD | AOA | TSA | AO | RSO | HBA | RSA | FRSA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
F1 | Mean | 2.234E + 8 | 1.041E + 7 | 4.290E + 6 | 1 | 1 | 3.106422 | 1.276E + 5 | 1.000008 | 1 | 1 | 1 | 1 | |
Std | 1.066E + 8 | 1.450E + 7 | 5.038E + 6 | 0 | 9.61E-12 | 9.420204 | 2.775E + 5 | 5.350E-6 | 0 | 3.80E-11 | 0 | 0 | ||
Rank | 12 | 11 | 10 | 1 | 5 | 8 | 9 | 7 | 1 | 6 | 1 | 1 | ||
F2 | Mean | 2.783E + 3 | 7.026E + 3 | 3.985E + 3 | 4.982388 | 4.823147 | 4.841062 | 8.764E + 2 | 5 | 4.904186 | 4.504166 | 5 | 5 | |
Std | 8.238E + 2 | 3.200E + 3 | 1.812E + 3 | 0.07876 | 0.26797 | 0.25576 | 5.834E + 2 | 0 | 0.57305 | 0.30978 | 0 | 0 | ||
Rank | 10 | 12 | 11 | 5 | 2 | 3 | 9 | 6 | 4 | 1 | 6 | 6 | ||
F3 | Mean | 11.86652 | 5.711183 | 8.455418 | 4.245017 | 9.307667 | 4.848810 | 8.495810 | 5.454194 | 5.917537 | 2.243131 | 8.433526 | 6.297643 | |
Std | 0.614233 | 2.298934 | 1.899357 | 1.357383 | 0.88028 | 0.90216 | 2.563858 | 1.228390 | 1.483047 | 1.716356 | 1.804311 | 1.900497 | ||
Rank | 12 | 5 | 9 | 2 | 11 | 3 | 10 | 4 | 6 | 1 | 8 | 7 | ||
F4 | Mean | 42.15282 | 57.71709 | 48.04902 | 44.72264 | 47.51901 | 62.76637 | 60.35140 | 32.34963 | 73.67674 | 20.62095 | 118.9345 | 16.08146 | |
Std | 7.717142 | 20.84238 | 7.895953 | 11.57053 | 16.62308 | 7.136751 | 22.62824 | 13.73063 | 11.45382 | 6.693205 | 23.26988 | 10.16052 | ||
Rank | 4 | 8 | 7 | 5 | 6 | 10 | 9 | 3 | 11 | 2 | 12 | 1 | ||
F5 | Mean | 17.86500 | 2.539118 | 10.65177 | 2.028963 | 1.983608 | 50.97168 | 40.32110 | 1.954834 | 57.39010 | 1.127358 | 97.05708 | 1.093633 | |
Std | 7.027216 | 0.695983 | 3.641655 | 0.342777 | 0.805676 | 13.94980 | 18.32856 | 0.136558 | 13.20486 | 0.078354 | 33.81766 | 0.052753 | ||
Rank | 8 | 6 | 7 | 5 | 4 | 10 | 9 | 3 | 11 | 2 | 12 | 1 | ||
F6 | Mean | 5.857950 | 8.776428 | 7.930773 | 8.378281 | 9.266655 | 8.499737 | 7.535621 | 6.001983 | 7.814759 | 3.471298 | 13.77296 | 2.035334 | |
Std | 1.472178 | 1.905102 | 1.044373 | 1.778665 | 1.278800 | 1.254395 | 1.592586 | 1.378693 | 1.148139 | 1.750617 | 1.121829 | 0.826560 | ||
Rank | 3 | 10 | 7 | 8 | 11 | 9 | 5 | 4 | 6 | 2 | 12 | 1 | ||
F7 | Mean | 1.103E + 3 | 1.422E + 3 | 1.538E + 3 | 1.211E + 3 | 1.176E + 3 | 1.564E + 3 | 1.206E + 3 | 1.042E + 3 | 1.478E + 3 | 9.921E + 2 | 2.074E + 3 | 8.046E + 2 | |
Std | 1.910E + 2 | 2.781E + 2 | 2.474E + 2 | 3.720E + 2 | 3.461E + 2 | 2.491E + 2 | 3.423E + 2 | 2.768E + 2 | 3.412E + 2 | 3.141E + 2 | 1.925E + 2 | 4.055E + 2 | ||
Rank | 4 | 8 | 10 | 7 | 5 | 11 | 6 | 3 | 9 | 2 | 12 | 1 | ||
F8 | Mean | 4.393150 | 4.728795 | 4.452906 | 4.770596 | 4.532039 | 4.511784 | 4.518907 | 4.483387 | 4.925901 | 4.041661 | 5.151677 | 4.005504 | |
Std | 0.401611 | 0.375130 | 0.290005 | 0.354751 | 0.331174 | 0.250551 | 0.337305 | 0.300870 | 0.214014 | 0.565632 | 0.232196 | 0.251473 | ||
Rank | 3 | 9 | 4 | 10 | 8 | 6 | 7 | 5 | 11 | 2 | 12 | 1 | ||
F9 | Mean | 1.303900 | 1.425508 | 1.615137 | 1.369508 | 1.501693 | 3.042147 | 1.947082 | 1.395982 | 1.493032 | 1.180423 | 4.490920 | 1.223950 | |
Std | 0.101515 | 0.186281 | 0.169019 | 0.175508 | 0.224694 | 0.451056 | 0.985134 | 0.089030 | 0.044351 | 0.083526 | 0.763616 | 0.057553 | ||
Rank | 3 | 6 | 9 | 4 | 8 | 11 | 10 | 5 | 7 | 1 | 12 | 2 | ||
F10 | Mean | 21.03800 | 21.29129 | 21.48147 | 21.15590 | 20.99993 | 21.38395 | 21.46000 | 19.83685 | 21.40858 | 20.46255 | 21.52094 | 20.73048 | |
Std | 0.036541 | 0.115670 | 0.085855 | 0.090627 | 0.000126 | 0.130912 | 0.109924 | 4.708346 | 0.135390 | 3.922163 | 0.085116 | 3.039288 | ||
Rank | 5 | 7 | 10 | 6 | 4 | 8 | 10 | 1 | 8 | 2 | 10 | 3 | ||
+ / = / − | 0/0/10 | 0/1/9 | 1/0/9 | 1/2/7 | 1/0/9 | 2/1/7 | 0/1/9 | 1/2/7 | 1/3/6 | 2/5/3 | 0/2/8 | / | ||
Mean Rank | 6.4 | 8.2 | 8.4 | 5.3 | 6.4 | 7.9 | 8.4 | 4.1 | 7.4 | 2.1 | 9.7 | 2.4 | ||
Result | 5 | 9 | 10 | 4 | 5 | 8 | 10 | 3 | 7 | 1 | 12 | 2 | ||
Bold represents the optimal values of the evaluation indicators
The end of Table 5 reports the numbers of significant differences, denoted (+ / = / −) with the same meaning as before. Table 5 indicates that the numbers of functions on which the comparison algorithms are better than/close to/worse than FRSA are 0/0/10, 0/1/9, 1/0/9, 1/2/7, 1/0/9, 2/1/7, 0/1/9, 1/2/7, 1/3/6, 2/5/3 and 0/2/8, respectively. HBA is the best comparison algorithm: it is better than FRSA on F2 and F3, but its results on F1, F4 and F6 are worse than those of FRSA. Therefore, the solution results of FRSA are better overall.
Partial boxplots and convergence curves on the CEC2019 test functions are shown in Figs. 14 and 15, respectively. The boxes of FRSA in Fig. 14 are smaller and located lower, that is to say, FRSA has good solution accuracy and stability. In addition, the convergence curves corresponding to FRSA also show high solving accuracy.
[See PDF for image]
Fig. 14
Boxplots on CEC2019
[See PDF for image]
Fig. 15
Convergence curves on CEC2019
FRSA Solves Engineering Design Problems
To further verify the performance of FRSA, this section uses FRSA to deal with practical problems: parameter estimation of frequency modulated (FM) sound waves, the lightest cantilever beam design, vehicle side impact design and the 25-bar truss design problem. The two control parameters in FRSA are both set to 0.1.
Parameter Estimation of Frequency Modulated Sound Waves
The objective of the FM sound wave problem is to minimize the sum of squared errors between the estimated sound wave S(t) and the target sound wave S0(t).
Setting $\theta = 2\pi/100$, the mathematical model is
$\min f(\boldsymbol{\chi}) = \sum_{t=0}^{100}\bigl(S(t) - S_0(t)\bigr)^{2}$,   (42)
where $S(t)=\chi_1\sin\bigl(\chi_2 t\theta+\chi_3\sin(\chi_4 t\theta+\chi_5\sin(\chi_6 t\theta))\bigr)$ and $S_0(t)=\sin\bigl(5t\theta-1.5\sin(4.8t\theta+2\sin(4.9t\theta))\bigr)$.
Table 6 shows the optimal results obtained by FRSA and the comparison algorithms; the result obtained by FRSA is the smallest, indicating high computational precision. At the same time, Table 7 summarizes the statistical results of the objective over 20 independent runs. Although the standard deviation of FRSA is large, it attains smaller mean, best and worst values than the other algorithms. Therefore, on the whole, FRSA has a better solution effect.
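To make the model concrete, a short Python sketch of the objective in Eq. (42) is given below; the sampling over t = 0, …, 100 and the target parameters (1, 5, 1.5, 4.8, 2, 4.9) follow the standard FM matching benchmark and are consistent with the near-zero optimum reported for FRSA in Table 6.

```python
import numpy as np

THETA = 2.0 * np.pi / 100.0
T = np.arange(0, 101)                      # sample indices t = 0, ..., 100

def fm_wave(chi, t=T, theta=THETA):
    """Estimated FM sound wave S(t) generated by the six parameters chi."""
    c1, c2, c3, c4, c5, c6 = chi
    return c1 * np.sin(c2 * t * theta + c3 * np.sin(c4 * t * theta + c5 * np.sin(c6 * t * theta)))

# target wave S0(t) with parameters (1, 5, 1.5, 4.8, 2, 4.9)
S0 = fm_wave([1.0, 5.0, 1.5, 4.8, 2.0, 4.9])

def fm_objective(chi):
    """Sum of squared errors between the estimated and target sound waves (Eq. 42)."""
    return float(np.sum((fm_wave(chi) - S0) ** 2))

# the FRSA solution from Table 6 should give an objective value close to zero
print(fm_objective([-0.999524, -5.000357, 1.500233, 4.799796, -2.000140, -4.900202]))
```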
Table 6. Optimal results of frequency modulated sound waves parameter estimation problem
Algorithms | χ1 | χ2 | χ3 | χ4 | χ5 | χ6 | Optimum |
|---|---|---|---|---|---|---|---|
RSA | − 0.551280 | − 4.890827 | − 0.921836 | 5.450988 | 1.646417 | − 4.218115 | 20.215167 |
PSO | 0.648839 | 0.158066 | − 1.515831 | − 0.114867 | 4.144086 | 4.897812 | 8.416087 |
GSA | − 0.853927 | 2.873344 | 2.882623 | 5.763949 | 6.025119 | 0.019527 | 18.695926 |
GWO | 0.695186 | 4.980460 | − 0.946647 | 4.788864 | 2.121480 | 4.912593 | 7.058627 |
MFO [73] | − 0.618316 | − 4.980413 | − 1.166173 | 0.441220 | 3.267800 | − 5.032973 | 14.813068 |
HHO | − 0.527026 | 4.955461 | 3.025663 | 4.834626 | − 0.026649 | − 6.278660 | 14.026293 |
LFD | − 1.002491 | − 5.025788 | − 1.517110 | − 4.785685 | 2.006875 | − 4.904562 | 0.169752 |
AOA | 0.527303 | 0.000000 | 0.702742 | 0.000000 | − 4.711160 | 4.846001 | 16.909001 |
TSA | 0.643888 | 0.190254 | 1.573678 | 0.134671 | − 4.145763 | 4.896738 | 8.628772 |
HGS | − 0.756939 | − 4.925450 | 1.151402 | 2.498471 | − 4.944126 | − 2.424182 | 11.465285 |
AO | 0.749062 | 0.077756 | 1.120744 | 0.020362 | − 4.411150 | 4.902244 | 10.853527 |
FRSA | − 0.999524 | − 5.000357 | 1.500233 | 4.799796 | − 2.000140 | − 4.900202 | 0.000084 |
Bold represents the optimal values of the evaluation indicators
Table 7. Statistical results of frequency modulated sound waves parameter estimation problem
Algorithms | Best | Worst | Mean | Std |
|---|---|---|---|---|
RSA | 20.215167 | 29.606564 | 26.550088 | 2.828731 |
PSO | 8.416087 | 26.130913 | 18.301190 | 5.440342 |
GSA | 18.695926 | 29.541179 | 26.484305 | 2.424852 |
GWO | 7.058627 | 25.131393 | 18.788298 | 4.969505 |
MFO | 14.813068 | 26.492552 | 21.512838 | 3.464875 |
HHO | 14.026293 | 25.464897 | 22.008830 | 3.315943 |
LFD | 0.169752 | 25.391202 | 20.612592 | 5.727178 |
AOA | 16.909001 | 29.861076 | 27.253458 | 3.569741 |
TSA | 8.628772 | 26.766232 | 20.897631 | 4.810990 |
HGS | 11.465285 | 27.198084 | 23.422080 | 3.443954 |
AO | 10.853527 | 29.613028 | 23.054915 | 4.720146 |
FRSA | 0.000084 | 22.390424 | 13.984635 | 6.025911 |
Bold represents the optimal values of the evaluation indicators
Cantilever Beam Design Problem
The lightest possible cantilever beam structure is the design goal. The schematic diagram [74] of the cantilever beam is drawn in Fig. 16, and the model contains five hollow square units. Using χ1, …, χ5 to represent the section parameters of the hollow square elements, the mathematical model of this nonlinearly constrained problem is expressed below.
$\min f(\boldsymbol{\chi}) = 0.06224\,(\chi_1+\chi_2+\chi_3+\chi_4+\chi_5)$  s.t.  $\dfrac{61}{\chi_1^{3}}+\dfrac{37}{\chi_2^{3}}+\dfrac{19}{\chi_3^{3}}+\dfrac{7}{\chi_4^{3}}+\dfrac{1}{\chi_5^{3}}-1\le 0$,   (43)
[See PDF for image]
Fig. 16
Cantilever beam design problem [74]
and the value ranges of the design variables are given in Ref. [74].
The cantilever beam design problem is solved by FRSA, and the comparison results with other well-known methods are displayed in Tables 8 and 9, with the optimal values in bold. The total weight obtained by FRSA is the smallest, 1.336525, whereas the total weight obtained by AOA is the largest, 1.503293. FRSA is the best under all of the evaluation indicators considered in this paper, and its standard deviation is only 3.079E−05, which indicates that FRSA gives accurate and stable results.
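For reference, the sketch below evaluates the cantilever model of Eq. (43) and wraps it with a simple static penalty so that it can be fed to an unconstrained optimizer such as FRSA. The penalty factor is an illustrative assumption, since the paper does not state its constraint-handling scheme.

```python
import numpy as np

def cantilever_weight(x):
    """Total weight of the five-element cantilever beam (Eq. 43)."""
    return 0.06224 * np.sum(x)

def cantilever_constraint(x):
    """Single behaviour constraint g(x) <= 0 of the cantilever beam model."""
    return 61.0 / x[0]**3 + 37.0 / x[1]**3 + 19.0 / x[2]**3 + 7.0 / x[3]**3 + 1.0 / x[4]**3 - 1.0

def penalized_objective(x, rho=1e6):
    """Static-penalty objective; rho is an illustrative penalty factor."""
    g = cantilever_constraint(x)
    return cantilever_weight(x) + rho * max(0.0, g) ** 2

# FRSA design from Table 8: the weight is about 1.3365 and the constraint is (nearly) active
x_frsa = np.array([6.013308, 5.305644, 4.493921, 3.511262, 2.149593])
print(cantilever_weight(x_frsa), cantilever_constraint(x_frsa))
```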
Table 8. Optimal results of cantilever beam design problem
Algorithms | χ1 | χ2 | χ3 | χ4 | χ5 | Optimum |
|---|---|---|---|---|---|---|
RSA | 6.639907 | 5.751055 | 4.359566 | 2.878979 | 2.377086 | 1.369690 |
GSA | 6.045977 | 5.282369 | 4.500345 | 3.447672 | 2.201732 | 1.336797 |
WOA | 6.352166 | 5.317200 | 3.884704 | 3.777989 | 2.526994 | 1.360507 |
GWO | 6.013264 | 5.292875 | 4.514038 | 3.504584 | 2.149701 | 1.336570 |
MFO | 6.004131 | 5.286680 | 4.486421 | 3.521271 | 2.176147 | 1.336582 |
HHO | 5.986092 | 5.370053 | 4.515262 | 3.541409 | 2.070325 | 1.337111 |
RSO | 7.971523 | 4.763292 | 4.643302 | 3.795778 | 1.673940 | 1.422049 |
AOA | 8.069802 | 4.975269 | 3.559013 | 4.453242 | 3.095840 | 1.503293 |
TSA | 6.053130 | 5.292247 | 4.431718 | 3.500495 | 2.202655 | 1.336930 |
HGS | 6.022596 | 5.301360 | 4.515638 | 3.498079 | 2.136476 | 1.336551 |
AO | 6.081964 | 5.214321 | 4.409156 | 3.600546 | 2.185883 | 1.337654 |
FRSA | 6.013308 | 5.305644 | 4.493921 | 3.511262 | 2.149593 | 1.336525 |
Bold represents the optimal values of the evaluation indicators
Table 9. Statistical results of cantilever beam design problem
Algorithms | Best | Worst | Mean | Std |
|---|---|---|---|---|
RSA | 1.369690 | 2.778562 | 1.719426 | 3.238E−01 |
GSA | 1.336797 | 2.578467 | 1.607317 | 3.721E−01 |
WOA | 1.360507 | 1.890543 | 1.577295 | 1.625E−01 |
GWO | 1.336570 | 1.337008 | 1.336693 | 1.211E−04 |
MFO | 1.336582 | 1.340498 | 1.337740 | 1.109E−03 |
HHO | 1.337111 | 1.351153 | 1.342019 | 3.304E−03 |
RSO | 1.422049 | 5.614126 | 2.633141 | 1.153E+00 |
AOA | 1.503293 | 4.099180 | 2.550362 | 7.516E−01 |
TSA | 1.336930 | 1.339568 | 1.338406 | 7.478E−04 |
HGS | 1.336551 | 1.339515 | 1.337114 | 7.646E−04 |
AO | 1.337654 | 1.366313 | 1.341560 | 6.054E−03 |
FRSA | 1.336525 | 1.336650 | 1.336554 | 3.079E−05 |
Bold represents the optimal values of the evaluation indicators
Vehicle Side Impact Design Problem
The mathematical model of vehicle side impact is established according to Ref. [75]. Its goal is to minimize the vehicle weight; the problem has 11 design variables, and its mathematical model is as follows:
44
the constraint conditions and the value ranges of the variables follow Ref. [75]. FRSA and several comparison methods are used to solve this problem, and the optimal values and design variables (best values in bold) are listed in Tables 10 and 11. The minimum vehicle weights obtained by PSO, HGS and FRSA are identical and the smallest, so the solution precision of these three algorithms is very high. In addition, each algorithm is run 20 times; observing the bold data, FRSA achieves good results under all four indexes, and they are all small. Therefore, on the whole, FRSA has high accuracy and stability in solving this problem (Fig. 17).
Table 10. Optimal results of vehicle side impact design problem
Algorithms | Variables | Optimum | ||||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|
RSA | 0.5354 | 1.0519 | 0.5000 | 0.5000 | 0.5000 | 1.3246 | 0.5000 | 0.3450 | 0.3450 | − 29.7361 | − 7.7512 | 22.581924 |
PSO | 0.5000 | 1.0525 | 0.5000 | 0.5000 | 0.5000 | 1.5000 | 0.5000 | 0.3450 | 0.3450 | − 30.0000 | 0.0000 | 22.238312 |
GSA | 0.7254 | 0.9753 | 0.5530 | 0.5039 | 0.6181 | 1.3169 | 0.6135 | 0.3429 | 0.2866 | − 23.7969 | 3.3741 | 23.703971 |
GWO | 0.5006 | 1.0515 | 0.5000 | 0.5000 | 0.5000 | 1.5000 | 0.5000 | 0.3450 | 0.3438 | − 30.0000 | − 1.0153 | 22.240428 |
SCA | 0.5000 | 1.0505 | 0.5000 | 0.5000 | 0.5000 | 1.5000 | 0.5000 | 0.3450 | 0.2904 | − 30.0000 | 6.7780 | 22.314511 |
HHO | 0.5000 | 1.0525 | 0.5000 | 0.5000 | 0.5000 | 1.5000 | 0.5000 | 0.3450 | 0.3450 | − 30.0000 | 1.3610 | 22.239257 |
RSO | 1.2430 | 0.7868 | 0.5000 | 0.5000 | 0.5000 | 0.5000 | 0.5040 | 0.2880 | 0.1920 | − 30.0000 | 23.0939 | 26.916830 |
AOA | 0.5000 | 1.0772 | 0.5000 | 0.5000 | 0.5000 | 1.5000 | 0.5000 | 0.3450 | 0.2986 | − 28.9484 | − 0.0273 | 22.551181 |
TSA | 0.5000 | 1.0533 | 0.5000 | 0.5000 | 0.5000 | 1.5000 | 0.5000 | 0.3450 | 0.3450 | − 30.0000 | − 2.3338 | 22.241843 |
HGS | 0.5000 | 1.0525 | 0.5000 | 0.5000 | 0.5000 | 1.5000 | 0.5000 | 0.3450 | 0.3450 | − 30.0000 | 0.0000 | 22.238312 |
AO | 0.5393 | 1.0445 | 0.5000 | 0.5000 | 0.5000 | 1.5000 | 0.5000 | 0.3259 | 0.1920 | − 30.0000 | − 7.7431 | 22.582648 |
FRSA | 0.5000 | 1.0525 | 0.5000 | 0.5000 | 0.5000 | 1.5000 | 0.5000 | 0.3450 | 0.3450 | − 30.0000 | 0.0000 | 22.238312 |
Bold represents the optimal values of the evaluation indicators
Table 11. Statistical results of vehicle side impact design problem
Algorithms | Best | Worst | Mean | Std |
|---|---|---|---|---|
RSA | 22.581924 | 27.570952 | 24.215075 | 1.048674 |
PSO | 22.238312 | 22.375042 | 22.258802 | 0.050044 |
GSA | 23.703971 | 26.099004 | 24.505470 | 0.572723 |
GWO | 22.240428 | 22.307278 | 22.262398 | 0.021684 |
SCA | 22.314511 | 23.874120 | 22.993422 | 0.437355 |
HHO | 22.239257 | 23.408215 | 22.712380 | 0.365083 |
RSO | 26.916830 | 32.443023 | 29.127644 | 1.571055 |
AOA | 22.551181 | 26.020795 | 24.255131 | 1.280337 |
TSA | 22.241843 | 25.485130 | 22.595955 | 0.979121 |
HGS | 22.238312 | 22.307983 | 22.241831 | 0.015571 |
AO | 22.582648 | 25.933563 | 23.604783 | 1.120885 |
FRSA | 22.238312 | 22.244547 | 22.238635 | 0.001392 |
Bold represents the optimal values of the evaluation indicators
[See PDF for image]
Fig. 17
25-bar truss design problem [52]
25-bar Truss Design Problems
A minimal-weight truss structure saves cost. The 25-bar truss design problem [52] has 25 elements and 10 nodes, and the elements are divided into 8 groups; its schematic is shown in Fig. 17. In the experiment, the mass density is 0.1 lb/in3, the elastic modulus is 10,000 ksi, the upper and lower stress limits are ±40,000 psi, and the displacement limit of each node in the X, Y and Z directions is ±0.35 in. Design variables can be selected from D = {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.6, 2.8, 3.0, 3.2, 3.4}.
FRSA and several excellent comparison methods are applied to this problem, and the minimum weights and design variables obtained by all methods are organized in Tables 12 and 13. Analysis of the tables shows that the solution of FRSA is superior. With each algorithm run 20 times, all evaluation indicators of FRSA are the best, which indicates the feasibility and superiority of FRSA for the 25-bar truss.
Table 12. Optimal results of 25-bar truss design problem
Algorithms | Group 1 | Group 2 | Group 3 | Group 4 | Group 5 | Group 6 | Group 7 | Group 8 | Optimum |
|---|---|---|---|---|---|---|---|---|---|
RSA | 0.64341 | 0.03290 | 4.19579 | 0.00100 | 1.40279 | 0.67216 | 0.07776 | 4.25250 | 488.2026 |
WOA | 0.20468 | 0.04318 | 3.80534 | 0.91686 | 1.33570 | 0.55973 | 0.57964 | 4.02606 | 497.6295 |
SCA | 0.00550 | 0.07119 | 3.50753 | 0.00260 | 1.62952 | 0.82425 | 0.27345 | 4.10007 | 476.5006 |
GWO | 0.01267 | 0.05701 | 3.64933 | 0.00127 | 1.95801 | 0.77393 | 0.14039 | 3.93722 | 464.7981 |
SSA | 0.13672 | 0.02971 | 3.63114 | 0.00499 | 2.07869 | 0.79327 | 0.17998 | 3.86114 | 465.6012 |
HHO | 0.39430 | 0.12440 | 3.49259 | 0.00640 | 2.67929 | 0.79435 | 0.13619 | 3.84050 | 471.3912 |
AOA | 0.52985 | 0.03217 | 3.12343 | 0.01069 | 1.81247 | 0.88033 | 0.14161 | 4.47072 | 479.1515 |
TSA | 0.10899 | 0.03048 | 3.76997 | 0.07729 | 1.98999 | 0.76815 | 0.17118 | 3.82809 | 466.8961 |
HGS | 0.00100 | 0.05377 | 3.71016 | 0.00100 | 2.08774 | 0.75719 | 0.15414 | 3.86148 | 464.8217 |
RSO | 0.00100 | 0.68837 | 2.90412 | 0.00100 | 3.43536 | 0.64694 | 0.11039 | 4.19319 | 490.2797 |
AO | 0.00974 | 0.12848 | 3.33056 | 0.00100 | 2.00917 | 0.82718 | 0.08910 | 4.16841 | 468.1370 |
FRSA | 0.00118 | 0.05329 | 3.55655 | 0.00142 | 2.00514 | 0.77018 | 0.17707 | 3.95825 | 464.7727 |
Bold represents the optimal values of the evaluation indicators
Table 13. Statistical results of 25-bar truss design problem
Algorithms | Best | Worst | Mean | Std |
|---|---|---|---|---|
RSA | 488.20260 | 694.53336 | 568.97765 | 70.57406 |
WOA | 497.62951 | 861.68551 | 582.38351 | 90.58740 |
SCA | 476.50063 | 548.27495 | 508.47911 | 25.19144 |
GWO | 464.79806 | 520.42066 | 468.75842 | 12.19818 |
SSA | 465.60118 | 607.34252 | 513.25990 | 39.13708 |
HHO | 471.39125 | 523.88391 | 486.02770 | 14.70723 |
AOA | 479.15149 | 608.03420 | 540.67271 | 29.35344 |
TSA | 466.89613 | 526.05922 | 477.04945 | 20.91519 |
HGS | 464.82166 | 523.26632 | 486.02060 | 21.59014 |
RSO | 490.27974 | 1064.48035 | 696.24843 | 158.9000 |
AO | 468.13698 | 508.35585 | 481.13184 | 9.66163 |
FRSA | 464.77275 | 491.81179 | 468.69253 | 6.93887 |
Bold represents the optimal values of the evaluation indicators
Shape Optimization of SGC-Ball Developable Surface Based on FRSA
Shape Optimization Model of SGC-Ball Developable Surface Interpolated on Curvature Line
This section optimizes the shape of the SGC-Ball interpolation developable surface by minimizing its energy. For the SGC-Ball interpolation developable surface, different shape parameters correspond to different energy values. Therefore, under the shape parameter constraints, the shape optimization model of the SGC-Ball interpolation developable surface is built. Firstly, the energy of the surface is defined as follows [21, 22–23]:
45
According to the content in Sect. 2, substituting Eq. (20) into Eq. (45), we have
46
In the shape optimization model shown in Eq. (46), the objective function is nonlinear and involves cross products of vectors and various derivatives with respect to the surface parameters. Therefore, FRSA, with its good performance, is adopted to solve the minimum-energy shape optimization model of the SGC-Ball interpolation developable surface.
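As an illustration of how such an energy objective can be evaluated numerically, the sketch below approximates a thin-plate-style bending energy on a sampled parameter grid by finite differences. The specific functional form, the placeholder surface and the grid resolution are assumptions for illustration; the actual Eq. (45) and the SGC-Ball surface evaluation follow the definitions in this paper.

```python
import numpy as np

def surface_energy(S, du, dv):
    """Finite-difference approximation of a thin-plate-style bending energy
        E = iint (|S_uu|^2 + 2|S_uv|^2 + |S_vv|^2) du dv
    for a surface sampled on a regular (u, v) grid.

    S : array of shape (nu, nv, 3) with surface points S[i, j] = S(u_i, v_j)
    du, dv : grid spacings in the u and v directions
    """
    # second-order partial derivatives by central differences (interior points only)
    S_uu = (S[2:, 1:-1] - 2.0 * S[1:-1, 1:-1] + S[:-2, 1:-1]) / du**2
    S_vv = (S[1:-1, 2:] - 2.0 * S[1:-1, 1:-1] + S[1:-1, :-2]) / dv**2
    S_uv = (S[2:, 2:] - S[2:, :-2] - S[:-2, 2:] + S[:-2, :-2]) / (4.0 * du * dv)
    density = (S_uu**2).sum(-1) + 2.0 * (S_uv**2).sum(-1) + (S_vv**2).sum(-1)
    return density.sum() * du * dv          # rectangle-rule quadrature

# usage sketch: sample a hypothetical parametric surface and evaluate its energy
u = np.linspace(0.0, 1.0, 60)
v = np.linspace(0.0, 1.0, 40)
U, V = np.meshgrid(u, v, indexing="ij")
S = np.stack([U, V, 0.1 * np.sin(np.pi * U) * V], axis=-1)   # placeholder surface
print(surface_energy(S, u[1] - u[0], v[1] - v[0]))
```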
FRSA to Solve The Shape Optimization Model of SGC-Ball Developable Surface
Based on the optimization model of the SGC-Ball interpolation developable surface introduced above, Fig. 18 draws the flow chart of using FRSA to solve Eq. (46).
[See PDF for image]
Fig. 18
Flow chart of FRSA solving shape optimization model
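To complement the flow chart, a high-level driver sketch is given below. It only mirrors the overall structure (initialize shape parameters within their ranges, evaluate the energy of Eq. (46), update iteratively, return the best parameters); the placeholder update rule, population size and iteration budget are assumptions standing in for the actual FRSA operators and are not the algorithm proposed in this paper.

```python
import numpy as np

def optimize_shape_parameters(energy, lower, upper, pop_size=30, max_iter=500, seed=0):
    """Generic population-based driver for the shape optimization model (Eq. 46)."""
    rng = np.random.default_rng(seed)
    dim = len(lower)
    pop = rng.uniform(lower, upper, size=(pop_size, dim))
    fitness = np.array([energy(p) for p in pop])
    best = pop[fitness.argmin()].copy()

    for it in range(max_iter):
        step = 1.0 - it / max_iter                       # simple decreasing step size
        cand = pop + step * (best - pop) * rng.random((pop_size, dim)) \
                   + 0.05 * step * rng.standard_normal((pop_size, dim)) * (upper - lower)
        cand = np.clip(cand, lower, upper)               # respect the shape-parameter ranges
        cand_fit = np.array([energy(p) for p in cand])
        improved = cand_fit < fitness                    # greedy selection
        pop[improved], fitness[improved] = cand[improved], cand_fit[improved]
        best = pop[fitness.argmin()].copy()
    return best, fitness.min()

# usage sketch on a toy 4-parameter problem standing in for the shape parameters
best_x, best_e = optimize_shape_parameters(lambda x: float(np.sum((x - 0.5) ** 2)),
                                           lower=np.zeros(4), upper=np.ones(4))
print(best_x, best_e)
```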
The Examples of FRSA Solving The Shape Optimization Model
Combined with the flow chart in Sect. 5.2, specific solution examples are given below. When comparing the shape optimization results obtained by different algorithms, n is set to 30, the evaluation indexes are computed over 10 independent runs, and the two control parameters of FRSA are set to 0.1.
This section first gives an example of optimizing a developable surface interpolated on a single SGC-Ball curve, selecting RSA, GSA [28], SCA [48], MFO [73], LFD [50], AOA [51], HGS [53] and RSO [46] as comparison algorithms for FRSA. The SGC-Ball interpolation developable surface given by Example 1 is optimized, and Table 14 summarizes the statistical results. The worst value obtained by GSA and MFO is the same as that obtained by FRSA, and the standard deviation of RSA is the smallest, while the other evaluation indicators obtained by FRSA over 10 runs are better than those of the other algorithms. Therefore, from a comprehensive view, FRSA achieves a good solving effect.
Table 14. Statistical results of example 1
Index | GSA | RSA | SCA | MFO | LFD | AOA | HGS | RSO | FRSA |
|---|---|---|---|---|---|---|---|---|---|
Best | 1,556,709.5 | 2,082,420.8 | 1,938,881.9 | 1,308,978.7 | 1,857,437.8 | 2,086,950.3 | 1,766,438.1 | 2,110,976.6 | 1,096,521.4 |
Worst | 2,082,420.8 | 2,090,878.5 | 2,083,104.7 | 2,082,420.8 | 2,082,692.1 | 2,204,836.0 | 2,230,006.2 | 2,304,493.0 | 2,082,420.8 |
Mean | 1,984,379.0 | 2,084,040.6 | 1,999,452.6 | 1,991,042.2 | 2,060,028.7 | 2,129,376.9 | 2,098,054.4 | 2,176,017.3 | 1,983,830.8 |
Std | 1.855E+05 | 2.54E+03 | 4.880E+04 | 2.437E+05 | 7.118E+04 | 3.970E+04 | 1.241E+05 | 7.140E+04 | 3.118E+05 |
Bold represents the optimal values of the evaluation indicators
Figure 19 shows the developable surfaces corresponding to a single run of each algorithm. According to Fig. 19, the position and shape of each developable surface and of its control polygon are different, indicating that different algorithms achieve different results. Combined with Table 15, the developable surface obtained by FRSA has the smallest energy value.
[See PDF for image]
Fig. 19
Developable surfaces of example 1
Table 15. Results corresponding to Fig. 19
Algorithms | Shape parameters | Optimum | |||
|---|---|---|---|---|---|
GSA | 0.792104 | 1.725665 | 0.348079 | 2.093680 | 2.08242E+06 |
RSA | 0.609318 | 1.977441 | 0.000000 | 2.452627 | 2.08433E+06 |
SCA | 1.000000 | 2.958251 | 0.000000 | 2.321313 | 1.96399E+06 |
MFO | 1.000000 | 2.959694 | 0.000000 | 2.366215 | 1.94208E+06 |
LFD | 0.959393 | 1.611713 | 0.619419 | 1.894365 | 2.08258E+06 |
AOA | 0.400400 | 2.962161 | 0.000000 | 2.841358 | 2.15491E+06 |
HGS | 0.331351 | 2.654350 | 0.000000 | 3.000000 | 2.14464E+06 |
RSO | 0.322184 | 2.200209 | 0.000000 | 3.000000 | 2.14818E+06 |
FRSA | 0.963444 | 2.994067 | 2.900187 | 1.750022 | 1.09652E+06 |
Bold represents the optimal values of the evaluation indicators
Example 2
Given a spatial SGC-Ball curve taken as a curvature line, the developable surface interpolated on this curvature line can be constructed. Assume that the control point coordinates are as follows:
47
Table 16 shows the energy values obtained by FRSA and by RSA, PSO [33], WOA [38], HHO [40], LFD [50], AOA [51], TSA [52] and HGS [53] in solving the SGC-Ball interpolation developable surface model. We can see that FRSA performs well on this developable surface and has a significant advantage.
Table 16. Statistical results of example 2
Index | RSA | PSO | WOA | HHO | LFD | AOA | TSA | HGS | FRSA |
|---|---|---|---|---|---|---|---|---|---|
Best | 275,112.84 | 275,112.76 | 275,112.79 | 275,113.24 | 275,123.81 | 275,404.50 | 275,113.06 | 277,468.24 | 275,112.76 |
Worst | 275,198.36 | 275,223.99 | 275,139.19 | 275,213.88 | 275,578.56 | 407,274.47 | 458,562.99 | 328,120.21 | 275,112.76 |
Mean | 275,153.52 | 275,135.00 | 275,122.52 | 275,148.00 | 275,242.85 | 307,393.54 | 293,466.17 | 302,281.23 | 275,112.76 |
Std | 3.132E+01 | 4.690E+01 | 1.031E+01 | 3.472E+01 | 1.452E+02 | 3.996E+04 | 5.801E+04 | 1.825E+04 | 6.210E−04 |
Bold represents the optimal values of the evaluation indicators
Figure 20 shows the SGC-Ball interpolation developable surfaces obtained from a single run of each algorithm, and Table 17 displays the corresponding results. There is little difference between the developable surfaces, which indicates that the surfaces obtained by the algorithms are relatively similar. Combined with Table 17, the energy value of the developable surface obtained by FRSA is the lowest, and its shape is optimal.
[See PDF for image]
Fig. 20
Developable surfaces of example 2
Table 17. Results corresponding to Fig. 20
Algorithms | Shape parameters | Optimum | |||
|---|---|---|---|---|---|
RSA | 0.330406 | 2.945901 | 3.843041 | 3.000000 | 2.75133E+05 |
PSO | 0.530575 | 2.191756 | 3.167449 | 2.246294 | 2.75113E+05 |
WOA | 0.330223 | 2.886419 | 3.925202 | 2.990176 | 2.75133E+05 |
HHO | 0.789978 | 1.804403 | 2.780598 | 1.835432 | 2.75115E+05 |
LFD | 0.376390 | 2.724701 | 3.461557 | 2.744073 | 2.75418E+05 |
AOA | 0.417977 | 2.402230 | 3.518529 | 2.528413 | 2.75464E+05 |
TSA | 0.907530 | 1.684697 | 2.686302 | 1.721160 | 2.75134E+05 |
HGS | 0.360155 | 3.000000 | 3.137154 | 3.000000 | 2.77855E+05 |
FRSA | 0.333228 | 2.897545 | 3.858843 | 2.984382 | 2.75113E+05 |
Bold represents the optimal values of the evaluation indicators
Example 2 gives developable surface optimization interpolated on a single SGC-Ball curve, but sometimes a single curve cannot meet practical needs, so spliced curves are of greater significance. The following examples optimize developable surfaces interpolated on two or three spliced SGC-Ball curves.
Example 3
In this example, two SGC-Ball curves satisfying G1 smooth splicing are taken as curvature lines to construct two spliced developable surfaces. First, the control points are given as follows:
48
Through derivation, the necessary and sufficient conditions for two curves Pj(t) and Pj+1(t) to meet G1 smooth splicing are
49
Therefore, let k = 1 and hj+1 = hj, the control points of curve P1(t) are calculated with Eq. (50).
50
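A simple numerical check of this G1 condition can be sketched as follows: the two curves must share the joint point and have parallel, same-direction tangents there with ratio k > 0. The finite-difference tangents and the callable curve interface below are illustrative assumptions; in practice the SGC-Ball curves and their derivatives are evaluated analytically.

```python
import numpy as np

def g1_check(curve_a, curve_b, tol=1e-8, h=1e-6):
    """Numerically verify G1 continuity between curve_a and curve_b (both on [0, 1]).

    curve_a, curve_b : callables t -> point in R^3 (e.g. SGC-Ball curve evaluators).
    Returns (position_ok, tangent_ok, k) where k is the tangent-length ratio in
    P_b'(0) = k * P_a'(1) when the tangents are parallel and k > 0.
    """
    end_a, start_b = np.asarray(curve_a(1.0)), np.asarray(curve_b(0.0))
    position_ok = np.linalg.norm(end_a - start_b) < tol

    # one-sided finite-difference tangents at the joint
    ta = (np.asarray(curve_a(1.0)) - np.asarray(curve_a(1.0 - h))) / h
    tb = (np.asarray(curve_b(h)) - np.asarray(curve_b(0.0))) / h
    cross = np.linalg.norm(np.cross(ta, tb))
    k = np.dot(ta, tb) / np.dot(ta, ta)
    tangent_ok = cross < 1e-4 * np.linalg.norm(ta) * np.linalg.norm(tb) and k > 0
    return position_ok, tangent_ok, k

# toy usage with two straight-line segments meeting G1 at the joint
a = lambda t: np.array([t, 0.0, 0.0])
b = lambda t: np.array([1.0 + 2.0 * t, 0.0, 0.0])     # same direction, k = 2
print(g1_check(a, b))
```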
For the SGC-Ball composite curves designed with the control points shown in Eqs. (48) and (50), the spliced SGC-Ball interpolation developable surfaces are established by interpolating on the curvature lines. The statistical data comparing FRSA with GSA [28], GWO [37], HHO [40], LFD [50], AOA [51], TSA [52], AO [45] and the original RSA are given in Table 18. From the table, FRSA has a good solution effect on the optimization model, and its energy values are far smaller than those of the comparison algorithms. Therefore, combining the four indexes, FRSA gives better results with better robustness.
Table 18. Statistical results of example 3
Index | GSA | RSA | GWO | HHO | LFD | AOA | TSA | AO | FRSA |
|---|---|---|---|---|---|---|---|---|---|
Best | 213,113.7 | 213,160.4 | 212,862.6 | 213,111.7 | 213,679.8 | 219,720.2 | 212,939.3 | 212,950.1 | 212,862.1 |
Worst | 213,603.7 | 297,006.7 | 213,876.9 | 215,043.8 | 225,067.6 | 296,875.9 | 265,937.0 | 215,034.0 | 212,862.1 |
Mean | 213,320.1 | 226,629.4 | 213,300.0 | 213,760.0 | 216,493.2 | 247,687.3 | 224,474.4 | 213,716.5 | 212,862.1 |
Std | 1.57E+02 | 2.72E+04 | 4.02E+02 | 7.29E+02 | 3.36E+03 | 2.88E+04 | 2.19E+04 | 6.07E+02 | 2.08E-10 |
Bold represents the optimal values of the evaluation indicators
Similarly, Table 19 shows the energy values and shape parameter values, and Fig. 21 shows the corresponding developable surfaces. The control polygon position, illumination and shape of each developable surface are slightly different. Table 19 indicates that the energy value of the developable surface obtained by FRSA is the lowest, and its shape is better.
Table 19. Results corresponding to Fig. 21
Algorithms | Shape parameters | Optimum | |||||||
|---|---|---|---|---|---|---|---|---|---|
GSA | 0.6114 | 2.5429 | 1.5310 | 1.8716 | 1.0000 | − 1.2856 | 1.8004 | 1.8234 | 213,194.4 |
RSA | 0.4453 | 3.0000 | 0.0000 | 2.2399 | 0.4016 | − 3.0000 | 2.5310 | 3.0000 | 214,972.9 |
GWO | 0.4759 | 2.9596 | 0.9986 | 2.1374 | 0.9264 | − 3.0000 | 1.7142 | 1.8980 | 212,891.3 |
HHO | 0.5706 | 2.5564 | 1.2850 | 1.6983 | 0.5217 | − 1.5086 | 2.3149 | 2.4594 | 215,043.8 |
LFD | 0.6526 | 2.4606 | 1.7130 | 1.8156 | 0.4696 | − 0.1562 | 1.6095 | 2.6814 | 214,569.7 |
AOA | 0.0467 | 1.3625 | 4.0000 | 0.0000 | 0.0000 | 0.0000 | 0.1630 | 0.0000 | 296,875.9 |
TSA | 0.5161 | 2.8288 | 1.1599 | 2.0570 | 0.9802 | 0.0829 | 1.9012 | 1.8129 | 213,787.9 |
AO | 0.8936 | 2.0600 | 1.6405 | 1.5947 | 0.7866 | − 0.9804 | 1.7057 | 2.0324 | 213,445.5 |
FRSA | 0.8093 | 2.1542 | 1.4407 | 1.6667 | 1.0000 | − 3.0000 | 1.7221 | 1.8286 | 212,862.1 |
Bold represents the optimal values of the evaluation indicators
[See PDF for image]
Fig. 21
Developable surfaces of example 3
Example 4
In this example, three SGC-Ball curves satisfying G1 smooth splicing are taken as curvature lines to construct three spliced developable surfaces. First, the following control points of P0(t) are given:
51
Using the G1 smooth splicing conditions of the two SGC-Ball curves, the control points P1(t) can be set as follows
52
Meanwhile, the control points of the third curve P2(t) are expressed as follows
53
Table 20 gives the comparison results of FRSA with the original RSA, LFD [50], PSO [33], SMA [42], AOA [51], TSA [52], HHO [40] and HGS [53] on the energy value of the spliced SGC-Ball interpolation developable surface, where the bold data are the optimal values under each index. It indicates that FRSA, with an average value of 1,536,114.1, has high solving accuracy for the developable surface optimization model and a significant advantage over the comparison algorithms.
Table 20. Statistical results of example 4
Index | RSA | PSO | SMA | HHO | LFD | AOA | TSA | HGS | FRSA |
|---|---|---|---|---|---|---|---|---|---|
Best | 1,545,521.6 | 1,536,513.6 | 1,718,755.7 | 1,546,827.7 | 1,541,566.0 | 1,585,629.9 | 1,540,113.4 | 1,711,302.3 | 1,536,114.1 |
Worst | 1,987,175.1 | 1,901,414.1 | 1,795,248.8 | 1,635,738.8 | 1,713,695.4 | 2,174,359.8 | 2,157,535.8 | 2,367,805.2 | 1,536,114.1 |
Mean | 1,714,532.5 | 1,578,404.1 | 1,787,426.1 | 1,570,672.7 | 1,578,610.8 | 1,907,713.7 | 1,777,490.2 | 2,098,684.3 | 1,536,114.1 |
Std | 1.761E+05 | 1.144E+05 | 2.413E+04 | 2.580E+04 | 5.452E+04 | 1.900E+05 | 2.200E+05 | 2.181E+05 | 8.213E-10 |
Bold represents the optimal values of the evaluation indicators
Figure 22 displays the developable surfaces obtained from a single run of each algorithm, and Table 21 shows the corresponding energy values and shape parameter values. The spliced SGC-Ball interpolation developable surfaces solved by the algorithms differ in shape, and the positions and shapes of the control polygons of each combined SGC-Ball curve also differ. Table 21 indicates that the energy value of FRSA is the lowest, and its shape is optimal.
Table 21. Results corresponding to Fig. 22
Algorithms | Shape parameters | Optimum | |||||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
RSA | 0.544 | 2.996 | 3.496 | 2.998 | 0.461 | − 3.00 | 2.667 | 3.000 | 0.000 | − 3.00 | 3.986 | 2.599 | 1,755,767.5 |
PSO | 0.590 | 3.000 | 2.000 | 3.000 | 1.000 | − 3.00 | 2.128 | 2.441 | 0.729 | − 3.00 | 2.183 | 3.000 | 1,537,171.5 |
SMA | 0.845 | 2.070 | 3.380 | 2.070 | 0.845 | 2.070 | 3.380 | 2.070 | 0.845 | 2.070 | 3.380 | 2.070 | 1,795,036.9 |
HHO | 0.786 | 2.497 | 2.275 | 2.740 | 0.795 | 1.133 | 2.089 | 2.557 | 0.543 | 1.093 | 2.437 | 3.000 | 1,562,898.5 |
LFD | − 0.39 | − 2.03 | 1.988 | − 1.91 | 1.115 | − 2.60 | 2.068 | 2.412 | − 0.76 | − 0.26 | 1.287 | − 0.59 | 1,557,981.7 |
AOA | 0.000 | − 0.00 | 0.960 | 0.003 | 0.744 | 0.000 | 2.291 | 2.411 | 0.646 | 0.006 | 2.468 | 2.928 | 1,969,741.7 |
TSA | 0.797 | 2.419 | 2.059 | 2.562 | 0.902 | 0.120 | 2.198 | 2.658 | 0.834 | 0.302 | 2.279 | 2.695 | 1,543,252.5 |
HGS | 0.000 | 2.547 | 0.000 | − 3.00 | 0.000 | − 1.17 | 0.000 | − 2.81 | 0.000 | − 0.17 | 0.000 | − 3.00 | 2,367,805.2 |
FRSA | 0.835 | 2.340 | 2.000 | 2.454 | 1.000 | − 3.00 | 2.127 | 2.441 | 1.000 | − 3.00 | 2.125 | 2.450 | 1,536,114.1 |
Bold represents the optimal values of the evaluation indicators
[See PDF for image]
Fig. 22
Developable surfaces of example 4
Conclusions
This paper presents the construction method of the SGC-Ball developable surface interpolated on the curvature line, and studies the shape optimization of the SGC-Ball interpolation developable surface based on the proposed FRSA. Firstly, the construction steps of the SGC-Ball interpolation developable surface are given in Sect. 2.2; the obtained developable surface has good shape adjustability. Then, by introducing a feedback mechanism, adaptive parameters and a mutation strategy into the reptile search algorithm, FRSA is proposed. FRSA not only improves the computing precision but also balances exploration and exploitation; through comparative experiments with other methods on the CEC2014, CEC2017 and CEC2019 test sets and four engineering examples, it is verified that FRSA has better solution performance. Finally, the shape optimization model of the SGC-Ball interpolation developable surface with minimum energy is established, and interpolation developable surfaces with low energy are constructed with the help of FRSA. Four numerical experiments demonstrate the feasibility and superiority of FRSA on this parameter optimization problem. However, FRSA usually requires a longer computation time when facing very complex high-dimensional problems.
This paper studies the construction of the SGC-Ball developable surface interpolated on the curvature line and optimizes its shape with the energy method. In future work, the shape optimization of the SGC-Ball developable surface can be studied according to other surface shape metrics. In addition, we will apply FRSA to practical engineering problems in other fields, including image segmentation, path planning [76], feature selection, engineering design [77, 78] and power applications.
Acknowledgements
This work is supported by the National Natural Science Foundation of China (Grant No. 52375264).
Data Availability Statement
All data generated or analysed during this study are included in this published article.
Declarations
Conflicts of Interest
The authors declare no conflict of interest.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Frey, W; Bindschadler, D. Computer aided design of a class of developable Bézier surfaces; 1993; General Motors R & D Publication: 8057.
2. Chu, CH; Wang, CC; Tsai, CR. Computer aided geometric design of strip using developable Bézier patches. Computers in Industry; 2008; 59,
3. Hu, G; Wu, JL; Li, HN; Hu, X. Shape optimization of generalized developable H-Bézier surfaces using adaptive cuckoo search algorithm. Advances in Engineering Software; 2020; 149, [DOI: https://dx.doi.org/10.1016/j.advengsoft.2020.102889] 102889.
4. Aumann, G. Interpolation with developable Bézier patches. Computer Aided Geometric Design; 1991; 8,
5. Maekawa, T; Chalfant, JS. Design and tessellation of B-spline developable surfaces. Journal of Mechanical Design; 1998; 120,
6. Aumann, G. A simple algorithm for designing developable Bézier surfaces. Computer Aided Geometric Design; 2003; 20,
7. Bodduluri, R; Ravani, B. Design of developable surfaces using duality between plane and point geometries. Computer-Aided Design; 1993; 25,
8. Bodduluri, R; Ravani, B. Geometric design and fabrication of developable Bézier and B-spline surfaces. Journal of Mechanical Design; 1994; 116,
9. Pottmann, H; Wallner, J. Approximation algorithms for developable surfaces. Computer Aided Geometric Design; 1999; 16,
10. Zhou, M; Yang, JQ; Zheng, HC; Song, WJ. Design and shape adjustment of developable surfaces. Applied Mathematical Modelling; 2013; 37,
11. Hu, G; Wu, JL; Qin, XQ. A new approach in designing of local controlled developable H-Bézier surfaces. Advances in Engineering Software; 2018; 121, pp. 26-38. [DOI: https://dx.doi.org/10.1016/j.advengsoft.2018.03.003]
12. Hu, G; Zhu, XN; Wei, G; Chang, CT. An improved marine predators algorithm for shape optimization of developable ball surfaces. Engineering Applications of Artificial Intelligence; 2021; 105, [DOI: https://dx.doi.org/10.1016/j.engappai.2021.104417] 104417.
13. Maekawa, T; Wolter, FE; Patrikalakis, NM. Umbilics and lines of curvature for shape interrogation. Computer Aided Geometric Design; 1996; 13,
14. Patrikalakis, NM; Maekawa, T. Shape interrogation for computer aided design and manufacturing; 2002; Springer: [DOI: https://dx.doi.org/10.1007/978-3-642-04074-0]
15. Zhang, XP; Che, WJ; Paul, JC. Computing lines of curvature for implicit surfaces. Computer Aided Geometric Design; 2009; 26,
16. Li, CY; Wang, RH; Zhu, CG. Parametric representation of a surface pencil with a common line of curvature. Computer-Aided Design; 2011; 43,
17. Li, CY; Wang, RH; Zhu, CG. An approach for designing a developable surface through a given line of curvature. Computer-Aided Design; 2013; 45,
18. Hu, G; Wu, JL; Wang, XF. Constructing local controlled developable H-Bézier surfaces by interpolating characteristic curves. Computational and Applied Mathematic; 2021; 40,
19. Terzopoulos, D; Platt, J; Barr, A; Fleischer, K. Elastically deformable models. Acm Siggraph Computer Graphics; 1987; 21,
20. Celniker, G; Gossard, D. Deformable curve and surface finite-elements for free-form shape design. Acm Siggraph Computer Graphics; 1991; 25,
21. Hu, SM; Li, YF; Ju, T; Zhu, X. Modifying the shape of NURBS surfaces with geometric constraints. Computer-Aided Design; 2001; 33,
22. Li, JC; Yan, LL; Liu, CZ. Quintic composite spline with adjustable shape and parameter selection. Journal of Image and Graphics; 2017; 22,
23. Liu, X. Filling n-sided holes with trimmed b-spline surfaces based on energy-minimization method. Journal of Computing and Information Science in Engineering; 2015; 15,
24. Wang, M; Chen, H. Chaotic multi-swarm whale optimizer boosted support vector machine for medical diagnosis. Applied Soft Computing; 2019; 88, [DOI: https://dx.doi.org/10.1016/j.asoc.2019.105946] 105946.
25. Nautiyal, B; Prakash, R; Vimal, V; Liang, G; Chen, H. Improved salp swarm algorithm with mutation schemes for solving global optimization and engineering problems. Engineering with Computers; 2021; 4, pp. 1-23. [DOI: https://dx.doi.org/10.1007/s00366-020-01252-z]
26. Hu, G; Du, B; Wang, XF; Wei, G. An enhanced black widow optimization algorithm for feature selection. Knowledge-Based Systems; 2022; 235, [DOI: https://dx.doi.org/10.1016/j.knosys.2021.107638] 107638.
27. Holland, JH. Genetic algorithms. Scientific American; 1992; 267,
28. Rashedi, E; Nezamabadi-Pour, H; Saryazdi, S. GSA: A gravitational search algorithm. Information Sciences; 2009; 179,
29. Mirjalili, S; Mirjalili, SM; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Computing and Applications; 2015; 27,
30. Faramarzi, A; Heidarinejad, M; Stephens, BE; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowledge-Based Systems; 2020; 191, [DOI: https://dx.doi.org/10.1016/j.knosys.2019.105190] 105190.
31. Rao, RV; Savsani, VJ; Vakharia, DP. Teaching-learning-based optimization: An optimization method for continuous non-linear large scale problems. Information Sciences: An International Journal; 2012; 183,
32. Moghdani, R; Salimifard, K. Volleyball premier league algorithm. Applied Soft Computing; 2018; 64, pp. 161-185. [DOI: https://dx.doi.org/10.1016/j.asoc.2017.11.043]
33. Eberhart, R; Kennedy, J. Particle swarm optimization. Proceedings of The IEEE International Conference on Neural Networks, Perth, Australia; 1995; 4,
34. Dorigo, M; Maniezzo, V; Colorni, A. Ant system: Optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics); 1996; 26,
35. Yang, XS; Gandomi, AH. Bat algorithm: A novel approach for global engineering optimization. Engineering Computations: International Journal for Computer-Aided Engineering and Software; 2012; 29,
36. Gandomi, AH; Yang, X; Alavi, AH. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Engineering with Computers; 2013; 29,
37. Mirjalili, S; Mirjalili, SM; Lewis, A. Grey wolf optimizer. Advances in Engineering Software; 2014; 69, pp. 46-61. [DOI: https://dx.doi.org/10.1016/j.advengsoft.2013.12.007]
38. Mirjalili, S; Lewis, A. The whale optimization algorithm. Advances in Engineering Software; 2016; 95, pp. 51-67. [DOI: https://dx.doi.org/10.1016/j.advengsoft.2016.01.008]
39. Mirjalili, S; Gandomi, AH; Mirjalili, SZ; Saremi, S; Faris, H; Mirjalili, SM. Salp swarm algorithm: A bio-inspired optimizer for engineering design problems. Advances in Engineering Software; 2017; 114, pp. 163-191. [DOI: https://dx.doi.org/10.1016/j.advengsoft.2017.07.002]
40. Heidari, AA; Mirjalili, S; Faris, H; Aljarah, I; Mafarja, M; Chen, H. Harris hawks optimization: Algorithm and applications. Future Generation Computer Systems; 2019; 97, pp. 849-872. [DOI: https://dx.doi.org/10.1016/j.future.2019.02.028]
41. Abualigah, L; Elaziz, MA; Sumari, P; Geem, ZW; Gandomi, AH. Reptile search algorithm: A nature-inspired meta-heuristic optimizer. Expert Systems with Applications; 2022; 191,
42. Li, SM; Chen, HL; Wang, MJ; Heidari, AA; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Generation Computer Systems; 2020; 111, pp. 300-323. [DOI: https://dx.doi.org/10.1016/j.future.2020.03.055]
43. Braik, MS. Chameleon swarm algorithm: A bio-inspired optimizer for solving engineering design problems. Expert Systems with Applications; 2021; 174,
44. Naruei, I; Keynia, F. Wild horse optimizer: a new meta-heuristic algorithm for solving engineering optimization problems. Engineering with Computers; 2021; [DOI: https://dx.doi.org/10.1007/s00366-021-01438-z]
45. Abualigah, L; Yousri, D; Elaziz, MA; Ewees, AA; Al-qaness, MAA; Gandomi, A. Matlab code of aquila optimizer: A novel meta-heuristic optimization algorithm. Computers & Industrial Engineering; 2021; 157, [DOI: https://dx.doi.org/10.1016/j.cie.2021.107250] 107250.
46. Dhiman, G; Garg, M; Nagar, A; Kumar, V; Dehghani, M. A novel algorithm for global optimization: rat swarm optimizer. Journal of Ambient Intelligence and Humanized Computing; 2021; 12, pp. 8457-8482. [DOI: https://dx.doi.org/10.1007/s12652-020-02580-0]
47. Hu, G; Wang, J; Li, M; Hussien, AG; Abbas, M. EJS: Multi-strategy enhanced jellyfish search algorithm for engineering applications. Mathematics; 2023; 11,
48. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowledge-Based Systems; 2016; 96, pp. 120-133. [DOI: https://dx.doi.org/10.1016/j.knosys.2015.12.022]
49. Sulaiman, MH; Mustaffa, Z; Saari, MM; Daniyal, H. Barnacles mating optimizer: A new bio-inspired algorithm for solving engineering optimization problems. Engineering Applications of Artificial Intelligence; 2020; 87, [DOI: https://dx.doi.org/10.1016/j.engappai.2019.103330] 103330.
50. Houssein, EH; Saad, MR; Hashim, FA; Shaban, H; Hassaballah, M. Lévy flight distribution: A new metaheuristic algorithm for solving engineering optimization problems. Engineering Applications of Artificial Intelligence; 2020; 94, [DOI: https://dx.doi.org/10.1016/j.engappai.2020.103731] 103731.
51. Hashim, FA; Hussain, K; Houssein, EH; Mai, SM; Al-Atabany, W. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Applied Intelligence; 2021; 51,
52. Kaur, S; Awasthi, LK; Sangal, AL; Dhiman, G. Tunicate swarm algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Engineering Applications of Artificial Intelligence; 2020; 90, [DOI: https://dx.doi.org/10.1016/j.engappai.2020.103541] 103541.
53. Yang, YT; Chen, HL; Heidari, AA; Gandomi, AH. Hunger games search: Visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Systems with Applications; 2021; 177, [DOI: https://dx.doi.org/10.1016/j.eswa.2021.114864] 114864.
54. Hashim, FA; Houssein, EH; Hussain, K; Mabrouk, MS; Al-Atabany, W. Honey badger algorithm: New metaheuristic algorithm for solving optimization problems. Mathematics and Computers in Simulation (MATCOM); 2022; 192, pp. 84-110. [DOI: https://dx.doi.org/10.1016/j.matcom.2021.08.013]
55. Ezugwu, AE; Agushaka, JO; Abualigah, L; Mirjalili, S; Gandomi, AH. Prairie dog optimization algorithm. Neural Computing and Applications; 2022; 34,
56. Agushaka, JO; Ezugwu, AE; Abualigah, L. Dwarf mongoose optimization algorithm. Computer Methods in Applied Mechanics and Engineering; 2022; 391, [DOI: https://dx.doi.org/10.1016/j.cma.2022.114570] 114570.
57. Oyelade, ON; Ezugwu, AES; Mohamed, TIA; Abualigah, L. Ebola optimization search algorithm: a new nature-inspired metaheuristic optimization algorithm. IEEE Access; 2022; 10, pp. 16150-16177. [DOI: https://dx.doi.org/10.1109/ACCESS.2022.3147821]
58. Shinawi, AE; Ibrahim, RA; Abualigah, L; Zelenakova, M; Abd Elaziz, M. Enhanced adaptive neuro-fuzzy inference system using reptile search algorithm for relating swelling potentiality using index geotechnical properties: A case study at El Sherouk City. Mathematics; 2021; 9,
59. Almotairi, KH; Abualigah, L. Hybrid reptile search algorithm and remora optimization algorithm for optimization tasks and data clustering. Symmetry; 2022; 14,
60. Al-Shourbaji, I; Helian, N; Sun, Y; Alshathri, S; Abd Elaziz, M. Boosting ant colony optimization with reptile search algorithm for churn prediction. Mathematics; 2022; 10,
61. Almotairi, KH; Abualigah, L. Improved reptile search algorithm with novel mean transition mechanism for constrained industrial engineering problems. Neural Computing and Applications; 2022; 34,
62. Emam, MM; Houssein, EH; Ghoniem, RM. A modified reptile search algorithm for global optimization and image segmentation: Case study brain MRI images. Computers in Biology and Medicine; 2022; 152, [DOI: https://dx.doi.org/10.1016/j.compbiomed.2022.106404] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/36521356] 106404.
63. Abualigah, L; Diabat, A. Chaotic binary reptile search algorithm and its feature selection applications. Journal of Ambient Intelligence and Humanized Computing; 2022; pp. 1-17.
64. Huang, L; Wang, Y; Guo, Y; Hu, G. An improved reptile search algorithm based on lévy flight and interactive crossover strategy to engineering application. Mathematics; 2022; 10,
65. Wang, GG; Guo, LH; Gandomi, AH; Hao, GS; Wang, H. Chaotic krill herd algorithm. Information Sciences; 2014; 274, pp. 17-34. [DOI: https://dx.doi.org/10.1016/j.ins.2014.02.123]
66. Ewees, AA; Elaziz, MA. Performance analysis of chaotic multi-verse harris hawks optimization: a case study on solving engineering problems. Engineering Applications of Artificial Intelligence; 2020; 88, 103370. [DOI: https://dx.doi.org/10.1016/j.engappai.2019.103370]
67. Li, CY; Li, J; Chen, HL; Heidari, AA. Memetic harris hawks optimization: Developments and perspectives on project scheduling and QoS-aware web service composition. Expert Systems with Applications; 2021; 171, [DOI: https://dx.doi.org/10.1016/j.eswa.2020.114529] 114529.
68. Rather, S; Bala, P. Swarm-based chaotic gravitational search algorithm for solving mechanical engineering design problems. World Journal of Engineering; 2020; 17,
69. Jing, K; Zhao, XG; Zhang, XY; Liu, D. Ant lion optimizer with Lévy variation and adaptive elite competition mechanism. CAAI Transactions on Intelligent Systems; 2018; 13,
70. Coelho, L. Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems. Expert Systems with Applications; 2010; 37,
71. Gupta, S; Deep, K. Improved sine cosine algorithm with crossover scheme for global optimization. Knowledge-Based Systems; 2019; 165, pp. 374-406. [DOI: https://dx.doi.org/10.1016/j.knosys.2018.12.008]
72. Wu, XL; Hu, S; Cheng, W. Multi-objective signal timing optimization based on improved whale optimization algorithm. Journal of Kunming University of Science and Technology (Natural Science); 2021; 46,
73. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowledge-Based Systems; 2015; 89, pp. 228-249. [DOI: https://dx.doi.org/10.1016/j.knosys.2015.07.006]
74. Ahmadianfar, I; Bozorg-Haddad, O; Chu, X. Gradient-based optimizer: A new metaheuristic optimization algorithm. Information Sciences; 2020; 540, pp. 131-159. [DOI: https://dx.doi.org/10.1016/j.ins.2020.06.037]
75. Youn, BD; Choi, KK; Yang, RJ; Gu, L. Reliability-based design optimization for crashworthiness of vehicle side impact. Structural & Multidisciplinary Optimization; 2004; 26,
76. Hu, G; Zhong, JY; Wei, G. SaCHBA_PDN: Modified honey badger algorithm with multi-strategy for UAV path planning. Expert Systems with Applications; 2023; 223, [DOI: https://dx.doi.org/10.1016/j.eswa.2023.119941] 119941.
77. Hu, G; Zhong, JY; Wei, G; Chang, CT. DTCSMO: An efficient hybrid starling murmuration optimizer for engineering applications. Computer Methods in Applied Mechanics and Engineering; 2023; 405, [DOI: https://dx.doi.org/10.1016/j.cma.2023.115878] 115878.
78. Hu, G; Yang, R; Qin, XQ; Wei, G. MCSA: Multi-strategy boosted chameleon-inspired optimization algorithm for engineering applications. Computer Methods in Applied Mechanics and Engineering; 2023; 403, [DOI: https://dx.doi.org/10.1016/j.cma.2022.115676] 115676.