Introduction
Swarm intelligence optimization algorithms are inspired by swarm systems in nature and mainly simulate the behavior of biological groups to solve complex optimization problems [1–4]. Swarm intelligence systems operate on a fundamental principle: globally intelligent behavior emerges from agents following simple local rules. In recent years, optimization algorithms inspired by swarms have evolved continuously, and these algorithms have demonstrated significant potential across a wide range of optimization tasks [5–8].
In the 1990s, M. Dorigo et al. proposed the ant colony algorithm, which models the behavior of ants as they search for food [9]. This algorithm performs well on discrete optimization problems such as the traveling salesman problem. In the 2000s, Karaboga et al. introduced the artificial bee colony algorithm, which simulates the foraging behavior of bees through three roles: employed bees, onlooker bees, and scout bees [10]. The optimal solution is found through division of labor and cooperation. The algorithm has been applied and continuously improved in problems such as function optimization.
From the beginning of the 21st century to the present, numerous swarm intelligence algorithms have been proposed, for example, the firefly algorithm (FA), the whale optimization algorithm (WOA), and the sparrow search algorithm (SSA) [11–14]. These algorithms simulate the flashing and mutual-attraction behavior of fireflies, the predation behavior of whales, and so on, and have been widely used in fields such as image processing, machine learning, engineering optimization, and bioinformatics [15–19]. Ang et al. studied the drawbacks of particle swarm optimization (PSO) in dealing with constrained optimization problems and proposed a constrained multi-swarm particle swarm optimization algorithm without velocity [20]. It effectively solves constrained optimization problems through various mechanisms, and a large number of simulations on benchmark functions verify its good performance. Zhang et al. proposed an improved PSO (AMPSO) to address the poor performance of PSO on multi-modal and multi-objective optimization problems [21]. By introducing a dynamic neighborhood learning strategy and an offspring competition mechanism, and by validating on a variety of functions against other algorithms, the results show that AMPSO is competitive. Tijjani et al. proposed an enhanced PSO (EBPSO), which uses a dimensionality reduction mechanism and a new position update method [22]. Experimental comparison with multiple algorithms shows more accurate classification on most data sets, effectively solving the feature selection problem and demonstrating the better performance of EBPSO. Wu et al. proposed a hybrid ant colony algorithm (HACO), which innovatively updates pheromones and introduces adaptive parameters and mutation operations [23]. Verified on Solomon instances and real cases, HACO outperforms the basic ant colony algorithm and other intelligent algorithms in most cases, effectively reduces vehicle driving distance and cost, and is practical. Zhang et al. proposed the chaotic particle ant colony algorithm (PSCACO), which innovatively transforms multi-objective solutions into single-objective solutions and introduces a chaotic-variable-optimized ant colony algorithm [24]. Experimental results indicate that PSCACO performs more effectively than the comparison algorithms on both benchmark function tests and real cases, and effectively solves multi-objective optimization problems. Wang et al. proposed the BSAS algorithm, which combines the swarm intelligence paradigm with a feedback-based step size update strategy, improving the ability and efficiency of BAS in handling high-dimensional problems [25].
With the aim of improving algorithm performance, researchers began to explore hybrids of different swarm intelligence algorithms, or of swarm intelligence algorithms with other types of algorithms [26–29]. Hybrid swarm intelligence algorithms are developed on the basis of swarm intelligence algorithms, aiming to combine the advantages of different algorithms and overcome the limitations of any single algorithm so as to better solve complex optimization problems. Khodayifar et al. proposed the PSODESA algorithm by combining the advantages of particle swarm optimization, differential evolution, and simulated annealing [30]. By combining the strengths of these algorithms, the exploration capability is effectively improved and the risk of falling into local optima is reduced. Deng et al. proposed a hybrid algorithm that introduces a composite adversarial learning strategy and combines it with PSO to improve both the ability to escape local optima and the local search capability [31]. Li et al. proposed a hybrid butterfly and Newton-Raphson swarm intelligence algorithm (BOANRBO) based on adversarial learning to address the local optimality, slow convergence, and low precision of the butterfly optimization algorithm [32]. They improved initialization through adversarial learning, introduced adaptive perceptual modal factors and a dynamic exploration probability, and combined these with the Newton-Raphson optimizer to enhance exploration. Pashaei et al. proposed a hybrid gene selection method combining differentially expressed gene (DEG) analysis with the hiking optimization algorithm (HOA) [33]. First, relevant genes are screened by DEG analysis; then the binary variants of HOA, BHOA and the improved version BHOA-CM, are used to optimize gene selection.
The basic CTCM algorithm simulates the herd behavior of animals in nature, especially the competitive relationship between tribes and the cooperative relationship among members within a tribe [34]. Although the tribe competition and member cooperation mechanisms increase population diversity to a certain extent, in complex optimization scenarios the CTCM algorithm may converge prematurely to a local optimum and miss the global best solution. This is especially likely when cooperation among members within a tribe is too close or competition between tribes is insufficient. The CTCMKT algorithm avoids premature convergence to local optima during optimization by introducing a joint strategy of Kent chaotic mapping and t-distribution mutation. Kent chaotic mapping can uniformly traverse all possible states within a given value range, meaning it can conduct wide-ranging exploration of the search space and thus raise the probability of discovering the global optimum [35]. The t-distribution mutation adaptively adjusts the characteristics of the mutation according to the iteration number, effectively balancing the algorithm's exploration and exploitation capabilities [36]. In the early iterations the focus is on exploration, with larger mutation steps discovering new regions of the solution space; as iterations proceed the focus shifts to exploitation, with smaller mutation steps refining the better solutions already found.
The subsequent sections of this paper are structured as follows. First, the principles of the CTCM and CTCMKT algorithms will be explained, respectively. Next, the results of all algorithms will be analyzed. Then, the performance of the algorithms in solving practical engineering problems will be discussed. Ultimately, a summary of the research and outlook for future work are presented.
CTCM
The CTCM is based on human group competition and member cooperation behavior. In primitive human society, members formed tribes through random cooperation. Each tribe occupied different resources and migrated to obtain more. The tribe was led by a chief, who decided the migration direction and influenced the development of the tribe. Tribe members explored new lands based on experience and the instructions of the chief. Despite close cooperation, members might replace the chief after discovering more fertile land. Competition between tribes was fierce, conflicts were frequent, and the weak would flee in the opposite direction to reduce losses. Based on these characteristics, the CTCM mathematical model was constructed. The CTCM algorithm adopts member cooperation and tribal competition to solve optimization problems.
CTCMKT
By introducing the Kent chaotic map and t-distribution mutation into the basic CTCM algorithm, the advantages of the two strategies are combined to avoid search stagnation and slow convergence in CTCM. CTCMKT can efficiently coordinate global and local search capabilities, significantly enhancing calculation accuracy while improving convergence speed, ensuring that the algorithm achieves efficient and accurate iterative optimization on complex problems.
Initialization with Kent chaotic mapping
According to the characteristics of primitive tribes searching for resources, the mathematical model of CTCMKT is built. Suppose p represents the number of humans in the primitive society, n is the number of tribes, i is the number of humans in one tribe, and d represents the dimension of the solution space. Fn represents the fitness values of the entire primitive society, and v denotes the velocity matrix of the entire primitive human society.
Chaotic theory has been increasingly incorporated into swarm intelligence algorithms, leveraging features such as randomness, ergodicity, and non-repetitiveness. These attributes boost the diversity of the initialized population, thereby enhancing the algorithm's optimization capability. In contrast to random search methods, chaotic theory enables a more comprehensive and in-depth exploration of the search space.
To maximize the utilization of solution-space information by the initial population's individuals, the Kent mapping from chaotic theory is integrated into the CTCM algorithm to enhance population initialization. The mathematical model of the Kent mapping can be represented as Eq. (1).

$$x_{k+1}=\begin{cases}x_k/m, & 0\le x_k\le m\\(1-x_k)/(1-m), & m<x_k\le 1\end{cases}\qquad(1)$$

Where $m$ is an adjustable parameter in the interval (0, 1). When $m$ is close to 0.5, the generated sequence is distributed almost uniformly; this article sets $m$ to a fixed value in this interval.
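As an illustration, the Kent-map initialization can be sketched in Python (the paper's experiments use MATLAB; this is a hedged sketch, not the authors' code, and the parameter value m = 0.4 and the scaling into the search bounds are assumptions for this illustration):

```python
import random

def kent_map(x, m=0.4):
    # One step of the Kent chaotic map; m is an adjustable parameter in (0, 1).
    return x / m if x <= m else (1.0 - x) / (1.0 - m)

def kent_init(pop_size, dim, lb, ub, m=0.4, seed=1):
    # Seed the chaotic sequence away from the fixed points 0 and 1, then
    # iterate the map and scale each value from (0, 1) into [lb, ub].
    x = random.Random(seed).uniform(0.01, 0.99)
    pop = []
    for _ in range(pop_size):
        row = []
        for _ in range(dim):
            x = kent_map(x, m)
            row.append(lb + x * (ub - lb))  # scale chaotic value into bounds
        pop.append(row)
    return pop
```

Because the chaotic sequence is ergodic over (0, 1), the scaled population covers the feasible domain more evenly than purely random initialization.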
Exploitation
In primitive tribes, the tribe's management and future planning are the responsibility of the leader. Most members obey the leader's arrangements, but individual members also have their own ideas, so the loyalty of individuals changes over time. This loyalty exhibits chaotic behavior, and an increase in the number of tribes aggravates this chaotic nature. Accordingly, a sine chaotic map is used to characterize the changes in member loyalty, and the number of tribes n is assumed to affect the chaotic state. The loyalty of a single member, $r_t$, can be expressed as Eq. (2).
(2)
The mixed loyalty model reflecting social behavior, shown in Eq. (2), consists of two parts: random redistribution and chaotic evolution. The random redistribution represents intermittent major attitude changes, while the chaotic sine map captures the unpredictability of individual members adjusting their loyalty based on previous values. When the number of tribes increases, the distinctions and relationships between tribes are strengthened and the chaotic behavior of loyalty is amplified. Increasing interactions and connections may make the system more sensitive to initial conditions, because the value of $r_t$ may deviate or fluctuate more. Members communicate with the leader to obtain instructions and combine them with personal experience to inform their actions. The velocity matrix is updated in a manner similar to PSO; the update is stochastic, reflecting the differences in individual thinking and the tension between following instructions and personal ideas, and can be expressed as Eq. (3).
$$v_{n,m}^{t+1}=\frac{3}{5}v_{n,m}^{t}+c_1 r_1\left(p_{n,m}^{best}-x_{n,m}^{t}\right)+c_2 r_2\left(g_{n}^{best}-x_{n,m}^{t}\right)\qquad(3)$$

Among them, the speed of the mth member of the nth tribe at iteration t + 1 is represented by $v_{n,m}^{t+1}$, and the constant 3/5 is the inertia factor. $v_{n,m}^{t}$ refers to the velocity at iteration t. $p_{n,m}^{best}$ indicates the position with the optimal fitness value discovered by the member across the entire period, and $x_{n,m}^{t}$ is the position at iteration t. $g_{n}^{best}$ represents the position with the best fitness value found by the tribe over the entire duration. $c_1$ and $c_2$ reflect the degrees to which tribe members follow their own experience and comply with the chief's orders, respectively. $r_1$ and $r_2$ are the chaotic loyalty values of each member.
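The two updates above can be sketched in Python (the paper's experiments use MATLAB; this is an illustrative sketch, not the authors' code). The exact form of the loyalty map of Eq. (2) is an assumption here: an intermittent random reset combined with a sine map whose argument scales with the number of tribes n, as the surrounding text describes, with `reset_prob` a hypothetical parameter:

```python
import math
import random

def update_loyalty(r, n, reset_prob=0.1, rnd=random):
    # Sketch of Eq. (2): with a small probability the loyalty is randomly
    # redistributed; otherwise it evolves by a sine chaotic map whose
    # sensitivity grows with the number of tribes n.
    if rnd.random() < reset_prob:
        return rnd.random()
    return abs(math.sin(n * math.pi * r))

def update_velocity(v, x, p_best, g_best, r1, r2, c1=2.0, c2=1.0):
    # Eq. (3): PSO-like update with a fixed inertia factor of 3/5, pulled
    # toward the member's own best p_best and the tribe's best g_best.
    return [(3.0 / 5.0) * vj
            + c1 * r1 * (pj - xj)
            + c2 * r2 * (gj - xj)
            for vj, xj, pj, gj in zip(v, x, p_best, g_best)]
```

With both chaotic loyalties at zero, only the inertia term survives, so the velocity simply decays by the factor 3/5 per iteration.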
Exploration
When tribal conflicts occur, weaker tribes, finding it hard to compete with stronger ones for resources, are often forced to retreat. This retreat is often chaotic: some members flee rapidly out of fear, whereas others retreat more slowly. The unpredictability of these retreats grows as the number of tribes increases. In random conflicts between tribes, weaker tribes flee, while stronger tribes are unaffected.
A rival tribe is randomly selected for a conflict from among the n tribes. The velocity is then updated by Eq. (4) and Eq. (5).
$$v_{n,m}^{t+1}=\frac{3}{5}v_{n,m}^{t}+c_1 r_1\left(p_{n,m}^{best}-x_{n,m}^{t}\right)+c_2 r_2\left(g_{n}^{best}-x_{n,m}^{t}\right)-c_3 r_3\left(g_{r}^{best}-x_{n,m}^{t}\right),\quad F_n>F_r\qquad(4)$$

$$v_{n,m}^{t+1}=\frac{3}{5}v_{n,m}^{t}+c_1 r_1\left(p_{n,m}^{best}-x_{n,m}^{t}\right)+c_2 r_2\left(g_{n}^{best}-x_{n,m}^{t}\right),\quad F_n\le F_r\qquad(5)$$

Here $g_{r}^{best}$ means the optimal position found by the opponent, $F_n$ represents the optimal fitness of the nth tribe, and $F_r$ is the optimal fitness of the opponent (for minimization, $F_n>F_r$ means tribe n is the weaker side). $c_3$ represents the tribe escape coefficient, and $r_3$ is a chaotic random factor indicating the retreat speed. At the same time, the positions of the tribe members are updated by Eq. (6).
$$x_{n,m}^{t+1}=x_{n,m}^{t}+v_{n,m}^{t+1}\qquad(6)$$
The tribe members must update their positions within the feasible domain [xmin, xmax], where xmin is the minimum value of the domain and xmax is the maximum. Once this range is exceeded, a mirror bounce occurs: the velocity component is reversed and the member's position is reflected back into the feasible domain by Eq. (7).

$$\begin{cases}x_{n,m}^{t+1}=2x_{max}-x_{n,m}^{t+1},\; v_{n,m}^{t+1}=-v_{n,m}^{t+1}, & x_{n,m}^{t+1}>x_{max}\\x_{n,m}^{t+1}=2x_{min}-x_{n,m}^{t+1},\; v_{n,m}^{t+1}=-v_{n,m}^{t+1}, & x_{n,m}^{t+1}<x_{min}\end{cases}\qquad(7)$$
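The position update and mirror bounce together can be sketched as follows (a Python illustration rather than the authors' MATLAB code; a single reflection is assumed sufficient, which holds whenever the velocity step does not exceed the domain width):

```python
def update_position(x, v, x_min, x_max):
    # Eq. (6): move each member by its velocity. Eq. (7): if a coordinate
    # leaves [x_min, x_max], reverse the velocity component (mirror bounce)
    # and reflect the position back into the feasible domain.
    new_x, new_v = [], []
    for xj, vj in zip(x, v):
        xj = xj + vj
        if xj > x_max:
            xj = 2 * x_max - xj   # reflect off the upper bound
            vj = -vj
        elif xj < x_min:
            xj = 2 * x_min - xj   # reflect off the lower bound
            vj = -vj
        new_x.append(xj)
        new_v.append(vj)
    return new_x, new_v
```

For example, a member at 95 with velocity 10 in the domain [-100, 100] bounces off the boundary back to 95 with its velocity reversed to -10.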
t-distribution mutation
The t-distribution embodies characteristics of both the Cauchy and Gaussian distributions. In the current approach, the degree-of-freedom parameter of the t-distribution is replaced by the number of algorithm iterations. This substitution makes the t-distribution closely approximate the Cauchy distribution during the initial iterations, enhancing the algorithm's global optimization ability at that stage. As the number of iterations increases, the t-distribution approaches the Gaussian distribution, which improves the search efficiency of the algorithm in the local range and thereby improves optimization accuracy.
In the CTCMKT algorithm, some members are selected with a certain probability to perform the t-distribution mutation operation, described as Eq. (8).

$$x_{n,m}^{mut}=x_{n,m}+x_{n,m}\cdot t(iter)\qquad(8)$$
Where $x_{n,m}^{mut}$ is the position of the tribe member after mutation; $x_{n,m}$ is the original position of the tribe member; iter is the current iteration number of CTCMKT; and $t(iter)$ represents the t-distribution with the number of iterations as the degree of freedom. Because the degree of freedom changes continuously during the iteration process, the amplitude of the mutation adapts automatically. This adaptively adjusted t-distribution mutation enhances population diversity, enabling tribe members to break free of local extrema and locate the global optimal solution, thus boosting the algorithm's performance. The pseudo code is shown in Algorithm 1.
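The mutation step alone can be sketched in Python (illustrative; the paper's code is MATLAB). The sketch draws a Student-t sample via the standard normal/chi-square construction so that only the standard library is needed:

```python
import math
import random

def t_sample(df, rnd=random):
    # Sample from Student's t with `df` degrees of freedom using the
    # classic construction t = Z / sqrt(V / df), where V ~ chi-square(df)
    # is drawn as a Gamma(df/2, 2) variate.
    z = rnd.gauss(0.0, 1.0)
    v = rnd.gammavariate(df / 2.0, 2.0)
    return z / math.sqrt(v / df)

def t_mutation(x, iteration, rnd=random):
    # Eq. (8) sketch: x' = x + x * t(iter). Early iterations (small df)
    # give heavy Cauchy-like steps for exploration; later iterations
    # approach a Gaussian for fine-grained exploitation.
    t = t_sample(max(iteration, 1), rnd)
    return [xj + xj * t for xj in x]
```

Because the mutation is multiplicative, a member at the origin is unchanged; the step size scales with the magnitude of each coordinate.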
Algorithm 1 The algorithm of CTCMKT
Input:
T: the maximum iterations
p: the number of humans
n: the quantity of tribes
c1: The experience factor
c2: The obey factor
c3: The escape factor
Initialize relevant parameters and a population of p humans
Output: xbest, Fbest
while iter < T do
    for each human do
        update loyalty factors r1 and r2 by Eq. (2)
        update velocity by Eq. (3)
        randomly select a rival tribe and update velocity by Eq. (4) or Eq. (5)
        if the tribe is weaker than its rival then
            update retreat factor r3 by Eq. (2)
        update human position by Eq. (6)
        update human position by Eq. (8)
        if human position is out of bounds then
            reverse velocity and reflect position by Eq. (7)
        compute the fitness F
        if F is better than the member's best fitness then
            pbest = x
        if F is better than the tribe's best fitness then
            gbest = x
    retrieve the current best position
    iter = iter + 1
return xbest, Fbest
Fig 1 illustrates the flow chart of CTCMKT.
[Figure omitted. See PDF.]
Convergence analysis
This section elaborates the theoretical convergence analysis of the CTCMKT algorithm; an effective theoretical analysis can further guide the choice of CTCMKT parameters in applications. Assume p* is the individual's optimal solution, f* is the population's optimal solution, and x* is the global optimal point. To simplify the analysis, it is assumed that p* = f* = x* during the convergence process. The CTCMKT velocity update equation can therefore be written as Eq. (9).

$$v_{k+1}=\frac{3}{5}v_{k}+c_1 r_{1,k}\left(p^{*}-x_{k}\right)+c_2 r_{2,k}\left(f^{*}-x_{k}\right)-c_3 r_{3,k}\left(x^{*}-x_{k}\right)\qquad(9)$$

Among them, $v_{k+1}$ represents the individual's velocity at the (k + 1)-th iteration, and the position update is

$$x_{k+1}=x_{k}+v_{k+1}\qquad(10)$$
The positional offset between the k-th iteration and the optimal solution is $s_k$, as shown in Eq. (11).

$$s_{k}=x_{k}-x^{*}\qquad(11)$$
Then Eq. (9) can be rewritten as Eq. (12).

$$v_{k+1}=\frac{3}{5}v_{k}-c_k s_{k}\qquad(12)$$
where $c_k = c_1 r_{1,k} + c_2 r_{2,k} - c_3 r_{3,k}$. Substituting Eq. (10) and Eq. (11) into Eq. (12) to eliminate the velocity term gives

$$s_{k+1}=\left(\frac{8}{5}-c_k\right)s_{k}-\frac{3}{5}s_{k-1}\qquad(13)$$
The characteristic root equation of this difference equation is shown in Eq. (14).

$$\lambda^{2}-\left(\frac{8}{5}-c_k\right)\lambda+\frac{3}{5}=0\qquad(14)$$
If $c_k$ is a constant, the roots of the characteristic equation are as shown in Eq. (15).

$$\lambda_{1,2}=\frac{(1.6-c_k)\pm\sqrt{(1.6-c_k)^{2}-2.4}}{2}\qquad(15)$$
To ensure that CTCMKT reaches a stable state, the modulus of both eigenvalues must be less than 1. When $(1.6-c_k)^2 < 2.4$, i.e., $|1.6-c_k| < \sqrt{2.4} \approx 1.55$, the roots form a complex conjugate pair with modulus $\sqrt{3/5} < 1$, which gives $c_k \in (0.05, 3.15)$, approximately $(0, 3.15)$. Because $r_{j,k} \in [0, 1]$, consider the extreme situation $r_{1,k} = 1$, $r_{2,k} = 1$, and $r_{3,k} = 0$, which requires $0 < c_1 + c_2 < 3.15$; here $c_3$ acts as a perturbation term. Because $c_3$ enters $c_k$ with a negative sign, a positive $c_3$ is destabilizing: whenever $c_3 > (c_1 r_{1,k} + c_2 r_{2,k})/r_{3,k}$, $c_k$ becomes negative and leaves the stable interval, so the disturbance grows. Therefore, when tuning parameters, $c_1 + c_2$ should first satisfy the condition of being less than 3.15, and $c_3$ should then be increased gradually from a very small positive number, in order to obtain the best global optimization ability in different applications.
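Under the reduction above (inertia 3/5, constant $c_k$), the stability condition can be checked numerically. The following Python sketch computes the roots of the characteristic equation of Eq. (14) and tests whether both moduli are below 1:

```python
import cmath

def char_roots(ck, w=3.0 / 5.0):
    # Roots of lambda^2 - (1 + w - ck) * lambda + w = 0, the characteristic
    # equation of the offset dynamics s_{k+1} = (1.6 - ck) s_k - 0.6 s_{k-1}.
    b = 1.0 + w - ck
    disc = cmath.sqrt(b * b - 4.0 * w)   # complex sqrt handles both cases
    return (b + disc) / 2.0, (b - disc) / 2.0

def is_stable(ck):
    # Stable iff the modulus of every eigenvalue is strictly below 1.
    l1, l2 = char_roots(ck)
    return abs(l1) < 1.0 and abs(l2) < 1.0
```

For example, c_k = 1.5 lies inside the derived interval and is stable, while c_k = 3.5 and c_k = 0 fall outside it and are not.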
Sensitivity analysis
Since the CTCMKT algorithm incorporates two strategies on top of the original algorithm, it is necessary to analyze the effect of each strategy before comparing it with other algorithms. Here, CTCMK denotes the algorithm integrated with Kent chaotic mapping only, and CTCMT denotes the algorithm with t-distribution mutation only. The 23 basic benchmark functions are selected as test objects.
The average optimal values over 20 runs are used for comparison, and the results are shown in Table 1; the corresponding radar chart and ranking chart are shown in Fig 2. In detail, compared with CTCM, the CTCMK algorithm shows a slight performance improvement on most functions and is only slightly worse on functions F2, F13, F15–F17, and F19. The CTCMT algorithm shows a significant performance improvement over CTCM, though it is slightly worse on functions F6, F16, F17, and F19. The CTCMKT algorithm with the combined strategy improves slightly on the optimization-seeking ability of CTCMT, and significantly on that of CTCM and CTCMK. As the ranking chart shows, the average ranks of the four algorithms are 3.39, 2.78, 2.00, and 1.83, respectively. In summary, the CTCMKT algorithm with the combined strategy is significantly better in optimization-seeking ability, and it also shows clear gains in convergence speed and stability on most functions.
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
Experimental analysis
Simulation environment and parameter settings
The simulation platform is a Windows 11 computer equipped with a 12th Gen Intel(R) Core(TM) i7-1260P 2.10 GHz CPU, 16 GB of memory, and an integrated graphics card. All algorithms are implemented in MATLAB R2023a. For the test functions, the population size is set to 40, the dimension to 20, and the number of iterations to 1000, and all reported results are averages over 30 runs. Following Chen et al.'s research, c1, c2, and c3 are set to 2.0, 1.0, and 0.1, respectively. On all test sets, the following algorithms are compared: WOA [12], PSO [37], the grey wolf optimizer (GWO) [13], the differential evolution algorithm (DE) [38], the beluga whale optimization algorithm (BWO) [39], the goose algorithm (GOOSE) [40], CTCM [34], Harris hawks optimization (HHO) [41], the damping multi-verse optimizer (DMVO) [42], and CTCMKT.
CEC2021 test functions
In this section, the CEC2021 function set is used as the test set. It includes 10 single-objective functions in the following categories: the unimodal function (f1), which has only one global minimum point; the basic functions (f2–f4), which have different characteristics and evaluate algorithm performance under different conditions; the hybrid functions (f5–f7), which combine multiple function characteristics to assess performance in complex environments; and the composition functions (f8–f10), which possess numerous local minima and evaluate performance in multi-peak environments. All CEC2021 test functions share the same search range, the interval [−100, 100]. Min, std, avg, median, and worst denote the optimal value, standard deviation, average value, median value, and worst value, respectively.
The statistical results of all algorithms on f1 to f4 are presented in Table 2. For f1, the optimal value, standard deviation, and average value of CTCMKT are far better than those of the basic CTCM. CTCMKT also ranks first among all compared algorithms, with both a smaller standard deviation and the best optimal value, demonstrating that it possesses powerful global and local search abilities and can acquire the optimal solution efficiently. For f2, f3, and f4, CTCMKT precisely identifies the exact global optimum, and its optimal value, standard deviation, average value, and worst value are all better than those of the basic CTCM and the remaining algorithms. This shows that CTCMKT has robust global optimization ability, reaching the optimal solution while avoiding entrapment in local optima. The results on f1 to f4 show that CTCMKT has strong global optimization ability and stability on unimodal and basic functions.
[Figure omitted. See PDF.]
The test results of all algorithms on functions f5–f10 are presented in Table 3. For f5, f7, f9, and f10, the CTCMKT algorithm has the best statistics, including the optimal value, average value, and standard deviation. This indicates that, compared with the other algorithms, CTCMKT has better solving ability and stability and can better escape local optima. For f6 and f8, CTCMKT achieves the exact optimal solutions and shows excellent global solving ability and stability, exceeding the basic CTCM algorithm in both respects. Comparing the results of all algorithms on f5–f10, the CTCMKT algorithm exhibits outstanding stability and a strong ability to identify the global optimum when solving complex problems.
[Figure omitted. See PDF.]
Fig 3 presents the convergence curves of the algorithms on CEC2021. The convergence curve effectively evaluates an algorithm's convergence behavior and calculation accuracy when solving for the optimum of a function. For f1 to f4, the CTCMKT algorithm avoids premature convergence and entrapment in local optima by initializing the population through chaotic mapping and updating positions through t-distribution mutation, thereby improving solution quality; its statistics, including the standard deviation and average value, are better than those of the other algorithms, as shown in Table 2. Fig 3 shows that the convergence speed and calculation precision of the enhanced CTCMKT are superior to those of the basic CTCM, implying that CTCMKT holds more powerful search and optimization capacities. For f5 to f7, f9, and f10, all performance indicators of the CTCMKT algorithm are better than those of the other algorithms, and its convergence speed is also the fastest. For f8, WOA, GWO, DE, and HHO are comparable to CTCMKT in statistical values, but the convergence speed of CTCMKT still ranks first. Compared with the remaining algorithms, CTCMKT has a clear advantage in both the indicators and the convergence speed. By integrating Kent chaotic mapping with t-distribution mutation, the population diversity and global search capacity are efficiently boosted; as a result, CTCMKT achieves a quicker convergence rate and higher calculation precision.
[Figure omitted. See PDF.]
The ANOVA test graphs for the CEC2021 test set are shown in Fig 4. For function optimization problems, an important evaluation indicator is the standard deviation: a small standard deviation indicates that the algorithm not only has excellent global optimization capability but also high stability. Through Kent chaotic mapping and t-distribution mutation, the exploration and exploitation abilities of the CTCM algorithm are enhanced, yielding a faster convergence speed and higher computational precision. For f1 to f10, the CTCMKT with the joint strategy has a smaller standard deviation than CTCM, indicating that the joint strategy effectively improves the stability and global optimization ability of the algorithm. In comparison with the other algorithms, CTCMKT stands out with not only the minimum standard deviation but also the most favorable optimal, average, and worst values, clearly demonstrating its robust stability and significant advantages.
[Figure omitted. See PDF.]
Table 4 presents the results of the Wilcoxon rank sum test. A p value less than 0.05 means that there is a significant difference between CTCMKT and the comparison algorithm on that function. The data in the table show that almost all p values are less than 0.05. Therefore, on CEC2021 there are significant differences between CTCMKT and the other algorithms.
[Figure omitted. See PDF.]
The radar chart and ranking diagram for the CEC2021 test functions are shown in Fig 5(a)–(b). A radar chart is a graphical method for displaying multidimensional data in two dimensions: it maps data from multiple dimensions onto axes radiating from a center point, where each axis represents one variable, and connecting the data points on each axis forms a polygon, allowing the performance of different categories in each dimension to be compared intuitively. The figure shows that CTCMKT (blue circle) ranks first among all tested algorithms, which the average ranking chart also confirms. This reveals that CTCMKT, improved by the joint strategy, surpasses the other algorithms in global optimization ability and stability.
[Figure omitted. See PDF.]
23 benchmark functions
In this section, the 23 benchmark functions are tested. They are listed in Table 5, where F1–F7 are unimodal test functions, F8–F13 are multimodal test functions, and F14–F23 are fixed-dimension multimodal functions.
[Figure omitted. See PDF.]
The statistical results of all algorithms on functions F1 to F7 are shown in Table 6. For F1 to F4, the CTCMKT algorithm ranks first in optimal value, average value, standard deviation, and worst value, indicating strong stability and the ability to avoid premature convergence. For F5 to F7, CTCMKT ranks fifth, fifth, and ninth, respectively; although these rankings are lower, its average values remain relatively small, showing that CTCMKT retains competitive stability and optimization capability among the compared algorithms. Taken together, the results on F1 to F7 show that the CTCMKT algorithm has strong search capability and accuracy when solving unimodal functions.
[Figure omitted. See PDF.]
The test results on functions F8 to F13 are shown in Table 7. For F8, the optimal value and standard deviation of the CTCMKT algorithm rank only at a medium level, and its stability is slightly worse than that of the other algorithms. For functions F9 to F11, the CTCMKT algorithm ranks first with a standard deviation of 0 and finds the exact optimum, showing that it can locate the global optimal solution, avoid falling into local optima, and maintain strong stability. For functions F12 and F13, the CTCMKT algorithm ranks fifth and eighth, but its standard deviation and related statistics are better than those of the basic CTCM algorithm; its relatively small standard deviation and optimal value indicate better search ability and stability. In general, the CTCMKT algorithm with the joint strategy has better global optimization capability and stability than CTCM, avoids premature convergence, and performs well on multimodal test functions.
[Figure omitted. See PDF.]
Table 8 and Table 9 display the results on the fixed-dimension multimodal functions. For functions F14 to F20, the results of all algorithms are at the same level, with small standard deviations and almost identical optimal solutions, which indicates that CTCMKT has strong stability and global optimization capability and can avoid falling into local optima. For functions F21 to F23, all algorithms find almost the same optimal solutions, indicating a comparable level of solution quality. In terms of standard deviation, however, the CTCMKT algorithm has a clear advantage over the other algorithms: the fluctuation of its optimal solution is very small, almost zero, indicating that the joint strategy greatly improves the stability of CTCM. Comparing the optimal values and standard deviations on F14 to F23, the CTCMKT algorithm achieves better results, showing that the combined strategy of Kent chaotic mapping and t-distribution mutation effectively improves the global optimization ability of the algorithm and greatly improves the stability of the original algorithm.
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
Fig 6 presents the convergence curves of the algorithms on the 23 benchmark functions. As shown in Fig 6, for functions F1–F4 and F7 the CTCMKT algorithm exhibits the most rapid convergence and does not suffer from premature convergence, implying a robust capacity for searching and evading local optima. For functions F5 and F6, the initial convergence speed of the CTCMKT algorithm exceeds that of the other algorithms, and it also ranks high in convergence accuracy, again indicating better search capability. On the convergence curves of functions F12 and F13, the CTCMKT algorithm attains relatively good accuracy and the fastest initial convergence speed among the compared algorithms. For F9 to F11, the CTCMKT algorithm not only converges fastest but also achieves the best optimal value, showing that the combined strategy of Kent chaotic mapping and t-distribution mutation greatly improves the search ability and accuracy of the algorithm. In general, CTCMKT delivers the best global optimization performance on the unimodal and multimodal functions. For function F14, the performance of the CTCMKT algorithm is second only to DE when both accuracy and convergence speed are considered. For functions F15–F23, the CTCMKT algorithm converges extremely fast and with very high accuracy compared with the other algorithms, showing that it is highly effective on fixed-dimension multimodal functions, with strong global search capability and accuracy.
[Figure omitted. See PDF.]
The ANOVA test graphs for the benchmark functions are displayed in Fig 7. Fig 7 shows that the CTCMKT algorithm has the minimum standard deviation and high accuracy on functions F1 to F5 and F7, indicating a strong ability to search for and escape from local optima. For F6 and F8, the stability of CTCMKT ranks high. For functions F9 to F12, Fig 7 shows that CTCMKT has the best stability and accuracy among the compared algorithms. For function F13, the standard deviation and optimal value of the CTCMKT algorithm are greatly improved over CTCM. On the unimodal and multimodal functions, CTCMKT attains smaller standard deviations and better optimal values than several of the other algorithms, which indicates that Kent chaotic mapping combined with t-distribution mutation improves the global search capability and stability of the algorithm. For functions F15-F23, the GWO, DE, BWO, HHO and CTCMKT algorithms all achieve relatively small standard deviations and optimal values on most test functions, and CTCMKT has a smaller standard deviation and better optimal value than CTCM. For the fixed-dimension multimodal functions, the joint strategy for improving CTCM works well: it enhances the global optimization ability and stability of the algorithm and avoids premature convergence to local optima.
[Figure omitted. See PDF.]
Table 10 shows the results of the Wilcoxon rank sum test. The data in the table show that almost all p values are less than 0.05. For the functions whose p values exceed 0.05, the compared algorithm has also found the optimal solution with a standard deviation of 0. It can therefore be concluded that CTCMKT differs significantly from the other algorithms.
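A rank sum comparison of this kind can be reproduced with a short script. The sketch below is an illustrative, stdlib-only implementation of the two-sided Wilcoxon rank-sum test using the normal approximation with mid-ranks for ties; the paper does not state which implementation it used, so this is not the authors' code:

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation
    (mid-ranks for ties): null hypothesis is that samples a and b come
    from the same distribution."""
    values = a + b
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        # find the run of tied values starting at position i
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mid = (i + j) / 2 + 1.0          # 1-based mid-rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = mid
        i = j + 1
    n1, n2 = len(a), len(b)
    w = sum(ranks[:n1])                  # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    # two-sided p-value from the standard normal CDF (via erf)
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
```

Note how the zero-standard-deviation case mentioned above behaves: when both algorithms hit the identical optimum on every run, all observations tie, the rank sum equals its expected value, and the p-value is 1, so no significant difference is detectable.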
[Figure omitted. See PDF.]
The radar chart and ranking diagram for the 23 benchmark functions are shown in Fig 8(a)-(b). The radar chart shows that CTCMKT ranks first on most functions but lower on F6 and F16 to F19. The ranking diagram shows that CTCMKT achieves an average rank of 3.65 overall, a significant improvement over the unimproved CTCM algorithm and first among the compared algorithms. This demonstrates that the CTCMKT algorithm holds a clear advantage on the 23 benchmark function tests, with better global optimization capability and stability, and that it can escape local optima and prevent premature convergence.
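The joint strategy evaluated in these benchmarks, Kent chaotic initialization plus t-distribution mutation, can be sketched in a few lines. This is a minimal illustration of the general technique only: the control parameter m = 0.4 and the use of the iteration count as the degrees of freedom are common choices in the literature, assumed here rather than taken from the paper:

```python
import math
import random

def kent_map_sequence(x0, n, m=0.4):
    """Kent chaotic map: x_{k+1} = x_k/m if x_k < m, else (1-x_k)/(1-m).
    Values stay in [0, 1]; typically used to spread the initial
    population more evenly than plain uniform sampling."""
    seq, x = [], x0
    for _ in range(n):
        x = x / m if x < m else (1.0 - x) / (1.0 - m)
        seq.append(x)
    return seq

def t_sample(df):
    """Student-t variate via Z / sqrt(chi2_df / df), stdlib only."""
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))
    return z / math.sqrt(chi2 / df)

def t_mutate(position, iteration, lb, ub):
    """Perturb a candidate with a t-distributed step, then clamp to the
    bounds. Using the iteration count as degrees of freedom makes early
    steps heavy-tailed (exploration) and later steps near-Gaussian
    (exploitation)."""
    out = [x + x * t_sample(max(1, iteration)) for x in position]
    return [min(max(x, lo), hi) for x, lo, hi in zip(out, lb, ub)]
```

The degrees-of-freedom schedule is what links the mutation to the optimizer's progress: a t-distribution with few degrees of freedom has heavy tails that favor large escape jumps, and it approaches a normal distribution as the iteration count grows.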
[Figure omitted. See PDF.]
CTCMKT for solving project optimization
To verify the usability and practicality of the CTCMKT algorithm, it is applied to engineering optimization problems: the compression spring design project and the welded beam design project.
Compression spring project
The objective in compression spring design is to minimize the spring mass f(x) while adhering to four inequality constraints: minimum deflection, shear stress, surge frequency, and outer diameter limitation. There are three design variables: the diameter of the spring wire d(x1), the mean diameter of the spring coil D(x2), and the number of active spring coils N(x3), as illustrated in Fig 9.
[Figure omitted. See PDF.]
Minimize:
f(x) = (x3 + 2) x2 x1^2 (16)
subject to:
g1(x) = 1 − (x2^3 x3)/(71785 x1^4) ≤ 0 (17)
g2(x) = (4 x2^2 − x1 x2)/(12566 (x2 x1^3 − x1^4)) + 1/(5108 x1^2) − 1 ≤ 0 (18)
g3(x) = 1 − (140.45 x1)/(x2^2 x3) ≤ 0 (19)
g4(x) = (x1 + x2)/1.5 − 1 ≤ 0 (20)
with bounds:
0.05 ≤ x1 ≤ 2, 0.25 ≤ x2 ≤ 1.3, 2 ≤ x3 ≤ 15 (21)
For the compression spring project, the algorithm parameters are set as follows: the population size is 40, and the maximum number of iterations is 300. All algorithms are run 30 times to obtain the optimal value, average value, standard deviation and other statistics, as shown in Table 11. The convergence curves and ANOVA test graph are shown in Fig 10 and Fig 11, respectively. It can be seen from Fig 10 that, except for BWO and DMVO, the algorithms have similar convergence speeds and optimal values. Similarly, in terms of the stability of the optimal solution for the compression spring, all algorithms except BWO are stable. As can be seen from Table 11, the CTCMKT algorithm has a smaller standard deviation than the CTCM algorithm, and all algorithms except BWO reach almost the same optimal value. In general, the CTCMKT algorithm shows strong global optimization capability and high stability in solving this practical engineering problem.
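The spring model of Eqs. (16)-(21) can be evaluated directly in code. The sketch below follows the standard benchmark formulation of the problem; the static-penalty constraint handling is an illustrative assumption, since the paper does not specify how constraints are enforced inside the optimizer:

```python
def spring_mass(x):
    """Spring mass, Eq. (16): f = (N + 2) * D * d^2,
    with x = [d, D, N] = [wire diameter, mean coil diameter, active coils]."""
    d, D, N = x
    return (N + 2.0) * D * d ** 2

def spring_constraints(x):
    """Inequality constraints g_i(x) <= 0 of the standard model,
    Eqs. (17)-(20)."""
    d, D, N = x
    return [
        1.0 - (D ** 3 * N) / (71785.0 * d ** 4),                    # deflection
        (4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4))
        + 1.0 / (5108.0 * d ** 2) - 1.0,                            # shear stress
        1.0 - (140.45 * d) / (D ** 2 * N),                          # surge frequency
        (d + D) / 1.5 - 1.0,                                        # outer diameter
    ]

def penalized_fitness(x, penalty=1e6):
    """Static-penalty fitness of the kind commonly minimized by swarm
    optimizers: feasible points keep their raw mass, violations are
    penalized quadratically."""
    return spring_mass(x) + penalty * sum(
        max(0.0, g) ** 2 for g in spring_constraints(x))
```

For example, the widely reported near-optimal design d ≈ 0.0517, D ≈ 0.3567, N ≈ 11.289 evaluates to a mass of roughly 0.0127, while an infeasible design is pushed far above any feasible fitness by the penalty term.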
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
Welded beam design
The aim of welded beam design is to minimize the fabrication cost f(x) subject to specific constraints. Seven inequality constraints are involved, covering shear stress (τ), beam bending stress (σ), bar buckling load (Pc), beam end deflection (δ), and related limits. The four design variables are h(x1), l(x2), t(x3) and b(x4), as shown in Fig 12. The mathematical model is given by Eq. (22).
[Figure omitted. See PDF.]
Minimize:
f(x) = 1.10471 x1^2 x2 + 0.04811 x3 x4 (14 + x2) (22)
Subject to
g1(x) = τ(x) − τmax ≤ 0 (23)
g2(x) = σ(x) − σmax ≤ 0 (24)
g3(x) = x1 − x4 ≤ 0 (25)
g4(x) = 0.10471 x1^2 + 0.04811 x3 x4 (14 + x2) − 5 ≤ 0 (26)
g5(x) = 0.125 − x1 ≤ 0 (27)
g6(x) = δ(x) − δmax ≤ 0 (28)
g7(x) = P − Pc(x) ≤ 0 (29)
Boundary constraints and related parameters.
0.1 ≤ x1 ≤ 2, 0.1 ≤ x2 ≤ 10, 0.1 ≤ x3 ≤ 10, 0.1 ≤ x4 ≤ 2 (30)
τ(x) = sqrt(τ'^2 + 2 τ' τ'' x2/(2R) + τ''^2), τ' = P/(√2 x1 x2), τ'' = M R/J (31)
M = P (L + x2/2), R = sqrt(x2^2/4 + ((x1 + x3)/2)^2), J = 2 √2 x1 x2 (x2^2/12 + ((x1 + x3)/2)^2) (32)
σ(x) = 6 P L/(x4 x3^2), δ(x) = 4 P L^3/(E x3^3 x4), Pc(x) = (4.013 E sqrt(x3^2 x4^6/36)/L^2)(1 − (x3/(2L)) sqrt(E/(4G))), with P = 6000 lb, L = 14 in, E = 30×10^6 psi, G = 12×10^6 psi, τmax = 13600 psi, σmax = 30000 psi, δmax = 0.25 in (33)
The statistical results of the welded beam project are shown in Table 12. The convergence curves and ANOVA test graph are shown in Fig 13 and Fig 14, respectively. As can be seen from Fig 13, all algorithms exhibit good convergence speed, and CTCMKT, PSO, GWO, HHO and DMVO show good global optimization capability. In the ANOVA test, CTCMKT shows very good stability, only slightly worse than GWO. As shown in Table 12, the standard deviation of CTCMKT is 0.0317, much better than the 0.1571 of the CTCM algorithm; its average value of 1.7547 is also better than that of CTCM. In general, the CTCMKT algorithm is second only to the GWO algorithm in solving the welded beam design problem. It can therefore be concluded that Kent chaotic mapping combined with t-distribution mutation improves the global optimization ability and stability of the CTCM algorithm and helps it avoid falling into local optima.
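The welded beam model can be coded the same way as the spring. The sketch below follows the standard formulation of this benchmark from the literature, with the usual parameter values (P = 6000 lb, L = 14 in, E = 3e7 psi, G = 1.2e7 psi) assumed:

```python
import math

# Load and material parameters of the standard welded beam model
P, L, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def beam_cost(x):
    """Fabrication cost, Eq. (22), with x = [h, l, t, b]."""
    h, l, t, b = x
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)

def beam_constraints(x):
    """The seven constraints g_i(x) <= 0: shear stress, bending stress,
    geometry (h <= b), cost bound, minimum weld size, end deflection,
    and buckling load."""
    h, l, t, b = x
    tau_p = P / (math.sqrt(2.0) * h * l)                     # primary shear
    M = P * (L + l / 2.0)                                    # moment at weld
    R = math.sqrt(l ** 2 / 4.0 + ((h + t) / 2.0) ** 2)
    J = 2.0 * math.sqrt(2.0) * h * l * (l ** 2 / 12.0 + ((h + t) / 2.0) ** 2)
    tau_pp = M * R / J                                       # torsional shear
    tau = math.sqrt(tau_p ** 2 + tau_p * tau_pp * l / R + tau_pp ** 2)
    sigma = 6.0 * P * L / (b * t ** 2)                       # bending stress
    delta = 4.0 * P * L ** 3 / (E * t ** 3 * b)              # end deflection
    pc = (4.013 * E * math.sqrt(t ** 2 * b ** 6 / 36.0) / L ** 2) \
        * (1.0 - (t / (2.0 * L)) * math.sqrt(E / (4.0 * G)))
    return [tau - TAU_MAX, sigma - SIGMA_MAX, h - b,
            0.10471 * h ** 2 + 0.04811 * t * b * (14.0 + l) - 5.0,
            0.125 - h, delta - DELTA_MAX, P - pc]
```

As a sanity check, the widely cited solution h ≈ 0.2057, l ≈ 3.4705, t ≈ 9.0366, b ≈ 0.2057 gives a cost near 1.7249 with the shear stress, bending stress, and buckling constraints all close to active.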
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
[Figure omitted. See PDF.]
In summary, the CTCMKT algorithm has clear advantages on the unimodal and multimodal test functions but is slightly weaker in optimizing fixed-dimension multimodal functions; other improvement strategies could enhance its global optimization ability in the future. In solving engineering problems, the CTCMKT algorithm performs well. In the future, the CTCMKT algorithm can be applied to path planning, material composition optimization, and similar tasks to verify its practical applicability.
Conclusion and future research
In conclusion, based on the joint strategy of Kent chaotic mapping and t-distribution mutation, this paper proposed an enhanced CTCM algorithm (CTCMKT) for function optimization and engineering problem solving. The joint strategy effectively prevents the algorithm from falling into local optima and improves its stability and convergence speed. Compared with the other algorithms, the CTCMKT algorithm attains better standard deviations and optimal values in the function tests, representing strong stability and global optimization ability, and its convergence curves show faster convergence and higher accuracy. The CTCMKT algorithm has clear advantages on the unimodal and multimodal test functions but is slightly weaker in optimizing fixed-dimension multimodal functions; its global optimization capability could be enhanced through other improvement strategies or by hybridizing with other algorithms. For engineering optimization problems, the CTCMKT algorithm greatly improves the global optimization capability and robustness of the CTCM algorithm. The experimental results show that CTCMKT effectively improves convergence speed and accuracy and can be used to solve practical engineering applications. However, its ability to solve other applications, such as three-dimensional path planning and material composition optimization, remains to be verified.
In future work, other intelligent optimization algorithms will be hybridized with CTCM to further improve its global optimization ability, avoid premature convergence, and enhance its stability. The enhanced CTCM algorithm can be applied to engineering problems such as tracked mountain vehicle path planning, three-dimensional drone path planning, material composition optimization and ship track planning, where the goal is to find the path with the least time and shortest distance so as to reduce time cost and fuel consumption. However, no single optimization algorithm can cover all problems, and much research remains to be done.
References
1. Alreffaee MA. Exploring ant lion optimization algorithm to enhance the choice of an appropriate software reliability growth model. Int J Comput Appl. 2018;182:1–8.
2. Chopra N, Mohsin Ansari M. Golden jackal optimization: a novel nature-inspired optimizer for engineering applications. Expert Syst Appl. 2022;198:116924.
3. Mirjalili S, Gandomi A, Mirjalili S, Saremi S, Faris H, Mirjalili S. Salp swarm algorithm: a bio-inspired optimizer for engineering design problems. Adv Eng Softw. 2017;114:163–91.
4. Xue J, Shen B. A novel swarm intelligence optimization approach: sparrow search algorithm. Syst Sci Control Eng. 2020;8(1):22–34.
5. Zitar RA, Abualigah L, Al-Dmour NA. Review and analysis for the Red Deer Algorithm. J Ambient Intell Humaniz Comput. 2023;14(7):8375–85. pmid:34840618
6. Abdel-Basset M, Mohamed R, Jasser MB, Hezam IM, Sallam KM, Mohamed AW. Developments on metaheuristic-based optimization for numerical and engineering optimization problems: analysis, design, validation, and applications. Alex Eng J. 2023;78:175–212.
7. Ahmadianfar I, Heidari A, Noshadian S, Chen H, Gandomi A. INFO: an efficient optimization algorithm based on weighted mean of vectors. Expert Syst Appl. 2022;195.
8. Zhang W, Wang N, Yang S. Hybrid artificial bee colony algorithm for parameter estimation of proton exchange membrane fuel cell. Int J Hydrogen Energy. 2013;38(14):5796–806.
9. Dorigo M, Gambardella LM. Ant colonies for the travelling salesman problem. Biosystems. 1997;43(2):73–81. pmid:9231906
10. Karaboga D, Akay B. A comparative study of artificial bee colony algorithm. Appl Math Comput. 2009;214(1):108–32.
11. Yang X-S. Firefly algorithms for multimodal optimization. Springer; 2009.
12. Mirjalili S, Lewis A. The whale optimization algorithm. Adv Eng Softw. 2016;95:51–67.
13. Mirjalili S, Mirjalili S, Lewis A. Grey wolf optimizer. Adv Eng Softw. 2014;69:46–61.
14. Wang J, Wang W, Hu X, Qiu L, Zang H. Black-winged kite algorithm: a nature-inspired meta-heuristic for solving benchmark functions and engineering problems. Artif Intell Rev. 2024;57(4):98.
15. Feng Z, Niu W, Liu S. Cooperation search algorithm: a novel metaheuristic evolutionary intelligence algorithm for numerical optimization and engineering optimization problems. Appl Soft Comput. 2021;98.
16. Yan P, Shang S, Zhang C, Yin N, Zhang X, Yang G, et al. Research on the processing of coal mine water source data by optimizing BP neural network algorithm with sparrow search algorithm. IEEE Access. 2021;9:108718–30.
17. Nematzadeh S, Kiani F, Torkamanian-Afshar M, Aydin N. Tuning hyperparameters of machine learning algorithms and deep neural networks using metaheuristics: a bioinformatics study on biomedical and biological cases. Comput Biol Chem. 2022;97:107619. pmid:35033837
18. Tang J, Liu G, Pan Q. A review on representative swarm intelligence algorithms for solving optimization problems: applications and trends. IEEE/CAA J Autom Sinica. 2021;8(10):1627–43.
19. Du C, Zhang J, Fang J. An innovative complex-valued encoding black-winged kite algorithm for global optimization. Sci Rep. 2025;15(1):932. pmid:39762300
20. Ang KM, Lim WH, Isa NAM, Tiang SS, Wong CH. A constrained multi-swarm particle swarm optimization without velocity for constrained optimization problems. Expert Syst Appl. 2020;140:112882.
21. Zhang X, Liu H, Tu L. A modified particle swarm optimization for multimodal multi-objective optimization. Eng Appl Artif Intell. 2020;95.
22. Tijjani S, Ab Wahab M, Mohd Noor M. An enhanced particle swarm optimization with position update for optimal feature selection. Expert Syst Appl. 2024;247.
23. Wu H, Gao Y, Wang W, Zhang Z. A hybrid ant colony algorithm based on multiple strategies for the vehicle routing problem with time windows. Complex Intell Syst. 2021;9(3):2491–508.
24. Zhang T, Xie W, Wei M, Xie X. Multi-objective sustainable supply chain network optimization based on chaotic particle-ant colony algorithm. PLoS One. 2023;18(7):e0278814. pmid:37428738
25. Wang J, Chen H. BSAS: beetle swarm antennae search algorithm for optimization problems. 2018.
26. Deng W, Chen R, He B, Liu Y, Yin L, Guo J. A novel two-stage hybrid swarm intelligence optimization algorithm and application. Soft Comput. 2012;16:1707–22.
27. Shen Y, Liu M, Yang J, Shi Y, Middendorf M. A hybrid swarm intelligence algorithm for vehicle routing problem with time windows. IEEE Access. 2020;8:93882–93.
28. Tawhid MA, Ibrahim AM. An efficient hybrid swarm intelligence optimization algorithm for solving nonlinear systems and clustering problems. Soft Comput. 2023;27(13):8867–95.
29. Lien L-C, Cheng M-Y. A hybrid swarm intelligence based particle-bee algorithm for construction site layout optimization. Expert Syst Appl. 2012;39(10):9642–50.
30. Mirsadeghi E, Khodayifar S. Hybridizing particle swarm optimization with simulated annealing and differential evolution. Cluster Comput. 2020;24(2):1135–63.
31. Deng X, He D, Qu L. A novel hybrid algorithm based on arithmetic optimization algorithm and particle swarm optimization for global optimization problems. J Supercomput. 2023;80(7):8857–97.
32. Li C, Zhu Y. A hybrid butterfly and Newton–Raphson swarm intelligence algorithm based on opposition-based learning. Cluster Comput. 2024;27(10):14469–514.
33. Pashaei E, Pashaei E, Mirjalili S. Binary hiking optimization for gene selection: insights from HNSCC RNA-Seq data. Expert Syst Appl. 2025;268:126404.
34. Chen Z, Li S, Khan AT, Mirjalili S. Competition of tribes and cooperation of members algorithm: an evolutionary computation approach for model free optimization. Expert Syst Appl. 2025;265:125908.
35. Li X, Gu J, Sun X, Li J, Tang S. Parameter identification of robot manipulators with unknown payloads using an improved chaotic sparrow search algorithm. Appl Intell. 2022;52(9):10341–51.
36. Chen J, Zhao J, Xiao R, Cui Z, Wang H, Pan J-S. Role division approach for firefly algorithm based on t-distribution perturbation and differential mutation. Cluster Comput. 2024;28(2).
37. Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of ICNN'95 – International Conference on Neural Networks. IEEE; 1995.
38. Price KV, Storn RM, Lampinen JA. Differential evolution: a practical approach to global optimization. 2005. p. 37–134.
39. Zhong C, Li G, Meng Z. Beluga whale optimization: a novel nature-inspired metaheuristic algorithm. Knowl Based Syst. 2022;251:109215.
40. Hamad RK, Rashid TA. Goose algorithm: a powerful optimization tool for real-world engineering challenges and beyond. Evol Syst. 2024;15(4):1249–74.
41. Heidari A, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H. Harris hawks optimization: algorithm and applications. Future Gener Comput Syst. 2019;97:849–72.
42. Cao B, Li X, Zhang X, Wang B, Zhang Q, Wei X. Designing uncorrelated address constrain for DNA storage by DMVO algorithm. IEEE/ACM Trans Comput Biol Bioinform. 2022;19(2):866–77. pmid:32750895
Citation: Liu Y, Fu M, Jia C, Liu H, Wu Z, Peng W, et al. (2025) A novel enhanced competition of tribes and cooperation of members algorithm for global optimization. PLoS One 20(6): e0324944. https://doi.org/10.1371/journal.pone.0324944
About the Authors:
Yu Liu
Roles: Funding acquisition, Writing – original draft
Affiliation: School of Electronics and Information Engineering, West Anhui University, Lu’an, China
Maosheng Fu
Roles: Funding acquisition, Supervision
Affiliation: School of Electronics and Information Engineering, West Anhui University, Lu’an, China
Chaochuan Jia
Roles: Funding acquisition, Writing – review & editing
E-mail: [email protected] (CJ)
Affiliation: School of Electronics and Information Engineering, West Anhui University, Lu’an, China
ORCID: https://orcid.org/0000-0003-3393-7900
Huaiqing Liu
Roles: Project administration
Affiliation: School of Electronics and Information Engineering, West Anhui University, Lu’an, China
Zongling Wu
Roles: Data curation
Affiliation: School of Electronics and Information Engineering, West Anhui University, Lu’an, China
Wei Peng
Roles: Funding acquisition, Investigation
Affiliation: School of Electronics and Information Engineering, West Anhui University, Lu’an, China
Zhengyu Liu
Roles: Writing – review & editing
Affiliation: School of Electronics and Information Engineering, West Anhui University, Lu’an, China
© 2025 Liu et al. This is an open access article distributed under the terms of the Creative Commons Attribution License: http://creativecommons.org/licenses/by/4.0/ (the “License”), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Abstract
The competition of tribes and cooperation of members algorithm (CTCM) is a novel swarm intelligence algorithm that increases population diversity through tribal competition and member cooperation mechanisms. However, on certain complex optimization problems the algorithm may converge prematurely to a local optimum and thereby fail to reach the global optimum. To enhance the algorithm's global optimization capability and stability, an enhanced CTCM (CTCMKT) is proposed that integrates a joint strategy of Kent chaotic mapping and t-distribution mutation. This integration effectively prevents premature convergence to local optima, ensures that the algorithm does not miss the global optimum during the search, and significantly improves its stability. The CEC2021 suite and 23 benchmark functions are used to test the effectiveness and feasibility of CTCMKT, which is compared with other algorithms by minimizing the fitness value. Experimental results reveal that CTCMKT has superior global optimization ability and efficiently balances exploration and exploitation to reach the optimal solution; it also effectively improves convergence speed, calculation accuracy, and stability. Engineering application results show that the improved CTCMKT algorithm can solve practical application problems.