Abstract
Problems of search optimization in a discrete space, particularly in a binary space where a variable can take only one of two values, are of great practical importance. This paper proposes a new population-based discrete optimization algorithm that uses probability distributions of variables. The distributions determine the probability that a variable takes a particular discrete value and are generated by transforming target values of solutions into their weight coefficients. The performance of the algorithm is evaluated on unimodal and multimodal test functions with binary variables. The experimental results demonstrate the high efficiency of the proposed algorithm in terms of convergence rate and stability.
INTRODUCTION
Optimization problems are solved in almost all domains. This study focuses on combinatorial optimization, which is aimed at finding an optimum over discrete sets of possible solutions. A special case of this problem is binary optimization, where the components of a solution vector can take only two values. Binary optimization is widely used in practice, e.g., in medicine to diagnose brain tumors [1], find subsets of consistent features when predicting the effectiveness of post-COVID-19 rehabilitation [2], and classify complex diseases [3] and arrhythmias based on electrocardiograms [4]. In economics, binary optimization is used for selecting magazine publishers to place advertisements [5], workflow scheduling [6], software release planning [7], and manufacturing cell design [8]. In science and technology, binary optimization is used for informative feature extraction when constructing prediction systems [9–11], load restoration in primary distribution networks [12], fault section diagnosis of power systems [13], cellular network optimization [14], welded beam design [15], and hardware/software partitioning in embedded systems [16]. It was noted in [17] that, just like purely combinatorial problems, problems with real variables can also be represented in binary form and solved in a discrete space.
There are two types of methods for solving binary optimization problems. The first type includes traditional deterministic optimization methods, while the methods of the second type are based on stochastic, nondeterministic algorithms. Traditional methods include relaxation, Lagrange multipliers, branch and bound, and integer programming [18, 19]. These methods are time-consuming and designed for low-dimensional problems, which are rarely encountered in practice. In addition, most traditional methods require the analytical specification of objective functions, as well as their differentiability and continuity. High-dimensional problems with many local optima significantly slow down the search in the traditional methods.
Nondeterministic methods, represented by metaheuristic algorithms [20–23], eliminate these difficulties. Unlike the traditional methods, these algorithms are far less prone to getting stuck in local optima, are less dependent on initial points, do not depend on the form of the objective function, and can solve black-box optimization problems [24].
It was shown in [25] that no heuristic algorithm is efficient enough to solve all optimization problems. Currently developed algorithms provide satisfactory results for some optimization problems but not for others. Thus, research in this field remains active and continues to produce new heuristic algorithms.
The purpose of this work is to develop an efficient discrete optimization algorithm capable of competing with popular algorithms in a binary search space.
The main contribution of this study is as follows.
1. A new population-based metaheuristic optimization algorithm for discrete search spaces is developed. The algorithm uses probability distributions to select values of variables. The distributions are generated by transforming target values of solutions into weight coefficients.
2. The efficiency of the proposed algorithm is empirically evaluated based on the convergence and stability criteria. The Wilcoxon test shows a significant advantage of the proposed algorithm over the genetic and particle swarm optimization algorithms on unimodal and multimodal test functions.
This paper is organized as follows. Section 2 considers approaches and methods for solving binary optimization problems by using metaheuristic algorithms. Section 3 presents the proposed algorithm and details of its operation. Section 4 describes the experimental part of the study. Section 5 considers the experimental results. Conclusions summarize the study.
RELATED WORKS
The most popular binary optimization algorithms are based on swarm intelligence. Like evolutionary algorithms, they are inspired by biological mechanisms, namely, the coordinated behavior of plant and animal species, as well as by physical processes. Evolutionary computing is based on the mechanisms of competition and natural selection, while swarm intelligence relies primarily on cooperation among agents [26].
Most swarm intelligence algorithms are designed for continuous optimization and use binarization for search in a binary space [27]. The most popular binarization method relies on transformation functions, which convert continuous values of solution vector components into values from the range [0, 1]. Then, a binarization rule is applied to transform such a value into a binary value from the set {0, 1}. Transformation functions were used for adaptation in the following algorithms: particle swarm optimization [17, 28], artificial algae [29], chimp optimization [30], salp swarm optimization [31], and whale optimization [32]. In [28], various modifications of transformation functions for the particle swarm optimization algorithm were investigated. The best convergence was achieved by the algorithm with a V-shaped transformation function.
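As an illustration, the S- and V-shaped families studied in [28] can be sketched as follows in Python; this is a generic sketch of the approach, not code from the cited papers:

    import math
    import random

    def s_shaped(v):
        # S-shaped transfer function: maps a real value (e.g., a particle
        # velocity) to a probability in (0, 1).
        return 1.0 / (1.0 + math.exp(-v))

    def v_shaped(v):
        # V-shaped transfer function: interpreted as the probability of
        # flipping the current bit rather than of drawing a fixed bit value.
        return abs(math.tanh(v))

    def binarize_s(v):
        # Binarization rule for the S-shaped family: draw the new bit directly.
        return 1 if random.random() < s_shaped(v) else 0

    def binarize_v(v, current_bit):
        # Binarization rule for the V-shaped family: flip with probability T(v).
        return 1 - current_bit if random.random() < v_shaped(v) else current_bit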
The binarization method based on modified algebraic operators converts real operators used in particle movement formulas into their logical counterparts, which allows one to work with binary solutions. For instance, disjunction is used instead of addition and conjunction instead of multiplication. This method was used in the following algorithms: particle swarm optimization [33], brain storm optimization [13], tree growth [34], bat algorithm [33], continuous ant colony optimization [5], cuckoo search [35], and black hole algorithm [36].
The quantum binarization method also transforms continuous operators. In this method, each admissible solution has a position and quantization vector, which contains the probabilities for a corresponding solution element to take value 1. The quantization vector is updated taking into account the positions of the global and local leaders. This method was used in particle swarm optimization [37], artificial algae algorithm [7], gravitational search [38], and ant colony optimization [39].
In the class of evolutionary intelligence algorithms, the genetic algorithm is widely employed for binary optimization [40–42]. Its various modifications are proposed, e.g., a hybrid method based on the genetic algorithm and particle swarm optimization was presented in [43]. First, both the algorithms find solutions independently; then, the results are combined by weighted averaging. To find the final solution, local search is carried out.
Before algorithms are applied to specific practical problems, their performance is generally evaluated on test functions, which make it possible to determine the efficiency of finding the optimum for different types of objective functions, e.g., unimodal, multimodal, ravine, discontinuous, convex, and concave ones. In binary optimization, test functions defined in a continuous space are employed: a binary search space is formed by discretizing the continuous space and then binary-coding the discrete values [28, 44–46].
NEW DISCRETE OPTIMIZATION ALGORITHM
This paper considers an optimization problem in which the efficiency criterion is minimized. This section presents an original discrete metaheuristic optimization algorithm based on probability distributions with transformation of target values (PDT). The algorithm is iterative, and a probabilistic model is formed at each iteration. The model determines the probability for a variable to take a particular discrete value. The probabilities are generated based on the frequency of occurrence of a discrete value for each variable across the population solutions, with solutions having smaller objective values contributing more to the corresponding probabilities. Transformation functions are used to convert the objective value of a population solution into a weight coefficient. Figure 1 shows the flowchart of the algorithm.
At the initialization step, the initial population of solutions (Pop) is set randomly or otherwise. Next, weight coefficients w of the population solutions are calculated. A weight coefficient takes a value from the range [0, 1]; the smaller the target value of a solution, the larger the value of w. To generate weight coefficients from target values, transformation functions are used, e.g., linear TL, sigmoid TS, quadratic TQ, and hyperbolic tangent TTh. The graphs of these functions are shown in Fig. 2. The domain of each function is the segment [fmin, fmax], where fmin and fmax are the minimum and maximum values of the objective function in the population, respectively, and f is the current value; each function maps this segment onto [0, 1] and decreases monotonically in f.
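The analytical forms of such functions can be chosen in different ways; the Python sketch below shows one plausible variant of each family, assuming only the properties above. The steepness constants are assumptions for illustration, not values from this paper.

    import numpy as np

    def t_linear(f, fmin, fmax):
        # Linear mapping: w = 1 at f = fmin, w = 0 at f = fmax.
        return (fmax - f) / (fmax - fmin + 1e-12)

    def t_quadratic(f, fmin, fmax):
        # Quadratic variant: emphasizes solutions close to the population best.
        return ((fmax - f) / (fmax - fmin + 1e-12)) ** 2

    def t_sigmoid(f, fmin, fmax):
        # Sigmoid centered at the midpoint of [fmin, fmax];
        # reaches 0 and 1 only approximately.
        mid = 0.5 * (fmin + fmax)
        scale = 0.1 * (fmax - fmin) + 1e-12   # assumed steepness
        return 1.0 / (1.0 + np.exp((f - mid) / scale))

    def t_tanh(f, fmin, fmax):
        # Hyperbolic-tangent variant, rescaled so that w(fmin) = 1, w(fmax) = 0.
        u = (fmax - f) / (fmax - fmin + 1e-12)
        return np.tanh(3.0 * u) / np.tanh(3.0)   # 3.0 is an assumed steepness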
Upon obtaining the weight coefficients of solutions, the probability distributions are calculated. Each distribution consists of the probabilities for a variable to take a certain discrete value. The probabilities are calculated based on the values of variables and solution weights.
Using the calculated distributions, a new population is generated and mutated to prevent premature convergence. Next, the population is updated with the best solutions of the current and new populations. Then, the solution weights are recalculated, and the next iteration begins. Upon completing a specified number of iterations, the best solution in the population is extracted.
Below is the step-by-step description of the algorithm.
Input: size of population N, number of iterations MaxIter, mutation probability pa, and transformation function T. The kth discrete value of the jth variable is denoted by ljk.
Output: solution R.
Begin
Step 1. Initialization.
Randomly or otherwise generate a population of solutions Pop = [Pop1, Pop2, …, PopN] and calculate the corresponding target values f = [f1, f2, …, fN].
Step 2. Initialize the iteration counter, t = 1. Start the iteration process.
Step 3. Calculate the weight coefficients of the solutions from Pop by using transformation function T: wi = T(fi), where i = 1, …, N.
Step 4. Calculate the probability distributions.
For each discrete value k of variable j, find the sum Sjk of the weight coefficients of the solutions from Pop whose jth variable takes the value ljk, where j = 1, …, n and k = 1, …, m.
Calculate the empirical probability of variable j having the kth value: pjk = Sjk/(Sj1 + Sj2 + … + Sjm).
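In the binary case (m = 2, values 0 and 1), Steps 3 and 4 reduce to weighted frequency counts. A minimal sketch in Python, assuming the population is stored as an N × n matrix of zeros and ones and the linear transformation is used:

    import numpy as np

    def distributions(pop, f):
        # pop: (N, n) 0/1 matrix of solutions; f: (N,) vector of target values.
        # Returns p of shape (n, 2): p[j, k] = probability that variable j is k.
        fmin, fmax = f.min(), f.max()
        # Step 3: linear weights; epsilons keep w defined when all f are equal.
        w = (fmax - f + 1e-12) / (fmax - fmin + 1e-12)
        s1 = w @ pop          # S_j2: total weight of solutions whose jth bit is 1
        s0 = w.sum() - s1     # S_j1: total weight of solutions whose jth bit is 0
        p = np.stack([s0, s1], axis=1)
        return p / p.sum(axis=1, keepdims=True)   # Step 4: empirical probabilities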
Fig. 1. Flowchart of the algorithm (image not available).
Fig. 2. Transformation functions for converting target values into solution weights (image not available).
Step 5. Generate new solutions.
Generate a population of new solutions Popnew based on the probabilities pjk of each variable: the jth variable of a new solution is set to the value ljk, where k is the smallest index satisfying the condition pj1 + … + pj,k–1 < rand(0, 1) ≤ pj1 + … + pjk, j = 1, …, n and k = 1, …, m.
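This condition is the usual inverse-transform (roulette-wheel) sampling rule over the cumulative probabilities of a variable's values. A sketch for the general m-valued case (indices are 0-based here):

    import numpy as np

    rng = np.random.default_rng()

    def sample_new_population(p, n_new):
        # p: (n, m) matrix of probabilities p_jk; returns an (n_new, n) matrix
        # of sampled value indices k.
        cum = np.cumsum(p, axis=1)              # cumulative sums p_j1, p_j1 + p_j2, ...
        r = rng.random((n_new, p.shape[0], 1))  # one rand(0, 1) per variable per solution
        # k = number of cumulative entries strictly below r, i.e., the smallest
        # index with r <= cum[j, k].
        return (r > cum).sum(axis=2)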
Step 6. Mutate new solutions.
Change the values of vector components of the solutions from Popnew with probability pa. New values are selected randomly from the range of values of a solution vector component.
Step 7. Evaluate objective functions finew, where i = 1, …, N, for each solution from Popnew.
Step 8. Update population Pop by selecting N best solutions from Pop ∪ Popnew. Update the values of objective functions fi in accordance with the solutions from Pop.
Step 9. Check the termination condition.
If t < MaxIter, then t = t + 1 and go to Step 3; otherwise, go to Step 10.
Step 10. Extract the best solution from Pop into R.
End
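A minimal end-to-end sketch of Steps 1–10 for the binary case is given below in Python. The toy objective (Hamming distance to a hidden pattern) and the default parameter values are illustrative assumptions, not the experimental setup of this paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def pdt_binary(objective, n, N=30, max_iter=100, pa=0.05):
        # Minimize objective over {0, 1}^n with the PDT scheme (linear transformation).
        pop = rng.integers(0, 2, size=(N, n))                          # Step 1
        f = np.array([objective(x) for x in pop], dtype=float)
        for _ in range(max_iter):                                      # Steps 2 and 9
            fmin, fmax = f.min(), f.max()
            w = (fmax - f + 1e-12) / (fmax - fmin + 1e-12)             # Step 3
            p1 = (w @ pop) / w.sum()                                   # Step 4: P(bit j = 1)
            new = (rng.random((N, n)) < p1).astype(int)                # Step 5
            flip = rng.random((N, n)) < pa                             # Step 6: mutation
            new = np.where(flip, 1 - new, new)
            f_new = np.array([objective(x) for x in new], dtype=float) # Step 7
            pool = np.vstack([pop, new])                               # Step 8: keep N best
            f_all = np.concatenate([f, f_new])
            best = np.argsort(f_all)[:N]
            pop, f = pool[best], f_all[best]
        return pop[np.argmin(f)], f.min()                              # Step 10

    # Toy usage: recover a hidden binary pattern of 40 bits.
    target = rng.integers(0, 2, size=40)
    R, f_best = pdt_binary(lambda x: int(np.sum(x != target)), n=40)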
EXPERIMENT
This section describes the experiments with the developed discrete optimization algorithm based on probability distributions with transformation of target values. The algorithm was tested on a binary optimization problem, i.e., with variables taking only two values, using 18 unimodal and multimodal benchmark functions widely employed for testing optimization algorithms [28, 44–46]. Table 1 shows the characteristics of these functions, while Fig. 3 plots them in the two-dimensional search space. Functions f1–f11 are unimodal, i.e., have only one global optimum. Functions f12–f18 are multimodal, with one global and many local optima, the number of which grows exponentially with increasing dimension. The experiment was carried out in accordance with the methodology described in [44].
Table 1. Test functions used in the experiment (function formulas not available)
No. | Test function | Search range | Optimum |
|---|---|---|---|
1 |  | [–100, 100] | 0 |
2 |  | [–2.56, 2.56] | 0 |
3 |  | [–10, 10] | 0 |
4 |  | [–2.56, 2.56] | 0 |
5 |  | [–100, 100] | 0 |
6 |  | [–100, 100] | 0 |
7 |  | [–2, 2] | 0 |
8 |  | [–10, 10] | 0 |
9 |  | [–10, 10] | 0 |
10 |  | [–1, 1] | 0 |
11 |  | [–6, 6] | 0 |
12 |  | [–10, 10] | –30 (for n = 5) |
13 |  | [–25, 25] | 0 |
14 |  | [–5, 5] | 0 |
15 |  | [–2, 2] | 0 |
16 |  | [–10, 10] | 0 |
17 |  | [–3, 3] | 0 |
18 |  | [–15, 15] | –50.0929 (for n = 5) |
Fig. 3. Graphs of the test functions in the two-dimensional search space (image not available).
The algorithm was implemented in MATLAB R2022b. The program is available at https://cloud.tusur.ru/index.php/s/395znYyx87rRoDP. The experiment was carried out on an Intel Core i7-12700 processor with 8 GB RAM under Windows 10.
Discretization of Continuous Values
Since the algorithm is discrete and operates with binary solution vectors, the components of which take the value 0 or 1, real values are encoded with a binary vector. The procedure for converting a binary solution vector into values of real variables is illustrated in Fig. 4. This procedure is executed whenever the algorithm needs to evaluate the objective function. The number of variables in the experiment was 5, while the number of bits for encoding the value of each variable was 15. Thus, the size of a binary solution vector was n = 5 × 15 = 75 components. The number of discrete values that each variable can take was 2^15. These values were found by uniform quantization over the search range of a variable, with the discretization step Δh = (Rmax – Rmin)/(2^15 – 1), where Rmin and Rmax are the lower and upper bounds of the range of a variable, respectively. In fact, the binary value of a variable is the binary representation of the ordinal number of a discrete value on [Rmin, Rmax] with discretization step Δh.
If variable x is encoded by binary vector [b1, …, b15], then the real value of this variable is determined as x = Rmin + Δh·d, where d is the decimal value of the binary number b1b2…b15.
Fig. 4. Converting a binary solution vector into a continuous vector to evaluate the objective function (image not available).
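A sketch of this decoding in Python; the bit order within a variable (most significant bit first) is an assumption, since it is not specified above:

    def decode_variable(bits, r_min, r_max):
        # Map a binary vector to a real value on [r_min, r_max] by uniform quantization.
        dh = (r_max - r_min) / (2 ** len(bits) - 1)     # discretization step
        d = int("".join(str(b) for b in bits), 2)       # ordinal number of the discrete value
        return r_min + dh * d

    def decode_solution(x, bounds, bits_per_var=15):
        # Split a 0/1 solution vector into chunks of bits_per_var and decode each chunk.
        return [decode_variable(x[i * bits_per_var:(i + 1) * bits_per_var], lo, hi)
                for i, (lo, hi) in enumerate(bounds)]

    # Example: five variables on [-100, 100] encoded with 75 binary components in total.
    values = decode_solution([0, 1] * 37 + [1], [(-100.0, 100.0)] * 5)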
Performance Criteria
To evaluate the performance of the algorithm, two criteria were used [47]. The first one evaluates the convergence of the algorithm and is defined as the average deviation of the found target values from the known optimum:

E = (1/nrun) Σ(i = 1, …, nrun) |fi – f*|,

where nrun is the number of algorithm runs, fi is the objective function value found by the algorithm in the ith run, and f* is the known optimum of the objective function. The second criterion evaluates the stability of the nondeterministic algorithm and is determined by the standard deviation of the found optima:

STD = sqrt((1/nrun) Σ(i = 1, …, nrun) (fi – M)^2),

where M is the mean of the found objective function values, M = (1/nrun) Σ(i = 1, …, nrun) fi.
Lower values of both criteria indicate higher efficiency.
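Both criteria are straightforward to compute from the per-run results; a minimal sketch:

    import numpy as np

    def convergence_E(found, f_opt):
        # Average absolute deviation of the found optima from the known optimum f*.
        found = np.asarray(found, dtype=float)
        return float(np.mean(np.abs(found - f_opt)))

    def stability_STD(found):
        # Standard deviation of the found optima around their mean M.
        found = np.asarray(found, dtype=float)
        return float(np.sqrt(np.mean((found - found.mean()) ** 2)))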
Fig. 5. Convergence curves of the algorithms (image not available).
In addition to the criteria considered above, the paper includes convergence graphs, which make it possible to estimate the convergence rate of stochastic algorithms by plotting criterion E versus the iteration number [28, 44, 45]. The value of E shown in the graphs is averaged over the algorithm runs.
Selecting the Target Value Transformation Function
The function for transformation of target values into solution weights was selected among the following candidates: linear function TL, sigmoid function TS, quadratic function TQ, and hyperbolic tangent TTh.
The PDT algorithm with different transformation functions was used to find the optima of the test functions, with 30 runs per function. The resulting values of the performance criteria are shown in Table 2.
Table 2. Performance of the PDT algorithm with different transformation functions
f | TL | TS | TQ | TTh | ||||
|---|---|---|---|---|---|---|---|---|
E | STD | E | STD | E | STD | E | STD | |
f1 | 0.000047 | 0.000000 | 0.000049 | 0.000014 | 0.000047 | 0.000000 | 0.000059 | 0.000034 |
f2 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
f3 | 0.402924 | 0.231313 | 0.454002 | 0.280296 | 0.478528 | 0.240684 | 0.498614 | 0.281871 |
f4 | 0.005653 | 0.004306 | 0.006070 | 0.004650 | 0.005624 | 0.003659 | 0.005170 | 0.003590 |
f5 | 0.041098 | 0.096504 | 0.030112 | 0.026176 | 0.042929 | 0.140588 | 0.037639 | 0.042328 |
f6 | 0.015259 | 0.000000 | 0.015259 | 0.000000 | 0.015666 | 0.002229 | 0.015463 | 0.001114 |
f7 | 2.629031 | 1.500038 | 2.511857 | 1.363161 | 3.214144 | 1.576402 | 3.036793 | 1.573228 |
f8 | 0.000001 | 0.000000 | 0.000001 | 0.000000 | 0.000001 | 0.000000 | 0.000003 | 0.000003 |
f9 | 0.885491 | 0.863594 | 0.850277 | 0.472212 | 0.803704 | 0.485206 | 0.835841 | 0.739384 |
f10 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
f11 | 0.000001 | 0.000000 | 0.000001 | 0.000000 | 0.000001 | 0.000000 | 0.000001 | 0.000002 |
f12 | 0.239046 | 0.613565 | 0.870265 | 1.432694 | 0.885326 | 2.910895 | 0.362761 | 0.898002 |
f13 | 0.119874 | 0.040684 | 0.149874 | 0.062972 | 0.163207 | 0.071839 | 0.123821 | 0.056141 |
f14 | 0.057948 | 0.077372 | 0.058916 | 0.059387 | 0.053028 | 0.053763 | 0.033397 | 0.046236 |
f15 | 0.331668 | 0.603435 | 0.199166 | 0.404835 | 0.398790 | 0.618058 | 0.099557 | 0.303747 |
f16 | 0.036349 | 0.016301 | 0.028747 | 0.014743 | 0.029202 | 0.015521 | 0.034756 | 0.019396 |
f17 | 0.000374 | 0.000041 | 0.000374 | 0.000041 | 0.000367 | 0.000000 | 0.000389 | 0.000069 |
f18 | 0.407336 | 1.540216 | 0.405125 | 1.540966 | 0.610862 | 1.858916 | 0.000837 | 0.003885 |
It was noted in [47] that, for better performance evaluation of evolutionary algorithms, it is necessary to conduct statistical tests. It is not sufficient to compare algorithms based on the E and STD values alone [48]; a statistical test should be performed to prove that the proposed algorithm provides a significant improvement over its competitors.
To estimate whether the results of the algorithm differ statistically significantly across the transformation functions, the nonparametric Friedman test with significance level α = 0.05 was used. Asymptotic p-values below 0.05 can be considered strong evidence against the null hypothesis H0 [47].
The Friedman test with multiple comparisons did not reject the null hypothesis for either criterion. Here, hypothesis H0 states that there are no significant differences between the algorithm variants with different transformation functions. For the E criterion, p-value = 0.757; for the STD criterion, p-value = 0.590. Thus, the choice among the transformation functions considered above does not significantly affect the performance of the algorithm. In the following discussion, the linear function is used.
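Such a comparison can be reproduced with standard tools, e.g., scipy.stats.friedmanchisquare, which takes one sample of per-function criterion values for each variant. A sketch using only the first three rows of the E columns of Table 2 (the test above uses all 18 functions):

    from scipy.stats import friedmanchisquare

    # E values of test functions f1-f3 for each transformation (Table 2, truncated).
    E_TL = [0.000047, 0.000000, 0.402924]
    E_TS = [0.000049, 0.000000, 0.454002]
    E_TQ = [0.000047, 0.000000, 0.478528]
    E_TTh = [0.000059, 0.000000, 0.498614]

    stat, p_value = friedmanchisquare(E_TL, E_TS, E_TQ, E_TTh)
    print(f"Friedman statistic = {stat:.3f}, p-value = {p_value:.3f}")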
Experimental Settings
The performance of the PDT algorithm was evaluated in comparison with popular optimization algorithms: the genetic algorithm (GA) and binary particle swarm optimization (BPSO). The algorithms were run under the same conditions with the following settings: a population size of 30, 100 iterations, five variables, 15 bits per variable, and 30 algorithm runs per test function. The specific GA and BPSO parameters were set as recommended in [28, 45] (see Table 3).
Table 3. Algorithm settings
Algorithm | Parameter | Value |
|---|---|---|
PDT | Transformation function T | Linear TL |
Mutation probability pa | 0.05 | |
GA | Selection | Roulette wheel |
Crossover (probability) | One-point (0.9) | |
Mutation (probability) | Uniform (0.005) | |
BPSO | C1 and C2 | 2 and 2 |
Inertia weight W | Linearly decreases from 0.9 to 0.4 | |
Maximum velocity | 6 | |
Transformation function | V-shaped |
Experimental Results
As a result of the experiment, the values of the performance criteria were obtained for each algorithm. These values are shown in Table 4; the last row of the table contains the averages over all test functions. Figure 5 shows the convergence graphs of the algorithms. The graphs use a logarithmic scale for the convergence-criterion axis, which makes it easier to track the convergence rate.
Table 4. Algorithm efficiency estimates
GA | BPSO | PDT | ||||
|---|---|---|---|---|---|---|
E | STD | E | STD | E | STD | |
f1 | 0.005349 | 0.028272 | 27.845964 | 37.522025 | 0.000047 | 0.000000 |
f2 | 0.000000 | 0.000000 | 0.008293 | 0.018114 | 0.000000 | 0.000000 |
f3 | 0.440548 | 0.254233 | 0.799580 | 0.513039 | 0.402924 | 0.231313 |
f4 | 0.021143 | 0.046160 | 0.067826 | 0.054019 | 0.005653 | 0.004306 |
f5 | 0.649841 | 1.354239 | 5.634938 | 3.111477 | 0.041098 | 0.096504 |
f6 | 0.016683 | 0.004988 | 4.112874 | 3.333243 | 0.015259 | 0.000000 |
f7 | 3.113280 | 2.491653 | 4.288055 | 2.188550 | 2.629031 | 1.500038 |
f8 | 0.000009 | 0.000025 | 0.804053 | 0.875332 | 0.000001 | 0.000000 |
f9 | 10.522179 | 24.073438 | 1.827357 | 1.320775 | 0.885491 | 0.863594 |
f10 | 0.000000 | 0.000000 | 0.000166 | 0.000383 | 0.000000 | 0.000000 |
f11 | 0.000004 | 0.000008 | 0.303480 | 0.574945 | 0.000001 | 0.000000 |
f12 | 0.760543 | 1.068967 | 0.571626 | 0.780258 | 0.239046 | 0.613565 |
f13 | 0.457784 | 0.259707 | 0.418610 | 0.159854 | 0.119874 | 0.040684 |
f14 | 0.072918 | 0.062301 | 0.054331 | 0.053543 | 0.057948 | 0.077372 |
f15 | 1.227600 | 1.001766 | 1.408564 | 0.734589 | 0.331668 | 0.603435 |
f16 | 0.042970 | 0.016959 | 0.061572 | 0.021475 | 0.036349 | 0.016301 |
f17 | 0.000583 | 0.000525 | 0.357644 | 0.145604 | 0.000374 | 0.000041 |
f18 | 4.935197 | 6.147513 | 3.989981 | 3.238601 | 0.407336 | 1.540216 |
Average | 1.24 | 2.05 | 2.92 | 3.04 | 0.29 | 0.31 |
DISCUSSION OF THE RESULTS
To determine whether the performance of the proposed algorithm differs significantly from that of its competitors, the pairwise Wilcoxon test [47] is used. The null hypothesis H0 of this test states that there is no significant difference in the performance of the compared algorithms. Table 5 shows the comparison results; a sketch of how such a test is computed is given after the table. When comparing PDT with GA and BPSO, the p-values for the E and STD criteria are below 0.01, and the sum of negative ranks prevails over the sum of positive ranks. This suggests that the criteria values of the PDT algorithm are statistically significantly lower than those of GA and BPSO at significance level α = 0.01.
Table 5. Statistical comparison of the algorithms
PDT–GA | PDT–BPSO | |||
|---|---|---|---|---|
E | STD | E | STD | |
p-value | <0.01 | <0.01 | <0.01 | <0.01 |
Hypothesis H0 | rejected | rejected | rejected | rejected |
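The pairwise test can be sketched with scipy.stats.wilcoxon on paired per-function values; below, four E values of PDT and GA from Table 4 are used for illustration (the test above pairs all 18 functions):

    from scipy.stats import wilcoxon

    # E values for f3, f4, f5, and f7 (Table 4, truncated): PDT vs. GA.
    E_PDT = [0.402924, 0.005653, 0.041098, 2.629031]
    E_GA = [0.440548, 0.021143, 0.649841, 3.113280]

    stat, p_value = wilcoxon(E_PDT, E_GA)
    print(f"Wilcoxon statistic = {stat:.1f}, p-value = {p_value:.3f}")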
The analysis of Fig. 5 suggests that, in the initial iterations, the convergence rate of the BPSO algorithm on tests f6, f7, f9, f15, and f16 is higher than that of the other algorithms; however, starting from approximately the 15th iteration, it decreases. Overall, the algorithm based on probability distributions with transformation of target values converges faster than its competitors.
CONCLUSIONS
The proposed discrete algorithm based on probability distributions with transformation of target values provides a statistically significant improvement in terms of convergence and stability (deviation from the optimum and standard deviation of target values) over the genetic algorithm and binary particle swarm optimization. For the experiment, 18 unimodal and multimodal test functions were used. On average, the deviation from the optimum is decreased by a factor of 4.3 as compared to GA and by a factor of 10.1 as compared to BPSO. The results confirm the efficiency of the proposed algorithm in solving binary optimization problems.
ACKNOWLEDGMENTS
The author expresses his deep gratitude to Ilya Aleksandrovich Hodashinsky, Professor at the Tomsk State University of Control Systems and Radioelectronics, for discussion of this paper and a number of valuable suggestions and remarks.
FUNDING
This work was supported by the Russian Science Foundation, grant no. 24-21-00168 (https://rscf.ru/project/24-21-00168).
CONFLICT OF INTEREST
The author of this work declares that he has no conflicts of interest.
Translated by Yu. Kornienko
Publisher’s Note.
Pleiades Publishing remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
AI tools may have been used in the translation or editing of this article.
REFERENCES
1 Aly, R.H.M.; Rahouma, K.H.; Hamed, H.F. Brain tumors diagnosis and prediction based on applying the learning metaheuristic optimization techniques of particle swarm, ant colony and bee colony. Proc. Comput. Sci.; 2019; 163, pp. 165-179. [DOI: https://dx.doi.org/10.1016/j.procs.2019.12.098]
2 Hodashinsky, I.A., Smirnova, I.N., Bardamova, M.B., Sarin, K.S., Svetlakov, M.O., Zaitsev, A.A., Titskaya, E.V., Tonkoshkurova, A.V., Antipova, I.I., Khodashinskaya, A.I., and Zaripova, T.N., Method for finding subsets of consistent features when predicting the effectiveness of rehabilitation of patients after coronavirus infection, Sib. Zh. Klin. Eksp. Med., vol. 38, no. 4, pp. 270–279.
3 Phogat, M.; Kumar, D. Classification of complex diseases using an improved binary cuckoo search and conditional mutual information maximization. Comput. Sist.; 2020; 24, pp. 1121-1129.
4 Houssein, E.H., Ibrahim, I.E., Neggaz, N., Hassaballah, M., and Wazery, Y.M., An efficient ECG arrhythmia classification method based on Manta ray foraging optimization, Expert Syst. Appl., 2021, vol. 181, p. 115131.
5 Aytimur, A. and Babayigit, B., Binary artificial bee colony algorithms for {0-1} advertisement problem, Proc. 6th Int. Conf. Electrical and Electronics Engineering (ICEEE), Istanbul, 2019, pp. 91–95.
6 Mohammadzadeh, A.; Masdari, M.; Gharehchopogh, F.S.; Jafarian, A. Improved chaotic binary grey wolf optimization algorithm for workflow scheduling in green cloud computing. Evol. Intell.; 2021; 14, pp. 1997-2025. [DOI: https://dx.doi.org/10.1007/s12065-020-00479-5]
7 Pirozmand, P.; Ebrahimnejad, A.; Alrezaamiri, H.; Motameni, H. A novel approach for the next software release using a binary artificial algae algorithm. J. Intell. Fuzzy Syst.; 2021; 40, pp. 5027-5041. [DOI: https://dx.doi.org/10.3233/JIFS-201759]
8 Almonacid, B.; Aspee, F.; Soto, R.; Crawford, B.; Lama, J. Solving the manufacturing cell design problem using the modified binary firefly algorithm and the Egyptian vulture optimization algorithm. IET Software; 2017; 11, pp. 105-115. [DOI: https://dx.doi.org/10.1049/iet-sen.2016.0196]
9 Hodashinsky, I.A.; Sarin, K.S. Feature selection for classification through population random search with memory. Autom. Remote Control; 2019; 80, pp. 324-333.MathSciNet ID: 3932921[DOI: https://dx.doi.org/10.1134/S0005117919020103]
10 Hodashinsky, I.A.; Sarin, K.S. Feature selection: Comparative analysis of binary metaheuristics and population based algorithm with adaptive memory. Program. Comput. Software; 2019; 45, pp. 221-227. [DOI: https://dx.doi.org/10.1134/S0361768819050037]
11 Sarin, K.; Hodashinsky, I.; Slezkin, A. Feature selection and identification of fuzzy classifiers based on the cuckoo search algorithm. Commun. Comput. Inf. Sci.; 2018; 934, pp. 22-34.
12 El-Dakroury, H.E.D.M., Gad, A., and Abdelaziz, A.Y., Load restoration in primary distribution networks using the binary particle swarm optimization, Proc. IEEE Electrical Power and Energy Conf. (EPEC), Ottawa, 2016, pp. 1–6.
13 Xiong, G.; Shi, D.; Zhang, J.; Zhang, Y. A binary coded brain storm optimization for fault section diagnosis of power systems. Electr. Power Syst. Res.; 2018; 163, pp. 441-451. [DOI: https://dx.doi.org/10.1016/j.epsr.2018.07.009]
14 Dahi, Z.A.E.M.; Mezioud, C.; Draa, A. A 0-1 bat algorithm for cellular network optimization: A systematic study on mapping techniques. Int. J. Reasoning-Based Intell. Syst.; 2017; 9, pp. 22-42.
15 Hussien, A.G.; Hassanien, A.E.; Houssein, E.H.; Amin, M.; Azar, A.T. New binary whale optimization algorithm for discrete optimization problems. Eng. Optim.; 2020; 52, pp. 945-959.MathSciNet ID: 4095556[DOI: https://dx.doi.org/10.1080/0305215X.2019.1624740]
16 Mourad, K.; Boudour, R. A modified binary firefly algorithm to solve hardware/software partitioning problem. Informatica; 2021; 45, pp. 1-12. [DOI: https://dx.doi.org/10.31449/inf.v45i7.3408]
17 Kennedy, J. and Eberhart, R.C., A discrete binary version of the particle swarm algorithm, Proc. IEEE Int. Conf. Systems, Man, and Cybernetics, Computational Cybernetics and Simulation, Orlando, USA, 1997, vol. 5, pp. 4104–4108.
18 Serigne, G.; Philippe, M. A linearization framework for unconstrained quadratic (0–1) problems. Discrete Appl. Math.; 2009; 157, pp. 1255-1266.MathSciNet ID: 2502442[DOI: https://dx.doi.org/10.1016/j.dam.2008.01.028]
19 Sherali, H.D.; Driscoll, P.J. Evolution, and state-of-the-art in integer programming. J. Comput. Appl. Math.; 2000; 124, pp. 319-340.MathSciNet ID: 1803307[DOI: https://dx.doi.org/10.1016/S0377-0427(00)00431-3]
20 Hodashinsky, I.A. Methods for improving the efficiency of swarm optimization algorithms. A survey. Autom. Remote Control; 2021; 82, pp. 935-967.MathSciNet ID: 4442354[DOI: https://dx.doi.org/10.1134/S0005117921060011]
21 Handbook of Metaheuristics, Gendreau, M. and Potvin, J.-Y., Eds., Springer, 2019.
22 Karpenko, A.P. Methods for improving the efficiency of metaheuristic algorithms for global optimization. Mat. Metody Tekhn. Tekhnol.; 2017; 1, pp. 77-83.
23 Kureichik, V.V.; Rodzin, S.I. Fauna-inspired bioheuristics (survey). Inf. Tekhnol.; 2023; 29, pp. 559-573.
24 Pardalos, P.M.; Rasskazova, V.; Vrahatis, M.N. Black Box Optimization, Machine Learning, and No-Free Lunch Theorems; 2021;
25 Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput.; 1997; 1, pp. 67-82. [DOI: https://dx.doi.org/10.1109/4235.585893]
26 Becerra-Rozas, M.; Lemus-Romani, J.; Cisternas-Caneo, F.; Crawford, B.; Soto, R.; Astorga, G.; Castro, C.; Garcia, J. Continuous metaheuristics for binary optimization problems: An updated systematic literature review. Mathematics; 2022; 11, 129. [DOI: https://dx.doi.org/10.3390/math11010129]
27 Bardamova, M.B.; Buimov, A.G.; Tarasenko, V.F. Methods for adapting the leapfrog algorithm to a binary search space when solving the feature selection problem. Dokl. Tomsk. Gos. Univ. Sist. Upr. Radioelektron.; 2020; 23, pp. 57-62.
28 Mirjalili, S.; Lewis, A. S-shaped versus V-shaped transfer functions for binary particle swarm optimization. Swarm Evol. Comput.; 2013; 9, pp. 1-14. [DOI: https://dx.doi.org/10.1016/j.swevo.2012.09.002]
29 Turkoglu, B., Uymaz, S.A., and Kaya, E., Binary artificial algae algorithm for feature selection, Appl. Soft Comput., 2022, vol. 120, p. 108630.
30 Pashaei, E.; Pashaei, E. An efficient binary chimp optimization algorithm for feature selection in biomedical data. Neural Comput. Appl.; 2022; 34, pp. 6427-6451. [DOI: https://dx.doi.org/10.1007/s00521-021-06775-0]
31 Jain, S.; Dharavath, R. Memetic salp swarm optimization algorithm based feature selection approach for crop disease detection. J. Ambient Intell. Humanized Comput.; 2023; 14, pp. 1817-1835. [DOI: https://dx.doi.org/10.1007/s12652-021-03406-3]
32 Mohd Yusof, N.; Muda, A.K.; Pratama, S.F.; Abraham, A. A novel nonlinear time-varying sigmoid transfer function in binary whale optimization algorithm for descriptors selection in drug classification. Molecular Diversity; 2023; 27, pp. 71-80. [DOI: https://dx.doi.org/10.1007/s11030-022-10410-y]
33 Merikhi, B.; Soleymani, M. Automatic data clustering framework using nature-inspired binary optimization algorithms. IEEE Access; 2021; 9, pp. 93703-93722. [DOI: https://dx.doi.org/10.1109/ACCESS.2021.3091397]
34 Zhong, C.; Chen, Y.; Peng, J. Feature selection based on a novel improved tree growth algorithm. Int. J. Comput. Intell. Syst.; 2020; 13, pp. 247-258. [DOI: https://dx.doi.org/10.2991/ijcis.d.200219.001]
35 Pandey, A.C.; Rajpoot, D.S.; Saraswat, M. Feature selection method based on hybrid data transformation and binary binomial cuckoo search. J. Ambient Intell. Humanized Comput.; 2020; 11, pp. 719-738. [DOI: https://dx.doi.org/10.1007/s12652-019-01330-1]
36 Yepes, V.; Marti, J.V.; Garcia, J. Black hole algorithm for sustainable design of counterfort retaining walls. Sustainability; 2020; 12, 2767. [DOI: https://dx.doi.org/10.3390/su12072767]
37 Lai, X., Hao, J.K., Fu, Z.H., and Yue, D., Diversity-preserving quantum particle swarm optimization for the multidimensional knapsack problem, Expert Syst. Appl., 2020, vol. 149, p. 113310.
38 Barani, F.; Mirhosseini, M.; Nezamabadi-Pour, H. Application of binary quantum-inspired gravitational search algorithm in feature subset selection. Appl. Intell.; 2017; 47, pp. 304-318. [DOI: https://dx.doi.org/10.1007/s10489-017-0894-3]
39 Ross, O.H.M. A review of quantum-inspired metaheuristics: Going from classical computers to real quantum computers. IEEE Access; 2019; 8, pp. 814-838. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2962155]
40 Shreem, S.S.; Turabieh, H.; Al Azwari, S.; Baothman, F. Enhanced binary genetic algorithm as a feature selection to predict student performance. Soft Comput.; 2022; 26, pp. 1811-1823. [DOI: https://dx.doi.org/10.1007/s00500-021-06424-7]
41 Nicolau, M., Application of a simple binary genetic algorithm to a noiseless testbed benchmark, Proc. 11th Annu. Conf. Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, 2009, pp. 2473–2478.
42 Haupt, R.L.; Haupt, S.E. Practical Genetic Algorithms; 2004;
43 Ghosh, M.; Guha, R.; Alam, I.; Lohariwal, P.; Jalan, D.; Sarkar, R. Binary genetic swarm optimization: A combination of GA and PSO for feature selection. J. Intell. Syst.; 2019; 29, pp. 1598-1610.
44 Bas, E.; Ulker, E. A binary social spider algorithm for continuous optimization task. Soft Comput.; 2020; 24, pp. 12953-12979. [DOI: https://dx.doi.org/10.1007/s00500-020-04718-w]
45 Mirjalili, S.; Mirjalili, S.M.; Yang, X.-S. Binary bat algorithm. Neural Comput. Appl.; 2014; 25, pp. 663-681. [DOI: https://dx.doi.org/10.1007/s00521-013-1525-5]
46 Pan, J.-S., Hu, P., and Chu, S.-C., Binary fish migration optimization for solving unit commitment, Energy, 2021, vol. 226, p. 120329.
47 Derrac, J.; Garcia, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput.; 2011; 1, pp. 3-18. [DOI: https://dx.doi.org/10.1016/j.swevo.2011.02.002]
48 Garcia, S.; Molina, D.; Lozano, M.; Herrera, F. A study on the use of non-parametric tests for analyzing the evolutionary algorithms’ behaviour: A case study on the CEC'2005 special session on real parameter optimization. J. Heuristics; 2009; 15, pp. 617-644.
ISSN 0361-7688, Programming and Computer Software, 2024, Vol. 50, No. 6, pp. 445–456. © Pleiades Publishing, Ltd., 2024. Russian Text © The Author(s), 2024, published in Programmirovanie, 2024, Vol. 50, No. 6.