1. Introduction
Concrete is one of the world's most widely used building materials: it is inexpensive, performs well, and is simple to produce, and it has therefore been applied extensively in industrial and civil construction [1,2,3,4,5,6]. Over time, more and more infrastructure sectors have made concrete their material of first choice [7,8]. During pouring, however, defects such as cavities and poor compaction often form inside the concrete. These defects reduce the strength, compactness, frost resistance, impermeability, and other properties of the concrete, shorten the service life of concrete structures to some extent, and may even threaten the safe operation of buildings [9,10,11,12]. Cement is an essential constituent of concrete, but its production emits a large amount of carbon and therefore places a burden on the environment [13,14,15]. As the use of concrete has expanded, its environmental impact has attracted increasing attention [16,17,18,19]. From the standpoint of sustainable development, replacing part of the cement with green materials is an urgent way to mitigate the environmental pollution caused by cement production [20,21,22,23,24,25,26].
Blast furnace slag is an industrial waste slag discharged from blast furnaces during pig-iron smelting, and it contains a large amount of active substances [27,28,29,30]. Having recognized its value, researchers began to study its application and to use it as a supplementary cementitious material that replaces part of the cement in concrete, thereby alleviating the environmental pollution caused by cement production and improving concrete performance [29,30,31,32]. Shi et al. studied the effect of blast furnace slag fine aggregate produced by three different steel mills on the mechanical properties of high-performance concrete and found that, at a lower water–cement ratio, concrete containing blast furnace slag fine aggregate achieved a higher compressive strength [33]. Cvetkovic et al. proposed an adaptive network-based fuzzy inference system (ANFIS) to study the influence of blast furnace slag and fly ash on concrete strength; the results showed that adding blast furnace slag and fly ash is beneficial to strength and that curing time has the greatest influence on it [34]. Zhao et al. studied the fresh and mechanical properties of self-compacting concrete containing ground blast furnace slag and hooked-end steel fiber. Replacing 10% of the cement with slag had a positive effect on the workability of the fresh mixture, whereas replacing 30% of the cement reduced the passing and filling capacity of the fresh concrete. Blast furnace slag also improved the bonding properties of the fiber–matrix interface and thereby the mechanical properties of the self-compacting concrete [35]. Liu et al. studied the combined addition of steel slag and blast furnace slag to mortar and concrete and found that it may reduce the early compressive strength of concrete but promotes strength development over time, and that it benefits the autogenous shrinkage behavior of the concrete and reduces the adiabatic temperature rise; these effects are more pronounced at a low water–solid ratio [36].
Engineers usually study the performance of concrete through laboratory tests. However, laboratory testing has several disadvantages, such as low efficiency and high cost [37,38,39,40,41,42,43]. To predict the performance of concrete more efficiently and at lower cost, many researchers have turned to machine learning models [44,45,46,47,48,49,50,51]. Salimbahrami et al. studied compressive strength prediction for recycled concrete based on the artificial neural network (ANN) and the support vector machine (SVM) and showed that machine learning models predict the compressive strength of recycled concrete well [52]. Al-Shamiri et al. established a prediction model for the compressive strength of high-performance concrete (HPC) using the regularized extreme learning machine (RELM) technique and compared it with other machine learning models; the RELM model predicted the compressive strength of HPC more accurately [53]. Wang et al. proposed a model combining random forest and support vector machine (RF-SVM) to predict the impermeability of concrete and compared it with a BP neural network model and a single SVM model; the RF-SVM model achieved better prediction and fitting performance [54]. Nilsen et al. used linear regression and random forest machine learning methods to predict the coefficient of thermal expansion (CTE) of concrete, thereby avoiding time-consuming and expensive CTE measurements, and achieved good prediction accuracy [55]. The machine learning models above have all achieved good results in concrete performance prediction [56,57,58,59,60,61,62,63,64,65,66,67,68]. However, most studies consider only the prediction performance of the single model they propose; few compare several machine learning models and select the one with the best prediction performance for the concrete property of interest.
Strength is an important index of the quality of concrete containing blast furnace slag: to guarantee quality, the concrete must reach a certain strength, and compressive strength is the most critical mechanical index determined by the material composition. To predict the compressive strength of concrete with blast furnace slag more efficiently and economically, this study first uses the beetle antennae search (BAS) algorithm, chosen for its simple implementation, fast convergence, and low risk of falling into local optima thanks to its step-size strategy [69], to tune the hyperparameters of the BPNN, SVM, DT, RF, KNN, LR, and MLR models. The prediction performance of these seven models on the compressive strength of concrete with blast furnace slag is then compared, and the model with the best prediction performance is selected.
2. Methodology
2.1. Data Collection
In the past, many researchers focused only on developing new models for predicting the performance of concrete while neglecting the importance of a reliable database for verifying the accuracy of the developed models. In this study, the data set on the compressive strength of concrete is collected from a published source, forming a reliable database [70]. Cement, water, blast furnace slag, coarse aggregate, fine aggregate, and superplasticizer are the input variables, and the compressive strength of concrete is the output variable. The frequency distribution histogram of each variable in the database is shown in Figure 1. The histograms of water, coarse aggregate, and superplasticizer are unimodal, while those of fine aggregate and concrete compressive strength are bimodal. In short, the frequency distribution histograms of these seven variables show that the data of each variable are reasonably distributed and cover a wide range; that is, the database can be expected to give good results when used to predict the compressive strength of concrete.
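For illustration, frequency distribution histograms such as those in Figure 1 can be reproduced from a database of this kind with a short script like the sketch below; the file name and column names are hypothetical placeholders rather than the actual labels of the collected data set.

```python
# Minimal sketch (hypothetical file and column names) for inspecting the database
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv("concrete_slag_data.csv")   # assumed CSV with one column per variable
columns = ["cement", "water", "blast_furnace_slag", "coarse_aggregate",
           "fine_aggregate", "superplasticizer", "compressive_strength"]

fig, axes = plt.subplots(3, 3, figsize=(12, 9))
for ax, col in zip(axes.ravel(), columns):
    ax.hist(data[col], bins=20, edgecolor="black")   # frequency distribution histogram
    ax.set_title(col)
for ax in axes.ravel()[len(columns):]:
    ax.axis("off")                                   # hide the unused panels
plt.tight_layout()
plt.show()
```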
2.2. Correlation Analysis
To prevent multicollinearity in the machine learning models used to predict the compressive strength of concrete, the correlation between the input variables must be analyzed before the models are trained. The correlation analysis results for cement, water, blast furnace slag, coarse aggregate, superplasticizer, and fine aggregate are shown in Figure 2. The correlation coefficients on the diagonal are 1, while those in all other positions are less than 0.6; that is, each variable is perfectly correlated with itself, and the correlation coefficients between different variables are below 0.6. These results show that the correlation among cement, water, blast furnace slag, coarse aggregate, superplasticizer, and fine aggregate is low. Therefore, using them as input variables to predict the compressive strength of concrete will not degrade the prediction through multicollinearity.
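A minimal sketch of this correlation check is given below; it assumes the hypothetical data file from the previous snippet and uses the Pearson coefficient, which the study does not explicitly name.

```python
# Pairwise correlation between the six input variables (Pearson, as an assumption)
import numpy as np
import pandas as pd

inputs = ["cement", "water", "blast_furnace_slag",
          "coarse_aggregate", "superplasticizer", "fine_aggregate"]
data = pd.read_csv("concrete_slag_data.csv")        # hypothetical file name
corr = data[inputs].corr(method="pearson")
print(corr.round(2))

# Check that every off-diagonal coefficient is below 0.6 in absolute value
off_diag = corr.values[~np.eye(len(inputs), dtype=bool)]
print("no strong pairwise correlation:", bool((np.abs(off_diag) < 0.6).all()))
```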
2.3. Algorithm
2.3.1. Beetle Antennae Search (BAS)
BAS is an efficient intelligent optimization algorithm. Compared with other optimization algorithms, it can optimize without knowing the specific form of the function or its gradient information, so it has the advantages of low computational cost and fast optimization speed. The idea of the BAS algorithm is to realize optimization by simulating the foraging behavior of a beetle. The algorithm treats the fitness function value as the concentration of a food odor, so the function takes different values at different positions. The beetle finds the optimum by comparing the odor concentrations received by its left and right antennae until it locates the food; that is, the algorithm finds the optimal function value through repeated iterations and comparisons. The optimization steps of the BAS algorithm are as follows:
(1). Determine the random search direction of each beetle. The direction is given by the following formula:

$$\vec{b} = \frac{\mathrm{rnd}(K, 1)}{\left\| \mathrm{rnd}(K, 1) \right\|} \quad (1)$$

where $\mathrm{rnd}(\cdot)$ is a random function and $K$ is the spatial dimension.
(2). Set the step-size factor. The step-size factor determines the search ability of the beetle, so a larger initial step size helps to enlarge the search range. The step-size factor is updated as follows:

$$\delta_t = \eta \, \delta_{t-1}, \quad t = 1, 2, \ldots, n \quad (2)$$

where $\delta_t$ is the step size at iteration $t$, $\eta$ is the decreasing factor ($0 < \eta < 1$), $t$ is the current iteration number, and $n$ is the total number of iterations.
The position coordinates of the two antennae of the beetle are updated by the following formula:

$$x_{l} = x_t + \frac{d_t}{2}\,\vec{b}, \qquad x_{r} = x_t - \frac{d_t}{2}\,\vec{b} \quad (3)$$

where $x_{l}$ is the position of the left antenna, $x_{r}$ is the position of the right antenna, $x_t$ is the position of the beetle centroid at iteration $t$, and $d_t$ is the distance between the two antennae.

The fitness function (mean square error, MSE) is expressed by the odor concentration at the left and right antennae and is computed as follows [71,72]:
$$\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\left(y_i^{*} - y_i\right)^2 \quad (4)$$

where $y_i^{*}$ is the output value of the sample model, $y_i$ is the actual value of the sample, and $N$ is the number of samples. Minimizing the MSE between the predicted and observed outputs during this process evaluates the predictive performance. The fitness values of the two antennae are compared, and the beetle moves toward the antenna with the better fitness value, thereby updating its position. The position update formula is as follows:
$$x_{t+1} = x_t + \delta_t \, \vec{b} \, \mathrm{sign}\!\left(f(x_{r}) - f(x_{l})\right) \quad (5)$$

where $\delta_t$ is the step-size factor of the $t$-th iteration, $f(\cdot)$ is the fitness function, and $\mathrm{sign}(\cdot)$ is the sign function. The pseudocode of the BAS algorithm is shown in Algorithm 1 [5,69].
Algorithm 1. The framework of the BAS algorithm.

Input: fitness function f; dimension of the variables K; decreasing factor η; number of iterations n; initial step-size factor δ₀
Output: optimal solution x_best

1:  Initialize the position of the beetle x₀
2:  Initialize a random orientation of the beetle using Equation (1)
3:  Initialize the iteration counter t = 0
4:  while (t < n) or (the stop criterion is not met) do
5:      Use Equation (3) to calculate the positions of the beetle's antennae
6:      Use Equation (4) to calculate their fitness values
7:      Update the coordinate x_{t+1} using Equation (5)
8:      Calculate its fitness value f(x_{t+1})
9:      if f(x_{t+1}) is better than the current optimum then
10:         Update the current optimal value
11:         Update the current optimal position x_best
12:     end if
13:     t = t + 1; update the step-size factor δ_t using Equation (2)
14: end while
15: Return x_best and its fitness value
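As a non-authoritative illustration of Algorithm 1, the following Python sketch implements Equations (1)–(5) for a generic fitness function; the toy objective and all parameter values are arbitrary choices rather than the settings used in this study.

```python
import numpy as np

def bas_minimize(f, x0, n_iter=100, d0=3.0, step0=1.0, eta=0.95, seed=0):
    """Minimize f with the beetle antennae search described by Eqs. (1)-(5)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    x_best, f_best = x.copy(), f(x)
    d, step = d0, step0
    for _ in range(n_iter):
        b = rng.standard_normal(x.size)
        b /= np.linalg.norm(b) + 1e-12          # Eq. (1): random unit direction
        x_left = x + 0.5 * d * b                # Eq. (3): antenna positions
        x_right = x - 0.5 * d * b
        # Eq. (5): move toward the antenna with the lower fitness (MSE-like) value
        x = x + step * b * np.sign(f(x_right) - f(x_left))
        fx = f(x)                               # Eq. (4): fitness of the new position
        if fx < f_best:
            x_best, f_best = x.copy(), fx
        step *= eta                             # Eq. (2): shrink the step factor
        d = 0.95 * d + 0.01                     # shrink sensing length (common heuristic)
    return x_best, f_best

# Toy usage: minimize a simple quadratic "fitness" function
sol, val = bas_minimize(lambda v: np.sum((v - 2.0) ** 2), x0=np.zeros(3))
print(sol, val)
```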
2.3.2. Backpropagation Neural Network (BPNN)
BPNN is a multilayer feedforward network prediction model trained with the error backpropagation algorithm. Without knowing an explicit mathematical relationship between the input data and the output data, it can learn the mapping from the input layer to the output layer, so that a prediction can be obtained for any given input. The structure of a BPNN consists of an input layer, one or more hidden layers, and an output layer. The influence variables are first fed to the input layer, and the final results are produced by the output layer after the calculations and adjustments of the hidden layers. Next, the error between the predicted value and the actual value is calculated and checked against the specified tolerance. If the error is not within the specified range, backpropagation is used to redistribute the weights, and the cycle is repeated until the error is within the specified range. Once the test error reaches the required accuracy, learning ends and a black-box model is obtained; the testing and prediction of the model are then carried out with this black-box model. The flow chart of BPNN is shown in Figure 3.
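A minimal sketch of such a network, using scikit-learn's MLPRegressor as a stand-in for the BPNN and synthetic data in place of the concrete database, is shown below; the hidden-layer size is simply chosen within the range listed in Table 1.

```python
# Illustrative BPNN-style regressor on synthetic placeholder data
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((200, 6))                        # placeholder for the six mix variables
y = 30 + 40 * X[:, 0] - 10 * X[:, 1] + rng.normal(0, 2, 200)   # synthetic strength

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(10,),  # 1 hidden layer, size within the 1-20 range
                     max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("test R^2:", model.score(X_te, y_te))
```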
2.3.3. Support Vector Machine (SVM)
SVM is a typical supervised learning technique. The basic idea of SVM is to find the maximum-margin hyperplane, i.e., to maximize the distance from the samples of different classes to the classification hyperplane and thereby optimize it. For a linearly separable sample set $\{(x_i, y_i)\}_{i=1}^{n}$, $x_i \in \mathbb{R}^d$, $y_i \in \{-1, +1\}$, the linear discriminant function in $d$-dimensional space is expressed as follows:

$$g(x) = w \cdot x + b \quad (6)$$
The equation of the classification hyperplane is:

$$w \cdot x + b = 0 \quad (7)$$
Next, the discriminant function is normalized so that the samples of the two classes closest to the classification hyperplane satisfy $|g(x)| = 1$. The classification margin is then $2/\|w\|$, and the maximum margin is reached only when $\|w\|$ is at its minimum. To ensure that the classification hyperplane separates all samples correctly, the following condition must be satisfied:

$$y_i \left( w \cdot x_i + b \right) \geq 1, \quad i = 1, 2, \ldots, n \quad (8)$$
The classification hyperplane that minimizes $\tfrac{1}{2}\|w\|^{2}$ while satisfying the above condition is called the optimal hyperplane.
Transforming the samples from a low-dimensional space to a high-dimensional space is one way to solve the linearly inseparable problem. In this way a nonlinear problem can be turned into a linear one, and the optimal classification hyperplane can then be solved. Suppose $\varphi(\cdot)$ is a nonlinear mapping that transforms the input samples from the low-dimensional space into a high-dimensional feature space; the optimal hyperplane can then be constructed through inner-product operations in the high-dimensional space, i.e., $\varphi(x_i) \cdot \varphi(x_j)$, where $\varphi$ itself does not need to be computed explicitly. The inner-product operation in the high-dimensional space can be replaced by a kernel function $K$ that satisfies the following formula:

$$K(x_i, x_j) = \varphi(x_i) \cdot \varphi(x_j) \quad (9)$$
The key to transforming the nonlinear problem into a linear one is to select an appropriate kernel function $K$. The objective function then becomes:

$$Q(\alpha) = \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j y_i y_j K(x_i, x_j) \quad (10)$$
The corresponding classification function is:

$$f(x) = \mathrm{sgn}\!\left( \sum_{i=1}^{n} \alpha_i y_i K(x_i, x) + b \right) \quad (11)$$
When the above expression is used to calculate the classification function, other conditions of the algorithm do not change. In short, the main idea of SVM to construct the optimal hyperplane in high-dimensional space is to transform the input vector into high-dimensional feature space mapping through the kernel function.
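The sketch below shows a linear-kernel support vector regressor (consistent with the kernel listed in Table 1) fitted to synthetic placeholder data; it illustrates the idea only and is not the configuration used in the study.

```python
# Illustrative SVM regressor with a linear kernel on synthetic placeholder data
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((200, 6))
y = 30 + 40 * X[:, 0] - 10 * X[:, 1] + rng.normal(0, 2, 200)

svm = make_pipeline(StandardScaler(), SVR(kernel="linear", C=1.0, tol=1e-3))
svm.fit(X, y)
print("training R^2:", svm.score(X, y))
```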
2.3.4. Decision Tree (DT)
The DT is a common prediction method in machine learning. It is a typical classification method that generates partition rules through the continuous logical induction of training data sets, and its main forms are binary tree and multiway tree. The construction of a decision tree algorithm mainly includes the generation of a decision tree and pruning of a decision tree. Decision tree generation refers to the process of generating decision rules by learning and training sample set data. Pruning of the decision tree mainly refers to using test data set to test, correct, and trim the rules generated by the above decision tree to prevent the over-fitting phenomenon of the decision tree. The selection of characteristic values is the most critical part of the process of DT division, and the effect of inductive classification after the completion of decision tree construction largely depends on the selected feature evaluation method. The DT algorithm is one of the most widely used inference algorithms because of its advantages of high classification accuracy, simple generation mode, and high tolerance of noisy data.
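The following sketch fits a regression tree to synthetic placeholder data; note that for a continuous target the squared-error criterion is used here, whereas the Gini/entropy criteria in Table 1 apply to classification trees, so the hyperparameter values are purely illustrative.

```python
# Illustrative decision tree regressor; max_depth acts as a simple pruning control
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 6))
y = 30 + 40 * X[:, 0] - 10 * X[:, 1] + rng.normal(0, 2, 200)

tree = DecisionTreeRegressor(max_depth=5, min_samples_split=4,
                             min_samples_leaf=2, random_state=0)
tree.fit(X, y)
print("tree depth:", tree.get_depth(), "training R^2:", tree.score(X, y))
```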
2.3.5. Random Forests (RF)
RF is an algorithm that integrates multiple trees through the bagging idea of ensemble learning. RF constructs a large number of decision trees on the training sample set and summarizes their results: if the problem studied is a classification problem, the result is the majority prediction of the classification trees; if the problem studied is a regression problem, the result is the average of all regression trees. Ensemble learning is one of the most common machine learning ideas at present; its advantage is that integrating more models effectively avoids the inherent defects of a single model or a group of models. RF is a typical ensemble learning algorithm. By aggregating multiple decision trees, RF can overcome the low accuracy and over-fitting of a single decision tree. The construction process of RF is as follows:
(1). Assuming that the size of the training set is N, m sample sets are drawn from the training set with replacement (bootstrap sampling), and m regression trees are constructed from the extracted sample sets.
(2). In the process of constructing each regression tree, a random subset of the independent variables, no larger than the total number of variables, is drawn at each node for splitting, and each candidate split is scored to determine the optimal branch.
(3). Each regression tree is branched from top to bottom, and parameters such as the number of RF subtrees, the depth of RF subtrees, the maximum number of subtrees, and the minimum number of subtrees are constantly adjusted during the branching process, to optimize the accuracy of the model.
(4). Summarizing all the generated regression trees to form the RF model, the prediction effect of the model is determined by evaluating the determination coefficient and root mean square error of the test set. If the prediction effect of the model is not satisfactory, the parameters need to be adjusted continuously in the process of random forest modeling until the expected effect is achieved.
The schematic diagram of random forest construction is shown in Figure 4.
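The sketch below mirrors steps (1)–(4) with scikit-learn's RandomForestRegressor on synthetic placeholder data: bootstrap sampling, random feature subsets at each split, and averaging of the trees; the parameter values are illustrative only.

```python
# Illustrative random forest regressor on synthetic placeholder data
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((300, 6))
y = 30 + 40 * X[:, 0] - 10 * X[:, 1] + rng.normal(0, 2, 300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestRegressor(n_estimators=200, max_features="sqrt",
                           bootstrap=True, random_state=0)
rf.fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, rf.predict(X_te)) ** 0.5
print("test RMSE:", round(rmse, 3), "test R^2:", round(rf.score(X_te, y_te), 3))
```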
2.3.6. K-Nearest Neighbor (KNN)
KNN is one of the simplest machine learning methods. Its core idea is that if the k nearest samples of a given sample in the feature space belong to one category, the sample also belongs to that category; that is, the category of an unknown sample can be determined from the categories of the k samples closest to it. In simple terms, the KNN algorithm calculates the distance between the unknown input sample and all sample points and then labels it according to the K samples with the smallest distances. The graphic description of the KNN algorithm is shown in Figure 5.
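A minimal sketch of KNN regression on synthetic placeholder data is given below; the prediction for a new mix is the average strength of its k nearest neighbors in the scaled input space, and k = 5 is an arbitrary choice within the range in Table 1.

```python
# Illustrative KNN regression: the prediction is the average strength of the
# k nearest mixes in the (scaled) input space
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((200, 6))
y = 30 + 40 * X[:, 0] - 10 * X[:, 1] + rng.normal(0, 2, 200)

knn = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5))
knn.fit(X, y)
print("prediction for one new mix:", knn.predict(X[:1]))
```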
2.3.7. Logistic Regression (LR)
LR is a generalized linear regression model with the advantages of strong plasticity, fast calculation speed, and strong generalization ability. The LR algorithm can not only predict the probability of an event occurring under the action of a variety of input variables but can also analyze two opposing outcomes. Compared with SVM, ANN, and other self-optimizing learning algorithms, LR requires less time for training on a data set and for prediction. The binary classification problem is the most important application field of LR; in such problems LR distinguishes only class 0 and class 1, and its probability distribution formulas are as follows:
$$P(Y = 1 \mid x) = \frac{\exp(w \cdot x + b)}{1 + \exp(w \cdot x + b)} \quad (12)$$

$$P(Y = 0 \mid x) = \frac{1}{1 + \exp(w \cdot x + b)} \quad (13)$$

where $w \in \mathbb{R}^{n}$ is the model weight vector, $x \in \mathbb{R}^{n}$ is the input variable, $Y \in \{0, 1\}$ is the output variable, and $b \in \mathbb{R}$ is the bias. In the case of multiple inputs, the weight vector and the input variable of the model are extended as $\hat{w} = (w^{(1)}, w^{(2)}, \ldots, w^{(n)}, b)^{T}$ and $\hat{x} = (x^{(1)}, x^{(2)}, \ldots, x^{(n)}, 1)^{T}$. In this case, the mathematical expression of the LR algorithm is as follows:
$$P(Y = 1 \mid x) = \frac{\exp(\hat{w} \cdot \hat{x})}{1 + \exp(\hat{w} \cdot \hat{x})} \quad (14)$$

$$P(Y = 0 \mid x) = \frac{1}{1 + \exp(\hat{w} \cdot \hat{x})} \quad (15)$$

where $w^{(i)}$ and $x^{(i)}$ represent the $i$-th dimension of the $\hat{w}$ and $\hat{x}$ vectors, respectively. Unifying the above two expressions gives the regression function:
$$P(Y = y \mid x) = \left[ \frac{\exp(\hat{w} \cdot \hat{x})}{1 + \exp(\hat{w} \cdot \hat{x})} \right]^{y} \left[ \frac{1}{1 + \exp(\hat{w} \cdot \hat{x})} \right]^{1-y}, \quad y \in \{0, 1\} \quad (16)$$
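A short numeric illustration of Equations (12)–(16) is given below; the weight and input values are arbitrary and serve only to show how the probabilities are evaluated.

```python
# Evaluate the logistic probabilities of Eqs. (12)-(16) for arbitrary weights
import numpy as np

w_hat = np.array([0.8, -0.5, 1.2])      # extended weights (w1, w2, b), hypothetical values
x_hat = np.array([1.5, 2.0, 1.0])       # extended input (x1, x2, 1)

z = w_hat @ x_hat
p1 = np.exp(z) / (1.0 + np.exp(z))      # Eq. (14): P(Y = 1 | x)
p0 = 1.0 / (1.0 + np.exp(z))            # Eq. (15): P(Y = 0 | x)

y = 1
p_y = p1 ** y * p0 ** (1 - y)           # Eq. (16): unified form
print(round(p1, 4), round(p0, 4), round(p_y, 4), round(p1 + p0, 4))
```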
2.3.8. Multiple Linear Regression (MLR)
Regression analysis is a common statistical method for dealing with correlated variables. Even when the independent and dependent variables are not linked by a strictly deterministic function, regression analysis can still determine the functional relationship between them reasonably well. The conceptual diagram of the regression model is shown in Figure 6.
Regression analysis problems can usually be divided into simple (single-variable) regression and multiple regression. In real life, a phenomenon is often associated with multiple factors, so it is necessary to combine multiple independent variables in an optimal way to jointly predict the dependent variable. Therefore, multiple linear regression is applicable to a wider range of problems than simple regression. The general form of the multiple linear regression problem is as follows:
$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_k x_k + \varepsilon \quad (17)$$

where $\beta_1, \beta_2, \ldots, \beta_k$ are the regression coefficients, $\beta_0$ is the regression constant, $x_1, x_2, \ldots, x_k$ are the independent variables, $y$ is the dependent variable, and $\varepsilon$ is the random error. In the actual problem, if there are $m$ groups of data, the multiple linear regression model can be expressed as follows:

$$y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_k x_{ik} + \varepsilon_i, \quad i = 1, 2, \ldots, m \quad (18)$$

where the error terms must satisfy $E(\varepsilon_i) = 0$, $\mathrm{Var}(\varepsilon_i) = \sigma^2$, $\mathrm{Cov}(\varepsilon_i, \varepsilon_j) = 0$ for $i \neq j$, and $\varepsilon_i \sim N(0, \sigma^2)$. The above formula can be written in matrix form:

$$Y = X\beta + \varepsilon \quad (19)$$

where $Y = (y_1, y_2, \ldots, y_m)^{T}$, $X$ is the $m \times (k+1)$ design matrix whose $i$-th row is $(1, x_{i1}, x_{i2}, \ldots, x_{ik})$, $\beta = (\beta_0, \beta_1, \ldots, \beta_k)^{T}$, and $\varepsilon = (\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_m)^{T}$.
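For Equation (19), the regression coefficients can be estimated by ordinary least squares, $\hat{\beta} = (X^{T}X)^{-1}X^{T}Y$; the sketch below demonstrates this on made-up data.

```python
# Ordinary least squares for the matrix form Y = X*beta + eps of Eq. (19)
import numpy as np

rng = np.random.default_rng(0)
m, k = 50, 3
X_raw = rng.random((m, k))
beta_true = np.array([2.0, 1.5, -0.8, 0.5])            # [beta0, beta1, beta2, beta3]
X = np.column_stack([np.ones(m), X_raw])                # design matrix with intercept column
Y = X @ beta_true + rng.normal(0, 0.1, m)               # add the random error term

beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)        # least-squares estimate of beta
print(np.round(beta_hat, 3))
```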
The hyperparameters of the machine learning models tuned by the BAS algorithm are summarized in Table 1.
3. Results and Discussion
3.1. Hyperparameter Tuning
To optimize the hyperparameters of the BPNN, SVM, DT, RF, KNN, LR, and MLR models, the BAS algorithm is applied to each of the seven machine learning models, and the optimization results are shown in Figure 7. As Figure 7 shows, the RMSE values of the BPNN, SVM, RF, and KNN models first decline rapidly as the number of iterations increases and then stabilize at lower values, with SVM reaching the lowest RMSE. In contrast, although the RMSE values of the DT, LR, and MLR models are already low before iteration, they do not change as the number of iterations increases. In other words, BAS tunes the hyperparameters of the BPNN, SVM, RF, and KNN models effectively, with the strongest effect on SVM, but has no tuning effect on the DT, LR, and MLR models.
To further determine the optimal RMSE values of the above seven machine learning models, this study further tuned their hyperparameters through 10-fold cross-validation; the results are shown in Figure 8. Ten-fold cross-validation is a common method for testing the accuracy of an algorithm: the data set is divided into ten parts, each part is used once as the testing data, and the remaining nine parts are used as the training data. It can be seen from the figures that, after BAS hyperparameter tuning, the RMSE values of the BPNN, SVM, RF, and KNN models are lower; that is, their prediction performance for the compressive strength of concrete with blast furnace slag is better.
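A hedged sketch of such a 10-fold cross-validation step, using scikit-learn with a KNN pipeline and synthetic placeholder data, is shown below; it reports the RMSE of each fold as in Figure 8.

```python
# 10-fold cross-validation of a candidate model, reporting RMSE per fold
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((300, 6))
y = 30 + 40 * X[:, 0] - 10 * X[:, 1] + rng.normal(0, 2, 300)

model = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5))
cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="neg_root_mean_squared_error")
print("RMSE per fold:", np.round(-scores, 3))
print("mean RMSE:", round(-scores.mean(), 3))
```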
3.2. Evaluation of the Model
The box plot of the prediction error of concrete compressive strength by the BPNN, SVM, DT, RF, KNN, LR, and MLR models optimized by BAS is shown in Figure 9. It can be seen that KNN performed the best among all the models, as indicated by the smallest residual values. The DT, BPNN, and MLR models showed large residuals, indicating that they may not be suitable for predicting the compressive strength of the concrete with blast furnace slag. The remaining machine learning models (SVM and RF) showed moderate performance in predicting the mechanical properties of the concrete with blast furnace slag, with a certain level of accuracy.
To select the model with the best prediction performance for the compressive strength of concrete, this study further evaluated the BPNN, SVM, DT, RF, KNN, LR, and MLR models tuned by BAS. The comparison between the predicted and actual values for the training and test sets of the seven models is shown in Figure 10. The BPNN, SVM, RF, and KNN models show high consistency between predicted and actual values, whereas the DT, LR, and MLR models show large differences. The training-set RMSE values of the BPNN, SVM, DT, RF, KNN, LR, and MLR models are 5.8238, 0.2376, 11.1465, 3.4754, 1.0299, 9.642, and 7.5262, respectively, and the corresponding R values are 0.9306, 0.9999, 0.7007, 0.9809, 0.9978, 0.8658, and 0.8765; the test-set RMSE values are 19.8532, 10.968, 9.6954, 6.4661, 6.2801, 9.3101, and 7.4981, respectively, and the corresponding R values are 0.3485, 0.8358, 0.8197, 0.9173, 0.9165, 0.871, and 0.8836. In other words, SVM, RF, and KNN have high R values and low RMSE values in both the training set and the test set, whereas BPNN performs well on the training set but poorly on the test set. This again shows that the SVM, RF, and KNN models predict the compressive strength of concrete well, while the LR and MLR models predict it poorly. Although SVM has the highest R value and the lowest RMSE value in the training set, its R value decreases greatly and its RMSE value increases greatly in the test set; that is, SVM overfits in this study.
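The two metrics can be computed as in the sketch below, assuming that R denotes the correlation coefficient between predicted and actual strengths (a common convention that the paper does not state explicitly); the example values are made up.

```python
# Compute RMSE and the correlation coefficient R between predicted and actual strengths
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def r_value(y_true, y_pred):
    return float(np.corrcoef(y_true, y_pred)[0, 1])    # Pearson R, assumed convention

y_actual = np.array([35.2, 41.0, 28.7, 52.3, 44.1])     # made-up example values (MPa)
y_predicted = np.array([33.9, 42.5, 30.1, 50.8, 45.0])
print("RMSE:", round(rmse(y_actual, y_predicted), 3),
      "R:", round(r_value(y_actual, y_predicted), 3))
```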
Figure 11 is the radar diagram of the R values and RMSE values of the BPNN, SVM, DT, RF, KNN, LR, and MLR models. It shows more intuitively that KNN combines a low RMSE value with a high R value; that is, among the seven machine learning models, KNN predicts the compressive strength of concrete most accurately. This may be because the KNN model makes no assumptions about the data set and is not sensitive to outliers, which can easily appear in concrete design. Moreover, the KNN model relies mainly on a limited number of neighboring samples, rather than on a class-domain discrimination method, to determine the category. Therefore, the KNN model is better suited to sample sets with considerable class-domain overlap, which may well occur in concrete design.
To further analyze the prediction performance of the seven models on the training and test sets, a Monte Carlo simulation was conducted on their RMSE values, and the simulation results are shown in Figure 12. Although the same training and test data were employed for all models, the prediction error shows obvious randomness during the Monte Carlo simulation. A possible reason is that these machine learning algorithms differ greatly in their underlying principles, so the resulting errors are almost statistically unrelated. Moreover, it can be seen from Figure 12 that BPNN has the highest RMSE value in the training set; that is, BPNN tuned by BAS has the worst prediction performance in the training set. Although SVM has the lowest training-set RMSE value, its test-set RMSE value is relatively high; that is, the prediction performance of the BAS-tuned SVM for the compressive strength of concrete is not stable. In contrast, the BAS-tuned KNN has low RMSE values in both the training set and the test set, which again verifies that KNN offers the best prediction of the compressive strength of concrete among the seven machine learning models mentioned above.
3.3. Importance of Variables
Due to the complexity of the compressive strength of concrete, different admixtures affect it differently. The importance scores of the input variables for concrete compressive strength are shown in Figure 13. Cement has the highest importance score (2.7171); that is, among the input variables of this study, cement is the most important factor affecting the compressive strength of concrete. Fine aggregate has the lowest importance score (0.3128); that is, it is the variable with the least influence on the compressive strength of concrete. The importance scores of cement, water, blast furnace slag, coarse aggregate, superplasticizer, and fine aggregate are all positive; that is, the compressive strength of concrete is positively correlated with each of these six input variables.
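The paper does not state how the importance scores were computed; one possible approach is permutation importance on the fitted model, sketched below with a KNN pipeline and synthetic placeholder data.

```python
# One possible way to obtain importance scores (permutation importance with a
# fitted KNN model); the study does not state which method was actually used
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
names = ["cement", "water", "slag", "coarse_agg", "superplasticizer", "fine_agg"]
X = rng.random((300, 6))
y = 30 + 40 * X[:, 0] - 10 * X[:, 1] + rng.normal(0, 2, 300)

model = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5)).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(names, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:17s} {score:.3f}")
```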
4. Conclusions
Concrete is one of the most widely used building materials, and strength is an important index of its comprehensive performance; to ensure that the quality of concrete meets the requirements, concrete must reach a certain compressive strength. To enhance the compressive strength of concrete and ensure the sustainable development of the concrete industry, the use of mineral admixtures to replace part of the cement in concrete has attracted more and more researchers’ attention. In this study, the prediction effect of BPNN, SVM, DT, RF, KNN, LR, and MLR models tuned by BAS on the compressive strength of concrete containing blast furnace slag was studied, and the following conclusions were obtained:
(1). In tuning the machine learning models used to predict the mechanical properties of concrete, the BAS algorithm showed a low computational cost, very fast convergence, and global optimization ability. The comparison across the machine learning models showed that BAS has a good hyperparameter tuning effect on the BPNN, SVM, RF, and KNN models but a poor tuning effect on the DT, LR, and MLR models.
(2). Among the seven machine learning models, SVM, RF, and KNN predict the compressive strength of concrete with higher accuracy, although SVM exhibits overfitting in this prediction task. After further comparison, the KNN model is confirmed to be the model with the highest prediction accuracy for the compressive strength of concrete (R value of the training set is 0.9978; R value of the test set is 0.9165).
(3). Among all the design parameters of the concrete with blast furnace slag, cement has the highest importance score for the compressive strength of concrete, while fine aggregate has the lowest, and the importance scores of all six input variables are positive. In other words, cement and fine aggregate have, respectively, the greatest and the least influence on the compressive strength of concrete among the six input variables, and the compressive strength of concrete is positively correlated with each of the six input variables in this study.
In future studies, more data can be collected to improve the reliability of the machine learning model, and the prediction model obtained should be applied to actual production practice, providing some guidance for industrial production. Moreover, the performance of the models optimized with the BAS algorithm can be compared with traditional methods to train each model to verify the effectiveness of hyperparameter tuning. The model combining the linear and nonlinear models using error series or residuals can be proposed in the future to improve the efficiency and reliability of computing.
Conceptualization, X.W., F.Z. and J.H.; Supervision, J.H.; Writing—original draft, M.Z. and M.M.S.S.; Writing—review and editing, X.W., F.Z., M.Z. and M.M.S.S. All authors have read and agreed to the published version of the manuscript.
The data presented in this study are available on request from the corresponding author.
The authors declare no conflict of interest.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Figure 1. Frequency distribution histogram of variables. (a) Cement; (b) Water; (c) Blast furnace slag; (d) Coarse aggregate; (e) Fine aggregate; (f) Superplasticizer; (g) Uniaxial compressive strength.
Figure 5. Graphic description of KNN (samples are represented by different color and shape).
Figure 7. The relationship between RMSE values and the number of iterations of different models.
Figure 8. RMSE values for different fold numbers of different models. (a) BPNN; (b) SVM; (c) DT; (d) RF; (e) KNN; (f) LR; (g) MLR (Red lines represent the minimum values of RMSE).
Figure 9. Box plot of the prediction error of different models (+ represents the maximum value).
Figure 10. Comparison of predicted value with the actual value of different models. (a) BPNN; (b) SVM; (c) DT; (d) RF; (e) KNN; (f) LR; (g) MLR.
Figure 12. Monte Carlo simulation (number of Monte Carlo runs vs. value of RMSE).
Figure 13. Importance scores of different input variables to the compressive strength of concrete.
Table 1. Hyperparameters of the machine learning models tuned by the BAS algorithm.

| Machine Learning Model | Hyperparameter Tuned by the BAS Algorithm | Range (or Requirement) of the Hyperparameter |
|---|---|---|
| BPNN | hidden_layer_num | 1–3 |
| BPNN | hidden_layer_size | 1–20 |
| SVM | C_penalty | 0.1–10 |
| SVM | kernel | Linear |
| SVM | tol | 1 × 10⁻⁴–1 × 10⁻² |
| DT | criterion | Gini, Entropy |
| DT | max_depth | 1–100 |
| DT | min_samples_split | 2–10 |
| DT | min_samples_leaf | 1–10 |
| RF | criterion | Gini, Entropy |
| RF | n_estimators | 1–1000 |
| KNN | neighbors num | 1–10 |
| LR | tol | 1 × 10⁻⁵–1 × 10⁻³ |
| LR | C_inverse | 0.1–10 |
References
1. Amario, M.; Pepe, M.; Toledo Filho, R.D. Influence of Recycled Concrete Aggregates on the Rheology of Concrete. Proceedings of the 5th Iberoamerican Congress of Self-Compacting Concrete and Special Concrete; Valencia, Spain, 5–6 March 2018; pp. 85-94.
2. Jeevanandan, K.; Sreevidya, V. Experimental Investigation on Concrete and Geopolymer Concrete. Proceedings of the International Conference on Recent Trends in Nanomaterials for Energy, Environmental and Engineering Applications (ICONEEEA); Tiruchirappalli, India, 28–29 March 2020; pp. 307-312.
3. Bressi, S.; Fiorentini, N.; Huang, J.; Losa, M. Crumb rubber modifier in road asphalt pavements: State of the art and statistics. Coatings; 2019; 9, 384. [DOI: https://dx.doi.org/10.3390/coatings9060384]
4. Huang, J.; Leandri, P.; Cuciniello, G.; Losa, M. Mix design and laboratory characterisation of rubberised mixture used as damping layer in pavements. Int. J. Pavement Eng.; 2021; 23, pp. 2746-2760. [DOI: https://dx.doi.org/10.1080/10298436.2020.1869975]
5. Ma, H.; Liu, J.; Zhang, J.; Huang, J. Estimating the compressive strength of cement-based materials with mining waste using support vector machine, decision tree, and random forest models. Adv. Civ. Eng.; 2021; 2021, 6629466. [DOI: https://dx.doi.org/10.1155/2021/6629466]
6. Xu, W.; Huang, X.; Huang, J.; Yang, Z. Structural analysis of backfill highway subgrade on the lower bearing capacity foundation using the finite element method. Adv. Civ. Eng.; 2021; 2021, 1690168. [DOI: https://dx.doi.org/10.1155/2021/1690168]
7. Jian, S.-M.; Wu, B. Compressive behavior of compound concrete containing demolished concrete lumps and recycled aggregate concrete. Constr. Build. Mater.; 2021; 272, 121624. [DOI: https://dx.doi.org/10.1016/j.conbuildmat.2020.121624]
8. Liang, X.; Yu, X.; Chen, C.; Ding, G.; Huang, J. Towards the low-energy usage of high viscosity asphalt in porous asphalt pavements: A case study of warm-mix asphalt additives. Case Stud. Constr. Mater.; 2022; 16, e00914. [DOI: https://dx.doi.org/10.1016/j.cscm.2022.e00914]
9. Kim, C.-G.; Park, H.-G.; Hong, G.-H.; Lee, H.; Suh, J.-I. Shear strength of reinforced concrete-composite beams with prestressed concrete and non-prestressed concrete. ACI Struct. J.; 2018; 115, pp. 917-930.
10. Huang, J.; Duan, T.; Lei, Y.; Hasanipanah, M. Finite element modeling for the antivibration pavement used to improve the slope stability of the open-pit mine. Shock Vib.; 2020; 2020, 6650780. [DOI: https://dx.doi.org/10.1155/2020/6650780]
11. Ahmad, M.; Tang, X.-W.; Ahmad, F.; Pirhadi, N.; Wan, X.; Cheng, K. Probabilistic evaluation of cpt-based seismic soil liquefaction potential: Towards the integration of interpretive structural modeling and bayesian belief network. Math. Biosci. Eng. MBE; 2021; 18, pp. 9233-9925. [DOI: https://dx.doi.org/10.3934/mbe.2021454]
12. Wang, Q.-A.; Zhang, C.; Ma, Z.-G.; Huang, J.; Ni, Y.-Q.; Zhang, C. Shm deformation monitoring for high-speed rail track slabs and bayesian change point detection for the measurements. Constr. Build. Mater.; 2021; 300, 124337. [DOI: https://dx.doi.org/10.1016/j.conbuildmat.2021.124337]
13. Pyataev, E.; Zhukov, A.; Vako, K.; Burtseva, M.; Mednikova, E.; Prusakova, M.; Izumova, E. Effective Polymer Concrete on Waste Concrete Production. Proceedings of the 22nd International Scientific Conference on Construction—The Formation of Living Environment (FORM), Tashkent Inst Irrigat & Agr Mechanizat Engineers; Tashkent, Uzbekistan, 18–21 April 2019; Tashkent Inst Irrigat & Agr Mechanizat Engineers: Tashkent, Uzbekistan, 2019.
14. Huang, J.; Alyousef, R.; Suhatril, M.; Baharom, S.; Alabduljabbar, H.; Alaskar, A.; Assilzadeh, H. Influence of porosity and cement grade on concrete mechanical properties. Adv. Concr. Constr.; 2020; 10, pp. 393-402.
15. Huang, J.; Zhou, M.; Yuan, H.; Sabri, M.M.; Li, X. Towards sustainable construction materials: A comparative study of prediction models for green concrete with metakaolin. Buildings; 2022; 12, 772.
16. Rahman, S.S.; Khattak, M.J. Roller compacted geopolymer concrete using recycled concrete aggregate. Constr. Build. Mater.; 2021; 283, 122624. [DOI: https://dx.doi.org/10.1016/j.conbuildmat.2021.122624]
17. Wimalasiri, M.; Robert, D.; Li, C.-Q. Permeability degradation of stressed concrete considering concrete plasticity. J. Mater. Civ. Eng.; 2020; 32, 04020265. [DOI: https://dx.doi.org/10.1061/(ASCE)MT.1943-5533.0003327]
18. Zhang, S.; Fan, Y.; Huang, J.; Shah, S.P. Effect of nano-metakaolinite clay on hydration behavior of cement-based materials at early curing age. Constr. Build. Mater.; 2021; 291, 123107. [DOI: https://dx.doi.org/10.1016/j.conbuildmat.2021.123107]
19. Huang, J.; Sabri, M.M.; Ulrikh, D.V.; Ahmad, M.; Alsaffar, K.A. Predicting the compressive strength of the cement-fly ash–slag ternary concrete using the firefly algorithm (fa) and random forest (rf) hybrid machine-learning method. Materials; 2022; 15, 4193. [DOI: https://dx.doi.org/10.3390/ma15124193]
20. Vavrus, M.; Kotes, P. Numerical Comparison of Concrete Columns Strengthened with Layer of Fiber Concrete and Reinforced Concrete. Proceedings of the 13th International Scientific Conference on Sustainable, Modern and Safe Transport (TRANSCOM); Novy Smokovec, Slovakia, 29–31 May 2019; pp. 920-926.
21. Esparham, A.; Moradikhou, A.B.; Andalib, F.K.; Avanaki, M.J. Strength characteristics of granulated ground blast furnace slag-based geopolymer concrete. Adv. Concr. Constr.; 2021; 11, pp. 219-229.
22. Fikri, H.; Krisologus, Y.P.; Permana, R.; Raafidiani, R. Cyclic Behavior of Ground Granulated Blast Furnace Slag (ggbfs) Concrete Beams. Proceedings of the 3rd International Conference on Innovation in Engineering and Vocational Education (ICIEVE); Bandung, Indonesia, 26 November 2020.
23. Qi, A.; Liu, X.; Wang, Z.; Chen, Z. Mechanical properties of the concrete containing ferronickel slag and blast furnace slag powder. Constr. Build. Mater.; 2020; 231, 117120. [DOI: https://dx.doi.org/10.1016/j.conbuildmat.2019.117120]
24. Gao, Y.; Huang, J.; Li, M.; Dai, Z.; Jiang, R.; Zhang, J. Chemical modification of combusted coal gangue for u(vi) adsorption: Towards a waste control by waste strategy. Sustainability; 2021; 13, 117120. [DOI: https://dx.doi.org/10.3390/su13158421]
25. Huang, J.; Li, X.; Kumar, G.S.; Deng, Y.; Gong, M.; Dong, N. Rheological properties of bituminous binder modified with recycled waste toner. J. Clean. Prod.; 2021; 317, 128415. [DOI: https://dx.doi.org/10.1016/j.jclepro.2021.128415]
26. Huang, J.; Zhou, M.; Yuan, H.; Sabri, M.M.S.; Li, X. Prediction of the compressive strength for cement-based materials with metakaolin based on the hybrid machine learning method. Materials; 2022; 15, 3500. [DOI: https://dx.doi.org/10.3390/ma15103500] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35629527]
27. Ayano, T.; Fujii, T. Improvement of concrete properties using granulated blast furnace slag sand. J. Adv. Concr. Technol.; 2021; 19, pp. 118-132. [DOI: https://dx.doi.org/10.3151/jact.19.118]
28. Breitenbuecher, R.; Baecker, J.; Kunz, S.; Ehrenberg, A.; Gerten, C. Optimizing the Acid Resistance of Concrete With Granulated Blast-Furnace Slag. Proceedings of the 5th International Conference on Concrete Repair, Rehabilitation and Retrofitting (ICCRRR); Cape Town, South Africa, 19–21 November 2018.
29. Salvador, R.P.; Rambo, D.A.S.; Bueno, R.M.; Silva, K.T.; de Figueiredo, A.D. On the use of blast-furnace slag in sprayed concrete applications. Constr. Build. Mater.; 2019; 218, pp. 543-555. [DOI: https://dx.doi.org/10.1016/j.conbuildmat.2019.05.132]
30. Saranya, P.; Nagarajan, P.; Shashikala, A.P. Development of ground-granulated blast-furnace slag-dolomite geopolymer concrete. ACI Mater. J.; 2019; 116, pp. 235-243.
31. Topcu, I.B.; Unverdi, A. Properties of High Content Ground Granulated Blast Furnace Slag Concrete. Proceedings of the 3rd International Sustainable Buildings Symposium (ISBS); Dubai, United Arab Emirates, 15–17 March 2018; pp. 114-126.
32. Mendes, S.E.S.; Oliveira, R.L.N.; Cremonez, C.; Pereira, E.; Pereira, E.; Trentin, P.O.; Medeiros-Junior, R.A. Estimation of electrical resistivity of concrete with blast-furnace slag. ACI Mater. J.; 2021; 118, pp. 27-37.
33. Shi, D.; Masuda, Y.; Lee, Y. Experimental Study on Mechanical Properties of High-Strength Concrete Using Blast Furnace Slag Fine Aggregate. Proceedings of the 1st International Conference on High Performance Structures and Materials Engineering; Beijing, China, 5–6 May 2011; pp. 113-118.
34. Cvetkovic, S. Estimation of factors affecting the concrete strength in presence of blast furnace slag and fly ash using adaptive neuro-fuzzy technique. Struct. Concr.; 2021; 22, 1243.
35. Zhao, Y.; Bi, J.H.; Sun, Y.T.; Wang, Z.Y.; Huo, L.Y.; Duan, Y.C. Synergetic effect of ground granulated blast-furnace slag and hooked-end steel fibers on various properties of steel fiber reinforced self-compacting concrete. Struct. Concr.; 2022; 23, pp. 268-284. [DOI: https://dx.doi.org/10.1002/suco.202000722]
36. Liu, Z.G.; Huang, Z.X. Influence of steel slag-superfine blast furnace slag composite mineral admixture on the properties of mortar and concrete. Adv. Civ. Eng.; 2021; 2021, 9983019. [DOI: https://dx.doi.org/10.1155/2021/9983019]
37. Cui, X.; Wang, Q.; Zhang, R.; Dai, J.; Li, S. Machine learning prediction of concrete compressive strength with data enhancement. J. Intell. Fuzzy Syst.; 2021; 41, pp. 7219-7228. [DOI: https://dx.doi.org/10.3233/JIFS-211088]
38. Kumar, A.; Arora, H.C.; Kapoor, N.R.; Mohammed, M.A.; Kumar, K.; Majumdar, A.; Thinnukool, O. Compressive strength prediction of lightweight concrete: Machine learning models. Sustainability; 2022; 14, 2404. [DOI: https://dx.doi.org/10.3390/su14042404]
39. Pham, A.-D.; Ngo, N.-T.; Nguyen, Q.-T.; Truong, N.-S. Hybrid machine learning for predicting strength of sustainable concrete. Soft Comput.; 2020; 24, pp. 14965-14980. [DOI: https://dx.doi.org/10.1007/s00500-020-04848-1]
40. Prayogo, D.; Santoso, D.I.; Wijaya, D.; Gunawan, T.; Widjaja, J.A. Prediction of Concrete Properties Using Ensemble Machine Learning Methods. Proceedings of the 2nd International Conference on Sustainable Infrastructure (ICSI); Yogyakarta, Indonesia, 28–29 October 2020.
41. Huang, J.; Duan, T.; Zhang, Y.; Liu, J.; Zhang, J.; Lei, Y. Predicting the permeability of pervious concrete based on the beetle antennae search algorithm and random forest model. Adv. Civ. Eng.; 2020; 2020, 8863181. [DOI: https://dx.doi.org/10.1155/2020/8863181]
42. Xu, W.; Huang, X.; Yang, Z.; Zhou, M.; Huang, J. Developing hybrid machine learning models to determine the dynamic modulus (e*) of asphalt mixtures using parameters in witczak 1-40d model: A comparative study. Materials; 2022; 15, 1791. [DOI: https://dx.doi.org/10.3390/ma15051791] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35269021]
43. Zhu, F.; Wu, X.; Zhou, M.; Sabri, M.M.; Huang, J. Intelligent design of building materials: Development of an ai-based method for cement-slag concrete design. Materials; 2022; 15, 3833. [DOI: https://dx.doi.org/10.3390/ma15113833] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/35683131]
44. Van Quan, T.; Viet Quoc, D.; Ho, L.S. Evaluating compressive strength of concrete made with recycled concrete aggregates using machine learning approach. Constr. Build. Mater.; 2022; 323, 126578. [DOI: https://dx.doi.org/10.1016/j.conbuildmat.2022.126578]
45. Zhang, J.; Xu, J.; Liu, C.; Zheng, J. Prediction of rubber fiber concrete strength using extreme learning machine. Front. Mater.; 2021; 7, 582635. [DOI: https://dx.doi.org/10.3389/fmats.2020.582635]
46. Ziolkowski, P.; Niedostatkiewicz, M. Machine learning techniques in concrete mix design. Materials; 2019; 12, 1256. [DOI: https://dx.doi.org/10.3390/ma12081256]
47. Ziolkowski, P.; Niedostatkiewicz, M.; Kang, S.-B. Model-based adaptive machine learning approach in concrete mix design. Materials; 2021; 14, 1661. [DOI: https://dx.doi.org/10.3390/ma14071661]
48. Huang, J.; Asteris, P.G.; Pasha, S.M.K.; Mohammed, A.S.; Hasanipanah, M. A new auto-tuning model for predicting the rock fragmentation: A cat swarm optimization algorithm. Eng. Comput.; 2020; 38, pp. 2209-2220. [DOI: https://dx.doi.org/10.1007/s00366-020-01207-4]
49. Huang, J.; Zhang, J.; Gao, Y. Intelligently predict the rock joint shear strength using the support vector regression and firefly algorithm. Lithosphere; 2021; 2021, 2467126. [DOI: https://dx.doi.org/10.2113/2021/2467126]
50. Wang, Q.-A.; Zhang, J.; Huang, J. Simulation of the compressive strength of cemented tailing backfill through the use of firefly algorithm and random forest model. Shock Vib.; 2021; 2021, 5536998. [DOI: https://dx.doi.org/10.1155/2021/5536998]
51. Huang, J.; Zhou, M.; Sabri, M.M.S.; Yuan, H. A novel neural computing model applied to estimate the dynamic modulus (dm) of asphalt mixtures by the improved beetle antennae search. Sustainability; 2022; 14, 5938. [DOI: https://dx.doi.org/10.3390/su14105938]
52. Salimbahrami, S.R.; Shakeri, R. Experimental investigation and comparative machine-learning prediction of compressive strength of recycled aggregate concrete. Soft Comput.; 2021; 25, pp. 919-932. [DOI: https://dx.doi.org/10.1007/s00500-021-05571-1]
53. Al-Shamiri, A.K.; Yuan, T.F.; Kim, J.H. Non-tuned machine learning approach for predicting the compressive strength of high-performance concrete. Materials; 2020; 13, 1023. [DOI: https://dx.doi.org/10.3390/ma13051023]
54. Wang, L.; Wu, X.G.; Chen, H.Y.; Zeng, T.M. IOP. Prediction of Impermeability of the Concrete Structure Based on Random Forest and Support Vector Machine. Proceedings of the International Conference on Sustainable Development and Environmental Science (ICSDES); Zhengzhou, China, 19–21 June 2020.
55. Nilsen, V.; Pham, L.T.; Hibbard, M.; Klager, A.; Cramer, S.M.; Morgan, D. Prediction of concrete coefficient of thermal expansion and other properties using machine learning. Constr. Build. Mater.; 2019; 220, pp. 587-595. [DOI: https://dx.doi.org/10.1016/j.conbuildmat.2019.05.006]
56. Mahdiyar, A.; Jahed Armaghani, D.; Koopialipoor, M.; Hedayat, A.; Abdullah, A.; Yahya, K. Practical risk assessment of ground vibrations resulting from blasting, using gene expression programming and monte carlo simulation techniques. Appl. Sci.; 2020; 10, 472. [DOI: https://dx.doi.org/10.3390/app10020472]
57. Lu, S.; Koopialipoor, M.; Asteris, P.G.; Bahri, M.; Armaghani, D.J. A novel feature selection approach based on tree models for evaluating the punching shear capacity of steel fiber-reinforced concrete flat slabs. Materials; 2020; 13, 3902. [DOI: https://dx.doi.org/10.3390/ma13173902]
58. Koopialipoor, M.; Tootoonchi, H.; Armaghani, D.J.; Mohamad, E.T.; Hedayat, A. Application of deep neural networks in predicting the penetration rate of tunnel boring machines. Bull. Eng. Geol. Environ.; 2019; 78, pp. 6347-6360. [DOI: https://dx.doi.org/10.1007/s10064-019-01538-7]
59. Koopialipoor, M.; Noorbakhsh, A.; Noroozi Ghaleini, E.; Jahed Armaghani, D.; Yagiz, S. A new approach for estimation of rock brittleness based on non-destructive tests. Nondestruct. Test. Eval.; 2019; 34, pp. 354-375. [DOI: https://dx.doi.org/10.1080/10589759.2019.1623214]
60. Koopialipoor, M.; Nikouei, S.S.; Marto, A.; Fahimifar, A.; Armaghani, D.J.; Mohamad, E.T. Predicting tunnel boring machine performance through a new model based on the group method of data handling. Bull. Eng. Geol. Environ.; 2019; 78, pp. 3799-3813. [DOI: https://dx.doi.org/10.1007/s10064-018-1349-8]
61. Hasanipanah, M.; Noorian-Bidgoli, M.; Armaghani, D.J.; Khamesi, H. Feasibility of pso-ann model for predicting surface settlement caused by tunneling. Eng. Comput.; 2016; 32, pp. 705-715. [DOI: https://dx.doi.org/10.1007/s00366-016-0447-0]
62. Hasanipanah, M.; Monjezi, M.; Shahnazar, A.; Armaghani, D.J.; Farazmand, A. Feasibility of indirect determination of blast induced ground vibration based on support vector machine. Measurement; 2015; 75, pp. 289-297. [DOI: https://dx.doi.org/10.1016/j.measurement.2015.07.019]
63. Hajihassani, M.; Armaghani, D.J.; Marto, A.; Mohamad, E.T. Ground vibration prediction in quarry blasting through an artificial neural network optimized by imperialist competitive algorithm. Bull. Eng. Geol. Environ.; 2015; 74, pp. 873-886. [DOI: https://dx.doi.org/10.1007/s10064-014-0657-x]
64. Chen, W.; Hasanipanah, M.; Rad, H.N.; Armaghani, D.J.; Tahir, M. A new design of evolutionary hybrid optimization of svr model in predicting the blast-induced ground vibration. Eng. Comput.; 2019; 37, pp. 1455-1471. [DOI: https://dx.doi.org/10.1007/s00366-019-00895-x]
65. Cai, M.; Koopialipoor, M.; Armaghani, D.J.; Thai Pham, B. Evaluating slope deformation of earth dams due to earthquake shaking using mars and gmdh techniques. Appl. Sci.; 2020; 10, 1486. [DOI: https://dx.doi.org/10.3390/app10041486]
66. Armaghani, D.J.; Raja, R.S.N.S.B.; Faizi, K.; Rashid, A.S.A. Developing a hybrid pso–ann model for estimating the ultimate bearing capacity of rock-socketed piles. Neural Comput. Appl.; 2017; 28, pp. 391-405. [DOI: https://dx.doi.org/10.1007/s00521-015-2072-z]
67. Armaghani, D.J.; Mohamad, E.T.; Narayanasamy, M.S.; Narita, N.; Yagiz, S. Development of hybrid intelligent models for predicting tbm penetration rate in hard rock condition. Tunn. Undergr. Space Technol.; 2017; 63, pp. 29-43. [DOI: https://dx.doi.org/10.1016/j.tust.2016.12.009]
68. Armaghani, D.J.; Mirzaei, F.; Shariati, M.; Trung, N.T.; Shariati, M.; Trnavac, D. Hybrid ann-based techniques in predicting cohesion of sandy-soil combined with fiber. Geomech. Eng.; 2020; 20, pp. 191-205.
69. Jiang, X.; Li, S. BAS: Beetle antennae search algorithm for optimization problems. arXiv; 2017; arXiv:1710.10724. [DOI: https://dx.doi.org/10.5430/ijrc.v1n1p1]
70. Yeh, I.C. Modeling of strength of high-performance concrete using artificial neural networks. Cem. Concr. Res.; 1998; 28, pp. 1797-1808. [DOI: https://dx.doi.org/10.1016/S0008-8846(98)00165-3]
71. Silva, D.A.; Alves, G.I.; de Mattos Neto, P.S.; Ferreira, T.A. Measurement of fitness function efficiency using data envelopment analysis. Expert Syst. Appl.; 2014; 41, pp. 7147-7160. [DOI: https://dx.doi.org/10.1016/j.eswa.2014.06.001]
72. Lima Junior, A.R.; Silva, D.A.; Mattos Neto, P.S.; Ferreira, T.A. An Experimental Study of Fitness Function and Time Series Forecasting Using Artificial Neural Networks. Proceedings of the 12th Annual Conference Companion on Genetic and Evolutionary Computation; Portland, OR, USA, 7–11 July 2010; pp. 2015-2018.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Concrete production in which cement is replaced by green materials has been pursued in recent years in line with the strategy of sustainable development. This study addresses the compressive strength of one type of green concrete containing blast furnace slag. Although some researchers have proposed using machine learning models to predict the compressive strength of concrete, few have compared the prediction accuracy of different machine learning models for this task. Firstly, the hyperparameters of the BP neural network (BPNN), support vector machine (SVM), decision tree (DT), random forest (RF), K-nearest neighbor algorithm (KNN), logistic regression (LR), and multiple linear regression (MLR) are tuned by the beetle antennae search algorithm (BAS). Then, the prediction performance of the above seven machine learning models on the compressive strength of concrete is evaluated and compared. The comparison results show that KNN has higher R values and lower RMSE values in both the training set and the test set; that is, KNN is the best model for predicting the compressive strength of concrete among the seven machine learning models mentioned above.
1 Department of Jewelry Design, KAYA University, Gimhae 50830, Korea;
2 Department of Jewelry Design, KAYA University, Gimhae 50830, Korea;
3 School of Mines, China University of Mining and Technology, Xuzhou 221116, China;
4 Peter the Great St.Petersburg Polytechnic University, 195251 St. Petersburg, Russia;