Introduction
In the contemporary era, the escalating demand for energy, primarily from residential and commercial sectors, poses challenges in efficiently managing sectors like transportation and construction while striving to conserve energy [1, 2]. Recent studies emphasize the substantial contribution of a growing population to energy consumption in residential buildings [3, 4]. Efficiently managing a building’s energy consumption requires a thorough understanding of its performance, starting with the identification of energy sources and usage patterns. Key energy resources in buildings include district heating supply, electricity, and natural gas, with applications such as heating, ventilation, and air-conditioning (HVAC) systems, lighting, elevators, hot water, and kitchen equipment consuming this energy [1]. Among these, HVAC systems, important for residential infrastructure, significantly impact cooling load (CL) and heating load (HL), constituting around 40% of energy consumption in office buildings [5, 6]. Improving energy efficiency in urban residential buildings and employing dynamic load prediction in construction management are crucial measures to enhance HVAC system performance and conserve energy [7]. Forecasting dynamic air-conditioning loads is essential for HVAC system design, enabling adjustments to initiation times, curbing peak demand, optimizing costs, and improving energy utilization in cooling storage systems [8]. Accurately predicting building cooling loads is challenging due to various influencing factors, including optical and thermal characteristics and meteorological data [9–11].
Achieving sustainability in thermal management relies on efficiently separating latent and sensible loads in the cooling process. An effective strategy involves integrating an indirect evaporative cooler (IEC) with a dehumidification system, providing both enhanced cooling efficiency and a sustainable solution to rising energy demands. The improved IEC, featuring three significant modifications, becomes a cornerstone in this approach, pushing the coefficient of performance (COP) for cooling to an impressive 78. The dehumidification component, operating at a COP of approximately 4–5, complements the cooling-only COP, resulting in an overall COP of 7–8 [12].
Efforts to create energy-efficient buildings and enhance energy conservation are necessary in managing energy demand and resources. A primary strategy involves early predictions of HL and CL in residential structures. Accurate forecasting requires data on building specifications and local weather conditions [13]. Climatic elements such as temperature, wind speed, solar radiation, atmospheric pressure, and humidity significantly influence the prediction of building cooling and heating loads. Factors like relative compactness, roof dimensions, wall and glazing areas, roof height, and overall surface area should be considered when assessing a building’s load [14]. Building energy simulation tools play a crucial role in designing energy-efficient buildings, allowing for performance maximization and comparisons between buildings. Simulation outcomes have demonstrated high accuracy in replicating real-world measurements [15]. Although time-intensive and requiring proficient users, simulation software effectively assesses the influence of building design factors. In some cases, contemporary techniques like statistical analysis, artificial neural networks, and machine learning are adopted to predict cooling and heating loads and analyze the impact of different parameters [16].
HVAC system optimization involves three main categories: simulation, regression analysis, and artificial intelligence (AI). Simulation tools like DOE-2 [17], ESP-r [9], TRNSYS [10], and EnergyPlus [11, 18] are utilized for cooling load estimation when comprehensive building data is available. However, challenges arise in accurately measuring various parameters, and simplifying building models demands significant time and resources [19]. Simulation software is also poorly suited to real-time applications such as online prediction or optimal operational control [20]. Regression analysis, known for its ease of use and computational efficiency, is preferred for diverse building types [21], employing both linear and nonlinear techniques [22, 23]. Additionally, research emphasizes the efficacy of ML and AI in building energy forecasting, favoring nonlinear approaches [24, 25]. Building cooling load prediction commonly involves key factors such as outdoor temperature, relative humidity, solar irradiation, and indoor occupancy schedules [26, 27]. Feature extraction methods, including engineering, statistical, and structural approaches, help condense raw data into informative formats, addressing the complexity introduced by historical data [21].
Numerous data mining methods have been applied to predict residential building energy requirements, including principal component analysis (PCA) [28], extreme learning machine (ELM) [29, 30], support vector machines (SVM) [31–33], k-means [34], deep learning [32, 33, 35–37], decision trees (DT) [38], various regression approaches, artificial neural networks [16, 39, 40], and hybrid models [41–44]. Researchers have employed diverse methodologies to forecast heating and cooling loads and energy demand in various building contexts. For instance, one study [45] predicted building heating load using the MLP method with meteorological data, while another simultaneously [46] predicted both cooling and heating loads with meteorological and date data inputs. Another study [16] examined a building’s energy performance using machine learning techniques, including general linear regression, artificial neural networks, decision trees, support vector regression (SVR), and ensemble inference models for cooling and heating load forecasting. Structural and interior design factors’ impact on cooling loads was explored through diverse regression models [47], and HVAC system energy demand was estimated from cooling and heating load requirements using different regression models. Commercial buildings’ cooling load and electric demand were forecasted for short-term and ultrashort-term management [48], enhancing energy efficiency through a hybrid SVR approach. Additionally, the SVR method was applied [49] to project cooling loads in a large coastal office building in China, introducing a novel vector-based SVR model for increased robustness and forecasting precision [50].
Naive Bayes is a fundamental probabilistic machine learning algorithm widely employed in various fields, including natural language processing, spam filtering, and classification tasks. It is rooted in Bayes’ theorem and assumes conditional independence between features, which is where the “naive” in its name originates. This simplifying assumption enables Naive Bayes to efficiently estimate the probability of a data point belonging to a particular class. Despite its simplicity, Naive Bayes often exhibits impressive classification performance, especially when dealing with high-dimensional and large datasets. To date, no published study has used Naive Bayes as the prediction model for building CL. In this study, the prediction performance of a single Naive Bayes model is compared with two optimized counterparts, one tuned with the Mountain Gazelle Optimizer (MGO) and the other with the horse herd optimization algorithm (HHO). The following sections present a description of the model and the selected optimizers, followed by a comparative analysis of the developed models.
Methods
Data collection
The main goal of this study is to forecast the cooling load (CL) in buildings. This is achieved using experimental data extracted from energy consumption patterns documented in previous studies [51, 52]. Table 1 reports the statistical properties (minimum, maximum, average, and standard deviation) of the variables used in training the developed prediction models, together with the output. The input parameters are relative compactness (the building’s surface area-to-volume ratio), surface area, wall area, roof area, orientation, overall height, glazing area (encompassing glazing, frame, and sash components), and glazing area distribution; cooling load is the output variable.
Table 1. Statistical properties of the input and output variables of NB [51, 52]
Variables | Category | Min | Max | Avg | St. dev
---|---|---|---|---|---
Relative compactness | Input | 0.62 | 0.98 | 0.764 | 0.106
Surface area (m²) | Input | 514.5 | 808.5 | 671.7 | 88.09
Wall area (m²) | Input | 245 | 416.5 | 318.5 | 43.63
Roof area (m²) | Input | 110.25 | 220.5 | 176.6 | 45.17
Overall height (m) | Input | 3.5 | 7 | 5.25 | 1.751
Orientation | Input | 2 | 5 | 3.5 | 1.119
Glazing area (fraction of floor area) | Input | 0 | 0.4 | 0.234 | 0.133
Glazing area distribution | Input | 0 | 5 | 2.813 | 1.551
Cooling load (kW) | Output | 10.9 | 48.03 | 24.59 | 9.513
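For reference, the summary statistics in Table 1 (minimum, maximum, average, standard deviation) can be reproduced for any column with Python's standard library alone. The six-value height sample below is hypothetical, so its statistics only approximate the table's values.

```python
import statistics

def summarize(values):
    """Return (min, max, mean, population std) as reported in Table 1."""
    return (min(values), max(values),
            round(statistics.mean(values), 3),
            round(statistics.pstdev(values), 3))

# Hypothetical sample of overall-height readings (m), not the full dataset.
heights = [3.5, 7.0, 3.5, 7.0, 3.5, 7.0]
print(summarize(heights))  # → (3.5, 7.0, 5.25, 1.75)
```

With the full dataset loaded column by column, the same helper reproduces every row of the table.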
Figure 1 visually represents the correlation among the variables examined in this study. The analysis depicted in the figure reveals compelling insights. Specifically, it becomes apparent that the overall height and relative compactness exhibit the most substantial positive impact on the cooling load. In contrast, roof area and surface area emerge as variables with the most pronounced negative influence on the cooling load. This graphical representation not only highlights the interrelationships between the variables but also emphasizes the varying degrees of impact each variable has on the cooling load.
Fig. 1 [Images not available. See PDF.]
The correlation between input and output parameters
Overview of machine learning methods and optimizers
Naive Bayes (NB)
The Naive Bayes (NB) classifier stands as a robust probabilistic model founded on Bayes’ theorem, which simplifies modeling by assuming independence among input variables. Its potential for substantial improvements in prediction accuracy becomes evident when combined with kernel density approximations, as highlighted in [53, 54].
The NB classifier applies the Naive Bayes probability model through the maximum a posteriori (MAP) decision rule, a well-established method for identifying the most probable hypothesis from a given set of options. The closely related Bayes classifier assigns a class label C_k, where k ranges from 1 to K, by evaluating the input features and categorizing data points into predefined classes:

\hat{y} = \underset{k \in \{1,\ldots,K\}}{\arg\max} \; P(C_k) \prod_{i=1}^{n} P(x_i \mid C_k) \quad (1)

In Eq. (1), \hat{y} represents the predicted class label assigned by the Naive Bayes classifier. The term C_k denotes a specific class, where k ranges from 1 to K, the total number of classes. The variable n represents the total number of input features, and x_i refers to the i-th input feature. The term P(C_k) represents the prior probability of class C_k, while P(x_i | C_k) denotes the conditional probability of observing x_i given class C_k.
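As a concrete illustration of the MAP rule in Eq. (1), the following minimal sketch implements a categorical Naive Bayes classifier in pure Python with Laplace smoothing (the role played by the alpha hyperparameter tuned later in Table 2). The toy feature values and labels are hypothetical and unrelated to the CL dataset.

```python
from collections import Counter, defaultdict
import math

class CategoricalNB:
    """Tiny categorical Naive Bayes implementing the MAP rule of Eq. (1),
    with Laplace smoothing controlled by `alpha`."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.prior = {c: y.count(c) / len(y) for c in self.classes}   # P(C_k)
        self.count = {c: defaultdict(Counter) for c in self.classes}  # per-class, per-feature value counts
        self.domain = defaultdict(set)                                # observed values per feature
        for row, c in zip(X, y):
            for j, v in enumerate(row):
                self.count[c][j][v] += 1
                self.domain[j].add(v)
        self.n_class = Counter(y)
        return self

    def predict(self, row):
        def log_posterior(c):
            lp = math.log(self.prior[c])
            for j, v in enumerate(row):
                num = self.count[c][j][v] + self.alpha                    # smoothed count
                den = self.n_class[c] + self.alpha * len(self.domain[j])
                lp += math.log(num / den)                                 # log P(x_i | C_k)
            return lp
        return max(self.classes, key=log_posterior)                       # arg max over classes

# Toy illustration with hypothetical features and labels:
X = [("low", "tall"), ("low", "tall"), ("high", "short"), ("high", "short")]
y = ["cool", "cool", "warm", "warm"]
model = CategoricalNB(alpha=1.0).fit(X, y)
print(model.predict(("low", "tall")))  # → cool
```

Working in log space avoids underflow when the product in Eq. (1) runs over many features; the smoothing term keeps unseen feature values from zeroing out a class.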
Mountain gazelle optimizer (MGO)
The MGO algorithm is inspired by the behavior of mountain gazelles, which are grouped into bachelor herds, maternity herds, and solitary, territorial males. It aims to find optimal solutions by designating adult male gazelles in herd territories as global optima. Mathematically defined, the algorithm balances exploitation and exploration, gradually moving toward optimal solutions using four specified exploration mechanisms [55].
Territorial solitary males
Mature mountain gazelles establish solitary territories, vigorously defending them from other males seeking access to females. Equation (2) models these territories.
\mathrm{TSM} = male_{gzl} - \left| \left( ri_1 \cdot \mathrm{BH} - ri_2 \cdot X(t) \right) \cdot F \right| \cdot \mathrm{Cof}_r \quad (2)

In Eq. (2), male_{gzl} is the position vector of the adult male, the best overall solution found so far. The variables ri_1 and ri_2 are random integers that take the value 1 or 2 [55]. BH denotes the coefficient vector of the young male herd and is determined using Eq. (3), while the factor F is computed using Eq. (4). In each iteration, the coefficient vector Cof_r, selected at random, is updated and employed to augment the search capability; it is specified in Eq. (5).
\mathrm{BH} = X_{ra} \cdot \lfloor r_1 \rfloor + M_{pr} \cdot \lceil r_2 \rceil, \quad ra \in \{ \lceil N/3 \rceil, \ldots, N \} \quad (3)

Here, X_{ra} denotes the position of a random solution (a young male) selected from the range \lceil N/3 \rceil to N, M_{pr} refers to the average position of \lceil N/3 \rceil randomly chosen search agents, and N is the total number of gazelles, while r_1 and r_2 are random values in [0, 1].
F = N_1(D) \cdot \exp\!\left( 2 - \mathrm{Iter} \cdot \frac{2}{\mathrm{MaxIter}} \right) \quad (4)

Equation (4) combines a random number N_1, drawn from a standard normal distribution over the problem dimensions D, with an exponential decay term. Iter shows the current iteration number in the process, and MaxIter signifies the total count of iterations.
\mathrm{Cof}_r = \begin{cases} (a + 1) + r_3 \\ a \cdot N_2(D) \\ r_4(D) \\ N_3(D) \cdot N_4(D)^2 \cdot \cos\left( (r_4 \cdot 2) \cdot N_3(D) \right) \end{cases} \quad (5)

a = -1 + \mathrm{Iter} \cdot \frac{-1}{\mathrm{MaxIter}} \quad (6)

Additionally, r_3, r_4, and rand are random numbers from 0 to 1 [55]. N_2, N_3, and N_4 denote random numbers drawn from a standard normal distribution over the problem dimensions D. Iter indicates the current iteration number, while MaxIter is the total number of iterations to be performed.
Maternity herds
Maternity herds hold a crucial position within the mountain gazelles’ life cycle since they are principally responsible for producing strong male gazelles. Furthermore, male gazelles may actively participate in the delivery process of the offspring and confront the presence of younger males attempting to mate with females. This behavioral interplay is expressed mathematically in Eq. (7).
\mathrm{MH} = \left( \mathrm{BH} + \mathrm{Cof}_{2,r} \right) + \left( ri_3 \cdot male_{gzl} - ri_4 \cdot X_{rand} \right) \cdot \mathrm{Cof}_{3,r} \quad (7)

Here, BH signifies the young males' impact-factor vector, determined using Eq. (3). Cof_{2,r} and Cof_{3,r} are random coefficient vectors determined independently using Eq. (5), and ri_3 and ri_4 are random integers that take the value 1 or 2. male_{gzl} denotes the best global solution (the adult male) in the current iteration. Ultimately, X_{rand} corresponds to the location vector of a gazelle chosen at random from the entire herd.
Bachelor male herds
Male gazelles create territories after they reach adulthood and engage in mating pursuit, a period marked by intense competition between young and adult males for territory control and access to females, as mathematically captured in Eq. (8).
\mathrm{BMH} = \left( X(t) - D \right) + \left( ri_5 \cdot male_{gzl} - ri_6 \cdot \mathrm{BH} \right) \cdot \mathrm{Cof}_r \quad (8)

D = \left( |X(t)| + |male_{gzl}| \right) \cdot \left( 2 \cdot r_6 - 1 \right) \quad (9)

where X(t) indicates the gazelle's location vector at the current iteration. The variables ri_5 and ri_6 are random integers that can take the value 1 or 2, male_{gzl} designates the location vector of the best solution (the adult male), and r_6 is a random number from 0 to 1.

Migration to search for food
Equation (10), which describes how mountain gazelles forage for food, takes into account their extraordinary sprinting and leaping speed.
\mathrm{MSF} = (ub - lb) \cdot r_7 + lb \quad (10)

where lb and ub represent the lower and upper limits of the problem, respectively, and r_7 is a randomly selected number in [0, 1]. The pseudo-code of MGO is as follows:
Inputs: the population size N and the maximum number of iterations MaxIter
Outputs: the best gazelle's location and its fitness value
Initialization: create a random population X_i (i = 1, ..., N) and calculate each gazelle's fitness
While (the stopping condition is not met) do
    For (each gazelle X_i) do
        Territorial solitary males: calculate TSM using Eq. (2)
        Maternity herds: calculate MH using Eq. (7)
        Bachelor male herds: calculate BMH using Eq. (8)
        Migration to search for food: calculate MSF using Eq. (10)
        Calculate the fitness values of TSM, MH, BMH, and MSF and add them to the habitat
    End for
    Sort the entire population in ascending order of fitness
    Update male_gzl (the best solution found so far)
    Keep only the best N gazelles
End while
Return male_gzl and its fitness
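To make the control flow of this loop concrete, the following minimal Python sketch implements only its skeleton: candidate generation is reduced to the migration move of Eq. (10) (a uniform random point within the bounds), followed by pooling, sorting, and elite retention. It deliberately omits the TSM/MH/BMH mechanisms, and all names are illustrative.

```python
import random

def mgo_sketch(fitness, lb, ub, dim=2, pop=20, iters=100, seed=0):
    """Skeleton of the MGO pseudo-code (minimization).

    Each gazelle proposes one candidate per iteration; here that candidate
    is only the migration move of Eq. (10), i.e., a uniform random point
    in [lb, ub]^dim. Candidates join the herd, the herd is sorted by
    fitness, and the best `pop` gazelles survive."""
    rng = random.Random(seed)
    new_point = lambda: [rng.uniform(lb, ub) for _ in range(dim)]  # Eq. (10)
    herd = [new_point() for _ in range(pop)]
    for _ in range(iters):
        candidates = [new_point() for _ in herd]             # one proposal per gazelle
        herd = sorted(herd + candidates, key=fitness)[:pop]  # keep the best pop
    return herd[0]  # best gazelle found

# Sphere function as a toy objective; the optimum is the origin.
best = mgo_sketch(lambda x: sum(v * v for v in x), lb=-5.0, ub=5.0)
print(best)
```

Even this stripped-down loop converges toward the origin on the sphere function, which is the point of the exercise: the four MGO mechanisms are different candidate generators plugged into exactly this pool-sort-truncate structure.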
Horse herd optimization algorithm (HHO)

The HHO algorithm is based on how horses behave in the wild [56]. It draws on six specific behaviors: grazing, hierarchy, imitation, sociability, roaming, and defense mechanisms. These actions form the foundation of HHO, directing the movement of horses in each cycle, as detailed in Eq. (11):
X_m^{s,\mathrm{AGE}} = V_m^{s,\mathrm{AGE}} + X_m^{(s-1),\mathrm{AGE}} \quad (11)

where X_m^{s,AGE} denotes the position of the m-th horse, AGE represents its age range, s is the current iteration, and V_m^{s,AGE} indicates the horse's velocity vector. Horses typically live between 25 and 30 years, exhibiting various behaviors throughout their lifespan. These behaviors are categorized by age into δ (0–5 years), γ (5–10 years), β (10–15 years), and α (older than 15 years) groups. A global response matrix sorts the horses by how well they perform: the top 10% form group α, the next 20% belong to group β, and the remaining 30% and 40% are categorized as groups γ and δ, respectively. Motion vectors corresponding to horses of each age group and computational cycle within the algorithm are established following these behavioral patterns.

12
To elucidate the derivation of the global matrix, Eqs. (13) and (14) are utilized, establishing a relationship between horse positions and their respective cost values.
13
14
Here, the rows of the global matrix correspond to the horses and its columns to the dimensions of the problem. The matrix is then sorted according to its final column, which stores the cost values; each horse's age is recorded alongside. The velocity of horses in the 0–5 year age range (group δ) is as follows:
15
The velocity of horses in the 5–10 year age range (group γ) is:
16
The velocity of horses in the 10–15 year age range (group β) is:
17
Horses that are 15 years or older (group α) exhibit the following velocity:
18
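The 10/20/30/40% performance split that assigns horses to the α, β, γ, and δ groups can be sketched in a few lines. The mapping of Greek labels to percentile bands follows the common description of the algorithm, and the cost values below are hypothetical.

```python
def assign_groups(costs):
    """Sort horses by cost (lower is better) and assign age groups:
    best 10% -> 'alpha', next 20% -> 'beta', next 30% -> 'gamma',
    remaining 40% -> 'delta'. Returns {horse_index: group_label}."""
    order = sorted(range(len(costs)), key=lambda i: costs[i])  # ranks by cost
    n = len(costs)
    groups = {}
    for rank, idx in enumerate(order):
        frac = (rank + 1) / n          # cumulative fraction of the herd
        if frac <= 0.10:
            groups[idx] = "alpha"
        elif frac <= 0.30:
            groups[idx] = "beta"
        elif frac <= 0.60:
            groups[idx] = "gamma"
        else:
            groups[idx] = "delta"
    return groups

# Hypothetical cost values for a herd of 10 horses:
costs = [5.0, 1.0, 3.0, 9.0, 2.0, 8.0, 7.0, 4.0, 6.0, 10.0]
print(assign_groups(costs))
```

With ten horses, exactly one lands in α, two in β, three in γ, and four in δ, mirroring the 10/20/30/40 partition of the sorted response matrix.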
Results and discussion
Hyperparameter results
External configurations known as hyperparameters, such as alpha and binarize, shape a model's behavior. Unlike model parameters, hyperparameters are set in advance rather than learned from the data. Model performance therefore depends heavily on hyperparameter tuning, a nuanced process that demands both experimentation and the strategic application of optimization techniques. Table 2 lists the hyperparameter values obtained for the NBMG and NBHH models; reporting them is essential for understanding and, crucially, reproducing the model setups.
Table 2. The tuned hyperparameter values for the NB-based models
Model | Alpha | Binarize
---|---|---
NBMG | 7 | 5.61
NBHH | 6.52953 | 1.317853
Prediction performance analysis
The assessment of the predictive effectiveness of the constructed models involved five distinct metrics, which rely on the actual observed values y_i and the corresponding predicted values \hat{y}_i. The symbols \bar{y} and \bar{\hat{y}} denote the means of the observed and predicted outcomes, and n signifies the total count of samples in the analyzed dataset. The metrics are described as follows:
The coefficient of determination (R2) numerically represents the portion of the variability in the dependent variable that can be anticipated through the independent variables integrated into the model.
R^2 = \left( \frac{ \sum_{i=1}^{n} (y_i - \bar{y})(\hat{y}_i - \bar{\hat{y}}) }{ \sqrt{ \sum_{i=1}^{n} (y_i - \bar{y})^2 \sum_{i=1}^{n} (\hat{y}_i - \bar{\hat{y}})^2 } } \right)^2 \quad (19)
Root-mean-square error (RMSE) denotes the square root of the squared disparities’ mean between the projected and observed values. This quantifies the typical magnitude of the discrepancies the model introduces when forecasting the target variable.
\mathrm{RMSE} = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 } \quad (20)
Mean squared error (MSE) calculates the average of the squared differences between predicted and actual values, measuring how well a model’s predictions match the actual data. Lower MSE values indicate better predictive accuracy and a closer fit to the observed data.
\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 \quad (21)
Nash–Sutcliffe efficiency (NSE) assesses how well a model’s predictions match observed values, considering the variability of the observed data. Higher NSE values indicate better model performance, with 1 indicating a perfect match.
\mathrm{NSE} = 1 - \frac{ \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 }{ \sum_{i=1}^{n} (y_i - \bar{y})^2 } \quad (22)
The median absolute percentage error (MDAPE) expresses the typical percentage difference between predicted and actual values and, being a median, is robust to occasional large over- or underestimations.

\mathrm{MDAPE} = \operatorname{median}\left( \left| \frac{y_i - \hat{y}_i}{y_i} \right| \right) \times 100 \quad (23)
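Assuming the definitions in Eqs. (19)–(23), these metrics can be computed with Python's standard library alone. Note that MDAPE is implemented here as a median absolute percentage error, an assumption where the paper's exact formula is not reproduced.

```python
import math
from statistics import mean, median

def metrics(y, yhat):
    """Evaluation metrics in the spirit of Eqs. (19)-(23).
    y: observed values, yhat: predicted values (equal-length sequences)."""
    n = len(y)
    ybar, phat = mean(y), mean(yhat)
    sse = sum((a - p) ** 2 for a, p in zip(y, yhat))      # sum of squared errors
    mse = sse / n                                          # Eq. (21)
    rmse = math.sqrt(mse)                                  # Eq. (20)
    nse = 1 - sse / sum((a - ybar) ** 2 for a in y)        # Eq. (22)
    cov = sum((a - ybar) * (p - phat) for a, p in zip(y, yhat))
    r2 = (cov / math.sqrt(sum((a - ybar) ** 2 for a in y)
                          * sum((p - phat) ** 2 for p in yhat))) ** 2  # Eq. (19)
    mdape = median(abs((a - p) / a) for a, p in zip(y, yhat)) * 100    # assumed Eq. (23)
    return {"RMSE": rmse, "MSE": mse, "R2": r2, "NSE": nse, "MDAPE": mdape}

# Hypothetical observed/predicted CL values (kW), not the study's data:
obs = [10.0, 20.0, 30.0, 40.0]
pred = [12.0, 18.0, 33.0, 39.0]
print(metrics(obs, pred))
```

A perfect prediction yields RMSE = MSE = MDAPE = 0 and R2 = NSE = 1, which is a convenient sanity check when wiring these metrics into a training loop.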
The following discussion comprehensively analyzes the model’s performance in predicting CL based on Table 3:
NB (single model): This model reports the lowest R2 values, with an overall value of 0.963. High error values (RMSE, MSE, and MDAPE) indicate the limited accuracy of this traditional model, especially in the testing phase. NSE values of 0.966, 0.958, and 0.949 in the training, validation, and testing phases confirm the higher variability of its estimates.
NBMG (NB + MGO): High R2 values of 0.986, 0.980, and 0.974 in the training, validation, and testing phases, together with low error values (roughly half those of the single NB model), indicate the superior performance of MGO in enhancing the CL prediction capability of NB.
NBHH (NB + HHO): With marginally lower R2 values (by less than 1%) and higher errors (about 20% on average), this model performs more weakly than NBMG. Nevertheless, the HHO algorithm still notably enhances the NB model's prediction accuracy.
Table 3. The results of the developed NB-based models
Model | Phase | RMSE | R2 | MSE | MDAPE | NSE
---|---|---|---|---|---|---
NB | Train | 1.742 | 0.968 | 3.035 | 6.137 | 0.966
NB | Validation | 2.051 | 0.958 | 4.205 | 7.328 | 0.958
NB | Test | 2.147 | 0.953 | 4.610 | 7.482 | 0.949
NB | All | 1.856 | 0.963 | 3.446 | 6.442 | 0.962
NBMG | Train | 1.129 | 0.986 | 1.275 | 2.914 | 0.986
NBMG | Validation | 1.523 | 0.980 | 2.319 | 4.024 | 0.977
NBMG | Test | 1.619 | 0.974 | 2.620 | 5.055 | 0.971
NBMG | All | 1.278 | 0.983 | 1.633 | 3.237 | 0.982
NBHH | Train | 1.428 | 0.978 | 2.039 | 2.555 | 0.977
NBHH | Validation | 1.960 | 0.965 | 3.842 | 3.650 | 0.962
NBHH | Test | 1.680 | 0.970 | 2.821 | 4.188 | 0.969
NBHH | All | 1.557 | 0.975 | 2.426 | 3.012 | 0.973
Figure 2 visually illustrates the trends in error values (RMSE, MSE) and R2 for the three models developed in this study. The comparative analysis reveals a consistent decrease in R2 from training to testing across all models, indicating a modest loss of generalization on unseen data. Notably, all R2 columns for NBMG are higher than those of NB but similar in height to those of NBHH. In terms of RMSE and MSE, the NBMG model, particularly during the training phase, demonstrated significantly lower error values than the other models. As detailed in Table 3 and depicted in Fig. 2, the NBMG model showed the best performance in predicting CL values, with an R2 of 0.986, an RMSE of 1.129 kW, and an MSE of 1.275 kW².
Fig. 2 [Images not available. See PDF.]
The comparison of parameters
Figure 3 presents scatter plots of predicted versus measured CL samples across the three phases, each offering insight into model performance. Two main metrics guide the reading of the plots: RMSE, which characterizes the dispersion of the points, and R2, which assesses their degree of collinearity. A high R2 combined with a low RMSE signifies the optimal state in which predicted values closely align with measured values along the ideal line (predicted = measured). To facilitate interpretation, two dashed lines delineate 15% overestimation and underestimation. The NBMG and NBHH hybrid models emerge as the standout performers: marked by the lowest RMSE values and highest R2 values, they surpass the single NB model. The NBHH model, however, is somewhat weaker than NBMG and presents certain data points with overestimation exceeding 15%.
Fig. 3 [Images not available. See PDF.]
The scatter plot for developed hybrid models
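The ±15% dashed bands of Fig. 3 correspond to a simple ratio test that can be sketched as follows; the measured and predicted values here are hypothetical, not the study's data.

```python
def band_report(measured, predicted, tol=0.15):
    """Fraction of predictions inside the +/-15% bands of a scatter plot,
    plus counts of over- and underestimation beyond the band."""
    inside = over = under = 0
    for m, p in zip(measured, predicted):
        ratio = (p - m) / m          # signed relative error
        if ratio > tol:
            over += 1                # overestimated by more than 15%
        elif ratio < -tol:
            under += 1               # underestimated by more than 15%
        else:
            inside += 1
    return {"inside": inside / len(measured), "over": over, "under": under}

# Hypothetical CL values (kW):
measured = [20.0, 25.0, 30.0, 35.0]
predicted = [21.0, 30.0, 29.0, 24.0]
print(band_report(measured, predicted))  # → {'inside': 0.5, 'over': 1, 'under': 1}
```

Applied to each phase of a model's output, such a report quantifies the qualitative reading of the scatter plots, e.g., which points of NBHH exceed the 15% overestimation band.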
Figure 4 employs a line plot in this investigation to comprehensively compare the variation in error values across three developed models. The range of errors for NBMG is approximately half that of NBHH, underscoring the advantageous capability of the MGO algorithm. Furthermore, in the case of NBMG, the error rate during the training phase is only half that observed in the other two phases, suggesting that MGO exhibits superior prediction performance during the training phase compared to the other models. This observation is corroborated by Fig. 5, which illustrates the normal distribution of errors for MGO, displaying a narrow bell-shaped curve indicative of a high concentration of errors near 0%.
Fig. 4 [Images not available. See PDF.]
The error rate percentage for the hybrid models is based on the line plot
Fig. 5 [Images not available. See PDF.]
The normal distribution plot of errors among the developed models
Figure 6 presents Taylor diagrams depicting the performance of the predictive models NB, NBMG, and NBHH. These diagrams act as statistical syntheses, integrating observed and predicted CL and incorporating essential metrics such as RMSE, correlation coefficients (CC), and normalized standard deviations. Notably, the NBMG model, an amalgamation of the NB model and the MGO optimizer, emerges as the optimal predictive model: its outcomes lie closest to the ideal benchmark defined by the experimental data, signifying its effectiveness in capturing the patterns of the cooling load and its superior predictive capability compared to the other models.
Fig. 6 [Images not available. See PDF.]
The Taylor diagram for developed models
Examining the kernel smooth distribution of errors during the prediction of CL values across the training, validation, and testing phases, Fig. 7 provides graphical insight into the performance of the three models (NB, NBMG, and NBHH). The NB model displayed the highest errors during the testing phase, whereas the NBMG model showed the lowest, and this favorability toward the NBMG hybrid held across all stages of the analysis. In the testing phase of the NB model, errors ranged widely from −25 to 30, whereas the NBMG model, with its superior training-phase performance, featured errors concentrated within the narrower range of −15 to 15. This tighter error distribution underscores the heightened predictive accuracy of the NBMG model.
Fig. 7 [Images not available. See PDF.]
The kernel smooth plot of errors among the developed models
Conclusions
Accurate building cooling load forecasting is vital for optimizing HVAC systems, reducing costs, and enhancing energy efficiency, yet it remains challenging due to the complex interplay of building characteristics and meteorological conditions. Prior studies emphasize the effectiveness of machine learning in building energy forecasting, favoring nonlinear approaches, but Naive Bayes, a foundational machine learning algorithm, had not previously been explored in this context. This study therefore developed three Naive Bayes-based models: a single model, one optimized with the Mountain Gazelle Optimizer (MGO), and another optimized with the horse herd optimization (HHO) algorithm. The findings underscore the strong performance of the NBMG model, which consistently outperformed its counterparts, reducing prediction errors by an average of 20% and achieving an R2 of up to 0.986 for cooling load prediction. This highlights the substantial potential of machine learning, as NBMG exemplifies, to enhance the precision of energy consumption forecasts, empowering decision-makers in energy conservation and retrofit strategies and contributing to sustainable building operations and reduced environmental impact.
Acknowledgements
I would like to take this opportunity to acknowledge that there are no individuals or organizations that require acknowledgment for their contributions to this work.
Authors’ contributions
The author contributed to the study’s conception and design. Data collection, simulation, and analysis were performed by “YX.”
Funding
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
Availability of data and materials
Data can be shared upon request.
Declarations
Competing interests
The author declares no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Leitao, J; Gil, P; Ribeiro, B; Cardoso, A. A survey on home energy management. IEEE Access; 2020; 8, pp. 5699-5722. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2963502]
2. Gong, H; Rallabandi, V; McIntyre, ML; Hossain, E; Ionel, DM. Peak reduction and long term load forecasting for large residential communities including smart homes with energy storage. IEEE Access; 2021; 9, pp. 19345-19355. [DOI: https://dx.doi.org/10.1109/ACCESS.2021.3052994]
3. Hannan, MA; Faisal, M; Ker, PJ; Mun, LH; Parvin, K; Mahlia, TMI; Blaabjerg, F. A review of Internet of energy based building energy management systems: issues and recommendations. IEEE Access; 2018; 6, pp. 38997-39014. [DOI: https://dx.doi.org/10.1109/ACCESS.2018.2852811]
4. Sadeghian, O; Moradzadeh, A; Mohammadi-Ivatloo, B; Abapour, M; Anvari-Moghaddam, A; Lim, JS; Marquez, FPG. A comprehensive review on energy saving options and saving potential in low voltage electricity distribution networks: building and public lighting. Sustain Cities Soc; 2021; 72, 103064. [DOI: https://dx.doi.org/10.1016/j.scs.2021.103064]
5. Sadeghian, O; Moradzadeh, A; Mohammadi-Ivatloo, B; Abapour, M; Garcia Marquez, FP. Generation units maintenance in combined heat and power integrated systems using the mixed integer quadratic programming approach. Energies (Basel); 2020; 13, 2840. [DOI: https://dx.doi.org/10.3390/en13112840]
6. Nami, H; Anvari-Moghaddam, A; Arabkoohsar, A. Application of CCHPs in a centralized domestic heating, cooling and power network—thermodynamic and economic implications. Sustain Cities Soc; 2020; 60, 102151. [DOI: https://dx.doi.org/10.1016/j.scs.2020.102151]
7. Chen, Q; Xia, M; Lu, T; Jiang, X; Liu, W; Sun, Q. Short-term load forecasting based on deep learning for end-user transformer subject to volatile electric heating loads. IEEE Access; 2019; 7, pp. 162697-162707. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2949726]
8. Yao, Y; Lian, Z; Liu, S; Hou, Z. Hourly cooling load prediction by a combined forecasting model based on analytic hierarchy process. Int J Therm Sci; 2004; 43, pp. 1107-1118. [DOI: https://dx.doi.org/10.1016/j.ijthermalsci.2004.02.009]
9. Probst, O. Cooling load of buildings and code compliance. Appl Energy; 2004; 77, pp. 171-186.
10. Bojić, M; Yik, F. Cooling energy evaluation for high-rise residential buildings in Hong Kong. Energy Build; 2005; 37, pp. 345-351. [DOI: https://dx.doi.org/10.1016/j.enbuild.2004.07.003]
11. Ansari, FA; Mokhtar, AS; Abbas, KA; Adam, NM. A simple approach for building cooling load estimation. Am J Environ Sci; 2005; 1, pp. 209-212. [DOI: https://dx.doi.org/10.3844/ajessp.2005.209.212]
12. Shahzad, MW; Burhan, M; Ybyraiymkul, D; Oh, SJ; Ng, KC. An improved indirect evaporative cooler experimental investigation. Appl Energy.; 2019; 256, 113934. [DOI: https://dx.doi.org/10.1016/j.apenergy.2019.113934]
13. Moradzadeh, A; Moayyed, H; Zakeri, S; Mohammadi-Ivatloo, B; Aguiar, AP. Deep learning-assisted short-term load forecasting for sustainable management of energy in microgrid. Inventions; 2021; 6, 15. [DOI: https://dx.doi.org/10.3390/inventions6010015]
14. Chen, S; Zhang, X; Wei, S; Yang, T; Guan, J; Yang, W; Qu, L; Xu, Y. An energy planning oriented method for analyzing spatial-temporal characteristics of electric loads for heating/cooling in district buildings with a case study of one university campus. Sustain Cities Soc; 2019; 51, 101629. [DOI: https://dx.doi.org/10.1016/j.scs.2019.101629]
15. Tsanas, A; Goulermas, JY; Vartela, V; Tsiapras, D; Theodorakis, G; Fisher, AC; Sfirakis, P. The Windkessel model revisited: a qualitative analysis of the circulatory system. Med Eng Phys; 2009; 31, pp. 581-588. [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/19129000][DOI: https://dx.doi.org/10.1016/j.medengphy.2008.11.010]
16. Chou, J-S; Bui, D-K. Modeling heating and cooling loads by artificial intelligence for energy-efficient building design. Energy Build; 2014; 82, pp. 437-446. [DOI: https://dx.doi.org/10.1016/j.enbuild.2014.07.036]
17. Bojic, M; Yik, F; Wan, K; Burnett, J. Investigations of cooling loads in high-rise residential buildings in Hong Kong. Thermal Sciences 2000: Proceedings of the International Thermal Science Seminar, Vol. 1; Begell House; 2000.
18. Chou, SK; Chang, WL. Large building cooling load and energy use estimation. Int J Energy Res; 1997; 21, pp. 169-183. [DOI: https://dx.doi.org/10.1002/(SICI)1099-114X(199702)21:2<169::AID-ER232>3.0.CO;2-3]
19. Sodha, MS; Kaur, B; Kumar, A; Bansal, NK. Comparison of the admittance and Fourier methods for predicting heating/cooling loads. Sol Energy; 1986; 36.
20. Mui, KW; Wong, LT. Cooling load calculations in subtropical climate. Build Environ; 2007; 42, pp. 2498-2504. [DOI: https://dx.doi.org/10.1016/j.buildenv.2006.07.006]
21. Shin, M; Do, SL. Prediction of cooling energy use in buildings using an enthalpy-based cooling degree days method in a hot and humid climate. Energy Build; 2016; 110, pp. 57-70. [DOI: https://dx.doi.org/10.1016/j.enbuild.2015.10.035]
22. Yun, K; Luck, R; Mago, PJ; Cho, H. Building hourly thermal load prediction using an indexed ARX model. Energy Build; 2012; 54, pp. 225-233. [DOI: https://dx.doi.org/10.1016/j.enbuild.2012.08.007]
23. Korolija, I; Zhang, Y; Marjanovic-Halburd, L; Hanby, VI. Regression models for predicting UK office building energy consumption from heating and cooling demands. Energy Build; 2013; 59, pp. 214-227. [DOI: https://dx.doi.org/10.1016/j.enbuild.2012.12.005]
24. Deb, C; Eang, LS; Yang, J; Santamouris, M. Forecasting diurnal cooling energy load for institutional buildings using artificial neural networks. Energy Build; 2016; 121, pp. 284-297. [DOI: https://dx.doi.org/10.1016/j.enbuild.2015.12.050]
25. Gunay, B; Shen, W; Newsham, G. Inverse blackbox modeling of the heating and cooling load in office buildings. Energy Build; 2017; 142, pp. 200-210. [DOI: https://dx.doi.org/10.1016/j.enbuild.2017.02.064]
26. Kavaklioglu, K. Modeling and prediction of Turkey’s electricity consumption using support vector regression. Appl Energy; 2011; 88, pp. 368-375.
27. Li, Q; Meng, Q; Cai, J; Yoshino, H; Mochida, A. Applying support vector machine to predict hourly cooling load in the building. Appl Energy; 2009; 86, pp. 2249-2256.
28. Moradzadeh, A; Sadeghian, O; Pourhossein, K; Mohammadi-Ivatloo, B; Anvari-Moghaddam, A. Improving residential load disaggregation for sustainable development of energy via principal component analysis. Sustainability; 2020; 12, 3158. [DOI: https://dx.doi.org/10.3390/su12083158]
29. Zhao, J; Liu, X. A hybrid method of dynamic cooling and heating load forecasting for office buildings based on artificial intelligence and regression analysis. Energy Build; 2018; 174, pp. 293-308. [DOI: https://dx.doi.org/10.1016/j.enbuild.2018.06.050]
30. Roy, SS; Roy, R; Balas, VE. Estimating heating load in buildings using multivariate adaptive regression splines, extreme learning machine, a hybrid model of MARS and ELM. Renew Sustain Energy Rev; 2018; 82, pp. 4256-4268. [DOI: https://dx.doi.org/10.1016/j.rser.2017.05.249]
31. Moradzadeh, A; Zeinal-Kheiri, S; Mohammadi-Ivatloo, B; Abapour, M; Anvari-Moghaddam, A. Support vector machine-assisted improvement residential load disaggregation. 2020 28th Iranian Conference on Electrical Engineering (ICEE); IEEE; 2020; pp. 1-6.
32. Luo, XJ; Oyedele, LO; Ajayi, AO; Akinade, OO. Comparative study of machine learning-based multi-objective prediction framework for multiple building energy loads. Sustain Cities Soc; 2020; 61, 102283. [DOI: https://dx.doi.org/10.1016/j.scs.2020.102283]
33. Moradzadeh, A; Zakeri, S; Shoaran, M; Mohammadi-Ivatloo, B; Mohammadi, F. Short-term load forecasting of microgrid via hybrid support vector regression and long short-term memory algorithms. Sustainability; 2020; 12, 7076. [DOI: https://dx.doi.org/10.3390/su12177076]
34. Ding, Y; Su, H; Kong, X; Zhang, Z. Ultra-short-term building cooling load prediction model based on feature set construction and ensemble machine learning. IEEE Access; 2020; 8, pp. 178733-178745. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.3027061]
35. Wang, Z; Hong, T; Piette, MA. Data fusion in predicting internal heat gains for office buildings through a deep learning approach. Appl Energy; 2019; 240, pp. 386-398.
36. Roy, SS; Samui, P; Nagtode, I; Jain, H; Shivaramakrishnan, V; Mohammadi-Ivatloo, B. Forecasting heating and cooling loads of buildings: a comparative performance analysis. J Ambient Intell Humaniz Comput; 2020; 11, pp. 1253-1264. [DOI: https://dx.doi.org/10.1007/s12652-019-01317-y]
37. Song, J; Xue, G; Pan, X; Ma, Y; Li, H. Hourly heat load prediction model based on temporal convolutional neural network. IEEE Access; 2020; 8, pp. 16726-16741. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.2968536]
38. Yu, Z; Haghighat, F; Fung, BCM; Yoshino, H. A decision tree method for building energy demand modeling. Energy Build; 2010; 42, pp. 1637-1646. [DOI: https://dx.doi.org/10.1016/j.enbuild.2010.04.006]
39. Ahmad, T; Chen, H. Short and medium-term forecasting of cooling and heating load demand in building environment with data-mining based approaches. Energy Build; 2018; 166, pp. 460-476. [DOI: https://dx.doi.org/10.1016/j.enbuild.2018.01.066]
40. Moradzadeh, A; Mansour-Saatloo, A; Mohammadi-Ivatloo, B; Anvari-Moghaddam, A. Performance evaluation of two machine learning techniques in heating and cooling loads forecasting of residential buildings. Appl Sci; 2020; 10, 3829. [DOI: https://dx.doi.org/10.3390/app10113829]
41. Geysen, D; De Somer, O; Johansson, C; Brage, J; Vanhoudt, D. Operational thermal load forecasting in district heating networks using machine learning and expert advice. Energy Build; 2018; 162, pp. 144-153. [DOI: https://dx.doi.org/10.1016/j.enbuild.2017.12.042]
42. Cui, B; Fan, C; Munk, J; Mao, N; Xiao, F; Dong, J; Kuruganti, T. A hybrid building thermal modeling approach for predicting temperatures in typical, detached, two-story houses. Appl Energy; 2019; 236, pp. 101-116.
43. Wang, R; Lu, S; Feng, W. A novel improved model for building energy consumption prediction based on model integration. Appl Energy; 2020; 262, 114561. [DOI: https://dx.doi.org/10.1016/j.apenergy.2020.114561]
44. Chen, Q; Kum Ja, M; Burhan, M; Akhtar, FH; Shahzad, MW; Ybyraiymkul, D; Ng, KC. A hybrid indirect evaporative cooling-mechanical vapor compression process for energy-efficient air conditioning. Energy Convers Manag; 2021; 248, 114798. [DOI: https://dx.doi.org/10.1016/j.enconman.2021.114798]
45. Wong, SL; Wan, KKW; Lam, TNT. Artificial neural networks for energy analysis of office buildings with daylighting. Appl Energy; 2010; 87, pp. 551-557.
46. Paudel, S; Elmtiri, M; Kling, WL; Le Corre, O; Lacarrière, B. Pseudo dynamic transitional modeling of building heating energy demand using artificial neural network. Energy Build; 2014; 70, pp. 81-93. [DOI: https://dx.doi.org/10.1016/j.enbuild.2013.11.051]
47. Schiavon, S; Lee, KH; Bauman, F; Webster, T. Influence of raised floor on zone design cooling load in commercial buildings. Energy Build; 2010; 42, pp. 1182-1191. [DOI: https://dx.doi.org/10.1016/j.enbuild.2010.02.009]
48. Fan, C; Wang, J; Gang, W; Li, S. Assessment of deep recurrent neural network-based strategies for short-term building energy predictions. Appl Energy; 2019; 236, pp. 700-710.
49. Zhong, H; Wang, J; Jia, H; Mu, Y; Lv, S. Vector field-based support vector regression for building energy consumption prediction. Appl Energy; 2019; 242, pp. 403-414.
50. B.S.A.J. Khiavi; B.N.E.K.A.R.T.K. Hadi Sadaghat. The utilization of a Naïve Bayes model for predicting the energy consumption of buildings. J Artif Intell Syst Modelling; 2023; 01. [DOI: https://doi.org/10.22034/JAISM.2023.422292.1003]
51. Zhou, G; Moayedi, H; Bahiraei, M; Lyu, Z. Employing artificial bee colony and particle swarm techniques for optimizing a neural network in prediction of heating and cooling loads of residential buildings. J Clean Prod; 2020; 254, 120082. [DOI: https://dx.doi.org/10.1016/j.jclepro.2020.120082]
52. Pessenlehner, W; Mahdavi, A. Building morphology, transparence, and energy performance. Proceedings of the Eighth International IBPSA Conference (Building Simulation 2003); 2003.
53. Hastie, T; Tibshirani, R; Friedman, JH. The elements of statistical learning: data mining, inference, and prediction. Springer, New York; 2009.
54. Piryonesi, SM; El-Diraby, TE. Role of data analytics in infrastructure asset management: overcoming data size and quality problems. J Transp Eng Part B: Pavements; 2020; 146, 04020022. [DOI: https://dx.doi.org/10.1061/JPEODX.0000175]
55. Abdollahzadeh, B; Gharehchopogh, FS; Khodadadi, N; Mirjalili, S. Mountain gazelle optimizer: a new nature-inspired metaheuristic algorithm for global optimization problems. Adv Eng Softw; 2022; 174, 103282. [DOI: https://dx.doi.org/10.1016/j.advengsoft.2022.103282]
56. MiarNaeimi, F; Azizyan, G; Rashki, M. Horse herd optimization algorithm: a nature-inspired algorithm for high-dimensional optimization problems. Knowl Based Syst; 2021; 213, 106711. [DOI: https://dx.doi.org/10.1016/j.knosys.2020.106711]
© The Author(s) 2024. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Abstract
Cooling load estimation is crucial for energy conservation in cooling systems, with applications such as advanced air-conditioning control and chiller optimization. Traditional approaches include energy simulation and regression analysis, but artificial intelligence outperforms them: AI models autonomously capture complex patterns, adapt, and scale as more data become available. They excel at predicting cooling loads influenced by factors such as weather, building materials, and occupancy, enabling dynamic, responsive predictions and energy optimization, whereas traditional methods oversimplify real-world complexities and thus motivate AI-based cooling load forecasting for energy-efficient building management. This study evaluates Naive Bayes-based models for estimating building cooling load consumption: a single (unoptimized) Naive Bayes model (NB), one optimized with the Mountain Gazelle Optimizer (NBMG), and one optimized with the horse herd optimization algorithm (NBHH). The training set comprises 70% of the data and incorporates eight input variables describing the geometric and glazing characteristics of the buildings; 15% of the data is used for validation, and performance is tested on the remaining 15%. Based on the evaluation metrics, NBMG demonstrates the best accuracy and stability of the three candidate models, reducing prediction errors by an average of 18% and 31% relative to NB and NBHH, respectively, and achieving a maximum R2 value of 0.983 for cooling load prediction.
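The evaluation pipeline described in the abstract (a Naive Bayes learner, a 70/15/15 train/validation/test split, and an R2 metric) can be sketched in plain Python. This is a minimal illustration on synthetic data, not the paper's implementation: the paper's exact Naive Bayes variant, its eight real input variables, and the metaheuristic tuning (MGO/HHO) are not reproduced; here the continuous load is discretized into bins so a Gaussian Naive Bayes classifier can yield numeric predictions via per-bin mean loads, and the feature names are purely illustrative.

```python
# Sketch: Gaussian Naive Bayes over discretized cooling-load bins,
# with a 70/15/15 split and R^2 evaluation. Synthetic data throughout.
import math
import random
from collections import defaultdict

random.seed(0)

def make_sample():
    # Two illustrative inputs (e.g., surface area, glazing ratio) and a
    # synthetic cooling load depending linearly on them plus noise.
    surface, glazing = random.uniform(500, 800), random.uniform(0.0, 0.4)
    load = 0.05 * surface + 40.0 * glazing + random.gauss(0, 2)
    return [surface, glazing], load

data = [make_sample() for _ in range(1000)]
n = len(data)
train = data[: int(0.70 * n)]
val = data[int(0.70 * n): int(0.85 * n)]   # held out for model selection
test = data[int(0.85 * n):]                # held out for final reporting

# Discretize the continuous load into quantile bins so Naive Bayes
# (a classifier) can be applied; each bin predicts its mean load.
n_bins = 10
sorted_loads = sorted(y for _, y in train)
edges = [sorted_loads[i * len(sorted_loads) // n_bins] for i in range(1, n_bins)]

def to_bin(y):
    return sum(y > e for e in edges)

# Fit per-bin priors and per-feature Gaussians (the "naive" independence assumption).
groups = defaultdict(list)
for x, y in train:
    groups[to_bin(y)].append((x, y))

stats = {}
for b, rows in groups.items():
    xs = [x for x, _ in rows]
    mu = [sum(col) / len(col) for col in zip(*xs)]
    var = [sum((v - m) ** 2 for v in col) / len(col) + 1e-6
           for col, m in zip(zip(*xs), mu)]
    prior = len(rows) / len(train)
    mean_load = sum(y for _, y in rows) / len(rows)
    stats[b] = (prior, mu, var, mean_load)

def predict(x):
    # Pick the bin with the highest log-posterior, return its mean load.
    best_b, best_lp = None, -math.inf
    for b, (prior, mu, var, _) in stats.items():
        lp = math.log(prior) + sum(
            -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
            for xi, m, v in zip(x, mu, var))
        if lp > best_lp:
            best_b, best_lp = b, lp
    return stats[best_b][3]

def r2(pairs):
    ys = [y for _, y in pairs]
    preds = [predict(x) for x, _ in pairs]
    mean_y = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

print(f"validation R^2: {r2(val):.3f}")
print(f"test R^2:       {r2(test):.3f}")
```

In the paper's setup, a metaheuristic such as the Mountain Gazelle Optimizer or the horse herd algorithm would tune model hyperparameters (here, e.g., `n_bins` or the variance smoothing term) against the validation split before the final test-set evaluation.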
Details
1 The Tourism College of Changchun University, Basic Department, Changchun, China (GRID:grid.440663.3) (ISNI:0000 0000 9457 9842)