Abstract
Structural reliability analysis often entails significant computational costs. Active learning surrogate models address this challenge, and multi-fidelity surrogate models offer further potential to reduce computational costs thanks to the efficiency of low-fidelity samples. However, traditional learning functions and stopping criteria are designed for the single-fidelity framework and are therefore not suitable for the multi-fidelity framework. This study introduces a novel active learning approach, the Adaptive Multi-fidelity Co-Kriging Monte Carlo Simulation (AMCK-MCS), to overcome these limitations. First, this study proposes a novel learning function for multi-fidelity Kriging surrogate models, which enhances modeling efficiency by actively identifying high-uncertainty regions through a balanced integration of correlation, sampling density, and computational cost. Second, this study introduces a novel stopping criterion based on the relative error estimation of the failure probability, derived from confidence intervals and uncertainty weighting. This approach effectively mitigates premature and delayed convergence of the surrogate model. The proposed method is evaluated against classical methods based on distinct principles and two established multi-fidelity Kriging surrogate models through two numerical examples and an engineering case study. Results demonstrate that the AMCK-MCS method accurately predicts the failure probability while substantially reducing the computational costs.
Introduction
Reliability analysis of engineering structures is of great significance in practical engineering. Uncertainties are widespread in real-world engineering and typically originate from inherent random variations in the structure itself or its environment, including changes in material properties, geometric dimensions, and applied loads, all of which inevitably affect the performance and reliability of engineering structures [1]. Therefore, accounting for the uncertainties in both the structure and its environment is crucial for accurately assessing the reliability of engineering structures. Structural reliability analysis primarily centers on evaluating the probability of failure, which is mathematically expressed as a multidimensional integral. The limit state equations of engineering structures are generally high-dimensional, nonlinear, and implicit, making analytical integration virtually infeasible [2]. The methods developed to compute the failure probability in structural reliability problems can, in principle, be divided into three categories: approximation methods, simulation methods, and surrogate model methods. Approximation methods primarily encompass the first-order second-moment method (FOSM) [3] and the adaptive first-order second-moment method (AFOSM) [4]. These methods perform local approximations at the most probable failure point (MPP) [5]. However, the increasing complexity of the limit state equation and the expansion of the failure domain impede their ability to yield accurate results. In contrast, simulation methods model the joint distribution of the random vector and quantify the failure probability through integration of the limit state function; they include techniques such as Monte Carlo simulation (MCS) [6], subset simulation (SS) [7], importance sampling (IS) [8], and line sampling (LS) [9]. Nevertheless, the considerable sample size required by these simulation methods limits their practical application in engineering contexts.
The surrogate model approach is widely adopted in structural reliability analysis to reduce the computational costs, owing to its superior nonlinear modeling capacity, which renders it a focal point of research. In structural reliability analysis, the current predominant surrogate models include the response surface method (RSM) [10, 11], radial basis function (RBF) [12], artificial neural network (ANN) [13, 14], support vector machine (SVM) [15], polynomial chaos expansion (PCE) [16], and Kriging model [17, 18]; these models are employed to approximate the limit state function, thereby reducing the computational demands. RSM, grounded in polynomial regression, is well suited to low-dimensional, smooth problems but cannot quantify the uncertainty. ANN effectively models complex nonlinear relationships via neural networks, yet its training is computationally intensive and uncertainty assessment is limited. RBF, relying on radial basis kernel interpolation, shares mathematical similarities with Kriging but lacks a probabilistic framework. SVM performs robustly for high-dimensional regression tasks, though uncertainty quantification requires additional methods. PCE, utilizing orthogonal polynomial expansions, excels in uncertainty propagation but is less effective for non-smooth functions. Among widely used surrogate models, the Kriging model demonstrates exceptional performance in highly nonlinear scenarios and delivers robust fitting for problems involving local response discontinuities. It exhibits strong resilience to stochastic errors. The primary objective of the Kriging model is to provide predictive values alongside approximated variances, thereby enabling the robust uncertainty quantification. The Kriging active learning process predominantly entails developing an initial, imperfect Kriging model, which is iteratively refined through the application of a learning function and a stopping criterion. The learning function and stopping criterion are critical areas of research attention [19]. Numerous researchers have developed a range of learning functions for the Kriging model, including the expected feasibility function (EFF) [20], the U learning function [21], the H learning function based on entropy measures [22], and the reliability-based lower confidence bound (RLCB) function [23]. These functions aim to improve the learning efficiency and enhance the predictive accuracy.
The stopping criterion in active learning plays a critical role in ensuring the robustness of reliability analysis methods. In conventional learning functions, the threshold is typically predefined and may either exceed or fall below the value considered adequate to terminate the learning process. The selection of these thresholds depends on the specific type of learning function; however, determining an appropriate threshold for a specific task remains challenging without prior knowledge. Sun et al. [24] proposed the concept of residual variability in predictive uncertainty for misclassification, derived from the residual uncertainty of the Kriging model, as a precise metric for evaluating the limit state. Jian et al. [25] introduced two precision metrics related to the limit state and failure probability. Yi et al. [23] developed a maximum relative error criterion based on predictive uncertainty, which allows the active learning process to attain different accuracy levels by defining specific termination thresholds. The aforementioned methods rely on high-fidelity techniques, significantly constraining their ability to reduce computational costs.
To establish a mature Kriging model, the development of a single-fidelity Kriging model necessitates evaluating the samples using a precise high-fidelity model. Nonetheless, the computational costs involved in analyzing an adequate number of samples remain daunting. The multi-fidelity Kriging model effectively addresses research objectives by utilizing a substantial number of low-fidelity samples to compensate for the limited availability of high-fidelity samples [26], thereby significantly reducing computational costs [27]. Low-fidelity samples entail minimal computational costs but produce less accurate results, whereas high-fidelity samples, despite their high computational demands, deliver precise outcomes [28]. The multi-fidelity Kriging model integrates both high-fidelity and low-fidelity data, achieving an effective balance between the estimation accuracy and the computational efficiency. Multi-fidelity Kriging models can be classified into three primary categories: scale function-based models, spatial mapping-based models, and collaborative Kriging models. The multi-fidelity Kriging model, utilizing a scaling function, effectively captures the underlying patterns of low-fidelity data. This approach can be classified into multiplicative, additive, and hybrid scaling methods [29]. The multi-fidelity Kriging approach employs spatial mapping to effectively transfer critical information from low-dimensional, low-fidelity models to high-dimensional, high-fidelity models [30]. O’Hagan et al. [31] advanced the co-Kriging framework by incorporating an autoregressive model to seamlessly integrate data across models of different fidelity levels. Although the multi-fidelity Kriging model has been explored for structural reliability analysis [1], its application remains limited compared to single-fidelity models. Consequently, there is an urgent need to develop a novel reliability analysis method leveraging the multi-fidelity Kriging surrogate model.
This study introduces an active learning method, AMCK-MCS, based on a multi-fidelity Kriging surrogate model and designed to deliver precise failure probability estimates. First, a novel learning function is developed to target regions of high uncertainty, effectively integrating low- and high-fidelity models through cross-correlation, sampling point density, and cost considerations. This approach resolves the incompatibility between sample updates and low-fidelity models. Second, this study introduces a new stopping criterion that integrates adaptive confidence interval estimation, relative error, and uncertainty weighting to avoid premature or delayed termination. This criterion dynamically adapts to varying sample sizes, ensuring robust termination of the active learning process while minimizing computational costs without sacrificing accuracy. Through two numerical examples and an engineering case study, the proposed AMCK-MCS method is evaluated against several established structural reliability analysis methods, demonstrating its superior efficiency and accuracy.
The paper is structured as follows: Sect. 2 outlines the multi-fidelity Kriging framework. Section 3 details the key features and computational methodology of the AMCK-MCS method. Section 4 evaluates the effectiveness of the proposed algorithm through two numerical examples of varying complexity and an engineering case study. Section 5 presents the conclusions and provides the directions for future research.
Co-Kriging theory
Within the multi-fidelity framework, the multi-fidelity Kriging model incorporates the experimental design and performance function values of the high-fidelity model, respectively denoted as $\mathbf{X}_h$ and $\mathbf{Y}_h$, while those of the low-fidelity model are, respectively, $\mathbf{X}_l$ and $\mathbf{Y}_l$. The experimental designs of the low-fidelity and high-fidelity models satisfy the criterion $\mathbf{X}_h \subset \mathbf{X}_l$. The experimental design of the multi-fidelity Kriging model, along with its corresponding performance function values, is denoted as
$$\mathbf{X} = \begin{bmatrix} \mathbf{X}_l \\ \mathbf{X}_h \end{bmatrix} \tag{1}$$
$$\mathbf{Y} = \begin{bmatrix} \mathbf{Y}_l(\mathbf{X}_l) \\ \mathbf{Y}_h(\mathbf{X}_h) \end{bmatrix} \tag{2}$$
where $\mathbf{Y}$ is considered an instantiation of a joint Gaussian distribution. The autoregressive co-Kriging model introduced by Keane et al. [32] is employed here. The high-fidelity model and the low-fidelity model are presumed to adhere to the Markov property:
$$\mathrm{cov}\left\{Y_h(\mathbf{x}),\, Y_l(\mathbf{x}') \mid Y_l(\mathbf{x})\right\} = 0, \quad \forall\, \mathbf{x}' \neq \mathbf{x} \tag{3}$$
When the value of the low-fidelity response function at $\mathbf{x}$ is established, the value of the high-fidelity response function at this point is unaffected by the low-fidelity response at any other location. Based on the Markov condition [31, 32], when the covariance is stationary, the multi-fidelity Kriging model can be delineated by
$$Y_h(\mathbf{x}) = \rho\, Y_l(\mathbf{x}) + Y_d(\mathbf{x}) \tag{4}$$
where $Y_h(\mathbf{x})$ represents the high-fidelity Kriging model and $Y_l(\mathbf{x})$ represents the low-fidelity Kriging model, which is developed from the experimental designs of the high-fidelity and low-fidelity models. $\rho$ acts as the scaling coefficient for $Y_l(\mathbf{x})$. The Kriging model $Y_d(\mathbf{x})$ denotes the residual difference between $Y_h(\mathbf{x})$ and $\rho Y_l(\mathbf{x})$. According to the aforementioned Markov property, $Y_d(\mathbf{x})$ and $Y_l(\mathbf{x})$ are independent. The covariance matrix of the multi-fidelity Kriging model is
$$\mathbf{C} = \begin{bmatrix} \sigma_l^2 \mathbf{R}_l(\mathbf{X}_l,\mathbf{X}_l) & \rho\,\sigma_l^2 \mathbf{R}_l(\mathbf{X}_l,\mathbf{X}_h) \\ \rho\,\sigma_l^2 \mathbf{R}_l(\mathbf{X}_h,\mathbf{X}_l) & \rho^2\sigma_l^2 \mathbf{R}_l(\mathbf{X}_h,\mathbf{X}_h) + \sigma_d^2 \mathbf{R}_d(\mathbf{X}_h,\mathbf{X}_h) \end{bmatrix} \tag{5}$$
where $\sigma_l^2$ represents the prior variance of the Gaussian stochastic process $Y_l(\mathbf{x})$, and $\sigma_d^2$ represents the prior variance of the Gaussian stochastic process $Y_d(\mathbf{x})$. The covariance matrix of the multi-fidelity Kriging model involves two correlation functions, $\mathbf{R}_l(\cdot,\cdot)$ and $\mathbf{R}_d(\cdot,\cdot)$, with hyperparameters $\boldsymbol{\theta}_l$ and $\boldsymbol{\theta}_d$. Similar to the single-fidelity Kriging model, the multi-fidelity Kriging model requires the determination of the parameters $\mu_l$, $\mu_d$, $\sigma_l^2$, $\sigma_d^2$, $\boldsymbol{\theta}_l$, $\boldsymbol{\theta}_d$, and $\rho$. Specifically, since $Y_l(\mathbf{x})$ and $Y_d(\mathbf{x})$ are independent of each other, these parameters can be estimated independently for $Y_l(\mathbf{x})$ and $Y_d(\mathbf{x})$, respectively. Initially, $\mu_l$ and $\sigma_l^2$ can be derived from the following log-likelihood function
$$\ln L\left(\mu_l,\sigma_l^2,\boldsymbol{\theta}_l\right) = -\frac{n_l}{2}\ln\!\left(2\pi\sigma_l^2\right) - \frac{1}{2}\ln\left|\mathbf{R}_l\right| - \frac{\left(\mathbf{Y}_l - \mathbf{1}\mu_l\right)^{\mathrm{T}}\mathbf{R}_l^{-1}\left(\mathbf{Y}_l - \mathbf{1}\mu_l\right)}{2\sigma_l^2} \tag{6}$$
Setting the derivatives of Eq. (6) to zero, the maximum likelihood estimators for $\mu_l$ and $\sigma_l^2$ can be given as
$$\hat{\mu}_l = \frac{\mathbf{1}^{\mathrm{T}}\mathbf{R}_l^{-1}\mathbf{Y}_l}{\mathbf{1}^{\mathrm{T}}\mathbf{R}_l^{-1}\mathbf{1}} \tag{7}$$
$$\hat{\sigma}_l^2 = \frac{\left(\mathbf{Y}_l - \mathbf{1}\hat{\mu}_l\right)^{\mathrm{T}}\mathbf{R}_l^{-1}\left(\mathbf{Y}_l - \mathbf{1}\hat{\mu}_l\right)}{n_l} \tag{8}$$
By substituting Eq. (7) and Eq. (8) into Eq. (6) and eliminating the constant term, the concentrated log-likelihood function is derived as
$$\ln L \propto -\frac{n_l}{2}\ln\left(\hat{\sigma}_l^2\right) - \frac{1}{2}\ln\left|\mathbf{R}_l\right| \tag{9}$$
The value of $\boldsymbol{\theta}_l$ can be determined by maximizing Eq. (9). Obtaining an analytical solution for Eq. (9) is challenging due to its implicit and non-differentiable nature. In many cases, a near-optimal hyperparameter solution is obtained with global optimization techniques, such as the grey wolf algorithm [33].
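To illustrate this estimation step, the following sketch maximizes the concentrated log-likelihood of Eqs. (7)-(9) for the low-fidelity model. A Gaussian correlation kernel and SciPy's differential evolution (as a stand-in for the grey wolf optimizer) are assumptions for illustration; all function and variable names are hypothetical.

```python
import numpy as np
from scipy.optimize import differential_evolution

def gauss_corr(X, theta):
    """Gaussian correlation matrix R_l for a design X (n x d) and length-scales theta."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2 * theta).sum(axis=-1)
    return np.exp(-d2) + 1e-10 * np.eye(len(X))   # small nugget for conditioning

def neg_concentrated_loglik(theta, X, y):
    """Negative concentrated log-likelihood of Eq. (9) for the low-fidelity data."""
    R = gauss_corr(X, theta)
    Rinv = np.linalg.inv(R)
    one = np.ones(len(y))
    mu = one @ Rinv @ y / (one @ Rinv @ one)         # Eq. (7)
    sigma2 = (y - mu) @ Rinv @ (y - mu) / len(y)     # Eq. (8)
    _, logdet = np.linalg.slogdet(R)
    return 0.5 * len(y) * np.log(sigma2) + 0.5 * logdet

# Example: fit length-scales for synthetic 2-D low-fidelity data
rng = np.random.default_rng(0)
X_l = rng.uniform(-5.0, 5.0, size=(30, 2))
y_l = np.sin(X_l[:, 0]) + 0.5 * X_l[:, 1]
res = differential_evolution(neg_concentrated_loglik, bounds=[(1e-3, 10.0)] * 2,
                             args=(X_l, y_l), seed=0)
print("estimated theta_l:", res.x)
```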
The subsequent phase involves estimating the parameters of the residual (difference) model, whose training data are delineated as
$$\mathbf{d} = \mathbf{Y}_h(\mathbf{X}_h) - \rho\, \mathbf{Y}_l(\mathbf{X}_h) \tag{10}$$
where $\mathbf{Y}_h(\mathbf{X}_h)$ and $\mathbf{Y}_l(\mathbf{X}_h)$ are the high-fidelity and low-fidelity responses at $\mathbf{X}_h$, respectively. If the low-fidelity model fails to generate a response at a point of $\mathbf{X}_h$, the value calculated by the low-fidelity Kriging model is supplied instead. Owing to the assumption of independence, $\hat{\mu}_d$ and $\hat{\sigma}_d^2$ can be derived by substituting the subscript $l$ in Eq. (6) with $d$. Likewise, by altering the subscript and maximizing Eq. (9), $\boldsymbol{\theta}_d$ and $\rho$ can be determined. Upon determining the parameters, Forrester et al. [32] provided the expressions for the unbiased estimate and prediction uncertainty at an unknown point, which can be, respectively, formulated as
$$\hat{y}_h(\mathbf{x}) = \hat{\mu} + \mathbf{c}(\mathbf{x})^{\mathrm{T}}\mathbf{C}^{-1}\left(\mathbf{Y} - \mathbf{1}\hat{\mu}\right) \tag{11}$$
$$\hat{s}^2(\mathbf{x}) = \hat{\rho}^2\hat{\sigma}_l^2 + \hat{\sigma}_d^2 - \mathbf{c}(\mathbf{x})^{\mathrm{T}}\mathbf{C}^{-1}\mathbf{c}(\mathbf{x}) \tag{12}$$
where $\hat{\mu}$ and $\mathbf{c}(\mathbf{x})$ are, respectively, derived from the subsequent formulas
$$\hat{\mu} = \frac{\mathbf{1}^{\mathrm{T}}\mathbf{C}^{-1}\mathbf{Y}}{\mathbf{1}^{\mathrm{T}}\mathbf{C}^{-1}\mathbf{1}} \tag{13}$$
$$\mathbf{c}(\mathbf{x}) = \begin{bmatrix} \hat{\rho}\,\hat{\sigma}_l^2\,\mathbf{r}_l(\mathbf{X}_l,\mathbf{x}) \\ \hat{\rho}^2\hat{\sigma}_l^2\,\mathbf{r}_l(\mathbf{X}_h,\mathbf{x}) + \hat{\sigma}_d^2\,\mathbf{r}_d(\mathbf{X}_h,\mathbf{x}) \end{bmatrix} \tag{14}$$
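For concreteness, a minimal sketch of the predictor of Eqs. (11)-(14) is given below, assuming a Gaussian correlation kernel, already-estimated hyperparameters, and the stacked response vector of Eq. (2); the names and the simplified variance expression are illustrative, not the paper's implementation.

```python
import numpy as np

def corr(XA, XB, theta):
    """Gaussian correlation between two sample sets."""
    d2 = ((XA[:, None, :] - XB[None, :, :]) ** 2 * theta).sum(axis=-1)
    return np.exp(-d2)

def cokriging_predict(x, X_l, X_h, Y, rho, s2_l, s2_d, th_l, th_d):
    """Mean and variance of the co-Kriging predictor at a single point x.

    Y is the stacked response vector [Y_l(X_l); Y_h(X_h)] of Eq. (2)."""
    x = np.atleast_2d(x)
    # Covariance matrix C of Eq. (5)
    C = np.block([
        [s2_l * corr(X_l, X_l, th_l),       rho * s2_l * corr(X_l, X_h, th_l)],
        [rho * s2_l * corr(X_h, X_l, th_l), rho**2 * s2_l * corr(X_h, X_h, th_l)
                                            + s2_d * corr(X_h, X_h, th_d)],
    ]) + 1e-10 * np.eye(len(X_l) + len(X_h))   # nugget for numerical stability
    Cinv = np.linalg.inv(C)
    one = np.ones(len(Y))
    mu = one @ Cinv @ Y / (one @ Cinv @ one)                       # Eq. (13)
    # Cross-covariance vector c(x) of Eq. (14)
    c = np.concatenate([
        rho * s2_l * corr(X_l, x, th_l),
        rho**2 * s2_l * corr(X_h, x, th_l) + s2_d * corr(X_h, x, th_d),
    ]).ravel()
    mean = mu + c @ Cinv @ (Y - mu)                                # Eq. (11)
    var = max(rho**2 * s2_l + s2_d - c @ Cinv @ c, 0.0)            # Eq. (12)
    return mean, var
```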
The proposed method
This study presents the Adaptive Multi-fidelity Co-Kriging Monte Carlo Simulation (AMCK-MCS) method, designed to deliver precise failure probability estimates while minimizing computational costs through the integration of multi-fidelity datasets. This section describes in detail the construction of the proposed model, including the design and computation of the learning function and the stopping criterion, laying the foundation for the subsequent reliability analysis. A novel learning function and an innovative stopping criterion are developed to enhance the computational efficiency of the proposed approach. The learning function strengthens the capability to explore regions of significant uncertainty, effectively balances the trade-off between exploration and exploitation, and addresses the inconsistency of subsequently added sample points with the multi-fidelity model. Additionally, an adaptive stopping criterion based on confidence interval estimation is proposed, ensuring precise termination of the AMCK-MCS method in accordance with the expected failure probability. The relative error estimation is derived from an innovative bootstrap confidence estimation technique, enabling precise control of the AMCK-MCS method through the specification of a relative error stopping threshold. The remainder of this section elaborates the details and computational procedures of the AMCK-MCS approach, where HF, LF, and MF denote high-fidelity, low-fidelity, and multi-fidelity models, respectively.
MU learning function
The learning function plays a pivotal role in the active learning process by identifying the next optimal point for evaluating the target function, with the objective of precisely delineating the limit state equation and effectively distinguishing between positive and negative samples during exploration and exploitation. The sign of the performance function is crucial, and high-risk regions identified by the predictor must be integrated into the experimental designs for future investigations. Ambiguity in identifying 'dangerous points' can lead to variations in the sign of their predicted responses, thereby influencing the failure probability. Potential high-risk points exhibit three key characteristics: proximity to the limit state surface, substantial uncertainty (elevated Kriging variance), or a combination of both.
At a prediction point $\mathbf{x}$, the predicted value of the Kriging surrogate model follows the normal distribution $N\!\left(\mu_{\hat{g}}(\mathbf{x}), \sigma_{\hat{g}}^2(\mathbf{x})\right)$. Echard et al. [21] proposed the U learning function, articulated as follows:
$$U(\mathbf{x}) = \frac{\left|\mu_{\hat{g}}(\mathbf{x})\right|}{\sigma_{\hat{g}}(\mathbf{x})} \tag{15}$$
where $\mu_{\hat{g}}(\mathbf{x})$ represents the mean value predicted by the surrogate model, $\sigma_{\hat{g}}(\mathbf{x})$ is the standard deviation predicted by the surrogate model, and the update point is established by selecting the minimum value of $U(\mathbf{x})$ within the design space. Minimizing $U(\mathbf{x})$ favors proximity to the limit state equation, so the learning process generally identifies positions approaching the limit state for which the model predicts the outcome with heightened uncertainty. A new learning function, MU (maximizing uncertainty), has been created based on this concept, defined as follows:
16
where $\mu_{\hat{g}}(\mathbf{x})$ denotes the mean forecasted by the surrogate model, $\sigma_{\hat{g}}(\mathbf{x})$ denotes the standard deviation forecasted by the surrogate model, $\Phi(\cdot)$ signifies the cumulative distribution function (CDF) of the standard normal variable, and the candidate samples are drawn from the joint probability density function (PDF). The exponent $\gamma$ is a constant within the interval (0, 1), and the remaining components of the aforementioned equation will be elaborated upon subsequently. The predicted value of the Kriging surrogate model conforms to the normal distribution; hence, combined with Eq. (15), the probability of a prediction error in the sign of a sample can be articulated as
$$P_e(\mathbf{x}) = \Phi\!\left(-\frac{\left|\mu_{\hat{g}}(\mathbf{x})\right|}{\sigma_{\hat{g}}(\mathbf{x})}\right) = \Phi\!\left(-U(\mathbf{x})\right) \tag{17}$$
The values of $\Phi\!\left(-\mu_{\hat{g}}(\mathbf{x})/\sigma_{\hat{g}}(\mathbf{x})\right)$ and $\Phi\!\left(\mu_{\hat{g}}(\mathbf{x})/\sigma_{\hat{g}}(\mathbf{x})\right)$ are illustrated in Fig. 1. For a given prediction, the cumulative probabilities of negative and positive values are calculated by $\Phi\!\left(-\mu_{\hat{g}}(\mathbf{x})/\sigma_{\hat{g}}(\mathbf{x})\right)$ and $\Phi\!\left(\mu_{\hat{g}}(\mathbf{x})/\sigma_{\hat{g}}(\mathbf{x})\right)$, respectively. They collectively represent the potential scope of the target value within the specified confidence interval. The multiplication of the two CDFs denotes a symmetric uncertainty evaluation; specifically, if the mean exhibits a substantial probability in both the positive and negative directions, the uncertainty at this point is considerable and possesses significant learning value. The product of the two CDFs is always less than or equal to 1. To further adjust the probability distribution, we introduce an exponent $\gamma$ (where $\gamma$ is between 0 and 1) on the product of the two CDFs. This reduces the shrinking effect of the product, making the resulting value greater than the original product, thereby decreasing the influence of extreme values on the outcome and rendering the data distribution smoother and more reliable.
[See PDF for image]
Fig. 1
The function
For sample points with less uncertainty, the product of $\Phi\!\left(-\mu_{\hat{g}}(\mathbf{x})/\sigma_{\hat{g}}(\mathbf{x})\right)$ and $\Phi\!\left(\mu_{\hat{g}}(\mathbf{x})/\sigma_{\hat{g}}(\mathbf{x})\right)$ is diminished, signifying a lower learning value for this point. Therefore, the new point can be selected by
$$\mathbf{x}_{\mathrm{new}} = \arg\max_{\mathbf{x} \in \mathbf{S}} MU(\mathbf{x}) \tag{18}$$
The cross-correlation function
Uncertainties associated with different fidelity levels exert varying influences on the multi-fidelity Kriging model. The primary function of the cross-correlation mechanism is to quantify the correlation between high-fidelity and low-fidelity models, thereby facilitating the transfer of uncertainty from the low-fidelity model to the high-fidelity model. Tahmasebi et al. [34] asserted that the cross-correlation function mainly depends on the predicted value and its related uncertainty. It is characterized by
19
When , . When , is utilized to evaluate the return of the low-fidelity sample. Incorporating low-fidelity samples enables the quantification of reduced uncertainty in the high-fidelity Kriging model. In the multi-fidelity Kriging model, the value of exceeds that of , as the uncertainty inherent in the low-fidelity model propagates to the high-fidelity model during its development. The value of is currently in the range of 0 and 1. Furthermore, when the variance of the low-fidelity Kriging model is substantial, will attain a comparatively high value. quantifies the relationship between the low-fidelity model and the high-fidelity model. If this component of the MU learning function is removed, low-fidelity samples will invariably be selected.
The cost function
The cost function assesses the simulation cost of the low-fidelity model relative to that of the high-fidelity model [35]. The cost of a high-fidelity evaluation is set to 1, and the cost of a low-fidelity evaluation is ascertained by comparing its relative simulation duration with that of the high-fidelity model. For instance, a value of 0.2 signifies that the simulation duration of a low-fidelity sample is 20% of that of a high-fidelity sample. This term adjusts the learning function for the relative cost of each fidelity level. Upon removal of the cost component, the MU learning function will consistently select high-fidelity samples for updates.
The sampling point density function
The multi-fidelity Kriging model, like the ordinary Kriging model, depends heavily on the distribution of samples for its effectiveness. Excessive similarity among samples of differing fidelity levels can compromise the accuracy of the multi-fidelity Kriging model. To address this, Ang et al. [36] introduced a sampling point density function to alleviate the issue of sample over-aggregation, defined as follows
20
where the spatial correlation function quantifies the similarity between a candidate point and the existing samples, and $n_t$ signifies the number of samples from the $t$-level fidelity model. When a candidate point coincides with a training sample, the density function equals 0. As the Euclidean distance between the candidate point and the training samples increases, the value of the density function also increases. Consequently, the density term averts the over-accumulation of training sample points and keeps the covariance matrix of the co-Kriging method well conditioned.
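To make the interplay of these components concrete, the sketch below combines a CDF-product uncertainty term of the kind described above with placeholder cross-correlation, cost, and density terms. The exact expressions of Eqs. (16), (19), and (20) are not reproduced here; the specific forms and all function names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def uncertainty_term(mu, sigma, gamma=0.5):
    """Symmetric CDF-product term raised to an exponent gamma in (0, 1)."""
    return (norm.cdf(-mu / sigma) * norm.cdf(mu / sigma)) ** gamma

def density_term(x, X_train, theta):
    """Penalizes candidates close to existing training points (placeholder form)."""
    d2 = ((X_train - x) ** 2 * theta).sum(axis=1)
    return 1.0 - np.exp(-d2).max()      # equals 0 when x coincides with a training point

def mu_score(x, mu, sigma, cross_corr, cost, X_train, theta, gamma=0.5):
    """Larger score = more valuable candidate for the considered fidelity level."""
    return uncertainty_term(mu, sigma, gamma) * cross_corr \
           * density_term(x, X_train, theta) / cost

# The next update point is the candidate/fidelity pair with the largest score, cf. Eq. (18).
```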
The stopping criterion
The selection of stopping criteria is a pivotal element of the AMCK-MCS methodology. Inappropriate stopping criteria can lead to the premature termination of the active learning process, thereby undermining the accuracy of the results. Excessive reliance on data to construct the surrogate model may impede the active learning process. This section introduces a stopping criterion that integrates adaptive confidence interval estimation, relative error, and uncertainty weighting.
The relative error is generally defined as
$$\varepsilon_r = \frac{\left|\hat{P}_f - P_f\right|}{P_f} \tag{21}$$
where $P_f$ represents the actual failure probability, and $\hat{P}_f$ denotes the computed failure probability. The true failure probability of a black-box problem remains unknown, so the estimated failure probability derived from the MCS method is employed to evaluate the efficacy of the approach [37]. $\hat{P}_f$ is delineated as
$$\hat{P}_f = \frac{1}{N_{\mathrm{MC}}}\sum_{i=1}^{N_{\mathrm{MC}}} I_F\!\left(\hat{g}(\mathbf{x}_i)\right) = \frac{\hat{N}_F}{N_{\mathrm{MC}}} \tag{22}$$
where $I_F(\cdot)$ is defined as an indicator function as follows
$$I_F\!\left(\hat{g}(\mathbf{x})\right) = \begin{cases} 1, & \hat{g}(\mathbf{x}) \le 0 \\ 0, & \hat{g}(\mathbf{x}) > 0 \end{cases} \tag{23}$$
To ensure the reliability of the results, the following condition must be met [38]:
$$\mathrm{CoV}\!\left(\hat{P}_f\right) = \sqrt{\frac{1-\hat{P}_f}{N_{\mathrm{MC}}\,\hat{P}_f}} \le 5\% \tag{24}$$
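A minimal sketch of Eqs. (22)-(24) applied to surrogate predictions is shown below; the callable `surrogate_mean` and the toy limit state are hypothetical stand-ins.

```python
import numpy as np

def mcs_failure_probability(surrogate_mean, X_mc):
    """Failure probability of Eq. (22) and its coefficient of variation, Eq. (24)."""
    g_hat = surrogate_mean(X_mc)                 # predicted performance values
    pf = np.mean(g_hat <= 0.0)                   # fraction of predicted failures
    cov = np.sqrt((1.0 - pf) / (len(X_mc) * pf)) if pf > 0 else np.inf
    return pf, cov

# Example with a toy limit state g(x) = 3 - x1 - x2 and standard normal inputs
rng = np.random.default_rng(1)
X_mc = rng.standard_normal((10**5, 2))
pf, cov = mcs_failure_probability(lambda X: 3.0 - X[:, 0] - X[:, 1], X_mc)
print(f"Pf = {pf:.3e}, CoV = {cov:.3f}")         # accept the estimate only if CoV <= 5%
```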
Equation (21) can be reformulated as
$$\varepsilon_r = \frac{\left|\hat{N}_F - N_F\right|}{N_F} \tag{25}$$
where $\hat{N}_F$ signifies the number of anticipated (predicted) failure samples, and $N_F$ indicates the number of actual failure samples. They can be linked by bridging functions [39], as elaborated below
$$N_F = \hat{N}_F + N_{fs} - N_{sf} \tag{26}$$
where $N_{fs}$ represents the count of highly uncertain sample points that have actually failed but are predicted to be safe, while $N_{sf}$ represents the count of highly uncertain sample points that are genuinely safe yet predicted to fail. The sample points used by the co-Kriging approach for determining the failure probability can be broadly categorized into two segments according to the predicted limit state: safety samples and failure samples. However, they can be divided into four specific scenarios, as illustrated in Fig. 2. The triangles highlighted in blue and purple denote safety samples and failure samples, respectively, which correspond to the first two categories. Furthermore, the star samples and the circle samples in Fig. 2 pertain to the latter two categories, encompassing the actually safe samples that are predicted to fail (the region of the star samples) and the actually failed samples that are predicted as safe (the region of the circular samples).
[See PDF for image]
Fig. 2
The schematic diagram of error estimation
When a sample is distant from the limit state surface, the prediction uncertainty of the surrogate model has negligible impact on its state classification, even when accounted for. Conversely, when a sample is proximate to the limit state surface, the prediction uncertainty of the surrogate model significantly influences its safety classification, potentially leading to the misclassification of truly safe samples as hazardous and truly hazardous samples as safe. From a theoretical perspective, if the assessment is not verified by the real limit state function, then an accurate judgment of the safety state cannot be obtained.
We can partition the total space by using confidence intervals, concentrating on the regions with a high likelihood of prediction errors. The samples characterized by significant uncertainty can be defined as
27
where the first set indicates samples that are considered positive within the given confidence interval but are actually negative, and the second set signifies samples that are classified as negative within the given confidence interval but are actually positive. The confidence level is 95%; hence, the value of the coefficient in the formula is 1.96. The subsequent formula can be used to calculate $N_{sf}$ and $N_{fs}$.
28
To ensure the confidence level, Wang et al. [40] proposed the concept of maximum error to modify , defined as
29
where the two terms denote the upper bounds of the confidence intervals for $N_{sf}$ and $N_{fs}$, respectively. Equation (29) demonstrates that the error values are computed with respect to the safety and failure domains, respectively. To enhance the assessment of the relative significance of these domains, we propose incorporating dynamic weighting into the existing formula, enabling the error term to be adjusted based on the contributions of each domain. Normalizing the weights ensures that the errors from both domains are accurately reflected in the overall performance evaluation of the system. Selecting the maximum weight optimizes the resolution of the most intricate components of the computational process. It can be provided by
30
where the two coefficients represent the weights of the two domains, which can be, respectively, determined by
31
where the first quantity represents the cumulative probability of the failure domain, calculated as the weighted average of the failure probabilities of all samples inside that domain, thus reflecting the associated risk; the second quantity denotes the cumulative probability of the safety region, derived from the weighted average of the safety probabilities of all samples inside that region, thus indicating the reliability of the safety zone. Therefore, the crucial step is to determine the upper limits of the confidence intervals for $N_{sf}$ and $N_{fs}$, respectively. This research tackles the challenge with a strategy that dynamically modifies the confidence interval estimation according to the number of sample points. Taking one of these counts as an example, the suitable approach for estimating the confidence interval is adaptively chosen based on the number of relevant samples. For large samples, a Bayesian (normal) confidence interval approximation is utilized. When the sample size is considerable, the central limit theorem (CLT) [41] implies that the sample mean approximately follows a normal distribution. With an increase in sample size, the posterior distribution more closely resembles a normal distribution. Consequently, for large samples, the normal distribution approximation can be employed to characterize the posterior distribution, and the confidence interval can be ascertained using the quantiles of the standard normal distribution. For this problem, the confidence interval is computed by [42]
$$P_{\mathrm{up}} = \bar{P} + z\,\frac{s}{\sqrt{n}} \tag{32}$$
where $\bar{P}$ represents the overall probability of the security region, $s$ denotes the square root of the sample mean square error, and $z$ specifies the factor for the confidence interval width, with a value of 1.96 in this study. When the sample size is moderate, this work uses the quantile approach to determine the confidence interval: by sorting the probability values of the samples, the location of the upper quantile is determined, leading to the final calculation of the upper limit of the confidence interval. The position of the upper quantile is calculated as [43]
33
where $n$ represents the number of samples and $\alpha$ represents the confidence factor, with a value of 0.95 specified in this study. When the sample size is small, the estimation of confidence intervals predicated on distributional assumptions is likely to be inaccurate. This work therefore includes an additional bias term to mitigate the estimation bias resulting from an insufficient sample size. Conrad et al. [44] conducted analogous research, and the correction can be characterized as
34
where the last term represents an additional bias. This study thus introduces an adaptive selection strategy that dynamically selects an appropriate confidence interval estimation technique based on the sample size while identifying the most suitable treatment for each scenario. The strategy ensures robust reliability across diverse sample sizes and effectively reduces the estimation bias arising from variations in sample size.
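The sketch below illustrates this adaptive selection. The sample-size thresholds and the magnitude of the small-sample bias term are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np

def upper_confidence_bound(p_samples, alpha=0.95, z=1.96, bias=0.01):
    """Upper confidence bound for an array of per-sample probabilities."""
    p_samples = np.asarray(p_samples)
    n = len(p_samples)
    if n >= 200:       # large sample: normal (CLT) approximation, cf. Eq. (32)
        return p_samples.mean() + z * p_samples.std(ddof=1) / np.sqrt(n)
    elif n >= 30:      # moderate sample: empirical upper quantile, cf. Eq. (33)
        return np.quantile(p_samples, alpha)
    else:              # small sample: quantile plus an extra bias term, cf. Eq. (34)
        return np.quantile(p_samples, alpha) + bias
```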
An innovative error cessation criterion utilizing the adaptive confidence interval estimation can be articulated as
$$\hat{\varepsilon}_{\max} \le \varepsilon_{\mathrm{th}} \tag{35}$$
where $\varepsilon_{\mathrm{th}}$ represents the defined stopping threshold; the magnitude of this threshold also influences the precision of the active learning process.
The specific computation process of AMCK-MCS
The computational methodology of the AMCK-MCS method is depicted in Fig. 3. The specific stages are listed as follows:
[See PDF for image]
Fig. 3
The calculation flowchart of the AMCK-MCS method
Step 1: A sample pool $\mathbf{S}$ of $N_{\mathrm{MC}}$ samples is formed via MCS. The design space of the input random variables is constructed by taking 5 times the standard deviation around the mean [45]; hence, the design space is
$$\left[\mu_{X_i} - 5\sigma_{X_i},\; \mu_{X_i} + 5\sigma_{X_i}\right], \quad i = 1, \dots, d \tag{36}$$
Step 2: The preliminary low-fidelity experimental design and high-fidelity experimental design of the multi-fidelity Kriging model are created. The low-fidelity samples are produced using the Latin hypercube sampling (LHS) approach, while the high-fidelity design is selected as a subset of the low-fidelity design. To ensure that the high-fidelity samples are well distributed within the low-fidelity samples, an exchange method based on the Morris–Mitchell criterion is employed.
Step 3: The low-fidelity and high-fidelity designs are evaluated with their respective real models, yielding the corresponding performance function values that constitute the initial training sample set.
Step 4: The multi-fidelity Kriging model is developed.
Step 5: Determine the update point. The current multi-fidelity Kriging model computes the MU learning function value over the candidate set $\mathbf{S}$, identifying $\mathbf{x}_{\mathrm{new}}$ as the location with the greatest uncertainty in the model estimation for updating.
Step 6: Assess the convergence of the existing multi-fidelity Kriging model. If Eq. (35) is satisfied, the active learning process can be terminated, allowing for the execution of step 7; if not, the actual output response value of $\mathbf{x}_{\mathrm{new}}$ must be computed, $\mathbf{x}_{\mathrm{new}}$ is incorporated into the existing training sample set, and the procedure reverts to step 4.
Step 7: The existing convergent multi-fidelity Kriging model is employed to estimate both the positive and negative output response values, and the failure probability is computed.
Step 8: Compute the coefficient of variation of the failure probability estimate $\hat{P}_f$. If $\mathrm{CoV}(\hat{P}_f) \le 5\%$, the estimate $\hat{P}_f$ is deemed acceptable and the final result is obtained; otherwise, step 9 is executed.
Step 9: Expand the MCS sample pool. If $\mathrm{CoV}(\hat{P}_f) > 5\%$, the candidate set $\mathbf{S}$ must be expanded; the procedure then reverts to step 5 and chooses update points from the new sample pool until the iteration termination criterion is met.
The low-fidelity sample size is set to 5 to 10 times the problem dimension to ensure the effective resolution. The high-fidelity model refines the details using the residual corrections, with its sample size set to approximately 20% of the low-fidelity sample size, effectively balancing the computational cost and the accuracy.
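A high-level skeleton of Steps 1-9 is sketched below; the helper routines (`build_cokriging`, `mu_learning_function`, `stopping_error`, `enlarge_pool`) and the surrogate interface are hypothetical placeholders for the procedures described above, not functions provided by the paper.

```python
import numpy as np

def amck_mcs(lf_model, hf_model, sample_pool, X_l, Y_l, X_h, Y_h,
             eps_th=0.02, cov_max=0.05):
    """Skeleton of the AMCK-MCS active learning loop (Steps 4-9)."""
    while True:
        surrogate = build_cokriging(X_l, Y_l, X_h, Y_h)                 # Step 4
        x_new, fidelity = mu_learning_function(surrogate, sample_pool)  # Step 5, Eq. (18)
        if stopping_error(surrogate, sample_pool) <= eps_th:            # Step 6, Eq. (35)
            break
        if fidelity == "low":                                           # enrich the training set
            X_l = np.vstack([X_l, x_new]); Y_l = np.append(Y_l, lf_model(x_new))
        else:
            X_h = np.vstack([X_h, x_new]); Y_h = np.append(Y_h, hf_model(x_new))
    g_hat = surrogate.predict_mean(sample_pool)                         # Step 7
    pf = np.mean(g_hat <= 0)
    cov = np.sqrt((1.0 - pf) / (len(sample_pool) * pf))                 # Step 8, Eq. (24)
    if cov > cov_max:
        sample_pool = enlarge_pool(sample_pool)                         # Step 9 (placeholder)
    return pf
```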
Case studies
Following the introduction of the theoretical framework in the previous section, this section analyzes two numerical functions and an engineering example to assess the effectiveness of the proposed AMCK-MCS method. Three classical methods based on distinct principles in contemporary structural reliability analysis and two multi-fidelity Kriging surrogate model methods are examined for comparison: (1) AK-MCS among the surrogate model methods [21]; (2) the subset simulation method incorporating importance sampling (SS-IS) among the numerical simulation methods [46]; (3) FOSM among the approximation techniques [3]; (4) AMF-MCS + AEFF among the multi-fidelity Kriging surrogate model methods [47]; and (5) MF-BSC-Believer among the multi-fidelity Kriging surrogate model methods [48]. The strong performance of the two multi-fidelity Kriging surrogate model methods has been fully demonstrated in the cited publications. The comparison data in Tables 1 and 3 are taken from the references cited above.
Table 1. The comparison of different methods

| $\varepsilon_{\mathrm{th}}$ | Method | Cost (equivalent HF calls) | Total calls ($N_h + N_l$) | $\hat{P}_f$ | CoV (%) | $\varepsilon_r$ (%) |
|---|---|---|---|---|---|---|
| – | MCS | – | – | 2.221 | – | – |
| – | FOSM | 63 | 63 | 1.350 | – | 39.22 |
| – | SS-IS | 54,000 | 54,000 | 1.779 | 1.43 | 19.90 |
| – | AK-MCS | 94.47 | 94.47 | 2.189 | 4.22 | 1.44 |
| 0.03 | AMK-MCS + AEFF | 17.80 + 0.2*63.57 = 30.57 | 17.80 + 63.57 = 81.37 | 2.305 | 4.86 | 3.78 |
| 0.03 | MF-BSC-Believer | 17.23 + 0.2*47.17 = 26.67 | 17.23 + 47.17 = 64.40 | 2.262 | 4.86 | 1.85 |
| 0.03 | **AMCK-MCS** | **17.53 + 0.2*51.87 = 27.91** | **17.53 + 51.87 = 69.40** | **2.282** | **4.88** | **2.75** |
| 0.02 | AMK-MCS + AEFF | 27.10 + 0.2*61.50 = 39.40 | 27.10 + 61.50 = 88.60 | 2.315 | 4.84 | 4.23 |
| 0.02 | MF-BSC-Believer | 24.00 + 0.2*53.93 = 34.78 | 24.00 + 53.93 = 77.93 | 2.293 | 4.88 | 3.24 |
| 0.02 | **AMCK-MCS** | **18.63 + 0.2*55.90 = 29.81** | **18.63 + 55.90 = 74.53** | **2.263** | **4.89** | **1.89** |
| 0.01 | AMK-MCS + AEFF | 24.33 + 0.2*79.03 = 40.14 | 24.33 + 79.03 = 103.36 | 2.280 | 4.88 | 2.66 |
| 0.01 | MF-BSC-Believer | 26.50 + 0.2*62.20 = 38.94 | 26.50 + 62.20 = 88.70 | 2.275 | 4.86 | 2.43 |
| 0.01 | **AMCK-MCS** | **20.07 + 0.2*56.73 = 31.42** | **20.07 + 56.73 = 76.80** | **2.236** | **4.91** | **0.68** |
The bold in table is mainly used to highlight the key results of the proposed method in this paper, thereby emphasizing its effectiveness. By marking these important data points in bold, we aim to enable the readers to quickly locate the core information of Tables
We analyze their computational cost and accuracy. Additionally, several stopping thresholds $\varepsilon_{\mathrm{th}} \in \{0.03, 0.02, 0.01\}$ are utilized to evaluate the efficiency of the proposed method under various stopping conditions. To allow readers to visually compare the results of the proposed method with those of the other methods, the results of the AMCK-MCS method are presented in bold font in the tables. The computations are carried out in MATLAB 2023a on a platform with a 12th Gen Intel(R) Core(TM) i5-12400F processor and 16 GB RAM. To reduce the impact of randomness on the results, each method is run 30 times on each problem and the results are averaged.
Visual example: the four-branch function
This section adopts a series structure comprising four branches [49], as delineated in the subsequent formula; this example illustrates a visual active learning process. The associated variables in this scenario follow a standard normal distribution. Specifically, the parameter A and the high-fidelity cost are both given the value of 1, while the low-fidelity cost is assigned the value of 0.2 [50]. Table 1 presents the comparative results for Example 1.
37
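Eq. (37) is not reproduced in the extracted text; as a hedged stand-in, the sketch below implements the classical four-branch series system from the AK-MCS literature [49] with k = 7, which yields a failure probability close to the reported MCS value (interpreting it as roughly 2.2 × 10⁻³). The paper's low-fidelity variant and the role of the constant A are not recoverable here and are omitted.

```python
import numpy as np

def four_branch(x1, x2, k=7.0):
    """Classical four-branch series system limit state (assumed form of Eq. (37))."""
    b1 = 3 + 0.1 * (x1 - x2) ** 2 - (x1 + x2) / np.sqrt(2)
    b2 = 3 + 0.1 * (x1 - x2) ** 2 + (x1 + x2) / np.sqrt(2)
    b3 = (x1 - x2) + k / np.sqrt(2)
    b4 = (x2 - x1) + k / np.sqrt(2)
    return np.minimum(np.minimum(b1, b2), np.minimum(b3, b4))

# Crude MCS check with standard normal inputs
rng = np.random.default_rng(0)
x = rng.standard_normal((10**6, 2))
print("Pf ≈", np.mean(four_branch(x[:, 0], x[:, 1]) <= 0))
```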
To effectively elucidate the active learning process of the AMCK-MCS method, this study illustrates several representative states of the four-branch function during stochastic simulations in Fig. 4, with the stopping criterion threshold established at 0.01.
[See PDF for image]
Fig. 4
The result after adding points for Example 1
As depicted in Fig. 4a, the initial dataset comprises 20 sample points, with 6 allocated to the high-fidelity model and the remaining assigned to the low-fidelity model. Figure 4a reveals that the limit state function estimated by the initial multi-fidelity model substantially deviates from the true limit state function, underscoring the inadequacy of the untrained model in capturing its fundamental characteristics. Subsequently, the MU learning function is employed to identify and incorporate new sample points, thereby refining the predictions of the model. As illustrated in Fig. 4b, over ten iterations, three high-fidelity and seven low-fidelity sample points are selected, yielding improved predictions compared to the initial state, though the accuracy remains limited at the four corners of the domain. Notably, all selected training points lie along the limit state boundary, highlighting the robust exploratory capability of the MU learning function. With further iterations, as shown in Fig. 4c, an additional 14 high-fidelity and 42 low-fidelity sample points are incorporated, significantly enhancing the prediction accuracy of the limit state function and accurately capturing its overall trend. Ultimately, Fig. 4d demonstrates that the process concludes after 50 iterations, utilizing 18 high-fidelity and 58 low-fidelity sample points and achieving precise alignment with the true limit state, with newly selected points consistently positioned near the actual limit state boundary. These results affirm the efficacy of the proposed method.
Table 1 summarizes the performance of the proposed method under various stopping criterion thresholds, compared with representative methods based on distinct principles, including two multi-fidelity Kriging surrogate model approaches. As shown in Table 1, the equivalent cost denotes the total cost of incorporating high-fidelity and low-fidelity samples, expressed as the equivalent number of high-fidelity samples. For the four-branch function, the reference failure probability is the MCS estimate listed in Table 1. It is evident that the equivalent costs of the traditional single-fidelity methods—AK-MCS (94.47), SS-IS (54,000), and FOSM (63)—are substantially higher than those of the multi-fidelity methods, resulting in greater computational cost. Regarding estimation precision, the relative errors of FOSM and SS-IS are 39.22% and 19.90%, respectively; thus, these methods are not discussed further. The AK-MCS method achieves a relative error of 1.44%, surpassing most multi-fidelity methods in Table 1 and demonstrating superior precision. With the exception of AMCK-MCS at $\varepsilon_{\mathrm{th}} = 0.01$, the other multi-fidelity methods also attain high precision while offering significant advantages in computational cost. At $\varepsilon_{\mathrm{th}} = 0.03$, the equivalent cost of AMCK-MCS is lower than that of AMK-MCS + AEFF but slightly higher than that of MF-BSC-Believer, with the precision following a similar trend. However, at $\varepsilon_{\mathrm{th}} = 0.02$ and $\varepsilon_{\mathrm{th}} = 0.01$, AMCK-MCS exhibits outstanding performance in both efficiency and precision. Specifically, at $\varepsilon_{\mathrm{th}} = 0.02$, the equivalent cost of AMCK-MCS is reduced by 24.34% and 14.29% compared to AMK-MCS + AEFF and MF-BSC-Believer, respectively, with a relative error of 1.89%. At $\varepsilon_{\mathrm{th}} = 0.01$, the equivalent cost decreases by 21.72% and 19.31%, respectively, and the relative error improves to 0.68%. Furthermore, as the stopping criterion becomes more stringent, higher precision is achieved at the cost of increased computation. In summary, the AMCK-MCS method significantly reduces the computational cost while maintaining high precision.
To rigorously validate the robustness of the proposed method, Fig. 5 presents boxplots of the equivalent cost and $\hat{\varepsilon}_{\max}$ derived from 30 repeated experiments. Within the boxplots, the solid and dashed lines denote the median and mean values, respectively, obtained from the randomized trials. The cost boxplots corresponding to the different methods are depicted in Fig. 5a, which illustrates that the cost of AMCK-MCS is markedly smaller than that of the other methods presented. The FOSM approach, derived from the moment method, has a fixed cost. The majority of trials utilizing the AMCK-MCS approach with varying stopping thresholds required around 27.91, 29.81, and 31.42 equivalent high-fidelity samples, respectively, which are lower than those of AK-MCS. Furthermore, the spread of the AMCK-MCS results is smaller than that of AK-MCS, signifying that the present method exhibits exceptional consistency across the independent tests. Table 1 and Fig. 5b reveal that the proposed AMCK-MCS method consistently converges to the predefined target value across the various error thresholds. The proposed method exhibits limited robustness under the condition $\varepsilon_{\mathrm{th}} = 0.03$, as evidenced by the boxplot displaying the widest box range across the three evaluated cases. However, as the threshold becomes more stringent, the overall robustness of the proposed method is significantly enhanced. Notably, the $\hat{\varepsilon}_{\max}$ values of the AMCK-MCS method consistently fall below the specified thresholds. The results demonstrate that the AMCK-MCS method effectively fulfills the predefined requirements while exhibiting outstanding performance.
[See PDF for image]
Fig. 5
The boxplots of the total cost and the estimated maximum relative error
Nonlinear oscillator problem
The nonlinear oscillator function [51] characterizing an undamped nonlinear single-degree-of-freedom system is presented by
38
Table 2 presents detailed specifications of the six-dimensional design variables. Figure 6 illustrates the single-degree-of-freedom system and the applied rectangular load. Herein, the high-fidelity and low-fidelity costs are assigned values of 1 and 0.1, respectively. Drawing on prior experience, a cost-effective low-fidelity model can be readily specified, despite the inherent complexity of the problem [52]. In this case, the initial sample size is set at 40 points, with 6 allocated to the high-fidelity model.
Table 2. The parameters of the nonlinear oscillator
Design variables | Mean | Standard deviation | Distribution |
|---|---|---|---|
1 | 0.05 | Normal | |
1 | 0.1 | Normal | |
0.1 | 0.01 | Normal | |
0.5 | 0.05 | Normal | |
1 | 0.1 | Normal | |
1 | 0.1 | Normal |
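As a reference implementation, the sketch below uses the classical undamped nonlinear oscillator limit state widely used in the reliability literature, whose means and standard deviations match Table 2. The exact form of Eq. (38), the mapping of the table rows to the variables m, c1, c2, r, F1, t1, and the low-fidelity variant are assumptions rather than content taken from the paper.

```python
import numpy as np

def oscillator_lsf(m, c1, c2, r, F1, t1):
    """Assumed form of Eq. (38): g = 3*r - |2*F1/(m*w0^2) * sin(w0*t1/2)|."""
    w0 = np.sqrt((c1 + c2) / m)
    return 3.0 * r - np.abs(2.0 * F1 / (m * w0**2) * np.sin(w0 * t1 / 2.0))

# Sampling the six normal variables with the means/standard deviations of Table 2
rng = np.random.default_rng(0)
n = 10**6
m  = rng.normal(1.0, 0.05, n)
c1 = rng.normal(1.0, 0.10, n)
c2 = rng.normal(0.1, 0.01, n)
r  = rng.normal(0.5, 0.05, n)
F1 = rng.normal(1.0, 0.10, n)
t1 = rng.normal(1.0, 0.10, n)
print("Pf ≈", np.mean(oscillator_lsf(m, c1, c2, r, F1, t1) <= 0))
```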
[See PDF for image]
Fig. 6
The schematic diagram plot of the single-degree system
Table 3 summarizes the performance of the proposed method under various stopping criterion thresholds, compared with representative methods based on distinct principles, including two multi-fidelity Kriging surrogate model approaches. For the nonlinear oscillator problem, the reference failure probability is the MCS estimate listed in Table 3. The total cost, representing the equivalent number of high-fidelity samples obtained by combining high-fidelity and low-fidelity samples, is significantly higher for the traditional single-fidelity methods: AK-MCS (88.53), SS-IS (54,000), and FOSM (112). These values exceed those of the multi-fidelity methods, resulting in greater computational cost. Regarding estimation precision, the FOSM method exhibits a relative error of 33.53%, indicating poor accuracy; thus, it is not considered further. The SS-IS method achieves a low relative error of 0.37%, reflecting high precision, but its computational cost (54,000 equivalent samples) undermines its competitiveness. The AK-MCS method, with a cost of 88.53 and a relative error of 3.87%, presents a competitive alternative to the proposed AMCK-MCS method. However, AK-MCS incurs higher computational costs, and as the convergence threshold tightens, the precision of the multi-fidelity methods surpasses that of AK-MCS. A comparative analysis of the multi-fidelity methods demonstrates that, at a stopping criterion threshold of $\varepsilon_{\mathrm{th}} = 0.03$, the computational cost of the AMCK-MCS method is lower than those of AMK-MCS + AEFF and MF-BSC-Believer, with its relative error positioned between the two, exhibiting higher precision than MF-BSC-Believer but lower precision than AMK-MCS + AEFF. At $\varepsilon_{\mathrm{th}} = 0.02$, the cost of AMCK-MCS surpasses that of MF-BSC-Believer yet remains below that of AMK-MCS + AEFF, with a relative error that outperforms MF-BSC-Believer but falls short of AMK-MCS + AEFF. At $\varepsilon_{\mathrm{th}} = 0.01$, the performance of the AMCK-MCS method is very similar to that observed at $\varepsilon_{\mathrm{th}} = 0.02$. As the required accuracy improves, the overall cost increases. In summary, the AMCK-MCS method demonstrates robust performance in this case study, effectively balancing precision and computational efficiency.
Table 3. The comparison of different methods

| $\varepsilon_{\mathrm{th}}$ | Method | Cost (equivalent HF calls) | Total calls ($N_h + N_l$) | $\hat{P}_f$ | CoV (%) | $\varepsilon_r$ (%) |
|---|---|---|---|---|---|---|
| – | MCS | – | – | 2.431 | – | – |
| – | FOSM | 112 | 112 | 1.616 | – | 33.53 |
| – | SS-IS | 54,000 | 54,000 | 2.422 | 0.76 | 0.37 |
| – | AK-MCS | 88.53 | 88.53 | 2.337 | 4.08 | 3.87 |
| 0.03 | AMK-MCS + AEFF | 23.30 + 0.1*85.73 = 31.87 | 23.30 + 85.73 = 109.03 | 2.509 | 4.85 | 3.21 |
| 0.03 | MF-BSC-Believer | 20.33 + 0.1*69.30 = 27.26 | 20.33 + 69.30 = 89.63 | 2.532 | 4.82 | 4.15 |
| 0.03 | **AMCK-MCS** | **16.53 + 0.1*74.3 = 23.96** | **16.53 + 74.3 = 90.83** | **2.526** | **4.57** | **3.91** |
| 0.02 | AMK-MCS + AEFF | 30.37 + 0.1*93.80 = 39.75 | 30.37 + 93.80 = 124.17 | 2.467 | 4.87 | 1.48 |
| 0.02 | MF-BSC-Believer | 24.23 + 0.1*77.27 = 31.96 | 24.23 + 77.27 = 101.50 | 2.508 | 4.87 | 3.17 |
| 0.02 | **AMCK-MCS** | **23.90 + 0.1*88.57 = 32.76** | **23.90 + 88.57 = 112.47** | **2.496** | **4.89** | **2.67** |
| 0.01 | AMK-MCS + AEFF | 41.00 + 0.1*112.70 = 52.27 | 41.00 + 112.70 = 153.70 | 2.531 | 4.86 | 4.11 |
| 0.01 | MF-BSC-Believer | 34.93 + 0.1*93.87 = 44.32 | 34.93 + 93.87 = 128.80 | 2.532 | 4.85 | 4.15 |
| 0.01 | **AMCK-MCS** | **36.60 + 0.1*109.07 = 47.51** | **36.60 + 109.07 = 145.67** | **2.480** | **4.72** | **2.02** |
The bold in table is mainly used to highlight the key results of the proposed method in this paper, thereby emphasizing its effectiveness. By marking these important data points in bold, we aim to enable the readers to quickly locate the core information of Tables
Figure 7 illustrates the superiority of the AMCK-MCS method by demonstrating the influence of iterative sample point additions on the relative error throughout the active learning process. Figure 7 depicts the historical sample point additions for the various methods, with the green, magenta, red, and blue lines corresponding to the AK-MCS method and the AMCK-MCS method with $\varepsilon_{\mathrm{th}}$ of 0.03, 0.02, and 0.01, respectively. In this stochastic simulation, the AK-MCS method achieves the results presented in Fig. 7 by incorporating 73 high-fidelity sample points. The relative error curves of the AMCK-MCS method, corresponding to the various stopping thresholds, exhibit fluctuations throughout the learning process; however, overall, their relative errors diminish. For the AMCK-MCS method with $\varepsilon_{\mathrm{th}} = 0.03$, the learning phase incorporates 11 high-fidelity samples and 40 low-fidelity samples, equivalent to 15 high-fidelity samples. For the AMCK-MCS method with $\varepsilon_{\mathrm{th}} = 0.02$, the active learning phase incorporates 18 high-fidelity samples and 64 low-fidelity samples, equivalent to 22.6 high-fidelity samples. For the AMCK-MCS method with $\varepsilon_{\mathrm{th}} = 0.01$, the active learning phase incorporates 36 high-fidelity samples and 91 low-fidelity samples, equivalent to 45.1 high-fidelity samples. When accounting for the computational cost in the active learning process, the AMCK-MCS method with $\varepsilon_{\mathrm{th}}$ of 0.03 and 0.02 surpasses the AK-MCS method in computational efficiency while maintaining comparable accuracy. When employing the AMCK-MCS method with $\varepsilon_{\mathrm{th}} = 0.01$, the computational cost rises substantially to meet the heightened accuracy requirements. Nevertheless, in terms of computational efficiency, the AMCK-MCS method at this threshold still demonstrates a marked advantage over the AK-MCS method.
[See PDF for image]
Fig. 7
The iteration process of AK-MCS and AMCK-MCS
To rigorously validate the robustness of the proposed method, Fig. 8 presents boxplots of the total cost and $\hat{\varepsilon}_{\max}$ derived from 30 repeated experiments. Within the boxplots, the solid and dashed lines denote the median and mean values, respectively, obtained from the randomized trials. The cost boxplot of the pertinent approaches is displayed in Fig. 8a, which illustrates that the cost of the proposed AMCK-MCS is markedly smaller than that of the other approaches presented. The FOSM approach, derived from the moment method, has a fixed cost. The majority of trials utilizing the AMCK-MCS approach with varying stopping thresholds incur costs of around 23.96, 32.76, and 47.51 equivalent high-fidelity samples, respectively, which are lower than those associated with AK-MCS. Furthermore, although the spread of the AMCK-MCS results at $\varepsilon_{\mathrm{th}} = 0.01$ is broader, the spreads at the thresholds of 0.03 and 0.02 are narrower than those of the AK-MCS method, underscoring the consistency of the AMCK-MCS method in computational efficiency across the independent trials. Combined with the analysis of Fig. 8b and Table 3, the proposed AMCK-MCS method converges uniformly to the predefined target value under the different error thresholds. Under the condition $\varepsilon_{\mathrm{th}} = 0.01$, the proposed method exhibits optimal robustness. Notably, the $\hat{\varepsilon}_{\max}$ metric of the AMCK-MCS method consistently remains below the specified threshold. These results confirm that the AMCK-MCS method effectively fulfills the predefined requirements while demonstrating superior performance.
[See PDF for image]
Fig. 8
The boxplots of the total cost and the estimated maximum relative error
Car frontal collision problem
Offset collisions constitute a significant percentage of collision accidents and are prone to causing casualties. In offset collisions, the front cabin is compressed and distorted, resulting in excessive backward extrusion and intrusion into the occupant compartment at the rear of the firewall [53]. The primary force-absorbing components in this scenario are the front energy-absorbing box and the front longitudinal beam. For a typical passenger vehicle, the China New Car Assessment Program (C-NCAP) 2018 Edition test protocol specifies a frontal collision condition involving a 40% offset rigid barrier impact at a simulated speed of 64 km/h. Figure 9 depicts the finite element model [54], essential for the study of vehicle deformation.
[See PDF for image]
Fig. 9
The finite element model of offset collision
The design variables are the thickness of the inner plate of the front longitudinal beam, the thickness of the outer plate of the front longitudinal beam, the thickness of the inner plate of the front energy-absorbing box, and the thickness of the outer plate of the front energy-absorbing box. Table 4 presents the variable parameters and their corresponding distribution types. The response surface function for the maximum firewall intrusion is given as follows:
39
Table 4. The parameters of car frontal collision problem
Design variables | Meaning | Mean | Standard deviation | Distribution |
|---|---|---|---|---|
The thickness of the inner plate of front longitudinal beam | 1 | 1.5 | Normal | |
The thickness of the outer plate of front longitudinal beam | 1.5 | 2 | Normal | |
The thickness of the inner plate of front energy-absorbing box | 1.5 | 0.05 | Normal | |
The thickness of the outer plate of front energy-absorbing box | 1 | 0.05 | Normal |
In this case, the high-fidelity and low-fidelity costs are allocated values of 1 and 0.1, respectively, with an initial sample size of 40, of which 6 points are designated for the high-fidelity model. Table 5 presents the comparison results for the vehicle frontal collision problem.
Table 5. The comparison of different methods

| $\varepsilon_{\mathrm{th}}$ | Method | Cost (equivalent HF calls) | Total calls ($N_h + N_l$) | $\hat{P}_f$ | CoV (%) | $\varepsilon_r$ (%) |
|---|---|---|---|---|---|---|
| – | MCS | – | – | 1.2906 | – | – |
| – | FOSM | 90 | 90 | 1.0083 | – | 21.87 |
| – | SS-IS | 54,000 | 54,000 | 1.2833 | 0.1434 | 0.57 |
| – | AK-MCS | 41.07 | 41.07 | 1.2741 | 4.7917 | 1.28 |
| 0.03 | **AMCK-MCS** | **10.63 + 0.1*44.40 = 15.07** | **10.63 + 44.40 = 55.03** | **1.3205** | **4.5664** | **2.32** |
| 0.02 | **AMCK-MCS** | **11.33 + 0.1*46.27 = 15.96** | **11.33 + 46.27 = 57.60** | **1.3096** | **4.5641** | **1.47** |
| 0.01 | **AMCK-MCS** | **12.96 + 0.1*48.27 = 17.79** | **12.96 + 48.27 = 61.23** | **1.2921** | **4.4602** | **0.12** |
The bold in table is mainly used to highlight the key results of the proposed method in this paper, thereby emphasizing its effectiveness. By marking these important data points in bold, we aim to enable the readers to quickly locate the core information of Tables
Table 5 demonstrates that the reference failure probability for the vehicle frontal collision problem, obtained from Monte Carlo sampling, is the MCS value listed in Table 5. The FOSM method exhibits a relative error of 21.87%, rendering it impractical as a reliable reference. In contrast, the SS-IS approach achieves exceptional accuracy, with a relative error of only 0.57%; however, its substantial computational cost significantly diminishes its competitiveness. In this case, the AK-MCS method exhibits notable performance, achieving a relative error of 1.28% at a computational cost of 41.07 equivalent high-fidelity samples. However, the AMCK-MCS method demonstrates superior efficiency. At $\varepsilon_{\mathrm{th}} = 0.03$, it attains a relative error of 2.32%, incorporating 4.63 high-fidelity samples and 4.4 low-fidelity samples during the active learning phase, equivalent to 5.07 high-fidelity samples. At $\varepsilon_{\mathrm{th}} = 0.02$, the AMCK-MCS method achieves a relative error of 1.47%, closely matching the accuracy of the AK-MCS method; in the active learning phase, it incorporates 5.33 high-fidelity samples and 6.27 low-fidelity samples, equivalent to 5.957 high-fidelity samples. At $\varepsilon_{\mathrm{th}} = 0.01$, the AMCK-MCS method attains a relative error of 0.12%, outperforming the AK-MCS method; during this phase, it employs 6.96 high-fidelity samples and 8.27 low-fidelity samples, equivalent to 7.787 high-fidelity samples. Table 5 demonstrates that, at $\varepsilon_{\mathrm{th}} = 0.01$, the AMCK-MCS method surpasses the computational accuracy of the other methods while maintaining minimal computational cost.
Figure 10 depicts the historical addition points for the AK-MCS method and the AMCK-MCS method at the error thresholds of 0.03, 0.02, and 0.01, represented by red, purple, green, and blue lines, respectively. It further illustrates the relationship between the relative error of these methods and the total number of added points. In this stochastic analysis, the AK-MCS method achieves the results presented in Fig. 10 by employing 26 high-fidelity sample points. The relative error curves of the AMCK-MCS method, evaluated across the various error thresholds, display fluctuations during the active learning phase; however, the overall trend reveals a consistent reduction in their relative errors. At $\varepsilon_{\mathrm{th}} = 0.03$, the AMCK-MCS method employs 4 high-fidelity samples and 6 low-fidelity samples during the active learning phase, equivalent to 4.6 high-fidelity samples. At $\varepsilon_{\mathrm{th}} = 0.02$, it incorporates 6 high-fidelity samples and 6 low-fidelity samples, equivalent to 6.6 high-fidelity samples. At $\varepsilon_{\mathrm{th}} = 0.01$, it utilizes 7 high-fidelity samples and 9 low-fidelity samples, equivalent to 7.9 high-fidelity samples. In contrast, the AK-MCS method requires a larger number of samples to complete its iterations, whereas the AMCK-MCS method achieves comparable accuracy with fewer samples. In terms of computational cost, the AMCK-MCS method demonstrates significant superiority over the AK-MCS method.
[See PDF for image]
Fig. 10
The relative error of AK-MCS and AMCK-MCS
To rigorously validate the robustness of the proposed method, Fig. 11 presents boxplots of the total cost and $\hat{\varepsilon}_{\max}$ derived from 30 repeated experiments. Within the boxplots, the solid and dashed lines denote the median and mean values, respectively, obtained from the randomized trials. Figure 11a illustrates that the cost of the proposed AMCK-MCS is markedly lower than that of the other approaches presented. The FOSM approach, derived from the moment method, has a fixed cost. Under the different stopping thresholds, most experiments using the AMCK-MCS method require approximately 15.07, 15.96, and 17.79 equivalent high-fidelity samples, respectively, all of which are lower than those needed for the AK-MCS method. Furthermore, the cost box ranges of both the AMCK-MCS and AK-MCS methods are notably narrow, indicating that the proposed AMCK-MCS method demonstrates exceptional consistency across the independent trials. In conclusion, the proposed technique exhibits commendable stability in its computational cost. As the stopping threshold becomes stricter, the learning process becomes longer. Combining Table 5 and Fig. 11b, it can be seen that the proposed method converges well to the preset target value under the different thresholds. In this example, Fig. 11b reveals that, with the error threshold set at 0.03, certain $\hat{\varepsilon}_{\max}$ values converge to below 0.01, yielding a convergence value notably lower than the predefined target; importantly, this does not affect the final outcome. Across the three evaluated cases, the $\hat{\varepsilon}_{\max}$ metric consistently meets or falls below the specified thresholds. These results demonstrate that the AMCK-MCS method effectively fulfills the predefined requirements while exhibiting outstanding performance.
[See PDF for image]
Fig. 11
The boxplots of the total cost and the estimated maximum relative error
Conclusion
This study introduces an advanced reliability analysis method, termed Adaptive Multi-fidelity Co-Kriging Monte Carlo Simulation (AMCK-MCS), which effectively integrates high-fidelity and low-fidelity models. This approach enables the Kriging surrogate model to deliver superior fitting accuracy and enhanced computational efficiency, and to estimate the failure probability precisely at a reduced computational cost. In practical engineering contexts, AMCK-MCS facilitates efficient reliability analysis in domains such as automotive safety design and aerospace structural optimization, achieving robust performance within constrained computational budgets. The key conclusions are summarized as follows:
The MU learning function weighs the evaluation cost, the cross-correlation between fidelities, and the sampling-point density of the low- and high-fidelity models, dynamically prioritizing update points and identifying the samples with the greatest uncertainty; a schematic sketch of this idea is provided at the end of this section. The learning function markedly reduces the number of limit state function calls while improving prediction accuracy, making it well suited to resource-constrained practical engineering.
The proposed stopping criterion leverages adaptive confidence interval estimation to dynamically select the most appropriate estimation method for the available sample size (see the second sketch at the end of this section). This ensures robust termination of the learning process, mitigates the risk of premature or delayed convergence in active learning, and minimizes computational cost without compromising accuracy.
The results from three case studies demonstrate that, compared with existing methods, AMCK-MCS not only achieves superior computational accuracy but also substantially reduces computational cost, underscoring its considerable potential for practical engineering applications.
In this study, the application of AMCK-MCS is restricted to low- and medium-dimensional problems, and its applicability to high-dimensional cases remains limited. Future research will further exploit the low computational cost of the multi-fidelity Kriging model to alleviate the curse of dimensionality and to develop efficient and accurate high-dimensional reliability analysis methods.
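To make the first two conclusions concrete, two minimal Python sketches follow. They are illustrative reconstructions under stated assumptions, not the exact formulas used in AMCK-MCS. The first sketch shows an MU-style acquisition step that scores each candidate/fidelity pair from the surrogate's prediction uncertainty, the distance to existing training points, the estimated LF/HF cross-correlation, and the evaluation cost; the surrogate models are assumed to expose a scikit-learn-style predict(X, return_std=True) interface, and mu_style_score, select_update_point, and the example cost values are hypothetical.

```python
# Illustrative MU-style acquisition (not the paper's exact MU formula):
# score each candidate/fidelity pair by prediction uncertainty, distance to
# the nearest training point, LF/HF cross-correlation, and evaluation cost.
import numpy as np

def mu_style_score(sigma, dist_to_nearest, cost, correlation=1.0):
    """Hypothetical score: larger means more worth evaluating."""
    return correlation * sigma * dist_to_nearest / cost

def select_update_point(candidates, surrogates, train_X, costs, rho_lf_hf):
    """Pick the (candidate index, fidelity level) with the highest score.

    candidates -- (n, d) array of Monte Carlo candidate points
    surrogates -- [lf_model, hf_model], each with predict(X, return_std=True)
    train_X    -- [lf_training_points, hf_training_points]
    costs      -- [lf_cost, hf_cost], e.g. [0.1, 1.0]
    rho_lf_hf  -- estimated cross-correlation between the two fidelities
    """
    best_idx, best_level, best_score = None, None, -np.inf
    for level, (model, cost) in enumerate(zip(surrogates, costs)):
        _, sigma = model.predict(candidates, return_std=True)
        corr = rho_lf_hf if level == 0 else 1.0  # level 0 = low fidelity
        for i, x in enumerate(candidates):
            d = np.min(np.linalg.norm(train_X[level] - x, axis=1))
            score = mu_style_score(sigma[i], d, cost, corr)
            if score > best_score:
                best_idx, best_level, best_score = i, level, score
    return best_idx, best_level
```

The second sketch illustrates only the generic confidence-interval ingredient of a relative-error stopping rule: the Kriging prediction interval at each Monte Carlo point is propagated to lower and upper failure-probability estimates, and learning stops once the implied relative error drops below the threshold. The uncertainty weighting and the sample-size-adaptive interval construction described above are omitted here; pf_bounds_from_surrogate and should_stop are hypothetical helper names.

```python
# Illustrative CI-based relative-error stopping check (simplified; the
# paper's criterion further weights the estimate by surrogate uncertainty).
import numpy as np

def pf_bounds_from_surrogate(mu, sigma, k=1.96):
    """Lower/point/upper failure-probability estimates over the MC population.

    mu, sigma -- Kriging posterior mean and standard deviation at the
                 Monte Carlo points; failure is taken as g(x) <= 0.
    """
    pf_hat = np.mean(mu <= 0.0)
    pf_low = np.mean(mu + k * sigma <= 0.0)  # fails even at the upper bound
    pf_up = np.mean(mu - k * sigma <= 0.0)   # may fail at the lower bound
    return pf_low, pf_hat, pf_up

def should_stop(mu, sigma, threshold, k=1.96):
    """Stop when the estimated relative error of P_f drops below threshold."""
    pf_low, pf_hat, pf_up = pf_bounds_from_surrogate(mu, sigma, k)
    rel_err = max(pf_hat - pf_low, pf_up - pf_hat) / max(pf_hat, 1e-300)
    return rel_err <= threshold
```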
Acknowledgements
This work is supported by the National Natural Science Foundation of China (11202116, 52475267).
Abbreviations
AMCK-MCS: Adaptive multi-fidelity Co-Kriging Monte Carlo simulation
FOSM: First-order second-moment method
AFOSM: Adaptive first-order second-moment method
MPP: Most probable failure point
MCS: Monte Carlo simulation
SS: Subset simulation
IS: Importance sampling
LS: Line sampling
RSM: Response surface method
RBF: Radial basis function
ANN: Artificial neural network
SVM: Support vector machine
PCE: Polynomial chaos expansion
EFF: Expected feasibility function
RLCB: Reliability-based lower confidence bound
MU: Maximizing uncertainty
CDF: Cumulative distribution function
PDF: Probability density function
LSF: Limit state function
CLT: Central limit theorem
LF: Low fidelity
HF: High fidelity
MF: Multi fidelity
C-NCAP: China new car assessment program
Technical Editor: Adriano Todorovic Fabro.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Wang, L; Liu, J; Zhou, Z et al. A two-stage dimension-reduced dynamic reliability evaluation (TD-DRE) method for vibration control structures based on interval collocation and narrow bounds theories. ISA Trans; 2023; 136, pp. 622-639.
2. Rackwitz, R. Reliability analysis–a review and some perspectives. Struct Saf; 2001; 23,
3. Hasofer, AM. An exact and invariant first order reliability format. J Eng Mech Div Proc ASCE; 1974; 100,
4. Du, X; Chen, W. A most probable point-based method for efficient uncertainty analysis. J Des Manuf Autom; 2001; 4,
5. Zhu, SP; Keshtegar, B; Chakraborty, S et al. Novel probabilistic model for searching most probable point in structural reliability analysis. Comput Methods Appl Mech Eng; 2020; 366, 113027.
6. Harrison, RL. Introduction to Monte Carlo simulation. AIP Conf Proceed NIH Public Access; 2010; 1204, 17.
7. Lee, D; Wang, Z; Song, J. Efficient seismic reliability and fragility analysis of lifeline networks using subset simulation. Reliab Eng Syst Saf; 2025; 260, 110947.
8. Zhang, W; Guan, Y; Wang, Z et al. A novel active learning Kriging based on improved Metropolis-Hastings and importance sampling for small failure probabilities. Comput Methods Appl Mech Eng; 2025; 435, 117658.
9. Papaioannou, I; Straub, D. Combination line sampling for structural reliability analysis. Struct Saf; 2021; 88, 102025.
10. Bahar, D; Dvivedi, A; Kumar, P. Improvement in performance during micromachining of borosilicate glass with temperature-stirring-assisted ECDM. J Braz Soc Mech Sci Eng; 2024; 46,
11. Pourfattah, F; Kheryrabadi, MF; Wang, LP. Coupling CFD and RSM to optimize the flow and heat transfer performance of a manifold microchannel heat sink. J Braz Soc Mech Sci Eng; 2023; 45,
12. Li, SH; Lv, SN; Gao, Y et al. RBF network dynamic sliding mode robust control for overhead cranes with uncertain parameters. J Braz Soc Mech Sci Eng; 2024; 46,
13. Bahar, D; Dvivedi, A; Kumar, P. Optimization of rotary-magnet assisted ECSM on borosilicate-glass using machine learning. Mater Manuf Process; 2024; 39,
14. Dvivedi, A; Kumar, P. Optimizing the quality characteristics of glass composite vias for RF-MEMS using central composite design, metaheuristics, and bayesian regularization-based machine learning. Measurement; 2025; 243, 116323.
15. Li, C; Wen, JR; Wan, J et al. Adaptive directed support vector machine method for the reliability evaluation of aeroengine structure. Reliab Eng Syst Saf; 2024; 246, 110064.
16. Zheng, X; Yao, W; Gong, Z et al. Learnable quantile polynomial chaos expansion: an uncertainty quantification method for interval reliability analysis. Reliab Eng Syst Saf; 2024; 245, 110036.
17. Zhou, C; Xiao, NC; Zuo, MJ et al. An improved Kriging-based approach for system reliability analysis with multiple failure modes. Eng Comput; 2022; 38,
18. Yang, M; Tian, Y; Guo, M et al. Optimized design of aero-engine high temperature rise combustion chamber based on" kriging-NSGA-II". J Braz Soc Mech Sci Eng; 2023; 45,
19. Wen, Z; Pei, H; Liu, H et al. A sequential Kriging reliability analysis method with characteristics of adaptive sampling regions and parallelizability. Reliab Eng Syst Saf; 2016; 153, pp. 170-179.
20. Bichon, BJ; Eldred, MS; Swiler, LP et al. Efficient global reliability analysis for nonlinear implicit performance functions. AIAA J; 2008; 46,
21. Echard, B; Gayton, N; Lemaire, M. AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation. Struct Saf; 2011; 33,
22. Lv, Z; Lu, Z; Wang, P. A new learning function for Kriging and its applications to solve reliability problems in engineering. Comput Math Appl; 2015; 70,
23. Yi, J; Zhou, Q; Cheng, Y et al. Efficient adaptive kriging-based reliability analysis combining new learning function and error-based stopping criterion. Struct Multidiscip Optim; 2020; 62, pp. 2517-2536.
24. Sun, Z; Wang, J; Li, R et al. LIF: a new kriging based learning function and its application to structural reliability analysis. Reliab Eng Syst Saf; 2017; 157, pp. 152-165.
25. Jian, W; Zhili, S; Qiang, Y et al. Two accuracy measures of the Kriging model for structural reliability analysis. Reliab Eng Syst Saf; 2017; 167, pp. 494-505.
26. Zhou, Q; Wu, Y; Guo, Z et al. A generalized hierarchical co-Kriging model for multi-fidelity data fusion. Struct Multidiscip Optim; 2020; 62, pp. 1885-1904.
27. Giselle Fernández-Godino, M; Park, C; Kim, NH et al. Issues in deciding whether to use multi-fidelity surrogates. AIAA J; 2019; 57,
28. Han, ZH; Görtz, S. Hierarchical kriging model for variable-fidelity surrogate modeling. AIAA J; 2012; 50,
29. Jiang, P; Cheng, J; Zhou, Q et al. Variable-fidelity lower confidence bounding approach for engineering optimization problems with expensive simulations. AIAA J; 2019; 57,
30. Wright J, Ma Y (2022) High-dimensional data analysis with low-dimensional models: Principles, computation, and applications. Cambridge University Press.
31. Kennedy, MC; O'Hagan, A. Predicting the output from a complex computer code when fast approximations are available. Biometrika; 2000; 87,
32. Forrester, AIJ; Sóbester, A; Keane, AJ. Multi-fidelity optimization via surrogate modelling. Proc R Soc Lond A Math Phys Eng Sci; 2007; 463,
33. Liu, Y; As’arry, A; Hassan, MK et al. Review of the grey wolf optimization algorithm: variants and applications. Neural Comput Appl; 2024; 36,
34. Tahmasebi, SP; Sahimi, M. Cross-correlation function for accurate reconstruction of heterogeneous media. Phys Rev Lett; 2013; 110,
35. Kambampati, S; Chung, H; Kim, HA. A discrete adjoint based level set topology optimization method for stress constraints. Comput Methods Appl Mech Eng; 2021; 377, 113563.
36. Ang, GL; Ang, AHS; Tang, WH. Optimal importance-sampling density estimator. J Eng Mech; 1992; 118,
37. Zhu, X; Lu, Z; Yun, W. An efficient method for estimating failure probability of the structure with multiple implicit failure domains by combining meta-IS with IS-AK. Reliab Eng Syst Saf; 2020; 193, 106644.
38. Huang, X; Chen, J; Zhu, H. Assessing small failure probabilities by AK–SS: an active learning method combining Kriging and Subset Simulation. Struct Saf; 2016; 59, pp. 86-95.
39. Teichert, GH; Natarajan, AR; Van der Ven, A et al. Machine learning materials physics: integrable deep neural networks enable scale bridging by learning free energy functions. Comput Methods Appl Mech Eng; 2019; 353, pp. 201-216.
40. Wang, Z; Shafieezadeh, A. ESC: an efficient error-based stopping criterion for kriging-based reliability analysis methods. Struct Multidiscip Optim; 2019; 59, pp. 1621-1637.
41. Kwak, SG; Kim, JH. Central limit theorem: the cornerstone of modern statistics. Korean J Anesthesiol; 2017; 70,
42. Hazra, A. Using the confidence interval confidently. J Thorac Dis; 2017; 9,
43. Ialongo, C. Confidence interval for quantiles and percentiles. Biochem Med (Zagreb); 2019; 29,
44. Conrad, J; Botner, O; Hallgren, A et al. Including systematic uncertainties in confidence interval construction for Poisson statistics. Phys Rev D; 2003; 67,
45. Garud, SS; Karimi, IA; Kraft, M. Design of computer experiments: a review. Comput Chem Eng; 2017; 106, pp. 71-95.
46. Song, S; Lu, Z; Qiao, H. Subset simulation for structural reliability sensitivity analysis. Reliab Eng Syst Saf; 2009; 94,
47. Yi, J; Wu, F; Zhou, Q et al. An active-learning method based on multi-fidelity Kriging model for structural reliability analysis. Struct Multidiscip Optim; 2021; 63, pp. 173-195.
48. Yi, J; Cheng, Y; Liu, J. A novel fidelity selection strategy-guided multifidelity kriging algorithm for structural reliability analysis. Reliab Eng Syst Saf; 2022; 219, 108247.
49. Zemed N, Cherradi T, Bouyahyaoui A, et al. (2025) Enhanced active learning for structural reliability analysis: an ensemble SVR Metamodel-Monte Carlo approach. Quality and Reliability Engineering International
50. Marques A, Lam R, Willcox K (2018) Contour location via entropy reduction leveraging multiple information sources. Advances in neural information processing systems. 31
51. Zhao, Z; Lu, ZH; Zhao, YG. P-AK-MCS: parallel AK-MCS method for structural reliability analysis. Probabilistic Eng Mech; 2024; 75, 103573.
52. Toal, DJJ. Some considerations regarding the use of multi-fidelity Kriging in the construction of surrogate models. Struct Multidiscip Optim; 2015; 51, pp. 1223-1245.
53. Ma, C; Zhuang, Z; Xing, B et al. Deep learning-based inverse prediction of side pole collision conditions of electric vehicle. eTransportation; 2025; [DOI: https://dx.doi.org/10.1016/j.etran.2025.100421]
54. Yu, L; Cui, Q; Liu, Y et al. Reliability analysis of automobile offset collision front longitudinal beam structure using copula function model. J Jiamusi Univ (Natural Science Edition); 2022; 40,
© The Author(s), under exclusive licence to The Brazilian Society of Mechanical Sciences and Engineering 2025.