This work investigates how non-intrusive local/global coupling strategies can be applied in the context of robust design. The objective is to propagate uncertainties from the local to the global scale using non-intrusive techniques, in order to estimate how local variabilities impact global quantities. Several uncertainty propagation methods, including perturbation techniques, polynomial chaos expansions, and Monte-Carlo simulations, are tested and compared on academic examples. Depending on the level of uncertainty, perturbation methods and non-intrusive polynomial chaos expansions appear particularly effective. A key issue in extending the approach to more complex and computationally intensive problems is the ability to exploit non-converged solutions to approach robust design configurations. A preliminary step in this direction is proposed at the end of the paper and provides a good basis for future work on more complex and realistic problems with associated challenges.
Introduction
Industrial complex structures are generally represented by a system of connected models rather than a single unified model, with communication between the different scales needed to ensure consistency and accuracy in the simulation process. In this context, model analysis based on the level of maturity of the overall project is generally conducted in a sequential and partitioned manner (Fig. 1). The global design is first defined, then the second-level architecture is specified, and so on. However, this top-down approach can easily lead to overly conservative or, even worse, non-conservative designs. Indeed, the potential impact of local details (e.g., changes in connection types, component positioning, etc.) on the overall architecture is usually counteracted by the mere use of safety factors, which may be insufficient.
[See PDF for image]
Fig. 1
Classical and agile designs of a complex engineering artefact
This paper is a first numerical attempt to propose a complementary bottom-up approach (the "agile design" arrow in Fig. 1) that considers, from the preliminary design phases, the potential impact of local changes on the overall architecture. This is the philosophy behind the concept of robust design, which seeks to enhance robustness to local variations and uncertainties [1–3]. Nevertheless, this particular description of the problem requires adapting the simulation chain to allow communication between the different scales.
Many efficient multiscale approaches based on domain decomposition methods, such as Balanced Domain Decomposition (BDD) [4], Finite Element Tearing and Interconnecting (FETI) [5], or the mixed LATIN method [6], are coupling approaches related to Schwarz methods and are widely used in engineering [7]. While robust and numerically efficient, these techniques are intrusive, as they require significant modifications to finite element solvers and software, and often necessitate costly global-scale meshing procedures, making them difficult to implement in industrial contexts when local design changes occur. In the present context, methods that bridge independent models at different scales in a flexible manner seem more appropriate. These methods are divided into two main categories: superposition and enrichment methods, and substitution methods. The first class of methods consists in enriching models with augmented approximation spaces, such as finer meshes or enrichment functions derived either analytically or numerically, and by the superposition of micro and macro solutions. Notable examples of these methods include:
Local enrichment methods based on the Partition of Unity Method (PUM) [8], Generalized Finite Element Method (GFEM) [9], and Extended Finite Element Method (XFEM) [10];
Finite element methods with localized adaptation (MsFEM) [11], where specific shape functions capture fine details of the solution;
Methods incorporating local corrections, such as the multiscale Variational Method (VMS) [12], hierarchical models for highly heterogeneous structures [13], and bridging scale techniques [14];
Multi-grid methods [15, 16] and numerical zoom approaches that employ finite element patches [17, 18].
The second class, substitution methods, locally replaces part of the global solution with a detailed local one. Notable examples include:
The Mortar surface method [21, 22], which allows coupling discretizations of different types across different subdomains without overlap and enforces weak equality at the coupling interface using Lagrange multipliers;
The Nitsche surface method [23];
Energy averaging methods with volume interface methods, such as the Arlequin method, which involves overlap and energy-based coupling [24].
When uncertainties are explicitly taken into account in the design process, two main families of approaches are classically distinguished:
Reliability-based optimization: This method focuses on evaluating the probability distribution of the system's response, considering uncertainties in the parameters. It is particularly useful in risk analysis, where the goal is to calculate failure probabilities and ensure they remain below a defined threshold. Rather than minimizing the variance, the emphasis is on managing rare events at the extremes of the probability distribution, with safety assessed through a reliability index. This approach is known as Reliability Based Design Optimization (RBDO) [33].
Robust optimization: In contrast, robust design methods aim to minimize the impact of variability on system performance. The objective is to optimize average performance while reducing fluctuations, ensuring that the system remains feasible under probabilistic constraints. This approach focuses on making the system less sensitive to variations and optimizing performance towards a target value with minimal deviation. These methods are referred to as Robust Design Optimization (RDO) [1].
A key challenge in this context is the accurate propagation of uncertainties across different scales to ensure reliable performance predictions. To address this, a probabilistic framework is proposed to propagate uncertainties, fully exploiting the non-intrusive nature of the coupling methodology. Polynomial chaos expansion (PCE) and perturbation-based approaches are considered as efficient alternatives to Monte-Carlo simulation. On the one hand, the perturbation-based approach provides first- and second-order uncertainty estimates by intelligently exploiting the structure of the local/global coupling algorithm. On the other hand, PCE constructs surrogate models that quickly estimate the system response as a function of uncertain parameters. Specifically, the non-intrusive version of the polynomial chaos methodology, particularly using regression-based techniques, is applied. This approach allows the numerical model to be considered as a "black box", meaning that the local/global non-intrusive algorithm remains unmodified during the uncertainty quantification process. Therefore, it is possible to efficiently propagate uncertainties across multiple scales and implement a robust-design process by taking advantage of the specificities of the non-intrusive local/global coupling algorithm. The paper is structured as follows. The local/global non-intrusive iterative algorithm is first introduced in Sect. Non-intrusive local/global coupling. Section Local/global uncertainty propagation for robust design extends the method to robust design by proposing a probabilistic framework that allows for the effective propagation of uncertainties across different scales. In Sect. Numerical experiments, different uncertainty propagation techniques (Monte-Carlo simulation, perturbation-based approach, polynomial chaos expansion) are compared and tested on an illustrative simple 1D example which is very sensitive to geometric perturbations. Finally, conclusions and prospects are proposed in Sect. Conclusions and perspectives.
Non-intrusive local/global coupling
The proposed multiscale strategy focuses on non-intrusive coupling methods [25]. These approaches enable the enhancement of a global model with local modifications without altering the original model. Such modifications may involve changes to material behaviour, the topology of the components being studied, computational techniques, or the nature of the models themselves. To simplify the notation, the coupling principle is explained in the case of a coupling between a local heterogeneous elasticity model and a globally homogenized one. The relevant phenomena are assumed to be localized in space. The idea is, therefore, to decompose the initial domain $\Omega$ into two non-overlapping parts (Fig. 2):
A local zone $\Omega_L$, which includes the area of interest where variations may occur. For simplicity, the local domain $\Omega_L$ is assumed to be strictly contained within $\Omega$. In this zone, a model based on the original complex constitutive law is retained.
The complementary zone $\Omega_G$, in which a coarser model is considered. It is defined by replacing the original behavior with a homogeneous linear elastic behavior, using a Hooke operator $H$.
[See PDF for image]
Fig. 2
Decomposition of the geometry into a local zone and a global zone
The interface between the two domains $\Omega_L$ and $\Omega_G$ is denoted $\Gamma$, and both global and local models are here assumed to be geometrically and kinematically compatible at the interface $\Gamma$. Rather than defining the solution as the sum of the global response and a local enrichment term, the local problem is incorporated into the global formulation using a piecewise substitution approach. In this method, the local solution directly replaces the corresponding part of the global response. For that purpose, the non-intrusive coupling approach modifies the global problem by defining the support of its solution over the entire domain $\Omega$ and virtually extending the linear elastic homogeneous behaviour over $\Omega_L$, while leaving the local problem unchanged.
To comply with the non-intrusiveness requirements of the methodology, a modified Newton method on the interface quantities is generally used. Starting from a global elastic solution, each iteration proceeds as follows:
Local analysis: A detailed analysis is performed on the local model, with the current global displacements prescribed as boundary conditions.
Residual computation: The residual is expressed as an interface force that measures the violation of the coupling equations on the interface $\Gamma$. The convergence of the iterative algorithm is checked at this stage. If the residual norm is sufficiently small, the iterations stop.
Global correction: The residual is injected into the global model as an additional internal interface force, deforming it as if the local details were present. The linear elastic constitutive law and the boundary conditions of the global model are preserved. Optional convergence acceleration techniques can be applied at this stage, after which the global solution is updated and the process is repeated until convergence [26]. Also, it is worth noting that the first iteration of the non-intrusive coupling method, without reinjecting the interface rebalancing force, corresponds to the classical submodeling approach.
1
where $K_G$ and $K_L$ are the stiffness matrices in $\Omega_G$ and $\Omega_L$ respectively, $C_G$ and $C_L$ are the Mortar coupling operators, $\lambda$ is a field of Lagrange multipliers, $K$ (resp. $K_F$) is the stiffness matrix over the whole domain $\Omega$ (resp. over the fictitious subdomain $\Omega_L$) using the smooth linear operator $H$, and $r$ is the discrete residual of interface reaction forces coming from the fictitious part of the global model, computed in practice from volume integrals. Moreover, weak continuity across non-matching interfaces is ensured by the Mortar coupling operators, which project interface conditions onto a suitable Lagrange multiplier space [21]. A sketch of the associated local/global algorithm is given in Fig. 3. The global stiffness matrix and the global force vector are constructed independently of the local model parameters, such as the position and shape of $\Omega_L$ or the mesh refinement in the local zone. These operators are precomputed under the assumption of a homogeneous material behavior across the entire structure, without explicitly resolving complex local phenomena, and using a coarse discretization. Consequently, the global stiffness matrix is assembled and factorized only once, ensuring the non-intrusive nature of the coupling approach. Therefore, non-intrusive coupling techniques provide a powerful framework for handling various local modifications of a component without the need for remeshing. In our context, they are particularly valuable for effectively accounting for a wide range of local variations.
[See PDF for image]
Fig. 3
Illustration of the non-intrusive coupling algorithm [32]
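As an illustration, a minimal sketch of this fixed-point iteration is given below. The helper callables (K_solve, solve_local, fictitious_reaction, extract_interface) are hypothetical placeholders for solver-specific operations, and the residual's sign convention follows the description above only schematically.

```python
import numpy as np

def local_global_coupling(K_solve, f, solve_local, fictitious_reaction,
                          extract_interface, tol=1e-8, max_iter=50):
    """One possible realization of the non-intrusive iteration (sketch).

    K_solve(b): solves the global problem K u = b, with K assembled and
    factorized once; solve_local(u_gamma): local analysis with prescribed
    interface displacements, returning the local interface reactions;
    fictitious_reaction(u): interface reactions of the fictitious part of
    the global model.  All helper names are hypothetical.
    """
    u = K_solve(f)                               # initial global elastic solution
    for _ in range(max_iter):
        lam = solve_local(extract_interface(u))  # step 1: local analysis
        r = lam + fictitious_reaction(u)         # step 2: interface residual
        if np.linalg.norm(r) <= tol * np.linalg.norm(f):
            break                                # converged: residual small enough
        u = K_solve(f + r)                       # step 3: global correction, K reused
    return u
```

Note that the global operator is never reassembled inside the loop, which is precisely what makes the approach non-intrusive.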
Local/global uncertainty propagation for robust design
This paper explores the extension of non-intrusive coupling methods to robust design applications (Fig. 4). A straightforward approach might involve using the classical Monte-Carlo method, which is easy to implement and conceptually intuitive. However, one of the main challenges lies in the computational cost of probabilistic analyses. This issue becomes even more pronounced in large-scale, multiscale modelling, where the number of random variables, degrees of freedom, and the complexity of the coupling algorithms contribute to significant computational demands. The objective, therefore, is to develop a probabilistic framework that allows for efficient uncertainty propagation across different scales.
[See PDF for image]
Fig. 4
General methodology of the considered multiscale strategy
Concepts of robust design
The concepts of robust design were introduced by Taguchi in the 1950s to enhance productivity in engineering. The main objective of robust design is to optimize the system's average performance while minimizing its variability, or sensitivity, caused by uncertainties in material or geometric properties, without eliminating the sources of those uncertainties. In robust design, three types of parameters are taken into account: signal parameters, noise parameters, and control parameters. Signal parameters refer to the configurations that must be considered during the design process. Noise parameters, on the other hand, are factors that cannot be controlled by the designer and contribute to variability in the system, such as environmental conditions or variations in physical properties. These parameters are inherently stochastic and are represented with probability density functions. Lastly, control parameters are the input factors that guide the optimization process, aiming to reduce the system's sensitivity to the noise parameters. The goal of robust design is to determine, for a range of configurations, the control parameters that make the system's response as close as possible to a target value with minimal variation, without eliminating the noise factors from the system. Denoting $\boldsymbol{y}$ the vector of system responses for a particular set of parameters, the mathematical description of robust design is expressed as a constrained minimization problem.
2
where: $V$ is the set of considered configurations;
$\boldsymbol{s}$ is a vector containing the signal parameters;
$\boldsymbol{z}$ is a vector containing the noise parameters, which are uncertain and intrinsically stochastic. Hence, $\boldsymbol{z}$, and consequently $\boldsymbol{y}$, are random variables;
$\boldsymbol{x}$ is a vector containing the control parameters to be optimized;
$\boldsymbol{y}_t$ is the vector containing the target responses;
$\mathbb{E}[\cdot]$ denotes the mathematical expectation relative to the uncertainties associated with the noise factors.
3
Due to the conflicting nature of the two objectives, it is impossible to identify a single optimal solution. Robust design seeks to optimize both the mean and variance of performance, making it inherently a multi-objective and non-deterministic problem. The optimization of the mean often conflicts with the minimization of variance, requiring a trade-off to determine the most suitable design. To address this, the Pareto front represents the set of design solutions where no alternative can improve one objective without degrading another (Fig. 5). These solutions are also referred to as dominant solutions. The optimal design is typically selected as the point of maximum curvature on the Pareto front. Genetic algorithms, based on heuristic search methods, are particularly well-suited for multi-objective optimization problems. In this study, we use the "Non-dominated Sorting Genetic Algorithm II" (NSGA-II) [35], an elitist evolutionary approach that enhances the exploration of optimal solutions while maintaining a reasonable computational complexity.
[See PDF for image]
Fig. 5
Pareto front for robust design optimization
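Using the notation defined above, the bi-objective trade-off that generates the Pareto front of Fig. 5 can be stated schematically as follows (a hedged restatement in standard RDO form, not the paper's exact expressions):

$$\min_{\boldsymbol{x}} \Big( \big\| \mathbb{E}_{\boldsymbol{z}}[\boldsymbol{y}(\boldsymbol{s},\boldsymbol{z},\boldsymbol{x})] - \boldsymbol{y}_t \big\|,\ \sigma_{\boldsymbol{z}}[\boldsymbol{y}(\boldsymbol{s},\boldsymbol{z},\boldsymbol{x})] \Big) \quad \text{subject to feasibility constraints, for all } \boldsymbol{s} \in V$$

The first objective drives the mean response toward the target, while the second penalizes its dispersion; NSGA-II explores this trade-off population-wise without aggregating the two objectives.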
Adaptation of the perturbation method
A first step in this direction is the propagation of local uncertainties within the submodeling framework using the perturbation method [19]. However, as mentioned earlier, submodeling does not consider the impact of local uncertainties on the global model, which limits its ability to provide a statistical prediction of global quantities of interest. This highlights the importance of implementing an iterative local/global coupling approach to effectively characterize the probabilistic distributions of global quantities influenced by local variabilities. The perturbation method is based on the Taylor series decomposition of the model response around the mean value of the uncertain input parameters. One of the first applications in mechanics dates back to the early 1980s [36], and the method has been widely applied and analyzed within the framework of stochastic finite elements [37]. We propose a method for uncertainty propagation through a multiscale model, combining perturbation techniques with non-intrusive local/global coupling algorithms.
The variabilities are described by $M$ random variables gathered in a vector $\boldsymbol{X}$. We introduce $\boldsymbol{\xi} = (\xi_1, \dots, \xi_M)$, which are centered and dimensionless random variables.
The second-order Taylor series expansion of the local stiffness matrix around the mean values is expressed as:
$$K_L(\boldsymbol{\xi}) = K_L^{0} + \sum_{i=1}^{M} K_L^{I,i}\,\xi_i + \frac{1}{2}\sum_{i=1}^{M}\sum_{j=1}^{M} K_L^{II,ij}\,\xi_i\,\xi_j \quad (4)$$
where $K_L^{I,i}$ (resp. $K_L^{II,ij}$) are the coefficients of first (resp. second) order, obtained by differentiating the considered quantity evaluated at $\boldsymbol{\xi} = \boldsymbol{0}$, e.g.:
$$K_L^{I,i} = \left.\frac{\partial K_L}{\partial \xi_i}\right|_{\boldsymbol{\xi} = \boldsymbol{0}} \quad (5)$$
Similarly, the different solutions at each iteration of the non-intrusive local/global coupling algorithm must also be developed in a Taylor series, e.g. for the displacements:
$$u = u^{0} + \sum_{i=1}^{M} u^{I,i}\,\xi_i + \frac{1}{2}\sum_{i=1}^{M}\sum_{j=1}^{M} u^{II,ij}\,\xi_i\,\xi_j \quad (6)$$
The Taylor series expansions are injected into the deterministic formulation of the non-intrusive local/global coupling, identifying terms of the same order for further analysis.
Global problem (at each order):
7
Local problem (at each order):
8
Remark 1
No assumption has been made regarding the form of the input random variables $\boldsymbol{\xi}$, allowing any type of distribution to be simulated. It is also necessary for $\boldsymbol{\xi}$ to be dimensionless. This requirement guarantees that the interface data exchanged between the global and local models (reaction forces/displacements) are consistent. This adjustment is a minor modification to the classical perturbation method used in the context of stochastic finite elements.
The zero-order problem corresponds to the deterministic local/global non-intrusive coupling problem when the uncertain parameters are set to their mean values, whereas the higher-order problems are those that allow the calculation of the solution's sensitivity to uncertainties. Moreover, we observe that the higher-order terms involve the solutions from previous orders. Hence, it is possible to consider different numerical strategies to solve this coupling problem. The first strategy consists of successively solving each order at every iteration. A second strategy consists of solving the entire local/global problem at each order successively. Here, the focus is on the intelligent initialization of the algorithm at each order: the idea is to use the converged solutions from lower orders to initialize the higher order. Moreover, the relevant quantities from lower orders will already be computed and will not need to be updated at each iteration, unlike in the first strategy. Finally, the same matrices are used throughout the algorithm. Therefore, they are assembled (and inverted if necessary) only once and stored. This is an advantage over the Monte-Carlo method, which requires reconstructing the stiffness matrix for each uncertain configuration. Also, when all unknowns are determined, it is possible to estimate the solution field, particularly the global displacement field, directly as a function of the random variables. Thus, it is easy to calculate the statistical moments of the solution fields based on the probabilistic properties of the uncertain parameters. By definition, the mean of $\boldsymbol{\xi}$ is zero, and its covariance matrix is denoted $C_{\xi\xi}$. The first-order approximation of the response variability is given by:
$$\operatorname{Cov}[u] \approx \sum_{i=1}^{M}\sum_{j=1}^{M} u^{I,i}\,\big(u^{I,j}\big)^{T}\,\big(C_{\xi\xi}\big)_{ij} \quad (9)$$
The second-order approximation of the mean is determined similarly:
$$\mathbb{E}[u] \approx u^{0} + \frac{1}{2}\sum_{i=1}^{M}\sum_{j=1}^{M} u^{II,ij}\,\big(C_{\xi\xi}\big)_{ij} \quad (10)$$
It is also possible to determine the second-order approximation of the covariance matrix, which involves the statistical moments of $\boldsymbol{\xi}$ up to the fourth order. This leads to more complex and costly calculations. All in all, the perturbation method is simple and computationally inexpensive when the model gradients are available. This is particularly advantageous when analytical models are used or when dealing with certain finite element codes (e.g., Code Aster or OpenSees). Otherwise, gradients must be calculated with finite differences, which increases the computational effort and reduces precision. The need to compute partial derivatives of the stiffness matrix at the different orders increases the computational cost of the approach, especially when dealing with numerous random variables. Finally, a major drawback is that the perturbation method provides satisfactory results only when the coefficient of variation of the input variables is small. To address this limitation, higher-order approximations are required [38].
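To make the first-order version of this machinery concrete, the sketch below estimates the moments of Eqs. (9)–(10) (first order only) from finite-difference sensitivities; solve_coupled is a hypothetical wrapper returning the converged local/global solution for given standardized inputs.

```python
import numpy as np

def perturbation_moments(solve_coupled, M, C_xixi, h=1e-4):
    """First-order perturbation estimates of the response moments (sketch).

    solve_coupled(xi): hypothetical wrapper returning the converged
    local/global solution for standardized inputs xi (zero-mean,
    dimensionless); C_xixi: M x M covariance matrix of xi.
    """
    u0 = solve_coupled(np.zeros(M))          # zero-order (mean-input) solution
    dU = []
    for i in range(M):
        e = np.zeros(M)
        e[i] = h
        # first-order sensitivity u^{I,i} by central finite differences
        dU.append((solve_coupled(e) - solve_coupled(-e)) / (2.0 * h))
    dU = np.asarray(dU)                      # shape (M, n_dof)
    mean_u = u0                              # first-order mean: E[u] ~ u^0
    cov_u = dU.T @ C_xixi @ dU               # Eq. (9) in matrix form
    return mean_u, cov_u
```

When analytical derivatives of the stiffness matrix are available, the finite-difference step is replaced by direct solves at each order, which is the configuration exploited in the present work.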
Polynomial chaos based methodology
To address local uncertainty problems more efficiently, this study proposes the use of a metamodel, which enables cost-effective simulation of model responses. Unlike the Monte Carlo method, which relies on extensive sampling of the response, spectral approaches are preferred for their ability to expand uncertain quantities of interest in series. In this context, methods that exploit the localized nature of uncertainties have been introduced in the literature [39]. The iterative algorithm employed requires solving simple global problems, potentially using deterministic operators, alongside local problems where approximation spaces can be effectively utilized. This approach facilitates the management of high-dimensional multiscale problems characterized by numerous sources of uncertainty. Both global and local problems are resolved using tensor approximation methods that enable the representation of functions with multiple random parameters. In this paper, the Stochastic Spectral Finite Element Method (SSFEM), originally introduced by Ghanem and Spanos [40] as an extension of the deterministic finite element method, is considered. In the following, we will investigate the benefits of this spectral method for robust design, especially in its coupling with the multiscale local/global non-intrusive methodology.
To simplify, we assume that the input random variables $\xi_i$ ($i = 1, \dots, M$) are independent. The uncertain output $Y$ (assumed to have finite variance) can be intrinsically expanded in a series over an orthonormal polynomial basis, known as the polynomial chaos expansion [40].
$$Y = \sum_{\boldsymbol{\alpha} \in \mathbb{N}^{M}} y_{\boldsymbol{\alpha}}\,\Psi_{\boldsymbol{\alpha}}(\boldsymbol{\xi}) \quad (11)$$
where $\Psi_{\boldsymbol{\alpha}}$ are the basis functions and $y_{\boldsymbol{\alpha}}$ are the coefficients to be determined ("coordinates"). For practical reasons, the series is typically truncated so that only multivariate polynomials of total degree less than or equal to $p$ are kept. Thus, we seek an approximation of the model response (stochastic response surface) with a finite number of terms, as follows:
$$Y \approx \widehat{Y} = \sum_{|\boldsymbol{\alpha}| \le p} y_{\boldsymbol{\alpha}}\,\Psi_{\boldsymbol{\alpha}}(\boldsymbol{\xi}) \quad (12)$$
Then, it is possible to easily estimate the first statistical moments of the quantity of interest, which will be useful during the robust optimization procedure. The first statistical moments of the quantity of interest can be deduced analytically from the coefficients of the expansion in the polynomial chaos basis. In particular, the mean and variance are written as follows:
$$\mathbb{E}\big[\widehat{Y}\big] = y_{\boldsymbol{0}}, \qquad \operatorname{Var}\big[\widehat{Y}\big] = \sum_{0 < |\boldsymbol{\alpha}| \le p} y_{\boldsymbol{\alpha}}^{2} \quad (13)$$
where $\widehat{Y}$ denotes the polynomial chaos expansion of order $p$. Then, it is essential to calculate the coefficients of the expansion in the polynomial chaos basis. Historically, the coefficients of the polynomial chaos expansion appearing in Eq. (11) are computed using a Galerkin method [40]. However, one of the main drawbacks of this method is that it requires a specific implementation for each new class of problems considered. From an industrial perspective, this intrusive approach is unsuitable because industrial software cannot be used without modifying the source code. Additionally, the stochastic problem is solved for all primal unknowns (e.g., nodal displacements, temperature, etc.) simultaneously. Therefore, if the designer is only interested in a few components of the response, they must still solve the complete coupled system. To address this difficulty, non-intrusive approaches have been developed in recent years. During the robust optimization phase, the designer only needs to compute the mean and standard deviation of one or more quantities of interest, making non-intrusive PCE methods particularly well-suited for the robust design framework. Non-intrusive methods treat the numerical model as a black box and allow for the calculation of the coefficients of the expansion (11) from a set of deterministic calculations, i.e., a set of evaluations of the model's response for carefully chosen input parameter values. Among the non-intrusive approaches, we distinguish two classes [41]: projection methods and regression methods. In the following, we focus on applying the regression method, which aims to compute the coefficients that provide the best approximation of the model's response in the least-squares sense. This approach is simple to apply and allows for easy control of computational costs by adjusting the input experimental design, which is also physically interpretable. It is worth noting that sparse PCE techniques have been developed to efficiently handle cases involving a large number of uncertain variables, by retaining only the most influential terms in the expansion [42].
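As an illustration of the regression approach, the sketch below fits a PCE for a single standard-normal input with ordinary least squares; the model argument stands for any black-box evaluation (in our context, a wrapped local/global solve), and the univariate basis is used only to keep the idea visible (the multivariate case uses tensorized polynomials).

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval

def pce_regression(model, p, N, rng=None):
    """Regression-based PCE for one standard-normal input (sketch)."""
    rng = rng or np.random.default_rng(0)
    xi = rng.standard_normal(N)                      # experimental design
    y = np.array([model(x) for x in xi])             # black-box evaluations
    # orthonormal probabilists' Hermite basis: He_k(xi) / sqrt(k!)
    Psi = np.column_stack([
        hermeval(xi, np.eye(p + 1)[k]) / np.sqrt(factorial(k))
        for k in range(p + 1)])
    coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)  # least-squares fit
    mean, var = coeffs[0], np.sum(coeffs[1:] ** 2)    # Eq. (13)
    return coeffs, mean, var
```

For instance, `pce_regression(lambda x: float(np.sin(x)), p=3, N=30)` returns the expansion coefficients together with the mean and variance estimates of Eq. (13).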
The objective is to couple the stochastic space approximation method with the non-intrusive local/global coupling algorithm. First, non-intrusive PCE approaches allow the numerical model, often a finite element model, to be used as a black box, without modification (Fig. 6). This facilitates the use of industrial codes and is consistent with the implementation of non-intrusive local/global coupling methods. Also, the regression method proves to be particularly useful for reducing the number of model evaluations, especially when used in iterative algorithms such as the non-intrusive local/global coupling method. Finally, the main drawback of the proposed methodology lies in the convergence of the non-intrusive local/global coupling algorithm. If the stopping criterion of the coupling algorithm is not strict enough, an approximation error remains in the mechanical problem solution. This error then propagates to the design points of the polynomial chaos method, affecting the quality of the estimator for the first statistical moments of the quantity of interest. As a result, the accuracy of the stochastic quantity estimation depends not only on the intrinsic error of the polynomial chaos method but also on the convergence error of the local/global non-intrusive coupling algorithm.
[See PDF for image]
Fig. 6
Uncertainty propagation methodology using regression-based PCE
[See PDF for image]
Fig. 7
Example illustration of the loading system
Numerical experiments
Presentation of the problem
This study presents a practical case aimed at addressing realistic challenges commonly encountered in the aerospace industry. The focus is on a simplified academic test case inspired by a cargo aircraft designed to transport loads between various production sites (Fig. 7). At the core of this analysis is the aircraft’s loading system, a key component that frequently encounters distinct engineering challenges. Due to assembly constraints, local design changes may arise, introducing additional flexibility compared to the initial design specifications. The nominal design of the truss system relies on overly simplified models, leading to discrepancies in stiffness between the simplified model and the actual structure. To prevent costly, late-stage design revisions, the objective is to implement a robust design approach that accounts for potential local design variations from the early stages of development. For the sake of simplicity, the structure is modeled in the plane as an assembly of four steel beams with identical geometric properties and is discretized using Euler-Bernoulli beam elements to capture the structural behavior accurately. Two distinct models are developed to reflect different levels of detail (Fig. 8):
Nominal model: This model consists of an assembly of four identical hollow square-section beams welded together.
New detailed model: In this configuration, the same four beams are assembled using two bolted plates to simulate more realistic connection conditions. The flexibility introduced by the bolted connections is modeled using two springs.
[See PDF for image]
Fig. 8
Simplified models with 1D beam elements for the two structural design configurations
Table 1. Equivalent modelling parameters for the two structural design configurations

| Material | | Geometry | | | | | | |
|---|---|---|---|---|---|---|---|---|
| E | σ_y | L | h | e | F | a | b | l |
| 210 GPa | 240 MPa | 1 m | 40 mm | 3 mm | 7 kN | a | | |
Remark 2
It should be noted that the modelling of the bolted connection is quite simple and does not account for preload or friction. The goal here is solely to model an elastic connection that introduces flexibility into the structure using consistent orders of magnitude, while developing the design approach within a simple multiscale framework.
The new design is modeled using the non-intrusive local/global coupling method, with a coarse global model representing the nominal design and a detailed local model describing the flexible connection. The coupling methodology is illustrated in Fig. 9. The subsequent numerical studies are carried out with zero initialization, within a deterministic framework, with the beams positioned at their nominal locations. Also, no acceleration technique is implemented in this study, as the objective was to evaluate the different methods on a simplified test case. However, this choice does not constitute a limitation of the proposed methodology. In practice, the algorithm is governed by the error on the interface residual. Therefore, studying the influence of the coupling error on robust design will be relevant later.
[See PDF for image]
Fig. 9
Illustration of the non-intrusive local/global coupling method
Remark 3
Careful selection of the patch size t is required, as geometric incompatibilities at the interface, resulting from position uncertainties in the flexible joints, may affect the accuracy of the model coupling. To ensure convergence of the local/global non-intrusive coupling algorithm, the patch must be sufficiently large so that the effect of these incompatibilities becomes negligible. In this study, a sufficiently large patch size was selected accordingly.
Estimation of a quantity of interest
To verify that both finite element models accurately represent the flexibility introduced by the design change, a sensitivity analysis was conducted on the deflection at the loading point F. The quantity of interest calculated using the nominal model was compared with the results obtained from the detailed joint model for two different positions of the flexible joints. It is observed that the choice of modelling approach influences the deflection, and more broadly, affects the equivalent stiffness of the structure (Table 2).
Table 2. Comparison of the deflection at the loading point for both models under different configurations

| | Displacement (mm) | Ratio |
|---|---|---|
| Nominal model | 2.13 | 1 |
| New detailed model (first position) | 2.05 | 0.96 |
| New detailed model (second position) | 2.09 | 1.08 |
The simulation parameters are defined as follows: the number of iterations of the local/global algorithm and the design parameter h are fixed to their reference values. To compare the results, the analysis focuses on the global displacement field, with particular attention to the deflection at the loading point.
Perturbation method
The uncertainties are entirely localized within the patch defining the local model. The objective is to propagate these uncertainties using the perturbation method within the local/global non-intrusive coupling framework. The perturbation of the local stiffness matrix can thus be expressed as follows:
14
It is noteworthy that the variabilities affect only a few elements, making the matrices $K_L^{I,i}$ and $K_L^{II,ij}$ generally very sparse. Moreover, it has been shown that in certain cases, some second-order terms are equal to zero; this is the case here. Consequently, the problem becomes relatively easy to solve. In our case, since the calculations are straightforward, the partial derivatives of the local stiffness matrix have been determined analytically. For more complex examples, these derivatives would need to be computed using finite difference methods. Based on the architecture of the proposed coupling algorithm under uncertainties, two computational strategies can be considered. The first strategy consists of sequentially solving each order at every iteration, as the computation of order $i$ relies on the solutions of the lower orders.
[See PDF for image]
Algorithm 1
Strategy 1
A second strategy involves solving the entire local/global problem successively at each order. In this approach, the efficiency lies in the initialization of the algorithm at each order. Specifically, the converged solutions from lower orders are used to initialize the higher-order computations. Additionally, the quantities from lower orders, which are already computed, do not need to be updated at each iteration, unlike in the first strategy. (Both control flows are sketched schematically after Algorithm 2.)
[See PDF for image]
Algorithm 2
Strategy 2
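Schematically, the two strategies differ only in how the order loop and the coupling loop are nested; the sketch below uses hypothetical order objects (with one_coupling_iteration and solve_to_convergence methods) purely to contrast the two control flows.

```python
# Strategy 1: a single coupling loop; at every iteration, all orders are
# advanced in sequence, so lower-order iterates are refreshed each time.
def strategy_1(orders, n_coupling_iters):
    sols = [None] * len(orders)
    for _ in range(n_coupling_iters):
        for k, order in enumerate(orders):
            sols[k] = order.one_coupling_iteration(lower=sols[:k])
    return sols

# Strategy 2: each order is driven to convergence before moving to the next;
# converged lower-order solutions initialize the higher orders and are then
# frozen, so they never need to be recomputed.
def strategy_2(orders, tol):
    sols = []
    for order in orders:
        init = sols[-1] if sols else None
        sols.append(order.solve_to_convergence(lower=sols, init=init, tol=tol))
    return sols
```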
[See PDF for image]
Fig. 10
Evolution of the relative error on the quantity of interest (first-order expansion)
Polynomial chaos expansion (PCE)
For the subsequent analysis, a third-order polynomial chaos expansion is employed for the quantity of interest. The methodology for the regression method is presented, and the results are obtained using the UQLab software [43] (Fig. 11).
Creation of the design of experiments for the input variability with a sample size N. The design size N should be chosen based on the number P of unknown coefficients to be identified. As previously said, Berveiller proposes a minimal design size; a smaller design results in an underdetermined system, whereas a design that is too close to this minimum can lead to overfitting. In general, an empirical oversampling rule is used (see the short sizing sketch after this list).
Evaluation of the quantity of interest using the numerical model and the design of experiments.
Creation of the metamodel using Polynomial Chaos Expansion (PCE). If necessary, this allows for rapid estimation (and at a lower cost) of the distribution or other characteristics of the quantity of interest.
Estimation of the mean and standard deviation of the quantity of interest using Eq. (13).
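As a quick illustration of the sizing step above, the snippet below counts the PCE coefficients for the present setting; the values M = 2 (the two joint positions) and the three-fold oversampling factor are assumptions made for illustration, not values taken from the study.

```python
from math import comb

M, p = 2, 3              # two uncertain joint positions, third-order expansion (assumed)
P = comb(M + p, p)       # number of PCE coefficients: (M+p)! / (M! p!) = 10
N = 3 * P                # a common empirical oversampling choice, N = 3P = 30 runs
print(f"P = {P} coefficients -> design of experiments of size N = {N}")
```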
[See PDF for image]
Fig. 11
Design of experiments for an input sample of size N
Comparison of the results
The first statistical moments obtained using the perturbation method and PCE are compared with those calculated via the Monte Carlo method, which serves as our reference. The numerical results are of the correct order of magnitude and closely match the reference value (Table 3). A better approximation of the mean and standard deviation is observed with a second-order expansion, which is consistent with expectations. Additionally, in our case, the coefficient of variation of the input uncertainties is relatively high. With such levels of variability, the perturbation method loses precision, necessitating a higher-order expansion. Given the simplicity of the test case, a second-order expansion proves sufficient. For example, it is clear that a first-order expansion does not adequately approximate the distribution of the output quantity (Fig. 12). Moreover, the method based on Polynomial Chaos Expansion (PCE) provides satisfactory results, offering an efficient alternative for estimating the distribution and statistics of the output.

Also, quantifying the efficiency and computational gains provided by the newly introduced uncertainty propagation techniques is important. Specifically, one can compare the number of model evaluations required to estimate the mean and standard deviation of the quantity of interest (Table 3). The polynomial chaos expansion method, in particular, allows for a large reduction in the number of model evaluations, as it relies on a smaller sample size to construct the experimental design necessary for computing the chaos coefficients. Consequently, the proposed methodologies enable the implementation of a more efficient uncertainty propagation framework, particularly within the context of non-intrusive local/global coupling techniques, when compared to the conventional Monte Carlo method.

Also, it is essential to evaluate how the approximation of the non-intrusive local/global coupling affects the prediction accuracy of the proposed multiscale propagation methodology. Indeed, we know that after a sufficient number of iterations, the coupling algorithm converges to the exact solution. For a smaller number of iterations, the algorithm provides an approximate value of the desired solution. To assess the quality of the predictions, the mean and standard deviation of the quantity of interest obtained using different uncertainty propagation methods (such as the perturbation method and polynomial chaos expansion) are compared with the results at convergence using the Monte Carlo method (Fig. 13). The accuracy of the predicted quantities of interest depends on both the intrinsic error of the uncertainty propagation methods (with convergence values represented by dotted lines) and the approximation error of the non-intrusive local/global iterative coupling algorithm. The results indicate that selecting an adequate number of iterations is essential to ensure accurate estimation of the first statistical moments of the quantity of interest. Furthermore, the polynomial chaos expansion method provides the most accurate approximation while maintaining computational efficiency. Its non-intrusive nature allows it to be directly employed with the non-intrusive local/global coupling algorithm, treating it as a black-box model, thus facilitating its implementation in complex multiscale problems.
Table 3. Comparison of the first statistical moments of the quantity of interest

| | Mean (mm) | Standard deviation (mm) | Computational gain |
|---|---|---|---|
| Monte-Carlo (reference) | | 0.0608 | 1 |
| Perturbation: order 1¹ | | 0.0562 | 1000 |
| Perturbation: order 2¹ | | 0.0603 | 700 |
| PCE (regression method) | | 0.0607 | 33 |

¹ The derivatives of the local stiffness matrix are computed analytically. When such expressions are not available, finite difference approximations may be used instead, potentially introducing numerical errors and increasing the computational cost.
[See PDF for image]
Fig. 12
Statistical analysis of the quantity of interest
[See PDF for image]
Fig. 13
Evolution of the first moments of the quantity of interest with respect to local/global coupling error
Application to robust design
The objective is to implement a static design approach using a beam assembly model that accounts for positional variability in the joints, which are imperfect in reality. Here, the beams have fixed material properties and lengths. Thus, the only design parameter considered is the beam cross-section, while the uncertainty concerns the positions of the beams in the flexible connection, modeled by a uniform distribution. The parameters are detailed in Table 4.
Table 4. Parameters of the robust optimization problem

| Control factors | beam cross-section h |
|---|---|
| Noise factors | joint positions (a, b) |
| Signal factors | (static design) |
Due to assembly constraints, the new structure introduces a difference in flexibility compared to the nominal design. The first example of robust design therefore consists in determining the beam section width h that minimizes the stiffness discrepancy between the two models, despite positional variabilities in the flexible connection, which are modeled as stochastic parameters (Table 4). In this case, the quantity of interest is the normalized difference between the deflection of the detailed model and that of the nominal model, evaluated in a least-squares sense. The robust design problem is formulated as a minimization problem while ensuring compliance with the mechanical constraints. In particular, the equivalent Von Mises stress in the flexible connection must not exceed the yield strength (with a safety factor).
First, it is essential to highlight the benefits of robust design compared to a traditional design approach. To do so, we compute the deterministic solution of the design problem, which involves solving a minimization problem where the parameters, initially considered stochastic, are fixed at deterministic values.
15
Several deterministic optimization solutions were computed for different positional configurations (using a gradient descent algorithm). The results indicate a strong dependence of the optimized design parameter h on the specific configuration considered. This observation underscores the need to explicitly incorporate positional uncertainties within the optimization problem formulation (Table 5).

Table 5. Some design solutions for the deterministic optimization problem

| Configuration | Optimal solution h (mm) |
|---|---|
| | 50.4 |
| | 46.5 |
| | 41.8 |
The robust optimization problem is formulated as a constrained multi-objective optimization problem, aiming to minimize both the mean and the standard deviation of the quantity of interest in the presence of uncertainties in the joint position parameters. Similar to the evaluation of the objective functions in the optimization problem, verifying the joint integrity also requires propagating uncertainties using the Monte Carlo method. Indeed, the computation of the equivalent Von Mises stress depends on the considered parameters. Consequently, the maximum equivalent stress over the entire set of stochastic parameter samples is considered as a constraint of the optimization problem.
16
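Schematically, with assumed notation ($J$ for the normalized deflection discrepancy, $\sigma_{\mathrm{VM}}$ for the equivalent Von Mises stress and $\gamma_s$ for the safety factor), the constrained robust problem described above can be summarized as:

$$\min_{h} \Big( \mathbb{E}_{(a,b)}[J(h,a,b)],\ \sigma_{(a,b)}[J(h,a,b)] \Big) \quad \text{s.t.} \quad \max_{(a,b)} \sigma_{\mathrm{VM}}(h,a,b) \le \frac{\sigma_y}{\gamma_s}$$

In practice, the maximum in the stress constraint is taken over the Monte-Carlo samples of the stochastic position parameters.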
From this point on, all robust optimization simulations will be performed using the NSGA-II algorithm with a population of 150 individuals over 300 generations. The Expectation/Variance trade-off is illustrated in Fig. 14. The statistical quantities, mean and standard deviation, are calculated for a criterion analogous to a relative quadratic error. Thus, for the robust solutions, we can quantify both the average error and the dispersion of the results in the presence of uncertainties in the joint parameters. As the structure becomes stiffer, the deflection becomes smaller and moves further away from the nominal design.
[See PDF for image]
Fig. 14
Comparison between the Pareto front and the robust solutions
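A minimal sketch of such an NSGA-II setup, assuming the pymoo library (version ≥ 0.6), is given below; stats_of_J and max_von_mises are hypothetical wrappers around the uncertainty propagation step, and the bounds on h, yield strength, and safety factor values are illustrative only.

```python
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class RobustSection(ElementwiseProblem):
    """Bi-objective robust design over the section width h (sketch)."""

    def __init__(self, stats_of_J, max_von_mises, sigma_y, gamma_s):
        # stats_of_J(h) -> (mean, std) of the performance function under the
        # joint-position uncertainties; max_von_mises(h) -> worst-case stress
        # over the stochastic samples.  Both are hypothetical wrappers around
        # the local/global solver.
        super().__init__(n_var=1, n_obj=2, n_ieq_constr=1,
                         xl=np.array([0.04]), xu=np.array([0.12]))
        self.stats, self.smax = stats_of_J, max_von_mises
        self.limit = sigma_y / gamma_s

    def _evaluate(self, x, out, *args, **kwargs):
        mean_J, std_J = self.stats(x[0])
        out["F"] = [mean_J, std_J]                 # objectives: mean and spread
        out["G"] = [self.smax(x[0]) - self.limit]  # <= 0 means admissible

# res = minimize(RobustSection(stats, smax, 240e6, 1.5),
#                NSGA2(pop_size=150), ("n_gen", 300), seed=1)
```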
To illustrate the effectiveness of the robust design approach, a useful statistical indicator is the coefficient of variation (CV). This is a statistical measure of the relative dispersion of data around the mean and is particularly useful for comparing the relative variation of data with different means. In general, a low CV indicates good reliability and consistency of the data. We note, therefore, the relevance of the robust design approach for the example treated in this section (Table 6), which is consistent with real-world observations highlighting a significant change in stiffness caused by the new design. The definition domain of the Pareto front is intrinsically dependent on the constraints of the optimization problem. Thus, the nominal solution or deterministic optima may dominate all Pareto points, but these design solutions are not statically admissible. Finally, the choice of the optimal solution on the Pareto front depends on the criteria defined by the user. For example, if the designer wishes to minimize the system's mass, the robust solution with the smallest section appears to be the most suitable.
Table 6. Comparison of the coefficients of variation of the robust and several deterministic solutions

| | h (mm) | CV (%) |
|---|---|---|
| Robust solutions | 57.1 | 3.1 |
| | 68 | 1.1 |
| | 100 | 0.1 |
| Deterministic solutions | 50.4 | 9.1 |
| | 46.5 | 5.2 |
| | 41.8 | 29 |
| Nominal solution | 40 | 71 |
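For reference, the coefficient of variation reported in Table 6 is the usual dimensionless dispersion measure of the performance function:

$$\mathrm{CV} = \frac{\sigma[J]}{\mathbb{E}[J]} \times 100\,\%$$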
Then, the idea is to address the previously defined robust optimization problem by employing alternative uncertainty propagation methods. These methods are expected to be more computationally efficient and better suited to the problem compared to the Monte Carlo approach, which serves as the reference solution. In the context of robust optimization, different uncertainty propagation methods yield varying levels of accuracy in approximating the Pareto front. The second-order perturbation method improves upon the first-order approach but tends to shift the Pareto front to the right due to an overestimation of the Von Mises stress constraint, affecting the feasibility domain. The polynomial chaos method, on the other hand, provides a good approximation of the Pareto front and robust solutions, though it results in a more uniform distribution of solutions, making it harder to pinpoint the minimal robust solutions precisely. Despite these challenges, the most relevant solution, namely the one at the maximum curvature of the Pareto front representing the best trade-off, can still be estimated with a small error. Additionally, while mean values are relatively easy to estimate, the standard deviation proves more challenging, particularly for small values of the design parameter h, i.e., when the system is more sensitive to variability (Fig. 15).
[See PDF for image]
Fig. 15
Comparison of robust solutions for different methods
It has been observed that the PCE method provides an accurate approximation of the Pareto front while significantly reducing computational cost (Table 3). Therefore, this methodology is selected to study the impact of the local/global coupling error on robust optimization. The influence of the number of iterations of the non-intrusive local/global algorithm on the Pareto front is analyzed, showing that after a certain number of iterations, the estimated Pareto front converges to the reference front (Fig. 16). In all cases, once the error of the non-intrusive local/global coupling algorithm reaches a sufficiently low level, the Pareto fronts remain admissible. Although the robust solutions are not fully optimal, they allow for conducting a robust and fast pre-design phase.

A more detailed analysis is conducted to evaluate the impact of the number of iterations on the values of the two optimization objectives for the robust solutions obtained. Two key observations emerge from this study. On the one hand, the approximation error is more significant for the standard deviation than for the mean, mainly due to the quadratic nature of the standard deviation. On the other hand, ensuring the convergence of the non-intrusive local/global coupling algorithm is essential to accurately estimate the objectives of robust optimization. There are two main sources of error, each with a different impact. The first is related to precision, which can lead to an inaccurate approximation of the quantity of interest. The second concerns the need to ensure the admissibility of robust solutions, as non-admissible results may lead to unacceptable and potentially problematic design decisions. It should be noted that the submodeling method does not yield satisfactory results, mainly due to the highly localized nature of the uncertainties. Finally, it is important to be able to certify the coupling error, particularly to ensure the admissibility of the obtained design solutions. Therefore, it could be useful to integrate an admissibility error estimator (e.g., through a goal-oriented method) into the optimization phase, in order to automatically guarantee the admissibility of non-converged solutions.
[See PDF for image]
Fig. 16
Comparison of Pareto fronts (PCE) for different levels of local/global coupling error
A first strategy to ensure admissibility
In the previous section, we have highlighted a robust design methodology aligned with the use of non-intrusive local/global coupling techniques. In particular, the influence of the coupling error on both the quality and admissibility of robust solutions has been investigated. Our results indicate that ensuring a certain level of convergence in the coupling algorithm is crucial to guarantee the admissibility of robust solutions (Fig. 16). Consequently, a sufficient number of evaluations of the local/global coupling model is required. Moreover, the admissibility threshold of a solution is not known a priori, which necessitates the exploration of a broader search space (Table 7). This leads to an increased number of evaluations of the assessment algorithm and, therefore, higher computational costs. The next step is to establish a multifidelity strategy aimed at: (1) guaranteeing, a priori, the admissibility threshold of robust solutions, and (2) providing a low-cost, yet accurate, estimation of the Pareto front. All stochastic studies presented in this section are performed using the Polynomial Chaos Expansion (PCE) method.

The first step is therefore to guarantee, a priori, the admissibility of solutions with a limited number of evaluations of the coupling algorithm, potentially relying on non-converged solutions. A straightforward approach would be to perform a deterministic single-objective optimization, such as a gradient-based descent, to determine the minimal section size h that ensures the Von Mises stress criterion is satisfied within the coupling zone. However, the structure of the problem offers an advantage: the maximum Von Mises stress within the coupling zone evolves linearly with respect to the section size h. Therefore, only two evaluations of the fully converged local/global coupling algorithm are sufficient to accurately estimate the admissibility threshold. This significantly reduces the computational cost associated with satisfying the optimization problem constraints. Using linear regression, the admissibility threshold parameter can be rapidly estimated with an error of less than 1%. The admissibility threshold for the design parameters is now established and can serve as a reference throughout the design process.

The second step consists in ensuring the admissibility of robust solutions even when the local/global coupling algorithm has not fully converged. In other words, the goal is to avoid exploring designs that fall below the previously identified threshold. This objective can be achieved by appropriately adjusting the safety factor introduced in the formulation of the robust optimization problem in Eq. (16).
Table 7. Admissibility threshold solution depending on the convergence of the coupling algorithm

| Convergence error (%) | Admissibility threshold h (mm) |
|---|---|
| 22 | 51.2 |
| 11 | 52.4 |
| 5 | 53.9 |
| 0.7 | 56.5 |
| 0.08 | 57.1 |
| 0.001 | 57.3 |
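A minimal sketch of this two-point estimation is given below; the sampled values, yield strength, and safety factor are illustrative placeholders, not data from the study.

```python
import numpy as np

# Two fully converged coupling evaluations (h_i, max Von Mises stress s_i);
# values are placeholders chosen only for illustration.
h = np.array([50e-3, 70e-3])        # section widths (m)
s = np.array([310e6, 215e6])        # corresponding max stresses (Pa)

a, b = np.polyfit(h, s, 1)          # linear model s(h) = a*h + b
sigma_y, gamma_s = 240e6, 1.5       # assumed yield strength and safety factor
h_threshold = (sigma_y / gamma_s - b) / a
print(f"admissibility threshold h = {1e3 * h_threshold:.1f} mm")
```

Because the stress-section relation is linear here, this regression replaces an entire deterministic optimization run at negligible cost.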
The local/global coupling algorithm tends to underestimate the value of the Von Mises stress in the system when not fully converged. Therefore, increasing the safety factor in the presence of large coupling errors appears consistent, while gradually reducing it as the algorithm converges becomes appropriate (Fig. 17). It can be observed that the safety factor naturally converges toward its nominal value when the coupling error becomes sufficiently small. The next step involves implementing a more efficient and targeted optimization strategy than the one described previously. In the earlier approach, the design parameter space was explored too broadly. From now on, the search is focused around the admissibility threshold, with a maximum deviation of 20%, for example. The robust design problem is thus reformulated to concentrate the optimization effort around this target value. Also, the goal is to estimate the Pareto front with a reduced number of iterations of the coupling algorithm. To achieve this, the non-converged version of the coupling algorithm is used, allowing for a coupling error of approximately 20%. In order to preserve the admissibility of the solutions, the safety factor in the robust optimization constraints is adjusted accordingly (Fig. 17). Although the specification of tighter bounds for the design parameter and the adjustment of the safety factor both rely on the same underlying information, presenting both remains essential. These two perspectives provide complementary means of integrating admissibility into the design process. Offering both options allows the designer to select the most appropriate strategy depending on the problem context and the available computational tools (Fig. 18).
[See PDF for image]
Fig. 17
Evolution of the safety factors according to the local/global coupling error
[See PDF for image]
Fig. 18
Robust solutions with the new strategy
It is observed that designs with h values below the threshold are no longer explored, which inherently guarantees admissibility. Additional benefits can be observed regarding the efficiency of the proposed low-cost strategy. By controlling the coupling error, the number of local/global exchanges is reduced, leading to fewer iterations and computations. Furthermore, by restricting the search space of the optimization algorithm, the number of model evaluations required for a given population in the genetic algorithm can be significantly decreased. By guiding the search within a narrower region and ensuring in advance the admissibility of candidate solutions, the multi-objective optimization algorithm explores a smaller portion of the design space, concentrates its efforts in a more relevant area, and requires fewer evaluations while also discarding fewer solutions. As an example, the number of solver evaluations has been reduced by approximately 50% through a targeted restriction of the optimization algorithm’s search space. Nevertheless, an estimation error on the first statistical moments of the performance function may still persist. As a result, it remains important to ensure that the designer is making a reliable decision despite this approximation. A validation phase is therefore essential to assess the accuracy of the predicted quantities and to confirm the robustness of the chosen solution (Fig. 19).
[See PDF for image]
Fig. 19
Coefficient of variations according to robust solutions
As previously mentioned, one possible criterion for selecting a robust solution is the coefficient of variation (CV) of the performance function, which intrinsically depends on the chosen design. For example, if a designer seeks a coefficient of variation on the order of 1%, the corresponding section size can be selected based on the low-cost estimate provided by the non-converged local/global algorithm. To ensure that the selected robust solution actually satisfies the desired criterion, a cross-validation phase can be performed. This involves evaluating the coefficient of variation using the statistical estimates obtained from the fully converged local/global algorithm, which, in our case, confirms the validity of the selected design.
Conclusions and perspectives
This paper presents a robust design approach for multiscale simulations, marking a significant step toward effective uncertainty management in engineering applications. A comparative analysis of uncertainty propagation methods, such as polynomial chaos expansion and perturbation techniques, helps determine the most appropriate strategies for different study contexts, ensuring reliable predictions in complex systems. In addition to uncertainty quantification, addressing the computational challenges associated with optimization is essential. In fact, while genetic algorithms are widely used for multi-objective problems and provide a good trade-off between exploration and convergence, their high computational cost due to numerous solver evaluations makes them less suitable in a multi-query framework. Also, in more complex multiscale simulations, the RDO framework may be computationally expensive and not fully compatible with operational or realistic constraints. In such cases, it could be more appropriate to adopt a design under uncertainty approach, which focuses solely on minimizing the expected value of the objective function [44, 45]. As in other works on non-intrusive approaches, methods such as acceleration techniques, preconditioning, and enhanced iterative solvers will be used to improve computational efficiency and convergence. More specific techniques will focus on implementing multi-solution [46] and multi-fidelity strategies [47] based on low-cost models to further improve the efficiency of the uncertainty propagation framework.
Author Contributions
LK developed the computational model, carried out the investigation, and was the primary author of the manuscript. OA and LC supervised the project, guided the research direction, and contributed to both the methodological framework and the conceptual design. SG facilitated the development of a simplified example based on a real industrial case.
Funding
Not applicable
Data availability
All data and materials are available upon request.
Declarations
Competing interests
The authors declare that they have no competing interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Zang, C; Friswell, M; Mottershead, JE. A review of robust optimal design and its application in dynamics. Comput Struct. 2005;83.
2. Guedri, M; Cogan, S; Bouhaddi, N. Robustness of structural reliability analyses to epistemic uncertainties. Mech Syst Signal Process. 2012;28:458–469. [DOI: https://dx.doi.org/10.1016/j.ymssp.2011.11.024]
3. Ouisse, M; Cogan, S. A decision-making methodology for the robust design of spot welds in automotive structures. IMAC XXVI; 2008. [DOI: https://dx.doi.org/10.1016/j.ymssp.2009.09.012]
4. Mandel, J. Balancing domain decomposition. Commun Numer Methods Eng. 1993;9.
5. Farhat, C; Lesoinne, M; LeTallec, P; Pierson, K; Rixen, D. FETI-DP: a dual–primal unified FETI method—part I: a faster alternative to the two-level FETI method. Int J Numer Meth Eng. 2001;50.
6. Ladevèze, P; Loiseau, O; Dureisseix, D. A micro–macro and parallel computational strategy for highly heterogeneous structures. Int J Numer Meth Eng. 2001;52.
7. Gosselet, P; Rey, C. Non-overlapping domain decomposition methods in structural mechanics. Arch Comput Methods Eng. 2006;13:515–572. [DOI: https://dx.doi.org/10.1007/BF02905857]
8. Melenk, JM; Babuška, I. The partition of unity finite element method: basic theory and applications. Comput Methods Appl Mech Eng. 1996;139.
9. Li, H; O'Hara, P; Duarte, CA. Non-intrusive coupling of a 3-D generalized finite element method and Abaqus for the multiscale analysis of localized defects and structural features. Finite Elem Anal Des. 2021;193:103554. [DOI: https://dx.doi.org/10.1016/j.finel.2021.103554]
10. Moës, N; Belytschko, T. X-FEM, de nouvelles frontières pour les éléments finis. Revue Européenne des Éléments Finis. 2002;11.
11. Efendiev, Y; Hou, T; Ginting, V. Multiscale finite element methods for nonlinear problems and their applications. Commun Math Sci. 2004;2(4). [DOI: https://dx.doi.org/10.4310/CMS.2004.v2.n4.a2]
12. Hughes, TJR; Feijóo, GR; Mazzei, L; Quincy, JB. The variational multiscale method—a paradigm for computational mechanics. Comput Methods Appl Mech Eng. 1998;166.
13. Oden, JT; Vemaganti, K; Moës, N. Hierarchical modeling of heterogeneous solids. Comput Methods Appl Mech Eng. 1999;172.
14. Wagner, GJ; Liu, WK. Coupling of atomistic and continuum simulations using a bridging scale decomposition. J Comput Phys. 2003;190.
15. Parsons, ID; Hall, JF. The multigrid method in solid mechanics: part I—algorithm description and behaviour. Int J Numer Meth Eng. 1990;29.
16. Rannou, J; Gravouil, A; Baïetto-Dubourg, MC. A local multigrid X-FEM strategy for 3-D crack propagation. Int J Numer Meth Eng. 2009;77.
17. Glowinski, R; He, J; Rappaz, J; Wagner, J. Finite element approximation of multi-scale elliptic problems using patches of elements. Numer Math. 2005;101:663–687. [DOI: https://dx.doi.org/10.1007/s00211-005-0614-5]
18. Picasso, M; Rappaz, J; Rezzonico, V. Multiscale algorithm with patches of finite elements. Commun Numer Methods Eng. 2008;24.
19. Minigher, P; Arteiro, A; Turon, A; Fatemi, J; Guinard, S; Barrière, L; Camanho, PP. On an efficient global/local stochastic methodology for accurate stress analysis, failure prediction and damage tolerance of laminated composites. Int J Solids Struct. 2024;303:113026. [DOI: https://dx.doi.org/10.1016/j.ijsolstr.2024.113026]
20. Narvydas, E; Puodziuniene, N; Thorappa, AK. Application of finite element sub-modeling techniques in structural mechanics. Mechanics. 2021;27.
21. Belgacem, FB. The mortar finite element method with Lagrange multipliers. Numer Math. 1999;84.
22. Bernardi, C; Maday, Y; Rapetti, F. Basics and some applications of the mortar element method. GAMM-Mitteilungen. 2005;28.
23. Hansbo, A; Hansbo, P. An unfitted finite element method, based on Nitsche's method, for elliptic interface problems. Comput Methods Appl Mech Eng. 2002;191.
24. Dhia, HB; Rateau, G. The Arlequin method as a flexible engineering design tool. Int J Numer Meth Eng. 2005;62.
25. Gendre, L; Allix, O; Gosselet, P; Comte, F. Non-intrusive and exact global/local techniques for structural problems with local plasticity. Comput Mech. 2009;44:233–245. [DOI: https://dx.doi.org/10.1007/s00466-009-0372-9]
26. Duval, M; Passieux, JC; Salaün, M; Guinard, S. Non-intrusive coupling: recent advances and scalable nonlinear domain decomposition. Arch Comput Methods Eng. 2016;23:17–38. [DOI: https://dx.doi.org/10.1007/s11831-014-9132-x]
27. Bettinotti, O; Allix, O; Perego, U; Oancea, V; Malherbe, B. A fast weakly intrusive multiscale method in explicit dynamics. Int J Numer Meth Eng. 2014;100.
28. Bouclier, R; Passieux, JC; Salaün, M. Local enrichment of NURBS patches using a non-intrusive coupling strategy: geometric details, local refinement, inclusion, fracture. Comput Methods Appl Mech Eng. 2016;300:1–26. [DOI: https://dx.doi.org/10.1016/j.cma.2015.11.007]
29. Guguin, G; Allix, O; Gosselet, P; Guinard, S. On the computation of plate assemblies using realistic 3D joint model: a non-intrusive approach. Adv Model Simul Eng Sci. 2016;3:1–18. [DOI: https://dx.doi.org/10.1186/s40323-016-0069-5]
30. Gosselet, P; Blanchard, M; Allix, O; Guguin, G. Non-invasive global–local coupling as a Schwarz domain decomposition method: acceleration and generalization. Adv Model Simul Eng Sci. 2018;5.
31. Blanchard, M; Allix, O; Gosselet, P; Desmeure, G. Space/time global/local noninvasive coupling strategy: application to viscoplastic structures. Finite Elem Anal Des. 2019;156:1–12. [DOI: https://dx.doi.org/10.1016/j.finel.2019.01.003]
32. Tirvaudey, M; Chamoin, L; Bouclier, R; Passieux, JC. A posteriori error estimation and adaptivity in non-intrusive couplings between concurrent models. Comput Methods Appl Mech Eng. 2020;367:113104. [DOI: https://dx.doi.org/10.1016/j.cma.2020.113104]
33. Enevoldsen, I; Sorensen, JD. Reliability-based optimization in structural engineering. Struct Saf. 1994;15.
34. Forouzandeh Shahraki, A; Noorossana, R. Reliability-based robust design optimization: a general methodology using genetic algorithm. Comput Ind Eng. 2014;74:199–207. [DOI: https://dx.doi.org/10.1016/j.cie.2014.05.013]
35. Deb, K. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput. 2002;6.
36. Handa, KN; Clarkson, BL. Application of finite element method to the dynamic analysis of tall structure. J Sound Vib. 1971;18.
37. Stefanou, G. The stochastic finite element method: past, present and future. Comput Methods Appl Mech Eng. 2009;198.
38. Kamiński, M. Generalized perturbation-based stochastic finite element method in elastostatics. Comput Struct. 2007;85.
39. Chevreuil, M; Nouy, A; Safatly, E. A multiscale method with patch for the solution of stochastic partial differential equations with localized uncertainties. Comput Methods Appl Mech Eng. 2013;255:255–274. [DOI: https://dx.doi.org/10.1016/j.cma.2012.12.003]
40. Ghanem, R; Spanos, PD. Stochastic finite elements: a spectral approach. New York: Springer; 1991. [DOI: https://dx.doi.org/10.1007/978-1-4612-3094-6]
41. Berveiller, M; Sudret, B; Lemaire, M. Stochastic finite element: a non intrusive approach by regression. Eur J Comput Mech (Revue Européenne de Mécanique Numérique). 2006. [DOI: https://dx.doi.org/10.3166/remn.15.81-92]
42. Lüthen, N; Marelli, S; Sudret, B. Sparse polynomial chaos expansions: literature survey and benchmark. SIAM/ASA J Uncertain Quantif. 2021;9.
43. Marelli, S; Sudret, B. UQLab: a framework for uncertainty quantification in Matlab. In: Vulnerability, Uncertainty, and Risk. 2022. pp. 2554–2563. [DOI: https://doi.org/10.1061/9780784413609.257]
44. Zhang, J; Taflanidis, AA; Medina, JC. Sequential approximate optimization for design under uncertainty problems utilizing kriging metamodeling in augmented input space. Comput Methods Appl Mech Eng. 2017;315:369–395. [DOI: https://dx.doi.org/10.1016/j.cma.2016.10.042]
45. Zhang, J; Taflanidis, A. Multi-objective optimization for design under uncertainty problems through surrogate modeling in augmented input space. Struct Multidiscip Optim. 2019;59:351–372. [DOI: https://dx.doi.org/10.1007/s00158-018-2069-1]
46. Allix, O; Vidal, P. A new multi-solution approach suitable for structural identification problems. Comput Methods Appl Mech Eng. 2002;191:2727–2758. [DOI: https://dx.doi.org/10.1016/S0045-7825(02)00211-6]
47. Courrier, N; Boucard, PA; Soulier, B. The use of partially converged simulations in building surrogate models. Adv Eng Softw. 2014;67:186–197. [DOI: https://dx.doi.org/10.1016/j.advengsoft.2013.09.008]
© The Author(s) 2025. This work is published under the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/).