The advent of revolutionary advances in artificial intelligence (AI) has sparked significant interest among researchers across a spectrum of disciplines. Machine learning (ML) has become a potent tool for advancing materials research, offering solutions beyond traditional methods. This study discusses traditional machine learning (TML) and deep learning (DL) algorithms, providing a concise overview of commonly used ML algorithms in materials research. It also examines the general workflow of ML applications in superalloys, focusing on key aspects such as data preparation, feature engineering, model selection, and optimization, offering insights into the ML modeling process. From the perspective of the materials tetrahedron, this review explores ML applications in the research and development of superalloy composition, microstructure, processing, and performance. It highlights the use of advanced ML models to predict material properties, optimize alloy compositions and microstructure, and enhance manufacturing processes. It covers the use of advanced ML models and discusses the prospects of ML in superalloy research, highlighting its transformative potential in alloy material science.
INTRODUCTION
Superalloys play a significant role in national strategy1,2 and are widely utilized in aerospace and energy fields.3,4 They are used not only for manufacturing turbine disks,5 blades,6 and various structural parts of engines but also for applications in ships,7 ground-based gas turbines,8 and nuclear energy production.9,10 To meet the demands of extreme working environments, such as high temperatures and complex loads, and to enhance high-temperature strength, oxidation resistance, and corrosion resistance, superalloys have evolved from the first to the fourth generation.11,12 However, the research and application of superalloys still face development challenges.8 The fifth-generation Re- and Ru-containing Ni-based single-crystal (SX) superalloys continue to balance high-temperature performance and processability by finely controlling alloy composition and microstructure design to enhance high-temperature strength, creep resistance, and oxidation and corrosion resistance.13–15 The demand for higher operating temperatures in gas turbine engines and land-based power generation has been increasing; advanced ultra-supercritical (A-USC) steam conditions reach up to 760°C and 35 MPa. The long-term exposure of superalloys to high temperatures and stress results in significant microstructural instability.16 Controlled processing typically helps to minimize defects in superalloy products, such as microsegregation, which may serve as crack nucleation sites or lead to the formation of detrimental phases.17 The relationship among the composition, microstructure, and properties of superalloys has been extensively studied, but the development and optimization of superalloys remain a long and challenging journey.18
From a materials tetrahedron perspective, ML has advanced R&D in superalloys across generations, facilitating ML-assisted and ML-driven optimization of superalloy compositions, microstructures, processing, manufacturing, and properties. Flemings introduced the comprehensive concept of the materials research tetrahedron in 1999, which combines "synthesis/processing," "composition/structure," "performance," and "properties" as four basic elements to describe and categorize materials.19,20 Most researchers agree that explaining and controlling one or more of these four fundamental elements is a central goal of materials science and engineering (Fig. 1). The development of ML has accelerated research across the materials tetrahedron, serving as a powerful driver for materials design, development, and optimization. ML algorithms, which train, validate, evaluate, and optimize models on data to make predictions about materials research objectives, have already achieved significant success in many superalloy studies owing to their flexibility, fast response, adaptivity, prediction accuracy, and excellent generalization capabilities.21-23 The traditional R&D process for materials mostly relies on trial-and-error methods and accidental discovery24,25 as well as extensive experimentation.26,27 In comparison to these costly, time-consuming, inefficient, and labor-intensive methods, ML28,29 has been integrated into the materials development process as an assistant tool or core driver. ML can predict the feasibility of candidate materials ahead of development experiments, changing the traditional approach that relied solely on experience and experimentation and significantly reducing the product scrap, material waste, and time costs associated with materials development.
Our research discusses ML-assisted design and development of superalloys, focusing on the "materials tetrahedron." Section "Machine Learning Algorithms" provides a brief overview of the ML algorithms commonly used in materials research, including TML and DL algorithms. Section "Methodology for Machine Learning Modeling of Superalloys" outlines the general process of ML modeling of superalloys, with a focus on data preparation, feature engineering, and model selection and optimization. The general workflow is designed to be adapted, with nuanced and targeted changes to specific details, when confronting different research objectives and data types. Section "Application of Superalloy Aspects with Machine Learning" reviews the results of ML-assisted development of superalloys and discusses the potential of ML applications in superalloys. Section "Summary and Perspectives" summarizes the material and offers new insights into the potential of ML in materials beyond the field of superalloys.
MACHINE LEARNING ALGORITHMS
The goal of ML is to extract "patterns" from historical data samples and to effectively adapt to new samples. ML is increasingly focused on solving real-life problems, encompassing both theoretical and modeling research. With the iterative advancement of algorithms, ML has evolved into two primary categories: TML and DL. TML models typically exhibit relatively simple architectures and limited data processing capabilities, necessitating comparatively low computational power. In contrast, DL, developed upon the foundations of TML to address more complex problems, features intricate architectures and robust data processing capabilities, thereby requiring significantly greater computational resources.30 Both TML and DL play pivotal roles in enabling computers to make specialized predictions and decisions with minimal human intervention.31 The deployment of ML models in scientific research not only augments the efficiency and precision of data analysis but also furnishes new tools and methodologies for scientific discovery and technological innovation. This section focuses on providing an overview and comparison of TML algorithms and DL algorithms.
Traditional Machine Learning Algorithms
Typical TML32,33 is mainly categorized into supervised learning and unsupervised learning.34,35 Supervised learning is a common ML approach in which models are trained using labeled training samples and then used to predict new data.36 Unsupervised learning refers to the process of discovering hidden structures and patterns from unlabeled training data.37 The development of specialized semi-supervised learning algorithms is essential when the labeled and unlabeled data in a dataset cannot be effectively utilized with supervised and unsupervised learning algorithms alone. Semi-supervised learning utilizes both labeled and unlabeled datasets to train the model and falls between supervised and unsupervised learning.38 Semi-supervised learning can be further divided into pure semi-supervised learning and transductive learning. The former assumes that the unlabeled samples in the training data are not the data to be predicted and aims for a model that also applies to data unobserved during training, whereas the latter assumes that the unlabeled samples considered during learning are precisely the data to be predicted.39 Either way, semi-supervised learning ultimately aims to maximize the use of available data and optimize the model's generalization ability.
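The distinction above can be made concrete with a toy self-training loop, one common semi-supervised heuristic: the classifier repeatedly labels the unlabeled point it is most confident about and adds it to the training set. This is a minimal illustrative sketch on hypothetical 1-D data with a 1-nearest-neighbour classifier, not a method from the literature reviewed here:

```python
# Minimal self-training sketch (hypothetical 1-D data). Confidence is taken
# as closeness to an already-labeled sample; real work would use a library
# implementation such as scikit-learn's SelfTrainingClassifier.

def nearest_label(x, labeled):
    """Return (label, distance) of the labeled sample nearest to x."""
    d, y = min((abs(x - xi), yi) for xi, yi in labeled)
    return y, d

def self_train(labeled, unlabeled):
    """Iteratively move the most confidently classified unlabeled point
    into the labeled set until the pool is exhausted."""
    labeled, pool = list(labeled), list(unlabeled)
    while pool:
        best = min(pool, key=lambda x: nearest_label(x, labeled)[1])
        y, _ = nearest_label(best, labeled)
        labeled.append((best, y))
        pool.remove(best)
    return labeled

labeled = [(0.0, "A"), (10.0, "B")]   # two labeled samples
unlabeled = [1.0, 2.0, 9.0, 8.0]      # unlabeled pool
result = dict(self_train(labeled, unlabeled))
# points near 0 acquire label "A", points near 10 acquire label "B"
```

The unlabeled points propagate labels outward from the two seeds, which is exactly the "maximize the use of data" behaviour described above.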
Early ML was initially characterized by simple inductive statistics and logical reasoning aimed at minimizing information entropy. Between the 1950s and the 1980s, the paradigm of ML transitioned from "reasoning" to "learning," focusing on the acquisition and utilization of domain-specific knowledge to develop expert systems. By the 1980s, ML had been formally identified as the "crucial solution to the knowledge engineering bottleneck".39 Table I compiles frequently employed TML algorithms in the field of materials science. As the table shows, TML algorithms are particularly well suited for classical regression, classification, and clustering analyses; they function effectively on smaller datasets and can yield outstanding results, enabling extrapolative predictions toward target objectives.40 The specific choice of algorithm depends on the particular problem and data characteristics.41 Therefore, applying ML in materials research is not simply a matter of calling off-the-shelf algorithms and pairing them with data, nor should it be reduced to screening existing models for effectiveness; such a rigid screening process restricts the flexibility and progress of ML in materials research applications. Researchers should not confine themselves to standardized algorithms.
In materials science research, TML algorithms are highly reliant on expert knowledge, necessitating manual operations such as feature selection, extraction, and dimensionality reduction. This imposes rigorous demands on data preprocessing for model inputs. To address feature redundancy and optimize the RF algorithm for fitting creep data of Ni-based SX superalloys, Huang et al.42 employed principal component analysis (PCA) for dimensionality reduction. Although this approach improves model fitting, it can result in the loss of the physical significance of the input features. The integration of CALPHAD methods can guide dimensionality reduction or help retain the physical significance to the greatest extent possible. Such feature engineering enhances the interpretability of TML models. The principles and structures of TML algorithms are relatively simple and transparent, facilitating the comprehension of patterns and data relationships within ML models. TML algorithms remain firmly grounded in their mathematical principles during application, exhibiting a degree of inherent rigidity in both model structure and performance.43 While typical TML algorithms can be applied to research data across various domains and may exhibit exceptional performance on certain datasets, they are less proficient at handling unstructured data such as images, time series, and volumetric data, which are prevalent in scientific research. Moreover, the scope for improvement in TML algorithms is constrained by their standardized nature, which is not inherently conducive to the innovative essence of scientific research. Consequently, DL algorithms offer superior flexibility.
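As a concrete illustration of the PCA-based dimensionality reduction mentioned above, the following minimal numpy sketch computes principal components via the singular value decomposition. The data are synthetic placeholders standing in for alloy features; this is not the cited authors' implementation:

```python
import numpy as np

# PCA via SVD on a small synthetic feature matrix. One column is made
# (almost) redundant, mimicking correlated alloy descriptors.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
X[:, 3] = 2 * X[:, 0] + 0.01 * rng.normal(size=50)  # redundant feature

Xc = X - X.mean(axis=0)                  # center each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)          # variance ratio per component

k = 2                                    # keep the first k components
X_reduced = Xc @ Vt[:k].T                # projected data, shape (50, 2)
```

The redundant column is absorbed into the leading component, which is why PCA reduces dimensionality but, as noted above, the components no longer correspond to individual physical features.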
Deep Learning Algorithms
DL is a subset of ML, which falls under the umbrella of AI65,66 (Fig. 2). DL has evolved from the simple neural networks (NN) used in TML. The concept of DL originated in cybernetics and later developed within the field of connectionism. Typical structures of DL algorithms are illustrated in Fig. 3: (1) The deep feedforward network (DNN), the most typical DL model, defines a mapping y = f(x; θ) and learns the parameter values θ that best approximate the target function. (2) Feedforward neural networks (FNN) have no feedback connections from the model's output back into the model itself. When feedback pathways are introduced within the network, feeding a layer's output back to neurons in the previous or the same layer, the feedforward network becomes a recurrent neural network (RNN), commonly used for sequential data processing. (3) Convolutional neural networks (CNN) are designed to handle data with a grid-like structure, such as time-series, image, and volumetric data; CNNs use convolution operations, rather than general matrix multiplication, in at least one layer of the network.
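The mapping y = f(x; θ) learned by a deep feedforward network can be sketched in a few lines of numpy. The weights below are random placeholders for learned parameters θ, and the layer sizes are arbitrary; this is illustrative only, not any model from the reviewed literature:

```python
import numpy as np

# Forward pass of a tiny feedforward network y = f(x; theta): two dense
# layers with a ReLU nonlinearity in between. Training would fit the
# parameters (W1, b1, W2, b2) by backpropagation.
rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input dim 4 -> hidden 8
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden 8 -> output 1

def relu(z):
    return np.maximum(z, 0.0)

def f(x):
    """Map a batch of inputs through the network."""
    h = relu(x @ W1 + b1)      # hidden representation
    return h @ W2 + b2         # one scalar output per sample

x = rng.normal(size=(3, 4))    # batch of 3 samples
y = f(x)                       # shape (3, 1)
```

Stacking more such layers, or swapping the dense products for convolutions, yields the DNN/CNN variants described above.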
As illustrated by the typical DL algorithms in Fig. 3, the fundamental architecture of neurons allows for significant flexibility in adjusting neural network structures, such as the number of neurons, layers, activation functions, etc. This flexibility empowers researchers to design custom networks and loss functions tailored to specific tasks, whereas TML generally depends on predefined algorithms for implementation.
Compared to TML, the computational models with multiple processing layers and robust nonlinear transformation capabilities enable DL to learn data representations at various levels of abstraction. This allows DL to effectively handle large-scale and high-dimensional data, particularly raw natural data, demonstrating superior data adaptability and enabling automatic feature engineering, a significant advantage when handling unstructured data.67 However, the nested nonlinear structures of DL are inherently difficult for humans to interpret, making its predictions harder to trace than those of TML.68-70 Unlike TML, which requires designing and training models from scratch for each new task, DL can achieve model transfer through pre-trained models.71 This overcomes the generalization limitations of small datasets, allowing a model trained on a large dataset to be fine-tuned for new tasks and datasets, thus quickly adapting to new tasks.72 For instance, a Transformer-based deep neural network (SaTNC) pre-trained on a traditional nickel-based SX superalloy creep dataset was fine-tuned on ultra-high-temperature creep data to predict the creep rupture life of SX superalloys.73 DL can also implement reinforcement learning and dynamic updates to maintain a model's real-time accuracy, a capability far beyond that of TML. Although DL can construct larger and more complex models in terms of algorithmic structure, its effectiveness across different fields cannot be generalized. In specific cases, the performance of TML and DL may be comparable, or TML may even perform better. For example, TML outperformed CNN on face recognition databases such as AR and Yale.74 A comparative analysis of the TML algorithm SVM and the DL algorithm CNN in image classification showed that on the large-sample MNIST dataset, SVM achieved an accuracy of 0.88, while CNN reached 0.98.
However, with the small-sample COREL1000 dataset, SVM achieved an accuracy of 0.86, surpassing CNN's 0.83.75
In conclusion, the suitability of machine learning algorithms should be carefully considered based on the research objectives and data types at hand. One should not blindly adopt the more "advanced" DL and overlook TML's capability to simplify problems.
METHODOLOGY FOR MACHINE LEARNING MODELING OF SUPERALLOYS
In materials science, the general workflow for the application of ML includes data collection, preprocessing, feature engineering, model selection and training, evaluation, and optimization, as illustrated in Fig. 4. The data must be accurate, complete, consistent, timely, reliable, and interpretable. Feature engineering involves processing and transforming raw data to extract more information and express features, aiming to enhance model performance and generalization. The quality of the features determines the maximum performance of the agent model and the efficiency of the search for suitable candidates.76 Model selection considers data characteristics, business requirements, performance expectations, computational resources, and application scenarios to ensure the final model's accuracy and generalization capability. Model optimization is a critical step because the collaborative adjustment of the entire machine learning modeling process is necessary to achieve the optimal model performance. Therefore, this subsection primarily reviews and discusses the key aspects of applying machine learning in superalloys, focusing on data preparation, feature engineering, model selection, and optimization.
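The workflow steps above (collect data, split, fit, evaluate) can be sketched end-to-end on synthetic data. All names and numbers here are illustrative assumptions, not results from the reviewed studies:

```python
import numpy as np

# End-to-end mini-workflow: synthetic "features" and target, an 80/20
# train/validation split, an ordinary least-squares fit, and a held-out
# RMSE as the evaluation metric.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(100, 3))                       # features
y = X @ np.array([2.0, -1.0, 0.5]) + 0.05 * rng.normal(size=100)

# 1) split: 80% train, 20% validation
idx = rng.permutation(100)
train, val = idx[:80], idx[80:]

# 2) train: ordinary least squares as the simplest possible model
w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)

# 3) evaluate: RMSE on the held-out data
rmse = np.sqrt(np.mean((X[val] @ w - y[val]) ** 2))
```

In practice each step (feature engineering, model selection, optimization) is elaborated as the following subsections describe, but the skeleton stays the same.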
Data Preparation
Data Collection
Data science is highly dependent on and limited by the acquisition of large-scale data. Wang et al.77 proposed a natural language processing pipeline for extracting literature data to obtain chemical composition and property data of superalloys, enabling analysis and prediction through data mining. Wang et al.78 developed a semi-supervised text mining method to effectively extract synthesis and processing action sequences from the published literature. A total of 9853 sets of superalloy synthesis and processing data, including chemical compositions, were automatically extracted from 16,604 articles on superalloys. The synthesis factors identified through text mining significantly enhance the performance of a data-driven model for predicting γ′ size. The text mining approach thus complements data collection prior to constructing an ML model.
The enhancement of superalloy-related databases not only demonstrates a significant advantage in data preparation79 but also streamlines the application of ML in superalloys to expedite the design and optimization of superalloys. In addition, there are also data from various sources such as simulations, computations, and experiments.80 However, it is very difficult for these sources to autonomously generate a large amount of data, and the limited amount of data is likely to restrict the usefulness and predictive accuracy of ML models.81 For this reason, Chen et al.82 developed high-fidelity graph networks to accurately predict material properties with limited data. Sutojo et al.83 employed the method of virtual sample generation (VSG) to augment the training set by adding the generated virtual samples. This was done to improve the correlation between descriptors and objectives. The challenge of small data can be addressed at the algorithmic level through small data modeling and imbalance learning84 or by incorporating information from open databases or other research sources.
Data Preprocessing
Heterogeneous data require essential cleaning processes such as eliminating duplicates, filling in missing information, addressing outliers, smoothing data, converting to discrete values, and standardizing to enhance the model's computational efficiency, generalization, and robustness.85 Data preprocessing has evolved into a systematic process. However, for different types of heterogeneous data, specific research questions should be addressed using targeted preprocessing methods.86
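A minimal example of the cleaning operations listed above (deduplication, imputation of missing values, standardization), applied to a small hypothetical array:

```python
import numpy as np

# Toy cleaning pass: drop duplicate rows, impute NaNs with the column
# mean, then standardize each feature to zero mean and unit variance.
X = np.array([[1.0, 10.0],
              [1.0, 10.0],        # duplicate row
              [2.0, np.nan],      # missing value
              [3.0, 30.0]])

X = np.unique(X, axis=0)                      # 1) deduplicate rows
col_mean = np.nanmean(X, axis=0)              # 2) column-wise mean impute
X = np.where(np.isnan(X), col_mean, X)
X = (X - X.mean(axis=0)) / X.std(axis=0)      # 3) standardize features
```

Real heterogeneous superalloy data would, as the text notes, need problem-specific choices at each of these steps (e.g., physically motivated imputation rather than a plain column mean).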
There is a gap between how computers and humans understand semantics. Currently, TML algorithms are the predominant approach in superalloy research, making the structured processing of large-scale unstructured data particularly important. Information extraction (IE) in natural language processing (NLP) aims to extract structured information from unstructured text, enabling computers to understand natural language. To minimize manual intervention, Yan et al.87 proposed a semi-supervised materials IE framework that automatically generates corpora and tested it on the γ′ solvus temperature, density, and solidus temperature of superalloys. Microstructure characterization in superalloy research often involves images and three-dimensional structures. Images can be converted into matrices composed of pixel values88,89 and can be quantitatively analyzed using DL models.90 Three-dimensional structures can be transformed into volume elements to fit superalloy microstructures into machine learning models,91 or structured data can be established through correlation analysis and physical transformation.92 Compared to TML, DL algorithms can broaden the window of data processing requirements, where encoding is a key step. A typical example is the Transformer encoder93 (see Fig. 5b). Utilizing self-attention mechanisms and feedforward neural networks, the encoder transforms input sequences into an internal representation known as a context vector. The decoder then generates output sequences based on this context vector using masked multi-head attention and feedforward neural networks.
OpenAI's ChatGPT, developed on the Transformer architecture, is a large multimodal model that achieves encoder-decoder capabilities for complex and vast data such as text and images.94,95 Such advanced multimodal algorithms demand high computational power which, in the context of superalloys, might exceed experimental costs, contradicting the purpose of data-driven superalloy development. Nevertheless, understanding DL's exceptional encoding and decoding capabilities is enlightening for superalloys research. Yang et al.96 established a physically constrained creep rupture life prediction model for nickel-based SX superalloys using a Transformer-based neural network framework. Currently, DL encoding of raw superalloy data primarily relies on simpler neural network principles (Fig. 5a), such as CNN, LSTM, and RNN. The design of encoders and decoders must also align with research needs, particularly the data type and whether it requires structuring, semantic conversion, or data parameterization. As algorithms continue to evolve, researchers are eager to experiment with novel models to explore their applicability to superalloys, anticipating the continual transplantation of more advanced AI models into superalloys research in the future.
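The self-attention operation at the heart of the Transformer encoder described above can be written compactly in numpy. This single-head sketch uses random matrices as placeholders for learned projection weights and is purely didactic:

```python
import numpy as np

# Scaled dot-product self-attention (single head). Each position in the
# input sequence attends to every other position; the attention weights
# in each row are a probability distribution (they sum to 1).
def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model). Returns (context vectors, attention weights)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (seq_len, seq_len)
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

rng = np.random.default_rng(7)
d = 8
X = rng.normal(size=(5, d))                   # 5 "tokens", model dim 8
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
context, weights = self_attention(X, Wq, Wk, Wv)
```

A full encoder stacks this operation with feedforward layers, residual connections, and normalization, exactly as summarized in the text above.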
Feature Engineering
Raw data may contain many redundant or irrelevant features, leading to the curse of dimensionality. Selecting, combining, or extracting features is an effective means of reducing data dimensionality. Effective mathematical representation is essential for both physical and machine learning models as well as for constructing fingerprints and descriptors.97 Converting raw data into descriptors containing domain knowledge98 allows for better adaptation and utilization of model algorithms. Taylor et al.99 re-parameterized microstructure data of 97 experimental superalloys to establish descriptors that capture underlying physics and are more suitable for machine learning. Using partition coefficients instead of phase compositions provides simpler physical information encoding, as partition coefficients for a given element in various alloys are usually more similar than their respective phase compositions.
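The partition-coefficient encoding described above can be illustrated with a short sketch; the element concentrations below are hypothetical placeholders, not values from the cited work:

```python
# Partition-coefficient descriptors: for each element i,
# k_i = (concentration in the gamma-prime phase) / (concentration in the
# gamma matrix). Compositions here are illustrative at.% placeholders.
def partition_coefficients(c_gamma_prime, c_gamma):
    """Element-wise ratio of phase compositions."""
    return {el: c_gamma_prime[el] / c_gamma[el]
            for el in c_gamma_prime if el in c_gamma}

c_gp = {"Al": 16.0, "Ta": 2.0, "Cr": 2.5}    # gamma-prime (hypothetical)
c_g = {"Al": 4.0, "Ta": 0.5, "Cr": 12.5}     # gamma matrix (hypothetical)
k = partition_coefficients(c_gp, c_g)
# k > 1: element partitions to gamma-prime; k < 1: to the matrix
```

Because such ratios vary less between alloys than the raw phase compositions do, they make more transferable model inputs, which is the physical intuition stated above.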
In materials science, manually processing data to provide reliable labels is highly challenging. Machine learning algorithms can also construct representation models,100 surpassing the constraints of traditional methods. Zheng et al.101 used a U-net network for microstructure image segmentation and feature extraction and analyzed feature importance based on a DNN + ResNet model by progressively removing features. Learning models based on persistent functions (PFs) offer significant accuracy advantages over traditional descriptor-based models; in such graphical (or network) models, pairwise interactions within and/or between molecular structures are described.102 This approach to feature engineering using representation models is more systematic. Soofi et al.103 investigated the adaptive encoding of state name variables, which contain critical information for alloy manufacturing and are often considered important features for predicting alloy properties in ML models. This adaptive encoding can be applied to a variety of decomposable variables to advance ML-assisted alloy design. DL has an advantage over TML in feature engineering: the growing popularity of DL can be attributed to its ability to read raw data directly for pattern extraction and to handle more complex problems.
Model Selection
Various types of problems are appropriate for different ML models. Classification problems can be addressed using logistic regression, decision trees, support vector machines, etc. Regression problems can be tackled using linear regression, ridge regression, gradient-boosted regression, etc. Clustering problems can be approached using K-means clustering, hierarchical clustering, etc. Dai et al.104 argued that linear regression is more suitable for fitting expressions, making it applicable for selecting the most appropriate descriptors and fitting the structure-property relationship. The size of the data and the number of features should be considered when selecting a suitable ML model. Large-scale datasets are typically suited to models that can efficiently process large amounts of data, such as random forests,105 gradient-boosted trees, and deep neural networks. Models suited to small-scale datasets tend to be simpler, although this cannot be generalized. If the dataset contains many features, dimensionality reduction techniques such as principal component analysis and linear discriminant analysis may be necessary to reduce the dimensionality of the features. There are also discrepancies in how well different data types adapt to various models. For example, structured data (e.g., tabular data) are suitable for traditional supervised learning algorithms. Text data suit NLP-related models,106 such as RNN, CNN, and attention mechanisms, while image data suit CNN. In addition, complex models (e.g., DNN) may have greater modeling capabilities but are also more susceptible to overfitting. Conversely, straightforward models (e.g., linear regression) may be easier to explain and understand. When selecting a model, it is important to strike a balance between complexity and interpretability.
At the same time, researchers are encouraged to enhance and develop models with improved adaptability and performance, including algorithm fusion.107 Depending on the specific problem and dataset, the predictive accuracy and evaluation metrics of the model are also crucial factors to consider when selecting a ML model. The final model should be able to meet the needs of prediction accuracy or other assessment metrics to produce the best prediction results. In conclusion, choosing the right ML model necessitates thorough consideration and adaptable adjustment and optimization tailored to the specific problem and data scenario.
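The complexity-versus-overfitting trade-off discussed above can be demonstrated with a simple holdout-based model selection over polynomial degree. The data are synthetic and the setup is purely illustrative:

```python
import numpy as np

# Fit polynomials of increasing degree to noisy data and pick the degree
# with the lowest validation error: too low a degree underfits, too high
# a degree overfits the training noise.
rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 60)
y = np.sin(np.pi * x) + 0.1 * rng.normal(size=60)   # hypothetical target

x_tr, y_tr = x[:40], y[:40]        # training split
x_va, y_va = x[40:], y[40:]        # validation split

def val_error(degree):
    coef = np.polyfit(x_tr, y_tr, degree)
    return np.mean((np.polyval(coef, x_va) - y_va) ** 2)

errors = {d: val_error(d) for d in range(1, 11)}
best_degree = min(errors, key=errors.get)
```

The validation error, not the training error, drives the choice; this is the same logic that underlies the evaluation-metric-based selection advocated above.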
Model Optimization
The data preparation stage involves preprocessing and data cleaning, feature engineering, model selection, and initialization of model parameters, all of which are processes aimed at optimizing the model. Balancing the data, enhancing its quality, and improving its distribution all benefit model training and prediction, leading to improved performance. Feature extraction and selection are also important, and features based on expert knowledge are not necessarily closely related to the goal. Algorithms can be used to rank the importance of feature combinations, which is crucial for determining the most important descriptors and ultimately impacts the final results of the model. After training, the model is evaluated using validation sets or cross-validation methods. Evaluation metrics such as accuracy, precision, recall, and F1 scores are then calculated to assess the model's performance. Model selection and parameter initialization rarely yield excellent models directly; subsequently, hyperparameters such as the learning rate, regularization coefficient, and number of hidden-layer nodes are adjusted to optimize the model.108 Optimization at this stage is an iterative, cyclic process in which the performance and generalization ability of the model are enhanced through continuous attempts, adjustments, and improvements. The optimized model is deployed in real-world applications and is monitored and updated, providing timely detection of the model's performance in the real environment, identifying model drift or performance degradation, and triggering retraining and optimization with new data. Continuous iteration and optimization of the model are carried out based on its performance and feedback in real-world applications.
This may involve adjusting the feature engineering, experimenting with new algorithms, incorporating additional training data, and so on, to further enhance the performance and generalization of the model. By iterating through the above steps, the model can be gradually improved and optimized to better align with the actual problem and generate more accurate and reliable predictions. Constant optimization and iteration are crucial aspects of the ML model development process, as they continuously enhance the quality and performance of the model. Reinforcement learning is a method for acquiring optimal behavioral strategies through the interaction between an intelligent agent and its environment; it optimizes decision-making through trial-and-error and a reward mechanism. Common algorithms in reinforcement learning include Q-learning, deep reinforcement learning, and others. Ding et al.109 introduced the basics of classical reinforcement learning and provided an overview of deep reinforcement learning in their book "Introduction to Reinforcement Learning."
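The Q-learning algorithm mentioned above can be illustrated with a tabular sketch on a toy five-state chain. This is a didactic example with made-up rewards, unrelated to any superalloy dataset:

```python
import random

# Tabular Q-learning on a 5-state chain: the agent starts at state 0 and
# is rewarded for reaching the rightmost state. Actions: 0 = left,
# 1 = right. The behavior policy is random; Q-learning is off-policy, so
# it still learns the values of the greedy policy.
N, ACTIONS = 5, (0, 1)
alpha, gamma = 0.5, 0.9
Q = [[0.0, 0.0] for _ in range(N)]

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
    return s2, (1.0 if s2 == N - 1 else 0.0)   # reward only at the goal

random.seed(0)
for _ in range(300):                   # episodes
    s = 0
    for _ in range(50):                # cap on episode length
        a = random.choice(ACTIONS)     # random exploration
        s2, r = step(s, a)
        # Q-learning update: bootstrap from the best action in s2
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
        if s == N - 1:
            break

# greedy policy implied by the learned Q-table
policy = [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N)]
```

The reward-driven update here is the trial-and-error mechanism described above; deep reinforcement learning replaces the Q-table with a neural network.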
APPLICATION OF SUPERALLOY ASPECTS WITH MACHINE LEARNING
This section provides an account of ML-assisted superalloy design and development, covering the four elements of the materials research tetrahedron. It focuses on ML-assisted superalloy composition design, processing and fabrication, microstructure analysis, and performance prediction and optimization.
Compositional Design of Superalloys Based on ML
The goal of designing superalloy compositions is to achieve innovation in development and to optimize performance. Traditionally, desired alloy compositions and promising parameter combinations are obtained through extensive calculations and experiments. This approach is inefficient because of the low precision of composition and parameter design and the high time and monetary costs, and it is not conducive to the accurate positioning and rapid development of superalloy compositions. The high-throughput design of superalloy compositions typically relies on the coupling of computational thermodynamics and kinetics, which is the core of integrated computational materials engineering (ICME).110,111
However, using simulation alone to capture the compositional space and achieve rapid response to the vast data and structural complexity of superalloys remains challenging. The integration of ML algorithms can mitigate such issues. Li et al.112 embedded DL algorithms into ICME, enhancing its robustness and computational accuracy for high-throughput microstructure simulation of compositionally complex alloys. The combination of machine learning and computational materials science provides a strong driving force for the design and optimization of superalloy compositions. By integrating first principles, phase diagrams, and algorithms, Zhao et al.113 utilized JMatPro software and ML to achieve multi-objective optimization of low-density, high-strength nickel-based superalloys. In designing the composition system for optimizing the characteristics of nickel-based SX superalloys, Xu et al.114 used ICME and theoretical models to narrow the training data space for machine learning, combined with exogenous data, and utilized the SVR algorithm to select the optimal subset, resulting in the elimination of 99.9985% of the alloys. The L12 Co3(Al,W) ternary precipitation phase plays a crucial role in strengthening Co-based superalloys, but it is not very stable.115 Doping with transition metals (TM) is one method of improving the stability of this precipitated phase. Guo et al.116 conducted calculations to determine the energy stability and structure of Co3(Al, X) doped with over 30 types of 3d, 4d, and 5d TMs using first-principles (FP) methods. They initially identified the doping elements Hf, Ta, and Ti through screening. Based on the FP results of the three TMs, three center-environment (CE) models and chemical composition (CC) models were constructed to predict the formation energies and lattice constants of Co3(Al, X) (X = 3d, 4d, and 5d TM elements) using RF and support vector regression with radial basis function (SVR_RBF) algorithms.
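The regression step in such composition-property surrogate models can be sketched with scikit-learn. The descriptors, target function, and hyperparameters below are synthetic placeholders, not data from ref. 116:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

# Fit RF and SVR with an RBF kernel to map composition-derived descriptors
# to a target such as a formation energy. All data here are synthetic.
rng = np.random.default_rng(42)
X = rng.uniform(0, 1, size=(300, 4))   # e.g. atomic radius, electronegativity, ...
y = X[:, 0] - 2 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(300)  # fake target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
svr = SVR(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)

rf_r2 = rf.score(X_te, y_te)    # held-out R^2 for each surrogate
svr_r2 = svr.score(X_te, y_te)
```

Once trained, either surrogate can score thousands of candidate compositions far faster than repeating the underlying FP calculations.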
The Co3(Al, WX3) structure becomes most stable when X is a transition metal element from groups IVB and VB. This demonstrates that ML methods can effectively enhance the capabilities of first-principles design for superalloy compositions.117 In nickel-based superalloys, when matrix a/2<110>-type dislocations pass through the ordered precipitates, the high-energy antiphase boundary (APB) generated in the {111} plane family promotes the strengthening of the γ′ phase in the superalloy.118-120 Chen et al.121 generated a large amount of density functional theory (DFT) data by analyzing the compositional dependence of the APB energy γAPB in a ternary Ni3Al-based alloy model. RF was used to analyze feature correlations, identifying nine physical properties and constructing a transferable APB energy prediction model.
Based on the previous literature review on ML-assisted superalloy composition design, there is a common workflow for ML in composition design, as shown in Fig. 6. While ML models can be used for the rapid screening of alloy compositions, significantly reducing the costs associated with relying solely on computation and experimentation,12 there are two key points to consider when researchers use ML for the design and optimization of superalloy compositions. First, materials computation plays a crucial role in the front-end design of superalloy compositions, as depicted in Fig. 6. This is due to the scarcity of real experimental data in superalloy composition research. Although ML accelerates the design of superalloy compositions as an auxiliary tool, the ultimate results of composition optimization are directly linked to real-world data. The accuracy, reliability, and fidelity of the training dataset are heavily influenced by the underlying materials computations, simulations, and experimental data. Second, caution should be exercised when ML models assume linear and monotonic relationships; additional physical features should be incorporated to impose conditional constraints and make the models more realistic. Although DL can provide better nonlinear reasoning, it often lacks interpretability regarding the principles of composition design, which is why materials computation is incorporated to enhance physical explanations.
Microstructure Development of Superalloys Based on ML
The traditional approach to studying the microstructure of superalloys is based on two-dimensional microstructural characterization,123 including optical microscopy, scanning electron microscopy (SEM), transmission electron microscopy (TEM), energy-dispersive spectroscopy (EDS), and others. The identification and analysis of microstructures largely rely on human expertise.124 However, visual recognition, classification, and generalization of the microstructure of superalloys can be accomplished by using ML techniques for image recognition, reasoning, and others.125 Automated microstructural recognition is crucial for characterizing and designing novel superalloys, particularly in high-throughput experiments, where it can significantly reduce the need for human intervention.89,126 The development of DL has expanded the range of data that TML is designed to process, allowing the automatic identification of microstructures in superalloys.127 Of particular significance are CNNs,128,129 which are extensively employed for the analysis of both two- and three-dimensional data in superalloys. CNNs are among the most representative algorithms utilized in the investigation of superalloy microstructure, as illustrated in Fig. 7. Xu et al.130 rapidly characterized the large-scale microstructure of nickel-based SX superalloys during high-temperature creep across the entire typical stress range using high-resolution SEM and the ALTLAS module. The U-Net algorithm, derived from basic CNNs, achieved regional recognition through pixel-level classification of two-dimensional characterization data, as shown in Fig. 7a. Subsequently, logical algorithms were used to continuously quantify the γ/γ′ microstructural parameters in the dendritic regions, establishing a quantitative relationship between the microstructural evolution of nickel-based SX superalloys and creep.
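The pixel-level classification idea behind such segmentation can be illustrated with a much simpler stand-in than a full U-Net: a per-pixel classifier over local intensity features of a synthetic two-phase image. The micrograph, labels, and features here are invented for illustration; the cited work applies a true U-Net to SEM data:

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

# Synthetic two-phase "micrograph": smoothed noise thresholded into blobs
# (a crude stand-in for a gamma-prime mask), rendered with two gray levels.
rng = np.random.default_rng(1)
truth = uniform_filter(rng.random((64, 64)), size=9) > 0.5
image = np.where(truth, 0.8, 0.3) + 0.05 * rng.standard_normal((64, 64))

# Per-pixel feature vector: raw intensity plus a local neighborhood mean.
feats = np.stack([image, uniform_filter(image, size=5)], axis=-1).reshape(-1, 2)
labels = truth.reshape(-1)

# Classify every pixel independently, then reassemble the label map.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(feats, labels)
pred = clf.predict(feats).reshape(64, 64)
accuracy = (pred == truth).mean()
```

A U-Net replaces the hand-built local features with learned convolutional features, but the output is the same kind of per-pixel phase map, which downstream logic can then quantify into microstructural parameters.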
Khatavkar et al.131 also predicted the Vickers hardness (HV) of cobalt-based and nickel-based superalloys from microstructural and compositional features using the two-point correlation (TPC) method combined with image-processing techniques and ML. Machine learning is highly dependent on data yet offers flexibility and adaptability, which facilitates the implementation of transfer learning. Automated models for identifying the microstructure of superalloys can also be applied across different data sources. Yang et al.132 proposed a semi-supervised deep transfer learning framework for recognizing the microstructure of nickel-based superalloys with varying compositions and heat treatment procedures. Using two U-Net+ complexes, including supervised feature alignment and unsupervised distribution alignment, the framework enabled the transfer of knowledge from a source domain to a target domain for the recognition model.
3D-CNNs are CNN models that operate directly on three-dimensional data,134 including volumetric and time-series data. Cecen et al.133 employed 3D-CNNs to estimate features of higher-order spatial correlations, as shown in Fig. 7b. These features were then combined with the TPC method to establish the relationship between 3D microstructures and their effective homogenized properties. This process was used to construct ML models for microstructure-property relationships and to predict the properties of new microstructures. Hestroffer et al. described the 3D polycrystalline microstructure of representative volume elements as homogeneous undirected graphs, with nodes consisting of individual grains connected by edges (shared grain boundaries). Utilizing information such as crystallographic orientation, size, and grain adjacency, graph neural networks (GNNs) were used to model the stiffness and yield strength of α-Ti microstructures. The prediction error for unseen microstructures was around 1%, comparable to the performance of 3D-CNNs, with excellent extrapolative capabilities.
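The graph representation used as GNN input can be sketched in plain Python: each grain is a node carrying features such as size and orientation, and each shared grain boundary is an undirected edge. The grain IDs, feature values, and the mean-size aggregation are made-up illustrations, not data from the cited study:

```python
# Node features: one entry per grain (sizes and Euler angles are invented).
grain_features = {
    1: {"size_um": 12.0, "euler_angles": (10.0, 45.0, 5.0)},
    2: {"size_um": 8.5,  "euler_angles": (30.0, 20.0, 60.0)},
    3: {"size_um": 15.2, "euler_angles": (5.0, 80.0, 12.0)},
}
boundaries = [(1, 2), (2, 3)]       # edges: shared grain boundaries

# Build a symmetric adjacency list, i.e. a homogeneous undirected graph.
adjacency = {g: set() for g in grain_features}
for a, b in boundaries:
    adjacency[a].add(b)
    adjacency[b].add(a)

# A GNN message-passing layer aggregates neighbor features at each node;
# here the mean neighbor grain size stands in for that aggregation step.
mean_neighbor_size = {
    g: sum(grain_features[n]["size_um"] for n in nbrs) / len(nbrs)
    for g, nbrs in adjacency.items() if nbrs
}
```

Encoding the microstructure as a graph rather than a voxel volume lets the model exploit grain adjacency directly, which is part of why the GNN extrapolates well to unseen microstructures.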
The excellent high-temperature strength and resistance to oxidation and creep of superalloys are attributed to their chemical properties and microstructure. In Ni- and Co-based superalloys, the microstructure is primarily composed of ordered γ′ precipitates with an L12 structure uniformly embedded in a disordered γ matrix, playing a crucial role in enhancing the performance of these alloys. Liu et al.135 constructed classification models using LR, DT, AdaBoost, and gradient tree boosting (GTB) algorithms. They combined these with efficient global optimization through adaptive iterative loops to achieve multi-objective optimization of the microstructural stability, γ′ solvus temperature, γ′ volume fraction, and density of Co-based superalloys. Subsequently, they employed various TML algorithms in conjunction with a tri-objective optimization algorithm derived from the probability density function of multivariate Gaussian distributions (MGD) to optimize the γ′ volume fraction, size, and morphology in CoNiAlCr-based superalloys, aiming to achieve the desired γ/γ′ microstructure for alloy synthesis,136 as shown in Fig. 8a. Topologically close-packed (TCP) phases and geometrically close-packed (GCP) phases are recognized as detrimental to mechanical properties137 and should be avoided in the design and optimization of superalloys. Qin et al.138 developed phase prediction models using TML algorithms (LR, KNN, GBRT, RF, DT, and SVM) to predict the relationship between the composition of multi-component Ni-based superalloys and the formation of harmful phases, guiding the design to avoid the formation of detrimental phases, as illustrated in Fig. 8b.
Machine learning not only enables the automatic recognition and feature extraction of superalloy microstructures but also facilitates direct analysis of three-dimensional spatial structures, thereby establishing the relationship between the microstructures and properties of superalloys. However, the characterization data of superalloy microstructures are not limited to two- and three-dimensional datasets;139 they also encompass time-resolved data and various modal forms such as in situ 3D data. The application of such data currently remains underexplored, with no outstanding research examples in this area. Nonetheless, multimodal models such as OpenAI's CLIP and DALL-E have demonstrated exceptional information integration, robustness, and expressiveness, making them noteworthy achievements in the machine learning field. These multimodal models can be considered the culmination of current advancements in machine learning. We anticipate that truly intelligent large models, under the constraints of expert systems, will harness the creativity provided by high-dimensional and logical data, as well as multimodal data, to drive the continuous development of superalloys. By combining genetic algorithms, Pareto-frontier algorithms, and others, multi-objective performance optimization can be further realized. Trained ML models will be used to identify the necessary microstructures based on desired properties, providing guidance for researchers to synthesize or prepare the corresponding structures.
Processing Optimization of Superalloys Based on ML
The synthesis and processing of superalloys play a crucial role in designing alloys with the desired microstructure and properties. The selection and design of optimal synthetic routes traditionally rely on manual experimental evaluation and data analysis. Conducted through specific manual experiments, process optimization is a time-, labor-, and material-intensive endeavor, further complicated by the unpredictability of real-world environments. Therefore, obtaining optimal process parameters quickly, accurately, and at low cost has become a primary concern for researchers. Without relying on expert knowledge or empirical formulations, Tamura et al.140 used a pure ML approach (Bayesian optimization) to determine the optimal process parameters for gas atomization in the manufacturing of nickel-cobalt-based superalloy powders (Fig. 9). The integration of ML into superalloy processing and manufacturing enables workers to design and manufacture superalloys that meet the desired properties with minimal reliance on experimentation, simulation, computation, and expert knowledge. Companies like Carbon and General Electric employ AI in additive manufacturing (AM) to optimize design and production. AM presents a cost-effective method for producing superalloy parts with complex geometries.141-143 Establishing the relationship between processing parameters and product quality through machine learning models, in order to predict product quality under different process parameters, is a typical research approach for AM process optimization.
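A Bayesian-optimization loop of the kind used for such parameter searches can be sketched with a Gaussian-process surrogate and an expected-improvement acquisition. The objective function, parameter range, and evaluation budget below are placeholders for an expensive experiment (e.g., powder yield versus gas pressure), not values from ref. 140:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Stand-in for the expensive experiment; unknown in practice.
    return -(x - 0.3) ** 2 + 0.05 * np.sin(20 * x)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(4, 1))            # a few initial experiments
y = objective(X).ravel()
grid = np.linspace(0, 1, 200).reshape(-1, 1)  # candidate parameter settings

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6,
                                  normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.max()
    # Expected-improvement acquisition: balance exploring uncertain regions
    # against exploiting the current best prediction.
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = grid[np.argmax(ei)]              # next "experiment" to run
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next)[0])

best_x = float(X[np.argmax(y), 0])            # recommended process parameter
```

Each loop iteration replaces one manual trial: the surrogate absorbs all results so far and nominates the single most informative next experiment, which is why the approach needs neither expert knowledge nor an empirical process formula.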
Aman et al.144 created a dataset for AM IN718 through experiments and used nine TML algorithms (artificial neural network (ANN), DT, RF, SVR, XGBoost, multivariable linear regression, GB, KNN, and kernel SVM) to establish predictive relationships between process parameters and the density and porosity of nickel-based superalloys, guiding the optimization of AM process parameters. Lesko et al.145 utilized TML regression models to study the relationship between the HV of nickel-based superalloys and AM process parameters. Ren et al.146 trained a CNN algorithm on labeled image segments to determine whether keyholes formed during the AMDE process, combining high-energy x-rays, thermal imaging, and lasers to track and acquire data. This process was validated in the laser powder bed fusion of Ti-6Al-4V, showing that machine learning methods could detect the formation of keyholes in the powder bed samples with 100% accuracy and sub-millisecond time resolution. The high response rate and accurate prediction of machine learning will promote intelligent processing and control in AM.
Although DL algorithms have developed rapidly and achieved significant progress in various fields, most AM research still employs TML algorithms (Fig. 10). This is mainly because the data collected during the AM process are mostly structured and constitute small datasets, making TML more suitable than DL. However, with the development of AM processing technology, the market's demand for higher structural and performance requirements of superalloys may expose more processing issues. The accumulated data will also converge towards multimodal data, and DL will continue to drive AM towards a physics-based, data-driven paradigm by uncovering hidden patterns in high-dimensional and multimodal data.147 ML methods are not limited to specific synthesis or processing methods and can be transferred and extended. Liu et al.148 used machine learning to transfer knowledge from existing data to new printers, promoting the transfer of knowledge between multiple metal AM printers. This method represents a more comprehensive intellectual advancement.149,150 The AM workflow has high process repeatability, and the increasing number of online process-monitoring sensors also facilitates the implementation of predictive models, control simulations, and model strengthening for online prediction, quality control, and model self-optimization.
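The transfer idea can be sketched as pre-training on abundant data from one printer and fine-tuning with a few samples from a new printer via incremental updates. The features, targets, and constant printer-to-printer shift are synthetic stand-ins, not the method or data of ref. 148:

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
w = np.array([1.0, -0.5, 0.3])                       # "true" process-to-density map

X_src = rng.normal(size=(500, 3))
y_src = X_src @ w + 0.05 * rng.standard_normal(500)  # source printer: lots of data
X_tgt = rng.normal(size=(20, 3))
y_tgt = X_tgt @ w + 0.4                              # new printer: small offset shift

scaler = StandardScaler().fit(X_src)
model = SGDRegressor(random_state=0)
model.fit(scaler.transform(X_src), y_src)            # pre-train on the source printer

for _ in range(50):                                  # fine-tune on the new printer
    model.partial_fit(scaler.transform(X_tgt), y_tgt)

# Evaluate on fresh samples drawn from the new printer's distribution.
X_new = rng.normal(size=(50, 3))
r2_after = model.score(scaler.transform(X_new), X_new @ w + 0.4)
```

The pre-trained weights carry the shared process physics; only the printer-specific shift has to be learned from the small target dataset, which is the economic appeal of transfer between machines.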
Performance Prediction and Optimization of Superalloys Based on ML
One of the most significant applications of ML in the field of superalloys is its predictive capability, including unidirectional prediction of superalloy properties and performance (e.g., yield strength, creep life),151 prediction of superalloy-related mechanisms and modes (e.g., fracture modes, strengthening mechanisms), and bi-directional prediction-guided inverse design.152 Pinz et al.153 correlated grain morphology and crystallography with the location of crack nucleation sites for the nickel-based superalloy René 88DT under fatigue loading. A probabilistic crack nucleation model based on Bayesian inference was developed by integrating a multiscale model and an anisotropic continuum plasticity constitutive model to identify the potential mechanisms driving crack nucleation (Fig. 11). Based on the computationally generated micromechanical data of the state variables at potential nucleation points, the probability of observing a crack nucleation event was derived to predict fatigue crack initiation in René 88DT polycrystalline microstructures. Compared with the traditional analysis of constitutive equations, eigenequations, and material mechanisms, the flexibility of ML in handling data and its black-box operation can often yield unexpected results and inspire researchers. Theoretical research on crack nucleation in materials still relies heavily on experimental and computational methods, which demand high standards for characterization, equipment, and equation approximation. However, the current level of development in exploring such research details is still insufficient. Huang et al.62 examined the accuracy of the Arrhenius model and several ML techniques, including RF, SVM, BP-ANN, and radial basis function ANN (RBF-ANN), in predicting the high-temperature flow stress of GH3536 superalloy. The accuracy of high-temperature flow stress prediction was ranked as follows: RBF > BP > SVM > Arrhenius model > RF.
The complexity of the material structure indicates that the process is not simply a statistical description and approximation but rather a complex mapping of underlying relationships, which places high demands on ML algorithms. This is, of course, a common problem in predicting other material properties as well. The data dependency of ML and the alignment of algorithms with the actual physical environment are promising research directions for minimizing errors.
ML applied to historical data can uncover potential new theories and mechanisms directly from the data. However, it is also plagued by the issue of non-interpretability. Zhao et al.154 developed an ML model using "discarded" experimental data that did not meet the desired hardness, conductivity, and other criteria. They also incorporated additional features to create an effective Gaussian regression model from limited training data. This model was used to guide the design of high-performance copper alloys. The field of computational materials science provides robust data support that can be combined with ML methods to achieve accurate extrapolative predictions.
Real-world applications frequently demand superalloy properties that are multi-conditional and multi-objective,155 including tensile strength,156,157 wear resistance,158 thermal conductivity,159,160 and corrosion resistance.161,162 Creep rupture life is a crucial metric for designing nickel-based superalloys.163 Gao et al.164 utilized an ANN to optimize multiple properties, including microstructural stability, γ′ precipitate volume fraction, processing window, solidification range, and density, to enhance the performance of the commercial K403 superalloy. This approach led to an approximately threefold increase in its creep rupture life at 975°C and 1025°C. To improve the accuracy of predicting creep rupture life using ML, physical constraints can be incorporated into the model96 to enhance its generalization ability. Liu et al.135 employed a classification model to aid in the multi-objective performance optimization of γ′-strengthened Co-based superalloys. They combined domain knowledge and empirical data to simultaneously optimize microstructural stability, γ′ solvus temperature, γ′ volume fraction, density, processing window, freezing range, and oxidation resistance. Co-36Ni-12Al-2Ti-4Ta-1W-2Cr, which exhibits the best zero-harmful-phase precipitation performance, was chosen from a pool of >210,000 potential materials. The best prediction models, GTB and RF, were used to predict the γ′ solvus, solidus, and liquidus temperatures and the density. A tri-objective optimization algorithm, based on the probability density function of the MGD, was employed to optimize the γ′ volume fraction, size, and morphology in CoNiAlCr-based superalloys simultaneously.136
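The MGD-based scoring idea can be sketched with scipy: candidate alloys are ranked by the probability density of a multivariate Gaussian centered on the target microstructure (γ′ volume fraction, size, and a morphology descriptor). The target values, covariance, and candidate ranges below are illustrative placeholders, not values from the cited work:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Target gamma-prime microstructure and independent tolerances around it.
target = np.array([0.70, 0.45, 0.90])   # desired fraction, size (um), sphericity
cov = np.diag([0.05, 0.05, 0.05]) ** 2

# Random placeholder candidates within plausible bounds per objective.
rng = np.random.default_rng(11)
candidates = rng.uniform([0.4, 0.2, 0.6], [0.9, 0.7, 1.0], size=(1000, 3))

# Score every candidate by the Gaussian density; higher = closer to target.
score = multivariate_normal(mean=target, cov=cov).pdf(candidates)
best = candidates[np.argmax(score)]     # candidate nearest the target
```

The covariance encodes how tightly each objective must be met, so a single density value collapses the three objectives into one ranking criterion.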
SUMMARY AND PERSPECTIVES
Traditionally, the exploration of the correlations among the four key elements of the materials tetrahedron has primarily relied on physicochemical models. ML can disrupt the traditional approach to establishing correlations between processing, microstructure, and properties by directly constructing relational models from historical, simulated, experimental, and characterization data, enabling bi-directional prediction. The capability of ML to establish materials tetrahedron correlations is demonstrated by the studies of superalloy composition, processing, microstructure, and properties in Sect. "Application of Superalloy Aspects with Machine Learning". In both science and engineering, researchers are constantly working to improve properties and performance by regulating and optimizing composition, structure, and processing. None of these elements exists independently of the tetrahedron; they influence and constrain each other, driving one another's development. This paper has presented an overview of the progress in ML-assisted superalloy development.
However, the advancement of ML in the field of materials research is ongoing.
(1) ML can contribute to expanding materials theory, uncovering new mechanisms, and aiding superalloy characterization techniques. However, materials research is becoming increasingly complex, with more and more parameters, which places higher demands on ML algorithms. TML is too shallow to fully uncover the depths of the data; DL emerges as the superior successor to TML. The motivation behind the development of more complex ML algorithms and models is the aspiration for machines to emulate human thinking, judgment, and prediction. This goal is challenging at the current level of development, but such models could potentially replace certain human functions, thus propelling the advancement of superalloy research.
(2) Unique algorithms applicable to one's own research area need to be developed. There has been a shift from simple algorithm calls and techniques borrowed from other fields to the use of more complex approaches such as algorithm fusion, DL, and others. Many issues nevertheless remain unresolved in the exploration of materials science problems, the investigation of materials research methodology, and the study of TML and DL algorithms themselves.
(3) ML-driven superalloy research approaches even integrate data-intensive research tools into the first three paradigms, including machine-learning-supported thermodynamic models, kinetic models, and machine-learning potentials, and have demonstrated potential for advancing simulations. Similarly, ML heavily relies on computing science for synthetic data generation and model architecture design.
(4) ML has long been recognized as a powerful tool for directly identifying underlying patterns in data, but how exactly does it work in the "search" process? Interpretable ML has been developed with the aim of leveraging multidisciplinary knowledge to provide coherent explanations for the behavior and outcomes of ML. A model is considered transparent if all its components are easily understood; if only part of it is understandable or has a physicochemical basis, the model is at least partially interpretable.
(5) Machine learning algorithms have been increasingly refined, and AI tools based on multimodal large models have emerged; the benefits they create for various industries are self-evident. Transforming mature algorithms into key tools that drive research in the field of superalloys, thereby accelerating their development and innovation, is widely anticipated. Although GE Digital has made a significant advance in industrial AI by using machine learning to create a digital twin model for gas turbines, enabling autonomous adjustment under operating conditions, the models and outcomes of machine learning in superalloy research have not yet shown prominent performance in practical applications and industry. This is a critical consideration when conducting research on superalloys.
ACKNOWLEDGEMENTS
This study received financial support from the following funding sources: the National Key Research and Development Project (2022YFB3706804), the National Natural Science Foundation of China (52201045), and the National Natural Science Foundation of China (52031012).
CONFLICT OF INTEREST
The authors declare that they have no conflict of interest.
REFERENCES
1. E.O. Ezugwu, Z.M. Wang, and A.R. Machado, J. Mater. Process. Technol. 86, 1 https://doi.org/10.1016/S0924-0136(98)00314-8 (1999).
2. Z. Gaole, J. Yun, H. Xiaoan, H. Jia, W. Jinwu, and W. Yun, Mater. Sci. Technol. 36, 1523 https://doi.org/10.1080/02670836.2020.1799137 (2020).
3. D.V.V. Satyanarayana, and N. Eswara Prasad, Nickel-Based Superalloys, in Aerospace Materials and Material Technologies, ed. by N. Eswara Prasad, and R.J.H. Wanhill (Springer, Singapore, 2017), pp. 199-228. https://doi.org/10.1007/978-981-10-2134-3_9.
4. M. Griffiths, Ni-Based Alloys for Reactor Internals and Steam Generator Applications, in Structural Alloys for Nuclear Energy Applications, ed. by G.R. Odette, and S.J. Zinkle (Elsevier, 2019), pp. 349-409. https://doi.org/10.1016/B978-0-12-397046-6.00009-5.
5. T.M. Smith, N.A. Zarkevich, A.J. Egan, J. Stuckner, T.P. Gabb, J.W. Lawson, and M.J. Mills, Commun. Mater. 2, 106 https://doi.org/10.1038/s43246-021-00210-6 (2021).
6. M. Perrut, P. Caron, M. Thomas, and A. Couret, Comptes Rendus Phys. 19, 657 (2018).
7. I.G. Akande, O.O. Oluwole, O.S.I. Fayomi, and O.A. Odunlami, Mater. Today Proc. 43, 2222 https://doi.org/10.1016/j.matpr.2020.12.523 (2021).
8. M. Detrois, JOM 72, 1783 https://doi.org/10.1007/s11837-020-04124-5 (2020).
9. M. Moschetti, P.A. Burr, E. Obbard, J.J. Kruzic, P. Hosemann, and B. Gludovatz, J. Nucl. Mater. 567, 153814 https://doi.org/10.1016/j.jnucmat.2022.153814 (2022).
10. A. Kollová, and K. Pauerová, Manuf. Technol. https://doi.org/10.21062/mft.2022.070 (2022).
11. J. Rame, P. Caron, D. Locq, O. Lavigne, L. Mataveli Suave, V. Jaquet, M. Perrut, J. Delautre, A. Saboundji, and J.Y. Guedou, in Superalloys 2020, ed. by S. Tin, M. Hardy, J. Clews, J. Cormier, Q. Feng, J. Marcin, C. O'Brien, and A. Suzuki (Springer International Publishing, Cham, 2020), pp. 31-40.
12. K. Kawagishi, H. Harada, A. Sato, A. Sato, and T. Kobayashi, JOM 58, 43 https://doi.org/10.1007/s11837-006-0067-z (2006).
13. N.V. Petrushin, E.S. Elyutin, E.M. Visik, and S.A. Golynets, Russ. Metall. 2017, 936 https://doi.org/10.1134/S0036029517110118 (2017).
14. A. Sato, H. Harada, A.-C. Yeh, K. Kawagishi, T. Kobayashi, Y. Koizumi, T. Yokokawa, and J.X. Zhang, Proc. Int. Symp. Superalloys. https://doi.org/10.7449/2008/Superalloys_2008_131_138 (2008).
15. K. Kawagishi, A.-C. Yeh, T. Yokokawa, T. Kobayashi, Y. Koizumi, and H. Harada, Superalloys 9, 189 (2012).
16. K.A. Unocic, X. Chen, and P.F. Tortorelli, JOM 72, 1811 https://doi.org/10.1007/s11837-020-04119-2 (2020).
17. S. Matsunaga, D. Huang, S.B. Inman, J.C. Mason, D. Könitzer, D.R. Johnson, and M.S. Titus, JOM 72, 1794 https://doi.org/10.1007/s11837-020-04091-x (2020).
18. M. Shahwaz, P. Nath, and I. Sen, J. Alloys Compd. https://doi.org/10.1016/j.jallcom.2022.164530 (2022).
19. M.C. Flemings, Annu. Rev. Mater. Sci. 29, 1 https://doi.org/10.1146/annurev.matsci.29.1.1 (1999).
20. M.C. Flemings, Adv. Mater. 2, 165 https://doi.org/10.1002/adma.19900020402 (1990).
21. P.L. Taylor, and G. Conduit, Comput. Mater. Sci. https://doi.org/10.1016/j.commatsci.2023.112265 (2023).
22. A. Raj, J.P. Misra, and D. Khanduja, J. Adv. Manuf. Syst. 21, 557 https://doi.org/10.1142/s0219686722500196 (2022).
23. J. Lamb, M. Echlin, A. Polonsky, R. Geurts, K. Pusch, E. Raeker, A. Botman, C. Torbet, and T. Pollock, Microsc. Microanal. 28, 862 https://doi.org/10.1017/s1431927622003828 (2022).
24. Y.-H. Lee, Y.-C. Chang, K.-C. Pan, and S.-T. Chang, Mater. Chem. Phys. 72, 232 https://doi.org/10.1016/S0254-0584(01)00443-6 (2001).
25. J.S.T. Yao, and M.K. Eskandari, Surgery 151, 126 https://doi.org/10.1016/j.surg.2011.09.036 (2012).
26. Y. Abdelhamid, H. Farahat, M.N. Othman, Y.M. Mater, and A.M. Ahmed, Mater. Today Proc. https://doi.org/10.1016/j.matpr.2023.08.128 (2023).
27. Y. Kumar, M. Rezasefat, and J.D. Hogan, Mater. Today Proc. https://doi.org/10.1016/j.matpr.2023.01.354 (2023).
28. S.G. Louie, Y.-H. Chan, F.H. da Jornada, Z. Li, and D.Y. Qiu, Nat. Mater. 20, 728 https://doi.org/10.1038/s41563-021-01015-1 (2021).
29. R. Ramprasad, R. Batra, G. Pilania, A. Mannodi-Kanakkithodi, and C. Kim, npj Comput. Mater. 3, 54 https://doi.org/10.1038/s41524-017-0056-5 (2017).
30. Y. LeCun, Y. Bengio, and G. Hinton, Nature 521, 436 https://doi.org/10.1038/nature14539 (2015).
31. A. Chahal, and P. Gulia, Int. J. Innov. Technol. Explor. Eng. 8, 4910 https://doi.org/10.35940/ijitee.L3550.1081219 (2019).
32. A.L. Fradkov, IFAC-PapersOnLine 53, 1385 https://doi.org/10.1016/j.ifacol.2020.12.1888 (2020).
33. Z.-H. Zhou, Front. Comp. Sci. 10, 589 https://doi.org/10.1007/s11704-016-6906-3 (2016).
34. R. Hierons, Softw. Test. Verif. Reliab. 9, 191 (1999).
35. H. Liu, and M. Cocea, Traditional Machine Learning, in Granular Computing Based Machine Learning, ed. by H. Liu, and M. Cocea (Springer International Publishing, Cham, 2018), pp. 11-22.
36. C. Sammut, and G.I. Webb (eds.), Encyclopedia of Machine Learning (Springer US, Boston, 2010), p. 941.
37. C. Sammut, and G.I. Webb (eds.), Encyclopedia of Machine Learning (Springer US, Boston, 2010), p. 1009.
38. X. Zhu, Semi-Supervised Learning, in Encyclopedia of Machine Learning, ed. by C. Sammut, and G.I. Webb (Springer US, Boston, 2010), pp. 892-897. https://doi.org/ 10.1007/978-0-387-30164-8_749.
39. Z.-H. Zhou, Semi-Supervised Learning, in Machine Learning, ed. by Z.-H. Zhou (Springer Singapore, Singapore, 2021), pp. 315-341. https://doi.org/10.1007/978-98115-1967-3_13.
40. I.H. Sarker, SN Comput. Sei. 2, 160 https://doi.org/10.1007/ s42979-021-00592-x (2021).
41. Y. Qi, D. Hu, Y. Jiang, Z. Wu, M. Zheng, E.X. Chen, Y. Liang, M.A. Sadi, K. Zhang, and Y.P. Chen, Ado. Opt. Mater, https://doi.org/10.1002/adom.202203104 (2023).
42. Y. Huang, J. Liu, C. Zhu, X. Wang, Y. Zhou, X. Sun, and J. Li, Comput. Mater. Sei. 227, 112283 https://doi.org/10.1016/ j.commatsci.2023.112283 (2023).
43. K. Choudhary, B. DeCost, C. Chen, A. Jain, F. Tavazza, R. Cohn, C.W. Park, A. Choudhary, A. Agrawal, S.J.L. Billinge, E. Holm, S.P. Ong, and C. Wolverton, npj Comput. Mater. 8, 59 https://doi.org/10.1038/s41524-022-00734-6 (2022).
44. R. Kumar, P.K. Amrita, and Mishra, Mater. Today Proc. 34, 679 https://doi.Org/10.1016/j.matpr.2020.03.332 (2021).
45. J. Ortegon, R. Ledesma-Alonso, R. Barbosa, J. Vázquez Castillo, and A. Castillo Atoche, Comput. Mater. Sci. 148, 336 https://doi.org/10.1016/j.commatsci.2018.02.054 (2018).
46. M.T. Dau, M. Al Khalfioui, A. Michon, A. Reserbat-Plantey, S. Vézian, and P. Boucaud, Sci. Rep. 13, 5426 https://doi.org/10.1038/s41598-023-31928-7 (2023).
47. T. Oishi, Y. Hayashi, M. Noguchi, F. Yano, S. Kumada, K. Takayama, K. Okada, and Y. Onuki, Int. J. Pharm. 577, 119083 https://doi.org/10.1016/j.ijpharm.2020.119083 (2020).
48. T. Bong, J.-K. Kang, V. Yargeau, H.-L. Nam, S.-H. Lee, J.W. Choi, S.-B. Kim, and J.-A. Park, J. Clean. Prod. 314, 127967 https://doi.org/10.1016/j.jclepro.2021.127967 (2021).
49. K. Pałczyński, M. Czyżewska, and T. Talaśka, J. Comput. Appl. Math. 425, 115038 https://doi.org/10.1016/j.cam.2022.115038 (2023).
50. S. Sapkal, B. Kandasubramanian, P. Dixit, and H.S. Panda, Mater. Today Energy https://doi.org/10.1016/j.mtener.2023.101402 (2023).
51. M. Osenberg, A. Hilger, M. Neumann, A. Wagner, N. Bohn, J.R. Binder, V. Schmidt, J. Banhart, and I. Manke, J. Power Sour. 570, 233030 https://doi.org/10.1016/j.jpowsour.2023.233030 (2023).
52. A. Hussien, W. Khan, A. Hussain, P. Liatsis, A. Al-Shamma'a, and D. Al-Jumeily, J. Build. Eng. 69, 106263 https://doi.org/10.1016/j.jobe.2023.106263 (2023).
53. O. Addin, S.M. Sapuan, E. Mahdi, and M. Othman, Mater. Des. 28, 2379 https://doi.org/10.1016/j.matdes.2006.07.018 (2007).
54. J. Gong, S. Chu, R.K. Mehta, and A.J.H. McGaughey, npj Comput. Mater. 8, 140 https://doi.org/10.1038/s41524-022-00826-3 (2022).
55. Z. Lu, X. Chen, X. Liu, D. Lin, Y. Wu, Y. Zhang, H. Wang, S. Jiang, H. Li, X. Wang, and Z. Lu, npj Comput. Mater. 6, 187 https://doi.org/10.1038/s41524-020-00460-x (2020).
56. R. Cohn, and E. Holm, Integr. Mater. Manuf. Innov. 10, 231 https://doi.org/10.1007/s40192-021-00205-8 (2021).
57. F. Gao, D. Stead, and D. Elmo, Comput. Geotech. 78, 203 https://doi.org/10.1016/j.compgeo.2016.05.019 (2016).
58. W.-K. Yang, B.-L. Hu, Y.-W. Luo, Z.-M. Song, and G.-P. Zhang, Int. J. Fatigue 172, 107671 https://doi.org/10.1016/j.ijfatigue.2023.107671 (2023).
59. X.W. Liu, Z.L. Long, W. Zhang, and L.M. Yang, J. Alloys Compd. 901, 163606 https://doi.org/10.1016/j.jallcom.2021.163606 (2022).
60. L. Chen, W. Xia, and T. Yao, Comput. Mater. Sci. 226, 112216 https://doi.org/10.1016/j.commatsci.2023.112216 (2023).
61. J.H. Friedman, Ann. Stat. 29, 1189 (2001).
62. M. Huang, J. Jiang, Y. Wang, Y. Liu, Y. Zhang, and J. Dong, Mater. Lett. 349, 134754 https://doi.org/10.1016/j.matlet.2023.134754 (2023).
63. D.-C. Feng, Z.-T. Liu, X.-D. Wang, Y. Chen, J.-Q. Chang, D.-F. Wei, and Z.-M. Jiang, Constr. Build. Mater. 230, 117000 https://doi.org/10.1016/j.conbuildmat.2019.117000 (2020).
64. J. Ren, H. Zhao, L. Zhang, Z. Zhao, Y. Xu, Y. Cheng, M. Wang, J. Chen, and J. Wang, J. Build. Eng. 49, 104049 https://doi.org/10.1016/j.jobe.2022.104049 (2022).
65. T.J. Sejnowski, Proc. Natl. Acad. Sci. 117, 30033 https://doi.org/10.1073/pnas.1907373117 (2020).
66. C. Janiesch, P. Zschech, and K. Heinrich, Electron. Mark. 31, 685 https://doi.org/10.1007/s12525-021-00475-2 (2021).
67. M. Soori, B. Arezoo, and R. Dastres, Cognitive Robotics 3, 54 https://doi.org/10.1016/j.cogr.2023.04.001 (2023).
68. G. Pilania, Comput. Mater. Sci. 193, 110360 https://doi.org/10.1016/j.commatsci.2021.110360 (2021).
69. X. Zhong, B. Gallagher, S. Liu, B. Kailkhura, A. Hiszpanski, and T.Y.-J. Han, npj Comput. Mater. 8, 204 https://doi.org/10.1038/s41524-022-00884-7 (2022).
70. J. Wanner, L.-V. Herm, K. Heinrich, and C. Janiesch, J. Bus. Anal. 5, 29 (2022).
71. W. Li, W. Li, Z. Qin, L. Tan, L. Huang, F. Liu, and C. Xiao, Materials 15, 4251 (2022).
72. J. Stuckner, B. Harder, and T.M. Smith, npj Comput. Mater. 8, 200 https://doi.org/10.1038/s41524-022-00878-5 (2022).
73. F. Yang, W. Zhao, Y. Ru, S. Lin, J. Huang, B. Du, Y. Pei, S. Li, S. Gong, and H. Xu, npj Comput. Mater. 10, 149 https://doi.org/10.1038/s41524-024-01349-9 (2024).
74. J.S. Finizola, J.M. Targino, F.G.S. Teodoro and C.A. de Moraes Lima, In Advances in Artificial Intelligence - IBERAMIA 2018, ed. G.R. Simari, E. Fermé, F. Gutiérrez Segura and J.A. Rodríguez Melquíades (Springer International Publishing: Cham, 2018), pp 217-228.
75. P. Wang, E. Fan, and P. Wang, Pattern Recogn. Lett. 141, 61 https://doi.org/10.1016/j.patrec.2020.07.042 (2021).
76. S. Feng, H. Fu, H. Zhou, Y. Wu, Z. Lu, and H. Dong, npj Comput. Mater. 7, 10 https://doi.org/10.1038/s41524-020-00488-z (2021).
77. W. Wang, X. Jiang, S. Tian, P. Liu, D. Dang, Y. Su, T. Lookman, and J. Xie, npj Comput. Mater. 8, 9 https://doi.org/10.1038/s41524-021-00687-2 (2022).
78. W. Wang, X. Jiang, S. Tian, P. Liu, T. Lookman, Y. Su, and J. Xie, npj Comput. Mater. 9, 183 https://doi.org/10.1038/s41524-023-01138-w (2023).
79. M. Salmi, P.-Y. Lavertu and T. Malo, In 2022 Annual Composites and Advanced Materials Expo, CAMX 2022, October 17, 2022 - October 20, 2022, (The Composites and Advanced Materials Expo (CAMX): Anaheim, CA, United States, 2022).
80. R. Batra, Nature 589, 524 https://doi.org/10.1038/d41586-020-03259-4 (2021).
81. P. Xu, X. Ji, M. Li, and W. Lu, npj Comput. Mater. 9, 42 https://doi.org/10.1038/s41524-023-01000-z (2023).
82. C. Chen, Y. Zuo, W. Ye, X. Li, and S.P. Ong, Nat. Comput. Sci. 1, 46 https://doi.org/10.1038/s43588-020-00002-x (2021).
83. T. Sutojo, S. Rustad, M. Akrom, A. Syukur, G.F. Shidik, and H.K. Dipojono, npj Mater. Degrad. 7, 18 https://doi.org/10.1038/s41529-023-00336-7 (2023).
84. Y. Zhang, and C. Ling, npj Comput. Mater. 4, 25 https://doi.org/10.1038/s41524-018-0081-z (2018).
85. K. Maharana, S. Mondal, and B. Nemade, Global Trans. Proc. 3, 91 https://doi.org/10.1016/j.gltp.2022.04.020 (2022).
86. S. Kamm, S.S. Veekati, T. Müller, N. Jazdi, and M. Weyrich, Comput. Ind. 149, 103930 https://doi.org/10.1016/j.compind.2023.103930 (2023).
87. R. Yan, X. Jiang, W. Wang, D. Dang, and Y. Su, Sci. Data 9, 401 https://doi.org/10.1038/s41597-022-01492-2 (2022).
88. A. Pratap, and N. Sardana, Mater. Today Proc. 62, 7341 https://doi.org/10.1016/j.matpr.2022.01.200 (2022).
89. E.A. Holm, R. Cohn, N. Gao, A.R. Kitahara, T.P. Matson, B. Lei, and S.R. Yarasi, Metall. Mater. Trans. A. 51, 5985 https://doi.org/10.1007/s11661-020-06008-4 (2020).
90. P. Zhou, X. Zhang, X. Shen, H. Shi, J. He, Y. Zhu, F. Jiang, and F. Yi, Comput. Mater. Sci. 242, 113063 https://doi.org/10.1016/j.commatsci.2024.113063 (2024).
91. C.K. Hansen, G.F. Whelan, and J.D. Hochhalter, Int. J. Fatigue 178, 108019 https://doi.org/10.1016/j.ijfatigue.2023.108019 (2024).
92. J.M. Hestroffer, M.-A. Charpagne, M.I. Latypov, and I.J. Beyerlein, Comput. Mater. Sci. 217, 111894 https://doi.org/10.1016/j.commatsci.2022.111894 (2023).
93. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A.N. Gomez, Ł. Kaiser and I. Polosukhin, Attention is all you need. Paper presented at the Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, California, USA, 2017.
94. K. Crowson, S. Biderman, D. Kornis, D. Stander, E. Hallahan, L. Castricato and E. Raff, In Computer Vision - ECCV 2022, ed. S. Avidan, G. Brostow, M. Cissé, G.M. Farinella and T. Hassner (Springer Nature Switzerland: Cham, 2022), pp 88-105.
95. G. Bansal, V. Chamola, A. Hussain, M. Guizani, and D. Niyato, Cogn. Comput. https://doi.org/10.1007/s12559-023-10236-2 (2024).
96. F. Yang, W. Zhao, Y. Ru, Y. Pei, S. Li, S. Gong, and H. Xu, Mater. Des. 232, 112174 https://doi.org/10.1016/j.matdes.2023.112174 (2023).
97. S. Stuart, J. Watchorn, and F.X. Gu, npj Comput. Mater. 9, 102 https://doi.org/10.1038/s41524-023-01040-5 (2023).
98. Y. Liu, X. Zou, S. Ma, M. Avdeev, and S. Shi, Acta Mater. 238, 118195 https://doi.org/10.1016/j.actamat.2022.118195 (2022).
99. P.L. Taylor, and G. Conduit, Comput. Mater. Sci. 201, 110916 https://doi.org/10.1016/j.commatsci.2021.110916 (2022).
100. A. Roy, M.F.N. Taufique, H. Khakurel, R. Devanathan, D.D. Johnson, and G. Balasubramanian, npj Mater. Degrad. 6, 9 https://doi.org/10.1038/s41529-021-00208-y (2022).
101. K. Zheng, Z. He, L. Che, H. Cheng, M. Ge, T. Si, and X. Xu, Mater. Sci. Semicond. Process. 179, 108514 https://doi.org/10.1016/j.mssp.2024.108514 (2024).
102. D.V. Anand, Q. Xu, J. Wee, K. Xia, and T.C. Sum, npj Comput. Mater. 8, 203 https://doi.org/10.1038/s41524-022-00883-8 (2022).
103. Y.J. Soofi, Y. Gu, and J. Liu, Comput. Mater. Sci. 226, 112248 https://doi.org/10.1016/j.commatsci.2023.112248 (2023).
104. D. Dai, Q. Liu, R. Hu, X. Wei, G. Ding, B. Xu, T. Xu, J. Zhang, Y. Xu, and H. Zhang, Mater. Des. 196, 109194 https://doi.org/10.1016/j.matdes.2020.109194 (2020).
105. L. Breiman, Mach. Learn. 45, 5 https://doi.org/10.1023/A:1010933404324 (2001).
106. M. Zhou, N. Duan, S. Liu, and H.-Y. Shum, Engineering 6, 275 https://doi.org/10.1016/j.eng.2019.12.014 (2020).
107. X. Wang, Y. Xu, J. Yang, J. Ni, W. Zhang, and W. Zhu, Comput. Mater. Sci. 169, 109117 https://doi.org/10.1016/j.commatsci.2019.109117 (2019).
108. M. He, and L. Zhang, Comput. Mater. Sci. 196, 110578 https://doi.org/10.1016/j.commatsci.2021.110578 (2021).
109. Z. Ding, Y. Huang, H. Yuan, and H. Dong, Introduction to Reinforcement Learning, in Deep Reinforcement Learning: Fundamentals, Research and Applications. ed. by H. Dong, Z. Ding, and S. Zhang (Springer Singapore, Singapore, 2020), pp. 47-123.
110. F. Foadian, R. Kremer, and S. Khani, Rapid alloying in additive manufacturing using integrated computational materials engineering, in Quality Analysis of Additively Manufactured Metals. (Elsevier, 2023), pp. 583-624. https://doi.org/10.1016/B978-0-323-88664-2.00007-5.
111. A. van de Walle, and M. Asta, MRS Bull. 44, 252 https://doi.org/10.1557/mrs.2019.71 (2019).
112. Y. Li, B. Holmedal, B. Liu, H. Li, L. Zhuang, J. Zhang, Q. Du, and J. Xie, Calphad 72, 102231 https://doi.org/10.1016/j.calphad.2020.102231 (2021).
113. W. Zhao, Q. Ren, Z. Yao, J. Zhao, H. Jiang, and J. Dong, Metall. Mater. Trans. A. 54, 3796 https://doi.org/10.1007/s11661-023-07133-6 (2023).
114. B. Xu, H. Yin, X. Jiang, C. Zhang, R. Zhang, Y. Wang, and X. Qu, Comput. Mater. Sci. 202, 111021 https://doi.org/10.1016/j.commatsci.2021.111021 (2022).
115. J. Sato, T. Omori, K. Oikawa, I. Ohnuma, R. Kainuma, and K. Ishida, Science 312, 90 https://doi.org/10.1126/science.1121738 (2006).
116. J. Guo, B. Xiao, Y. Li, D. Zhai, Y. Tang, W. Du, and Y. Liu, Comput. Mater. Sci. 200, 110787 https://doi.org/10.1016/j.commatsci.2021.110787 (2021).
117. S. Xi, J. Yu, L. Bao, L.-P. Chen, Z. Li, R. Shi, C. Wang and X. Liu, J. Mater. Inf. 2, 15 (2022).
118. K.V. Vamsi, and T.M. Pollock, Scr. Mater. 182, 38 https://doi.org/10.1016/j.scriptamat.2020.02.038 (2020).
119. M. Dodaran, A.H. Ettefagh, S.M. Guo, M.M. Khonsari, W.J. Meng, N. Shamsaei, and S. Shao, Intermetallics 117, 106670 https://doi.org/10.1016/j.intermet.2019.106670 (2020).
120. O. Gorbatov, I. Lomaev, Y. Gornostyrev, A. Ruban, D. Furrer, V. Venkatesh, D. Novikov, and S. Burlatsky, Phys. Rev. B 93, 224106 https://doi.org/10.1103/PhysRevB.93.224106 (2016).
121. E. Chen, A. Tamm, T. Wang, M.E. Epler, M. Asta, and T. Frolov, npj Comput. Mater. 8, 80 https://doi.org/10.1038/s41524-022-00755-1 (2022).
122. Y. Liu, B. Xu, W. Huangfu, and H. Yin, Comput. Mater. Sci. 220, 112065 https://doi.org/10.1016/j.commatsci.2023.112065 (2023).
123. K. Hou, M. Wang, M. Ou, H. Li, X. Hao, Y. Ma, and K. Liu, J. Mater. Sci. Technol. 68, 40 https://doi.org/10.1016/j.jmst.2020.08.001 (2021).
124. G. Angella, M.F. Brunella, C. Malara, and A. Serafini, Metall. Ital. 112, 6 (2020).
125. K. Liu, J. Wang, Y. Yang, and Y. Zhou, J. Alloys Compd. 883, 160723 https://doi.org/10.1016/j.jallcom.2021.160723 (2021).
126. H. Chan, M. Cherukara, T.D. Loeffler, B. Narayanan, and S.K.R.S. Sankaranarayanan, npj Comput. Mater. 6, 1 https://doi.org/10.1038/s41524-019-0267-z (2020).
127. S. Madireddy, D.-W. Chung, T. Loeffler, S.K.R.S. Sankaranarayanan, D.N. Seidman, P. Balaprakash, and O. Heinonen, Sci. Rep. 9, 20140 https://doi.org/10.1038/s41598-019-56649-8 (2019).
128. K. Simonyan and A. Zisserman, CoRR, abs/1409.1556 (2014).
129. Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, Proc. IEEE 86, 2278 (1998).
130. J. Xu, L. Li, X. Liu, H. Li, and Q. Feng, Mater. Charact. https://doi.org/10.1016/j.matchar.2022.111857 (2022).
131. N. Khatavkar, S. Swetlana, and A.K. Singh, Acta Mater. 196, 295 https://doi.org/10.1016/j.actamat.2020.06.042 (2020).
132. C. Yang, X. You, R. Yu, Y. Xu, J. Zhang, X. Fan, W. Li, and Z. Wang, Mater. Charact. 203, 113094 https://doi.org/10.1016/j.matchar.2023.113094 (2023).
133. A. Cecen, H. Dai, Y.C. Yabansu, S.R. Kalidindi, and L. Song, Acta Mater. 146, 76 https://doi.org/10.1016/j.actamat.2017.11.053 (2018).
134. C. Wang, In 2023 IEEE 3rd International Conference on Power, Electronics and Computer Applications (ICPECA), (2023), pp 1204-1208.
135. P. Liu, H. Huang, S. Antonov, C. Wen, D. Xue, H. Chen, L. Li, Q. Feng, T. Omori, and Y. Su, npj Comput. Mater. 6, 62 https://doi.org/10.1038/s41524-020-0334-5 (2020).
136. P. Liu, H. Huang, C. Wen, T. Lookman, and Y. Su, npj Comput. Mater. 9, 140 https://doi.org/10.1038/s41524-023-01090-9 (2023).
137. J. Belan, Mater. Today Proc. 3, 936 https://doi.org/10.1016/j.matpr.2016.03.024 (2016).
138. Z. Qin, Z. Wang, Y. Wang, L. Zhang, W. Li, J. Liu, Z. Wang, Z. Li, J. Pan, L. Zhao, F. Liu, L. Tan, J. Wang, H. Han, L. Jiang, and Y. Liu, Mater. Res. Lett. 9, 32 https://doi.org/10.1080/21663831.2020.1815093 (2021).
139. Y. Jiang, M.A. Ali, I. Roslyakova, D. Burger, G. Eggeler, and I. Steinbach, Model. Simul. Mater. Sci. Eng. https://doi.org/10.1088/1361-651X/acc089 (2023).
140. R. Tamura, T. Osada, K. Minagawa, T. Kohata, M. Hirosawa, K. Tsuda, and K. Kawagishi, Mater. Des. 198, 109290 https://doi.org/10.1016/j.matdes.2020.109290 (2021).
141. S. Ghaemifar, and H. Mirzadeh, J. Mater. Res. Technol. 26, 8071 https://doi.org/10.1016/j.jmrt.2023.09.149 (2023).
142. M. Srivastava, S. Rathee, V. Patel, A. Kumar, and P.G. Koppad, J. Mater. Res. Technol. 21, 2612 https://doi.org/10.1016/j.jmrt.2022.10.015 (2022).
143. T. DebRoy, H.L. Wei, J.S. Zuback, T. Mukherjee, J.W. Elmer, J.O. Milewski, A.M. Beese, A. Wilson-Heid, A. De, and W. Zhang, Prog. Mater. Sci. 92, 112 https://doi.org/10.1016/j.pmatsci.2017.10.001 (2018).
144. A.K. Sah, M. Agilan, S. Dineshraj, M.R. Rahul, and B. Govind, Mater. Today Commun. 30, 103193 https://doi.org/10.1016/j.mtcomm.2022.103193 (2022).
145. C.C.C. Lesko, L.C. Sheridan, and J.E. Gockel, J. Mater. Eng. Perform. 30, 6630 https://doi.org/10.1007/s11665-021-05938-3 (2021).
146. Z. Ren, L. Gao, S.J. Clark, K. Fezzaa, P. Shevchenko, A. Choi, W. Everhart, A.D. Rollett, L. Chen, and T. Sun, Science 379, 89-94 https://doi.org/10.1126/science.add4667 (2023).
147. S. Guo, M. Agarwal, C. Cooper, Q. Tian, R.X. Gao, W. Guo, and Y.B. Guo, J. Manuf. Syst. 62, 145 https://doi.org/10.1016/j.jmsy.2021.11.003 (2022).
148. S. Liu, A.P. Stebner, B.B. Kappes, and X. Zhang, Addit. Manuf. 39, 101877 https://doi.org/10.1016/j.addma.2021.101877 (2021).
149. L. Cannavacciuolo, G. Ferraro, C. Ponsiglione, S. Primario, and I. Quinto, Technovation 124, 102733 https://doi.org/10.1016/j.technovation.2023.102733 (2023).
150. G. Shankarrao Patange, and A. Bharatkumar Pandya, Mater. Today Proc. 72, 622 https://doi.org/10.1016/j.matpr.2022.08.201 (2023).
151. R. Wu, L. Zeng, J. Fan, Z. Peng, and Y. Zhao, Mech. Mater. 187, 104819 https://doi.org/10.1016/j.mechmat.2023.104819 (2023).
152. G. Wang, H. Fu, L. Jiang, D. Xue, and J. Xie, npj Comput. Mater. 5, 87 https://doi.org/10.1038/s41524-019-0227-7 (2019).
153. M. Pinz, G. Weber, J.C. Stinville, T. Pollock, and S. Ghosh, npj Comput. Mater. 8, 39 https://doi.org/10.1038/s41524-022-00727-5 (2022).
154. Q. Zhao, H. Yang, J. Liu, H. Zhou, H. Wang, and W. Yang, Mater. Des. 197, 109248 https://doi.org/10.1016/j.matdes.2020.109248 (2021).
155. R.-K. Xie, X.-C. Zhong, S.-H. Qin, K.-S. Zhang, Y.-R. Wang, and D.-S. Wei, Int. J. Fatigue 175, 107730 https://doi.org/10.1016/j.ijfatigue.2023.107730 (2023).
156. J. Liu, Y. Zhang, Y. Zhang, S. Kitipornchai, and J. Yang, Mater. Des. 213, 110334 https://doi.org/10.1016/j.matdes.2021.110334 (2022).
157. J. Ruan, W. Xu, T. Yang, J. Yu, S. Yang, J. Luan, T. Omori, C. Wang, R. Kainuma, K. Ishida, C.T. Liu, and X. Liu, Acta Mater. 186, 425 https://doi.org/10.1016/j.actamat.2020.01.004 (2020).
158. I. Sevim, and I.B. Eryurek, Mater. Des. 27, 911 https://doi.org/10.1016/j.matdes.2005.03.009 (2006).
159. F. Jaliliantabar, J. Energy Stor. 46, 103633 https://doi.org/10.1016/j.est.2021.103633 (2022).
160. T.A. Alrebdi, Y.S. Wudil, U.F. Ahmad, F.A. Yakasai, J. Mohammed, and F.H. Kallas, Int. J. Therm. Sci. 181, 107784 https://doi.org/10.1016/j.ijthermalsci.2022.107784 (2022).
161. L. Yan, Y. Diao, Z. Lang, and K. Gao, Sci. Technol. Adv. Mater. 21, 359 https://doi.org/10.1080/14686996.2020.1746196 (2020).
162. X. Yang, J. Yang, Y. Yang, Q. Li, D. Xu, X. Cheng, and X. Li, Int. J. Miner. Metall. Mater. 29, 825 https://doi.org/10.1007/s12613-022-2457-9 (2022).
163. Y. Zhu, F. Duan, W. Yong, H. Fu, H. Zhang, and J. Xie, Comput. Mater. Sci. 211, 111560 https://doi.org/10.1016/j.commatsci.2022.111560 (2022).
164. J. Gao, Y. Tong, H. Zhang, L. Zhu, Q. Hu, J. Hu, and S. Zhang, Mater. Charact. 198, 112740 https://doi.org/10.1016/j.matchar.2023.112740 (2023).
Copyright Springer Nature B.V. Jan 2025