We present a novel machine learning (ML)-based method to accelerate conservative-to-primitive inversion, focusing on hybrid piecewise polytropic and tabulated equations of state. Traditional root-finding techniques are computationally expensive, particularly for large-scale relativistic hydrodynamics simulations. To address this, we employ feedforward neural networks trained to map conserved variables directly to primitive variables, optimized for GPU inference with NVIDIA TensorRT.
1. Introduction
In numerical relativity, accurately modeling astrophysical systems such as neutron star mergers [1,2,3,4,5,6,7,8,9,10,11,12,13,14] relies on solving the equations of relativistic hydrodynamics, which involve the inversion of conservative-to-primitive (C2P) variable relations [15,16,17]. This process typically requires computationally expensive root-finding algorithms, such as Newton-Raphson methods, and interpolation of complex, multi-dimensional equations of state (EOS) tables [18,19]. These methods, while robust, incur significant computational costs and can lead to inefficiencies, particularly in large-scale simulations, where up to billions of C2P calls may be required per time step. The inherent complexity of this mapping, however, often conceals underlying symmetries and lower-dimensional relationships that a machine learning model can be trained to recognize and exploit.
In view of these considerations, and taking into account the advent of GPU-based exascale supercomputers such as Aurora and Frontier and ongoing efforts to port relativistic hydrodynamics software to GPUs [20,21,22], this work explores the use of machine learning (ML) algorithms that leverage GPU-accelerated computing for C2P conversion. CPU-based algorithms for C2P conversion typically involve an iterative non-linear root finder, for which the number of iterations required to achieve a given target accuracy depends on the input data, resulting in different runtimes for different points of the numerical grid. This limits the potential to use SIMD (for CPUs) or SIMT (for GPUs) parallelism, reducing the effective rate of conversion achievable using these schemes. An ML approach, with its more predictable runtime and regular memory access pattern, may help alleviate these issues. Indeed, this work is motivated by recent studies showing that neural networks can accelerate the C2P inversion process while maintaining high accuracy [23]. Building on this, the present work introduces a novel approach that leverages ML to accelerate the recovery of primitive variables from conserved variables in relativistic hydrodynamics simulations, with particular focus on hybrid piecewise polytropic and tabulated EOS. These EOS models provide more realistic descriptions of the dense interior of neutron stars, yet their complexity makes the traditional C2P procedure very computationally expensive.
To help address these computational challenges, we present a suite of feedforward neural networks trained to directly map conserved variables to primitive variables, bypassing the need for traditional iterative solvers. In particular, we employ a hybrid approach, utilizing the flexibility of neural networks to handle the challenges posed by complex EOS models. Our models are implemented using modern deep learning tools, such as PyTorch, and optimized for GPU inference with NVIDIA TensorRT [24]. Through comprehensive performance benchmarking, we demonstrate that our approach significantly outperforms traditional numerical methods in terms of speed, particularly when using mixed-precision deployment on modern hardware accelerators like NVIDIA A100 GPUs on the Delta supercomputer.
We evaluate the scalability of our ML models by comparing their inference performance against a single-threaded CPU implementation of a traditional numerical method from the RePrimAnd library [25]. The benchmark was conducted on a Delta supercomputer compute node, featuring dual AMD 64-core 2.45 GHz Milan processors, 8 NVIDIA A100 GPUs (40 GB HBM2 RAM), and NVLink. For dataset sizes ranging from 25,000 to 1,000,000 points, the numerical method exhibited linear scaling of inference time. In contrast, TensorRT-optimized and TorchScript-based neural networks achieved substantially faster inference, typically demonstrating sub-linear scaling. We investigate two feedforward neural network architectures: a smaller network favoring inference speed and a larger network favoring accuracy (Section 2.2).
This article is structured as follows. Section 2 introduces the EOS considered in this study, along with the methodologies employed for designing, training, validating, and testing the ML models. In Section 3, we present our key results, including an assessment of the accuracy of the ML models across different model types and quantization schemes. Additionally, we provide a comparison of the computational performance of the ML models relative to traditional root-finding methods. Finally, Section 4 offers a summary of the findings and outlines potential avenues for future research.
2. Methods
We present an ML-based model with the potential to accelerate the recovery of primitive variables from conserved variables in general relativistic hydrodynamics (GRHD) simulations, specifically focusing on scenarios employing hybrid piecewise polytropic EOS and tabulated EOS. As in traditional approaches, this conversion requires inverting the conservative-to-primitive map, a process often reliant on computationally expensive root-finding algorithms. While previous work has demonstrated the success of machine learning for this task with the $\Gamma$-law EOS [23], here we investigate its application to the hybrid piecewise polytropic EOS, which offers a more realistic representation of neutron star interiors, as well as the tabulated EOS, which incorporates current nuclear physics models of neutron matter. To evaluate the performance of our neural networks, we use a traditional CPU-based root-finding algorithm (provided by the RePrimAnd library) as a baseline for comparison. Our aim is to demonstrate the speed advantages of the neural network approach for conservative-to-primitive variable conversion. Our networks are implemented using PyTorch (2.0+), and the inference speed tests are performed using TorchScript and NVIDIA TensorRT.
In general relativity, the equations of relativistic hydrodynamics can be expressed in a conservation form suitable for numerical implementation. Specifically, in a flat spacetime, they constitute the following first-order, flux-conservative hyperbolic system:
$$\frac{1}{\sqrt{-g}}\left[\frac{\partial\left(\sqrt{\gamma}\,\mathbf{U}\right)}{\partial t}+\frac{\partial\left(\sqrt{-g}\,\mathbf{F}^{i}\right)}{\partial x^{i}}\right]=0, \qquad (1)$$

where $g$ is the metric determinant, and $\gamma$ is the determinant of the three-metric induced on each spacelike hypersurface. The state vector of the conserved variables is $\mathbf{U}=\left(D, S_{j}, \tau\right)$, and the flux vector is given by

$$\mathbf{F}^{i}=\left(D\left(v^{i}-\frac{\beta^{i}}{\alpha}\right),\; S_{j}\left(v^{i}-\frac{\beta^{i}}{\alpha}\right)+p\,\delta^{i}_{j},\; \tau\left(v^{i}-\frac{\beta^{i}}{\alpha}\right)+p\,v^{i}\right), \qquad (2)$$
where $\alpha$ is the lapse function and $\beta^{i}$ the spacelike shift vector: two kinematic variables describing the evolution of spacelike foliations in spacetime as in a typical Arnowitt-Deser-Misner (ADM) formulation. The five quantities satisfying Equation (1), all measured by an Eulerian observer sitting at a spacelike hypersurface, are the relativistic rest-mass density, $D$, the three components of the momentum density, $S_{j}$, and the energy density relative to the rest-mass density, $\tau$, respectively. These are related to the primitive variables, rest-mass density, $\rho$, three-velocity, $v^{i}$, specific internal energy, $\epsilon$, and pressure, $p$, through

$$D=\rho W, \qquad S_{j}=\rho h W^{2} v_{j}, \qquad \tau=\rho h W^{2}-p-D, \qquad (3)$$

where $W=\left(1-v_{i}v^{i}\right)^{-1/2}$ is the Lorentz factor, and $h=1+\epsilon+p/\rho$ is the specific enthalpy. Incorporating the EOS into the picture provides the thermodynamical information linking the pressure to the fluid's rest-mass density and internal energy, which, combined with the definitions above, closes the system of equations given in Equation (1) [26,27,28].
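For concreteness, the forward map of Equation (3) can be written as a short vectorized routine. The following is a minimal NumPy sketch (the function and variable names are ours, not from the paper's released code), assuming a flat spacetime and a purely x-directed velocity as used later in Section 2.1:

```python
import numpy as np

def prim_to_cons(rho, vx, eps, press):
    """Forward map of Eq. (3): primitives (rho, v^x, eps, p) ->
    conserved (D, S_x, tau), for flat spacetime and v = (v^x, 0, 0)."""
    W = 1.0 / np.sqrt(1.0 - vx**2)      # Lorentz factor
    h = 1.0 + eps + press / rho         # specific enthalpy
    D = rho * W                         # rest-mass density
    Sx = rho * h * W**2 * vx            # x-momentum density
    tau = rho * h * W**2 - press - D    # energy density minus rest mass
    return D, Sx, tau
```

The C2P problem discussed in this paper is the inverse of this map, which generally has no closed-form solution once a realistic EOS is involved.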
We will first focus on the hybrid piecewise polytropic EOS. The hybrid piecewise polytropic EOS was introduced for simplified simulations of stellar collapse to model the stiffening of the nuclear EOS at nuclear density and include thermal pressure during the postbounce phase [29]. In gravitational-wave science, it is more commonly used as described in Read et al. [30], where it enables gravitational-wave parameter estimation and waveform modeling by effectively capturing macroscopic neutron star observables with minimal parameters. The structure of this EOS consists of multiple cold polytropes, defined by segment-wise parameters $K_i$ and $\Gamma_i$, where

$$p_{\mathrm{cold}}=K_{i}\,\rho^{\Gamma_{i}}, \qquad \epsilon_{\mathrm{cold}}=a_{i}+\frac{K_{i}}{\Gamma_{i}-1}\,\rho^{\Gamma_{i}-1}, \qquad (4)$$

where $a_i$ is the segment-specific constant. These equations apply to segment $i$, where the rest-mass density lies in the range $\rho_{i-1}\leq\rho<\rho_{i}$, with the break densities $\rho_i$ separating the segments. In addition to the hybrid piecewise polytropic EOS-based model, we will train a separate network to infer the conservative-to-primitive transformation utilizing tabulated EOS data. Specifically, we will use the Lattimer-Swesty EOS with a compressibility parameter of 220 MeV (hereafter referred to as LS220) [31].
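The continuity conditions that fix the $K_i$ and $a_i$ can be made concrete with a short sketch. The snippet below builds the segment constants from a first-segment $K_0$ and evaluates the cold pressure and specific internal energy of Equation (4); the numerical parameter values are placeholders for illustration, not the SLy values used in Section 2.1:

```python
import numpy as np

def build_segment_constants(K0, gammas, rho_breaks):
    """Fix K_i by pressure continuity and a_i by continuity of the cold
    specific internal energy at each density break (Eq. (4))."""
    K, a = [K0], [0.0]
    for i in range(1, len(gammas)):
        rb = rho_breaks[i - 1]
        # Pressure continuity: K_{i-1} rb^G_{i-1} = K_i rb^G_i
        K.append(K[i - 1] * rb ** (gammas[i - 1] - gammas[i]))
        eps_prev = a[i - 1] + K[i - 1] / (gammas[i - 1] - 1.0) * rb ** (gammas[i - 1] - 1.0)
        a.append(eps_prev - K[i] / (gammas[i] - 1.0) * rb ** (gammas[i] - 1.0))
    return np.array(K), np.array(a)

def cold_eos(rho, K, a, gammas, rho_breaks):
    """Evaluate p_cold and eps_cold of Eq. (4) for an array of densities."""
    i = np.searchsorted(rho_breaks, rho)             # segment index per point
    p_cold = K[i] * rho ** gammas[i]
    eps_cold = a[i] + K[i] / (gammas[i] - 1.0) * rho ** (gammas[i] - 1.0)
    return p_cold, eps_cold

# Placeholder four-segment parameters (illustrative only):
gammas = np.array([1.4, 3.0, 2.9, 2.8])
rho_breaks = np.array([1e-4, 8e-4, 1.6e-3])
K, a = build_segment_constants(K0=0.1, gammas=gammas, rho_breaks=rho_breaks)
```

With the thermal part added in the standard way, $p = p_{\mathrm{cold}} + (\Gamma_{\mathrm{th}}-1)\,\rho\,\epsilon_{\mathrm{th}}$, this reproduces the hybrid construction used to generate the training data below.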
Below, we outline the dataset preparation, model architecture, training process, and methods used in inference speed testing with TorchScript and NVIDIA TensorRT.
2.1. Data
2.1.1. Piecewise Polytropic EOS-Based Model Data
We generate a dataset of 500,000 samples using geometrized units where $G=c=1$. Without loss of generality, we furthermore use a Minkowski metric, $\eta_{\mu\nu}=\mathrm{diag}(-1,1,1,1)$. The rest-mass density, $\rho$, is sampled uniformly, and the fluid's three-velocity, assumed one-dimensional along the x-axis, is likewise sampled uniformly. These ranges are chosen to be representative of the conditions found in binary neutron star mergers and to facilitate a direct comparison with the previous work in [23]. Following Ref. [30], we use an SLy four-segment piecewise polytropic EOS with the segment-wise polytropic indices $\Gamma_i$ given therein. The first segment's polytropic constant, $K_0$, is set to the SLy value of Ref. [30]. Subsequent polytropic constants, $K_i$, are determined by enforcing pressure continuity. Similarly, the first segment's constant, $a_0$, is set to zero, while subsequent values ensure continuity of internal energy. The density breaks between segments also follow Ref. [30]. The thermal component has a fixed adiabatic index, $\Gamma_{\mathrm{th}}$. Additionally, the thermal component of the specific internal energy, $\epsilon_{\mathrm{th}}$, is sampled uniformly (where $\epsilon=\epsilon_{\mathrm{cold}}+\epsilon_{\mathrm{th}}$). A structured dataset is then constructed by converting the primitive variables to conserved variables using the standard relativistic hydrodynamic relations given in Equation (3). In this dataset, conserved variables serve as input features, and the pressure is the target variable. The resulting dataset is then split into training, validation, and test sets, with each set fully standardized to zero mean and unit variance to ensure equal contribution of all features during neural network training (Figure 1).
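A compact sketch of this dataset construction, reusing the prim_to_cons and cold_eos helpers above, might look as follows; the sampling bounds, $\Gamma_{\mathrm{th}}$ value, and split ratios here are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 500_000
rho = rng.uniform(2e-5, 2e-3, n)           # placeholder density range
vx = rng.uniform(0.0, 0.7, n)              # placeholder velocity range
eps_th = rng.uniform(0.0, 2.0, n)          # placeholder thermal-energy range
gamma_th = 5.0 / 3.0                       # placeholder thermal index

p_c, eps_c = cold_eos(rho, K, a, gammas, rho_breaks)
eps = eps_c + eps_th                       # eps = eps_cold + eps_th
p = p_c + (gamma_th - 1.0) * rho * eps_th  # hybrid pressure

D, Sx, tau = prim_to_cons(rho, vx, eps, p)
X = np.stack([D, Sx, tau], axis=1)         # features: conserved variables
y = p[:, None]                             # target: pressure

idx = rng.permutation(n)                   # 80/10/10 split (assumed ratios)
tr, va, te = np.split(idx, [int(0.8 * n), int(0.9 * n)])
x_scaler = StandardScaler().fit(X[tr])     # zero mean, unit variance
y_scaler = StandardScaler().fit(y[tr])
X_train, y_train = x_scaler.transform(X[tr]), y_scaler.transform(y[tr])
```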
2.1.2. Tabulated EOS-Based Model Data
To generate the training data for the tabulated EOS-based model, we sample from a provided EOS table and follow a procedure similar to the one described in Section 2.1.1. We begin by reading in the EOS table, which contains the electron fraction ($Y_e$), temperature ($T$), rest-mass density ($\rho$), specific internal energy ($\epsilon$), and pressure ($p$). These quantities are stored in logarithmic form in the table and are extracted accordingly. For each data point, a random one-dimensional three-velocity, $v_x$, is sampled uniformly on a linear scale. Values for the electron fraction and temperature are also sampled uniformly on a linear scale from their respective ranges in the table. The rest-mass density is chosen by randomly selecting one of the grid points from the table, which are logarithmically spaced. For this study, the corresponding values of $\epsilon$ and $p$ are fetched directly from the table without interpolation, ensuring that the training data exactly represents the tabulated EOS. The primitive variables are then converted into conserved variables using the standard relativistic hydrodynamics relations given in Equation (3). A total of 1,000,000 data points are generated using this process [32]. As with the hybrid piecewise polytropic EOS-based model, the data is split into training, validation, and test sets, with each set fully standardized to zero mean and unit variance before being used for neural network training.
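The sampling step can be sketched as below. The HDF5 dataset names follow the common stellarcollapse.org table convention and are an assumption about the file layout (as is the velocity range and the reduced sample count); energy-shift and unit handling are omitted for brevity:

```python
import h5py
import numpy as np

rng = np.random.default_rng(7)
n = 10_000  # 1,000,000 in the paper; reduced here to keep the sketch light

with h5py.File("LS220.h5", "r") as f:       # hypothetical file name
    ye_grid   = f["ye"][:]                  # electron fraction grid
    logT_grid = f["logtemp"][:]             # log10(T) grid
    logr_grid = f["logrho"][:]              # log10(rho) grid
    log_p     = f["logpress"][:]            # assumed shape (n_ye, n_T, n_rho)
    log_eps   = f["logenergy"][:]

# Sample Y_e and T uniformly on a linear scale, then snap to the nearest
# grid point so p and eps can be fetched without interpolation.
ye_s = rng.uniform(ye_grid.min(), ye_grid.max(), n)
T_s  = rng.uniform(10**logT_grid.min(), 10**logT_grid.max(), n)
iy = np.abs(ye_grid[None, :] - ye_s[:, None]).argmin(axis=1)
it = np.abs(10**logT_grid[None, :] - T_s[:, None]).argmin(axis=1)
ir = rng.integers(0, len(logr_grid), n)     # density: random grid point

rho = 10.0 ** logr_grid[ir]
p   = 10.0 ** log_p[iy, it, ir]             # fetched without interpolation
eps = 10.0 ** log_eps[iy, it, ir]
vx  = rng.uniform(0.0, 0.7, n)              # illustrative velocity range
```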
2.2. Model Architecture
2.2.1. Piecewise Polytropic EOS-Based Model
For the hybrid piecewise polytropic EOS-based model, we tested two feedforward neural networks of varying complexity to represent the conservative-to-primitive variable transformation. Each network takes as input the three conserved variables (Equation (3)) and outputs the pressure p (Equation (4)), assuming the remaining momentum density components are zero for simplicity. This architecture is designed to effectively learn the hidden symmetries in the relationship between the conserved and primitive variables, approximating the intricate C2P transformation without explicit root-finding. After experimenting with multiple multi-layer perceptron (MLP) architectures, as detailed in Appendix A, we identified two models that offered an optimal balance between accuracy, speed, and trainability: a smaller, faster model and a larger, more accurate one, whose architectures are shown in Figure 2.
ReLU activation functions were applied to the hidden layers to introduce nonlinearity, with the output layer kept linear. We found these models strike an effective balance between complexity and performance, making them well-suited for our task.
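A minimal PyTorch sketch of such an MLP follows; the hidden-layer widths are taken from the architecture table in Appendix A, while the class name is ours:

```python
import torch
import torch.nn as nn

class C2PNet(nn.Module):
    """Feedforward C2P regressor: (D, S_x, tau) -> p.
    Hidden sizes default to the [600, 200] variant listed in Appendix A."""
    def __init__(self, hidden=(600, 200)):
        super().__init__()
        layers, width = [], 3                  # three conserved inputs
        for h in hidden:
            layers += [nn.Linear(width, h), nn.ReLU()]  # nonlinear hidden layers
            width = h
        layers.append(nn.Linear(width, 1))     # linear output: pressure
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```

The smaller variant is obtained simply by shrinking the `hidden` tuple, e.g. `C2PNet(hidden=(300, 100))`.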
2.2.2. Tabulated EOS-Based Model
For the tabulated EOS-based model, we use a single feedforward neural network, selected from the architecture exploration described in Appendix A.
We explored several MLP architectures, varying in parameters, layers, and training strategies, to identify an optimal design for our task. Among these, the architecture adopted here emerged as the best compromise between accuracy and inference speed (see Appendix A).
2.3. Training Approach
We use a similar procedure to optimize all neural networks, minimizing a mean-squared-error loss augmented with a penalty that discourages unphysical (negative) pressure predictions:

$$\mathcal{L}=\frac{1}{N}\sum_{i=1}^{N}\left(\hat{y}_{i}-y_{i}\right)^{2}+\frac{q}{N}\sum_{i=1}^{N}\mathrm{ReLU}\!\left(-\mathcal{F}^{-1}\!\left(\hat{y}_{i}\right)\right), \qquad (5)$$

where $\hat{y}_i$ represents the network's estimation for feature $i$, $y_i$ is the corresponding target value, ReLU is the familiar rectified linear unit defined by $\mathrm{ReLU}(x)=\max(0,x)$, and $\mathcal{F}^{-1}$ represents an inverse normalization procedure based on the training data statistics. The penalty factor, $q$, was optimized separately for each model. All models were trained using the Adam optimizer. A learning rate scheduler reduced the learning rate by a factor of 0.5 if the validation loss failed to improve for five consecutive epochs.
After completing the training phase for each epoch, the model’s performance is evaluated on the validation dataset, accumulating the validation loss similarly to the training loss. Both losses are normalized by the size of the respective datasets and stored for further analysis, specifically for clues of potential overtraining.
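Putting Equation (5) and the scheduler together, a condensed training-loop sketch could read as follows; the stand-in data, learning rate, and q value are illustrative assumptions, and the MLP is a small substitute for the models of Section 2.2:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def c2p_loss(pred, target, inv_norm, q):
    """Eq. (5): MSE plus q-weighted ReLU penalty on predictions whose
    de-normalized pressure would be negative (unphysical)."""
    mse = torch.mean((pred - target) ** 2)
    penalty = torch.mean(torch.relu(-inv_norm(pred)))
    return mse + q * penalty

p_mean, p_std = 0.0, 1.0                        # training-set target statistics
inv_norm = lambda yhat: yhat * p_std + p_mean   # inverse normalization F^{-1}

model = torch.nn.Sequential(                    # stand-in MLP
    torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)          # lr illustrative
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.5, patience=5)

X, Y = torch.randn(1024, 3), torch.randn(1024, 1)            # stand-in data
loader = DataLoader(TensorDataset(X, Y), batch_size=256)

for epoch in range(20):
    model.train()
    for xb, yb in loader:
        opt.zero_grad()
        loss = c2p_loss(model(xb), yb, inv_norm, q=1.0)      # q illustrative
        loss.backward()
        opt.step()
    model.eval()
    with torch.no_grad():                        # validation pass (reusing loader)
        val = sum(c2p_loss(model(xb), yb, inv_norm, 1.0).item()
                  for xb, yb in loader)
    sched.step(val)                              # halve lr after 5 stagnant epochs
```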
2.4. Inference Speed Tests
In our inference speed tests, we evaluated two main approaches for efficient deployment: a TorchScript model and NVIDIA's TensorRT optimized engines. These tests were conducted to measure and compare inference speed under typical deployment conditions, aiming to take full advantage of the available GPU hardware.
2.4.1. TorchScript Deployment
To prepare models for inference with TorchScript, we first saved a scripted version of the model, which is compatible with PyTorch’s JIT compiler, optimizing runtime execution without modifying the model’s core structure. TorchScript’s scripting provides some degree of optimization, enabling faster model execution than standard PyTorch models but without the hardware-level optimizations that TensorRT offers.
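As a sketch (file names are ours), scripting and saving a model takes only a few lines:

```python
import torch

model = torch.nn.Sequential(                   # stand-in for a trained C2P model
    torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1)).eval()

scripted = torch.jit.script(model)             # JIT-compile; structure unchanged
scripted.save("c2p_scripted.pt")

restored = torch.jit.load("c2p_scripted.pt")   # ready for deployment
```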
2.4.2. TensorRT Deployment
For TensorRT, we explored both full-precision and mixed-precision engines. The deployment workflow consisted of the following steps:

Model Export to ONNX: First, we exported the PyTorch model to the ONNX format. This conversion enables interoperability with TensorRT, which uses ONNX as its primary model input format.

TensorRT Engine Building: Using TensorRT's Python API, we constructed both full-precision and mixed-precision engines.

Parsing and Validating the ONNX Model: We loaded the ONNX model into TensorRT, where the parser validates the network before the engine is built.

Configuration and Optimization Profiles: The builder configuration specifies the precision flags and the dynamic optimization profiles governing the supported batch sizes (see Table 1).

Engine Serialization: Finally, we serialized and saved the engine, creating a portable and optimized binary that can be loaded for deployment. This step encapsulates the model's architecture, weights, and optimizations, ensuring it is ready for fast inference.
To ensure we measure the maximum possible performance for each point in our benchmark, we build a specialized, yet flexible, TensorRT engine for each combination of model and dataset size. The dynamic optimization profile for each of these engines is configured with a tight margin around its target dataset size (N), as detailed in Table 1.
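The export-and-build workflow can be sketched with the TensorRT 8.x Python API as follows; file and tensor names are ours, and the profile shown implements the Table 1 margins for N = 25,000:

```python
import torch
import tensorrt as trt

# 1. Export a stand-in model to ONNX with a dynamic batch dimension.
model = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1)).eval()
dummy = torch.randn(1, 3)
torch.onnx.export(model, dummy, "c2p.onnx",
                  input_names=["input"], output_names=["pressure"],
                  dynamic_axes={"input": {0: "batch"}})

# 2. Build a TensorRT engine with the Table 1 profile for N = 25,000.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("c2p.onnx", "rb") as f:
    assert parser.parse(f.read()), parser.get_error(0)   # validate the model

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)         # mixed precision; omit for FP32

profile = builder.create_optimization_profile()
N = 25_000
profile.set_shape("input", (int(0.95 * N), 3), (N, 3), (int(1.05 * N), 3))
config.add_optimization_profile(profile)

# 3. Serialize the optimized engine to a deployable binary.
engine = builder.build_serialized_network(network, config)
with open("c2p_25k.plan", "wb") as f:
    f.write(engine)
```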
Overall, the process of optimizing and saving models using both TorchScript and TensorRT gave us insight into balancing flexibility, accuracy, and performance. For larger batch sizes and greater computational demands, TensorRT's dynamic engine approach proved particularly advantageous.
For the actual inference speed test procedure, we implemented two distinct workflows on a single GPU for both approaches. The TorchScript-based approach allowed for a straightforward configuration, primarily requiring the definition of batch sizes and the pre-loading of data onto the GPU. It then performed batched forward passes with the scripted model directly on the pre-loaded data.
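A minimal timing sketch for the TorchScript path, assuming the file produced above and following the ten-run protocol of Section 3.2, might look like this:

```python
import time
import torch

model = torch.jit.load("c2p_scripted.pt").to("cuda").eval()
data = torch.randn(25_000, 3, device="cuda")   # data pre-loaded onto the GPU

with torch.inference_mode():
    model(data)                     # warm-up (JIT specialization, caches)
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(10):             # ten runs, as in the benchmark protocol
        out = model(data)
    torch.cuda.synchronize()        # wait for kernels before stopping the clock
    print((time.perf_counter() - t0) / 10 * 1e3, "ms per run")
```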
In contrast, the TensorRT-based approach demanded several additional configurations. The model, after being converted into an optimized engine, was loaded using TensorRT's Python runtime API; inference then required explicitly binding GPU input and output buffers to the execution context before launching each run.
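A sketch of that runtime path, again with the TensorRT 8.x API (newer releases replace the binding calls with name-based `set_input_shape`/`execute_async_v3`), reusing PyTorch CUDA tensors as the device buffers:

```python
import tensorrt as trt
import torch

logger = trt.Logger(trt.Logger.WARNING)
with open("c2p_25k.plan", "rb") as f:          # engine built in Sec. 2.4.2 sketch
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

N = 25_000
context.set_binding_shape(0, (N, 3))           # pick a shape within the profile

inp = torch.randn(N, 3, device="cuda")         # torch tensors as GPU buffers
out = torch.empty(N, 1, device="cuda")
context.execute_v2([inp.data_ptr(), out.data_ptr()])   # synchronous execution
```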
3. Results
3.1. Accuracy
We evaluate the model accuracy using two standard metrics for regression problems: the $L^1$ error (mean absolute error) and the $L^\infty$ error (maximum absolute error), both calculated over the entire test dataset. Table 2 summarizes the accuracy results based on the $L^1$ and $L^\infty$ error metrics for each model variant and deployment mode.
Per-model accuracy results for each deployment and precision mode are reported in Table 2. Additionally, we examined the relative accuracy of the models' pressure predictions across the test set; the resulting relative error distributions are shown in Figure 3.
The overall results show that TensorRT’s optimizations maintain accuracy across models when using full precision.
3.2. Inference Speed Analysis
The inference performance of various methods was evaluated using a single NVIDIA A100 GPU for neural network models and a single-threaded CPU implementation of the traditional numerical method from the RePrimAnd library. The CPUs used in this study were dual AMD 64-core 2.45 GHz Milan processors on the Delta cluster, which can support up to 128 threads. Each configuration was tested across five dataset sizes, ranging from 25,000 to 1,000,000 data points, with ten inference runs conducted per configuration to ensure result stability and consistency. For the RePrimAnd numerical solver, we set the target accuracy for the relative error in the root-finding algorithm to a standard, high-precision value used in production codes. We chose to compare our ML models against this robust baseline rather than tuning the numerical solver's accuracy to match that of the NNs, ensuring a conservative performance comparison.
The numerical method exhibited linear scaling of inference time with respect to the dataset size. In contrast, both TensorRT and TorchScript models generally maintained relatively stable inference times across the dataset sizes. Notably, the full-precision TensorRT engine for the smaller network maintained a nearly flat inference time across the entire range of dataset sizes.
The numerical method required significantly more time than the neural network-based approaches. On average, it took 103.8 ms to process 25,000 data points, with runtime scaling almost linearly to 3490 ms for 1,000,000 data points. In contrast, the neural network models demonstrated substantially faster inference times; the mixed-precision TensorRT engines, in particular, delivered the fastest inference across all dataset sizes.
A similar trend was observed for the tabulated EOS-based model.
Figure 4 presents a theoretical performance benchmark based on ideal scaling under the assumption of perfect parallelization. This scenario assumes optimal workload distribution, minimal communication overhead, and negligible synchronization delays, representing the upper bound of scalability. For the numerical method, the figure reflects the full computational capacity of a single CPU node on the Delta cluster, utilizing 128 threads. For the neural networks, it represents the use of 8 A100 GPUs within a single GPU node. Under these ideal conditions, the processing time of the numerical method per data point is projected to decrease by a factor of 128, allowing for the processing of 8 million points in approximately 218 ms (Figure 4b). Similarly, all neural network methods are expected to achieve linear inference scaling with similar per-GPU efficiency. Under this scenario, TensorRT-based methods, particularly the mixed-precision engines, are projected to retain a substantial advantage, corresponding to the roughly 25-fold speedup discussed below.
The results presented above underscore the substantial performance gains achievable through the use of TensorRT-optimized neural networks, particularly in the context of conservative-to-primitive inversion in relativistic hydrodynamics simulations. By leveraging the parallel processing power of modern GPUs, these methods offer significant speedups compared to traditional CPU-based numerical approaches, even in large-scale simulations involving millions of data points. As demonstrated, TensorRT optimizations enable more efficient and scalable solutions, with the potential to dramatically reduce the computational cost of C2P operations. This work highlights the clear advantage of integrating ML-driven methods with GPU acceleration to address the computational challenges of high-throughput simulations. Moving forward, the next step is to incorporate these optimized approaches into full-scale hydrodynamics simulations, where their impact on both performance and scalability can be fully realized.
It is important to contextualize the comparison between the fully utilized CPU component (128 threads) and the fully utilized GPU component (8 GPUs) of a single compute node. This ‘node-to-node’ benchmark is designed to answer the practical question of how to best utilize the co-located and often cost-equivalent hardware resources of a modern heterogeneous compute node. While a formal cost-normalized analysis is complex, this approach compares the optimal-use scenario for each hardware type available to a researcher on a typical allocation. The resulting 25-fold speedup is therefore a combination of the algorithmic shift (from iterative root-finding to direct-mapping) and the architectural advantage of GPUs for the massively parallel workload presented by the neural network.
4. Conclusions
This work introduces a novel ML-driven method for accelerating C2P inversions in relativistic hydrodynamics simulations, with a focus on hybrid piecewise polytropic and tabulated equations of state. By employing feedforward neural networks optimized with TensorRT, we achieve substantial performance improvements over traditional CPU solvers, offering a compelling alternative to computationally expensive iterative methods while maintaining high accuracy. Our results demonstrate that the TensorRT-optimized neural networks can process large datasets significantly faster, achieving up to 25 times the inference speed of traditional methods. The success of this approach is rooted in the neural network’s ability to efficiently learn and represent the inherent symmetries and complex functional relationships within the EOS, effectively creating a direct mapping that bypasses iterative numerical solvers.
Future work will explore several key directions to refine and expand this approach. First, adapting the models to handle a broader range of equations of state will improve the versatility of this method across different simulation contexts. Second, exploring alternative network architectures, such as those incorporating physics-informed layers or adaptive activation functions to better handle physical discontinuities like phase transitions, could further enhance both accuracy and inference speed. Third, the models must be extended to handle full three-dimensional velocities to be fully integrated into production-level GRMHD codes. Additionally, continued optimization of TensorRT, including advanced parallelization strategies and scaling across multiple GPUs, and careful exploration of lower-precision formats like INT8, potentially with quantization-aware training, promises even greater reductions in computational time, enabling simulations of larger and more complex astrophysical systems. These improvements will be critical for advancing high-resolution simulations in numerical relativistic hydrodynamics.
We believe that ML-driven methods, particularly those incorporating TensorRT optimization, will play an essential role in advancing the field of general relativistic hydrodynamics and numerical relativity more broadly. To facilitate further validation and extension of these findings, we have made the software developed for this study publicly available at:
Conceptualization, R.H. and E.A.H.; Methodology, S.K., R.H. and E.A.H.; Software, S.K.; Validation, S.K.; Formal analysis, S.K.; Writing—original draft, S.K. and E.A.H.; Writing—review & editing, S.K., R.H. and E.A.H.; Visualization, S.K.; Supervision, R.H. and E.A.H.; Project administration, R.H. and E.A.H.; Funding acquisition, R.H. and E.A.H. All authors have read and agreed to the published version of the manuscript.
The original contributions presented in the study are included in the article, further inquiries can be directed to the corresponding author.
This research used the Delta advanced computing and data resource. Delta is a joint effort of the University of Illinois Urbana-Champaign and its National Center for Supercomputing Applications. This research also used the DeltaAI advanced computing and data resource, likewise a joint effort of the University of Illinois Urbana-Champaign and its National Center for Supercomputing Applications. We further acknowledge the use of Matplotlib [35] and seaborn [36].
The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Figure 1 Visualization of the thermodynamic relations in the complete training data generated for the four-segment piecewise polytropic EOS-based model, including pressure (p) vs. rest-mass density (ρ).
Figure 2 Architectures of the neural networks used for conservative-to-primitive variable mapping. Top: the smaller network; bottom: the larger network.
Figure 3 Relative error of the models' pressure predictions across the test dataset.
Figure 4 Ideal scaling comparison of various C2P inversion methods under the assumption of perfect parallelization. (a) Projected inference time as a function of dataset size for a traditional numerical solver (RePrimAnd utilizing 128 CPU threads on a single node of the Delta cluster) and two neural network models (TensorRT-optimized engines distributed across 8 A100 GPUs). (b) Projected time to process 8 million data points with each method.
Dynamic optimization profiles used for building specialized TensorRT engines for each benchmarked dataset size (N). The profile for each engine is configured with a tight margin around its target optimal size.
| Target Dataset Size (N) | Min Batch Size (0.95 N) | Optimal Batch Size (N) | Max Batch Size (1.05 N) |
|---|---|---|---|
| 25,000 | 23,750 | 25,000 | 26,250 |
| 50,000 | 47,500 | 50,000 | 52,500 |
| 100,000 | 95,000 | 100,000 | 105,000 |
| 500,000 | 475,000 | 500,000 | 525,000 |
| 1,000,000 | 950,000 | 1,000,000 | 1,050,000 |
Accuracy results for all models.

| Model | $L^1$ Error | $L^\infty$ Error |
|---|---|---|
Appendix A. Model Architecture Exploration and Training History
In this study, we explored a wide range of multi-layer perceptron (MLP) architectures to identify models that offer an optimal balance between predictive accuracy and inference speed. The models presented in the main text were selected from this exploration as the best trade-offs between these competing requirements.
Our findings are summarized in the table below.
Notably, excessively deep architectures (e.g., the 10- and 13-layer models) consistently exhibited training instability or yielded worse performance, reinforcing our choice of moderately sized networks as the most effective and efficient solution for this regression task.
To demonstrate the stability of our training procedure, we show the training and validation loss curves for the selected models in Figures A1 and A2.
Explored architectures and validation accuracy for the piecewise polytropic and tabulated EOS models.
| Model Name | Hidden Layers (Neurons per Layer) | Total Parameters | Validation Error |
|---|---|---|---|
| Piecewise Polytropic EOS | |||
| | [300, 100] | ~31 k | |
| | [800] | ~3 k | |
| | [600, 200] | ~123 k | |
| | [800, 400] | ~324 k | |
| | [512, 256, 128, 64] | ~180 k | |
| | [1024, 512, 256, 128] | ~690 k | |
| | [1024, 512, 256, 128, 64] | ~707 k | |
| | [2048, 1024, 512, 256, 128] | ~2.8 M | |
| | [1024, 1024, 512, 512, 256, 128, 64] | ~2.4 M | |
| | [1024, 1024, 512, 512, 256, 256, 128, 128, 64, 64] | ~3.5 M | |
| | 13 Layers | ~5 M | Failed to Converge |
| Tabulated EOS (LS220) | |||
| | [512, 256, 128] | ~165 k | |
| | [1024, 512, 256, 128] | ~690 k | |
| | [1024, 512, 256, 128, 64] | ~707 k | |
| | [2048, 1024, 512, 256, 128] | ~2.8 M | |
| | [1024, 1024, 512, 512, 256, 128, 64] | ~2.4 M | |
| | [1024, 1024, 512, 512, 256, 256, 128, 128, 64, 64] | ~3.5 M | |
| | 13 Layers | ~5 M | Failed to Converge |
Figure A1 Training and validation loss curves for the piecewise polytropic EOS models. The smooth convergence demonstrates a stable training process for (a) the smaller model and (b) the larger model.
Figure A2 Training and validation loss curves for the tabulated EOS model, likewise showing smooth convergence.
1. Radice, D.; Bernuzzi, S.; Perego, A. The Dynamics of Binary Neutron Star Mergers and GW170817. Annu. Rev. Nucl. Part. Sci.; 2020; 70, pp. 95-119. [DOI: https://dx.doi.org/10.1146/annurev-nucl-013120-114541]
2. Ciolfi, R.; Kastaun, W.; Giacomazzo, B.; Endrizzi, A.; Siegel, D.M.; Perna, R. General relativistic magnetohydrodynamic simulations of binary neutron star mergers forming a long-lived neutron star. Phys. Rev. D; 2017; 95, 063016. [DOI: https://dx.doi.org/10.1103/PhysRevD.95.063016]
3. Kiuchi, K. General relativistic magnetohydrodynamics simulations for binary neutron star mergers. arXiv; 2024; arXiv: 2405.10081
4. Siegel, D.M.; Metzger, B.D. Three-Dimensional General-Relativistic Magnetohydrodynamic Simulations of Remnant Accretion Disks from Neutron Star Mergers: Outflows and r-Process Nucleosynthesis. Phys. Rev. Lett.; 2017; 119, 231102. [DOI: https://dx.doi.org/10.1103/PhysRevLett.119.231102] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29286684]
5. Sun, L.; Ruiz, M.; Shapiro, S.L.; Tsokaros, A. Jet launching from binary neutron star mergers: Incorporating neutrino transport and magnetic fields. Phys. Rev. D; 2022; 105, 104028. [DOI: https://dx.doi.org/10.1103/PhysRevD.105.104028]
6. Tsokaros, A.; Ruiz, M.; Shapiro, S.L.; Uryū, K. Magnetohydrodynamic Simulations of Self-Consistent Rotating Neutron Stars with Mixed Poloidal and Toroidal Magnetic Fields. Phys. Rev. Lett.; 2022; 128, 061101. [DOI: https://dx.doi.org/10.1103/PhysRevLett.128.061101]
7. Fernández, R.; Tchekhovskoy, A.; Quataert, E.; Foucart, F.; Kasen, D. Long-term GRMHD simulations of neutron star merger accretion discs: Implications for electromagnetic counterparts. Mon. Not. R. Astron. Soc.; 2019; 482, pp. 3373-3393. [DOI: https://dx.doi.org/10.1093/mnras/sty2932]
8. Foucart, F.; Haas, R.; Duez, M.D.; O’Connor, E.; Ott, C.D.; Roberts, L.; Kidder, L.E.; Lippuner, J.; Pfeiffer, H.P.; Scheel, M.A. Low mass binary neutron star mergers: Gravitational waves and neutrino emission. Phys. Rev. D; 2016; 93, 044019. [DOI: https://dx.doi.org/10.1103/PhysRevD.93.044019]
9. Camilletti, A.; Chiesa, L.; Ricigliano, G.; Perego, A.; Lippold, L.C.; Padamata, S.; Bernuzzi, S.; Radice, D.; Logoteta, D.; Guercilena, F.M. Numerical relativity simulations of the neutron star merger GW190425: Microphysics and mass ratio effects. Mon. Not. Roy. Astron. Soc.; 2022; 516, pp. 4760-4781. [DOI: https://dx.doi.org/10.1093/mnras/stac2333]
10. Dietrich, T.; Hinderer, T.; Samajdar, A. Interpreting Binary Neutron Star Mergers: Describing the Binary Neutron Star Dynamics, Modelling Gravitational Waveforms, and Analyzing Detections. Gen. Rel. Grav.; 2021; 53, 27. [DOI: https://dx.doi.org/10.1007/s10714-020-02751-6]
11. Agathos, M.; Meidam, J.; Del Pozzo, W.; Li, T.G.F.; Tompitak, M.; Veitch, J.; Vitale, S.; Van Den Broeck, C. Constraining the neutron star equation of state with gravitational wave signals from coalescing binary neutron stars. Phys. Rev. D; 2015; 92, 023012. [DOI: https://dx.doi.org/10.1103/PhysRevD.92.023012]
12. Bauswein, A.; Baumgarte, T.W.; Janka, H.T. Prompt Merger Collapse and the Maximum Mass of Neutron Stars. Phys. Rev. Lett.; 2013; 111, 131101. [DOI: https://dx.doi.org/10.1103/PhysRevLett.111.131101] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/24116763]
13. Oertel, M.; Hempel, M.; Klähn, T.; Typel, S. Equations of state for supernovae and compact stars. Rev. Mod. Phys.; 2017; 89, 015007. [DOI: https://dx.doi.org/10.1103/RevModPhys.89.015007]
14. Alford, M.G.; Schmitt, A.; Rajagopal, K.; Schäfer, T. Color superconductivity in dense quark matter. Rev. Mod. Phys.; 2008; 80, pp. 1455-1515. [DOI: https://dx.doi.org/10.1103/RevModPhys.80.1455]
15. Noble, S.C.; Gammie, C.F.; McKinney, J.C.; Del Zanna, L. Primitive Variable Solvers for Conservative General Relativistic Magnetohydrodynamics. Astrophys. J.; 2006; 641, pp. 626-637. [DOI: https://dx.doi.org/10.1086/500349]
16. Faber, J.A.; Rasio, F.A. Binary neutron star mergers. Living Rev. Relativ.; 2012; 15, pp. 1-83. [DOI: https://dx.doi.org/10.12942/lrr-2012-8]
17. Duez, M.D.; Liu, Y.T.; Shapiro, S.L.; Stephens, B.C. Relativistic magnetohydrodynamics in dynamical spacetimes: Numerical methods and tests. Phys. Rev. D; 2005; 72, 024028. [DOI: https://dx.doi.org/10.1103/PhysRevD.72.024028]
18. Font, J.A. Numerical Hydrodynamics in General Relativity. Living Rev. Relativ.; 2000; 3, 2. [DOI: https://dx.doi.org/10.12942/lrr-2000-2]
19. Chang, P.; Etienne, Z. General relativistic hydrodynamics on a moving-mesh I: Static space–times. Mon. Not. Roy. Astron. Soc.; 2020; 496, pp. 206-214. [DOI: https://dx.doi.org/10.1093/mnras/staa1532]
20. Kalinani, J.V.; Ji, L.; Ennoggi, L.; Lopez Armengol, F.G.; Sanches, L.T.; Tsao, B.-J.; Brandt, S.R.; Campanelli, M.; Ciolfi, R.; Giacomazzo, B. AsterX: A new open-source GPU-accelerated GRMHD code for dynamical spacetimes. Class. Quant. Grav.; 2025; 42, 025016. [DOI: https://dx.doi.org/10.1088/1361-6382/ad9c11]
21. Zhu, H.; Fields, J.; Zappa, F.; Radice, D.; Stone, J.; Rashti, A.; Cook, W.; Bernuzzi, S.; Daszuta, B. Performance-Portable Numerical Relativity with AthenaK. arXiv; 2024; arXiv: 2409.10383. [DOI: https://dx.doi.org/10.3847/1538-4365/adcf96]
22. Liebling, S.L.; Palenzuela, C.; Lehner, L. Toward fidelity and scalability in non-vacuum mergers. Class. Quant. Grav.; 2020; 37, 135006. [DOI: https://dx.doi.org/10.1088/1361-6382/ab8fcd]
23. Dieselhorst, T.; Cook, W.; Bernuzzi, S.; Radice, D. Machine Learning for Conservative-to-Primitive in Relativistic Hydrodynamics. Symmetry; 2021; 13, 2157. [DOI: https://dx.doi.org/10.3390/sym13112157]
24. Ansel, J.; Yang, E.; He, H.; Gimelshein, N.; Jain, A.; Voznesensky, M.; Bao, B.; Bell, P.; Berard, D.; Burovski, E. et al. PyTorch 2: Faster Machine Learning Through Dynamic Python Bytecode Transformation and Graph Compilation. Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS); 2024.
25. Kastaun, W.; Kalinani, J.V.; Ciolfi, R. Robust Recovery of Primitive Variables in Relativistic Ideal Magnetohydrodynamics. Phys. Rev. D; 2021; 103, 023018. [DOI: https://dx.doi.org/10.1103/PhysRevD.103.023018]
26. Banyuls, F.; Font, J.A.; Ibáñez, J.M.; Martí, J.M.; Miralles, J.A. Numerical 3 + 1 General Relativistic Hydrodynamics: A Local Characteristic Approach. Astrophys. J.; 1997; 476, 221. [DOI: https://dx.doi.org/10.1086/303604]
27. Martí, J.M.; Müller, E. Numerical Hydrodynamics in Special Relativity. Living Rev. Relativ.; 2003; 6, 7. [DOI: https://dx.doi.org/10.12942/lrr-2003-7]
28. Font, J.A. Numerical Hydrodynamics and Magnetohydrodynamics in General Relativity. Living Rev. Relativ.; 2008; 11, 7. [DOI: https://dx.doi.org/10.12942/lrr-2008-7]
29. Janka, H.T.; Zwerger, T.; Moenchmeyer, R. Does artificial viscosity destroy prompt type-II supernova explosions?. Astron. Astrophys.; 1993; 268, pp. 360-368.
30. Read, J.S.; Lackey, B.D.; Owen, B.J.; Friedman, J.L. Constraints on a Phenomenologically Parametrized Neutron-Star Equation of State. Phys. Rev. D; 2009; 79, 124032. [DOI: https://dx.doi.org/10.1103/PhysRevD.79.124032]
31. Schneider, A.S.; Roberts, L.F.; Ott, C.D. Open-Source Nuclear Equation of State Framework Based on the Liquid-Drop Model with Skyrme Interaction. Phys. Rev. C; 2017; 96, 065802. [DOI: https://dx.doi.org/10.1103/PhysRevC.96.065802]
32. Wouters, T. Machine Learning Algorithms for the Conservative-to-Primitive Conversion in Relativistic Hydrodynamics. Master’s Thesis; KU Leuven: Leuven, Belgium, 2024.
33. Bernuzzi, S.; Breschi, M.; Daszuta, B.; Endrizzi, A.; Logoteta, D.; Nedora, V.; Perego, A.; Schianchi, F.; Radice, D.; Zappa, F.
34. Boerner, T.J.; Deems, S.; Furlani, T.R.; Knuth, S.L.; Towns, J. ACCESS: Advancing Innovation: NSF’s Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support. Proceedings of the Practice and Experience in Advanced Research Computing 2023: Computing for the Common Good; New York, NY, USA, 23–27 July 2023; pp. 173-176. [DOI: https://dx.doi.org/10.1145/3569951.3597559]
35. Hunter, J.D. Matplotlib: A 2D graphics environment. Comput. Sci. Eng.; 2007; 9, pp. 90-95. [DOI: https://dx.doi.org/10.1109/MCSE.2007.55]
36. Waskom, M.L. seaborn: Statistical data visualization. J. Open Source Softw.; 2021; 6, 3021. [DOI: https://dx.doi.org/10.21105/joss.03021]
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).