Large-scale 3D photoacoustic imaging has become increasingly important for both clinical and pre-clinical applications. Limited by cost and system complexity, only systems with sparsely distributed sensors can be widely implemented, which necessitates advanced reconstruction algorithms to reduce artifacts. However, the high memory and time consumption of traditional iterative reconstruction (IR) algorithms is practically unacceptable for large-scale 3D photoacoustic imaging. Here, we propose a point cloud-based IR algorithm that reduces memory consumption by several orders of magnitude, wherein the 3D photoacoustic scene is modeled as a series of Gaussian-distributed spherical sources stored in the form of a point cloud. During the IR process, not only are the properties of each Gaussian source, including its peak intensity (initial pressure value), standard deviation (size), and mean (position), continuously optimized, but each Gaussian source also adaptively undergoes destruction, splitting, and duplication along the gradient direction. This method, named SlingBAG (the sliding Gaussian ball adaptive growth algorithm), enables high-quality large-scale 3D photoacoustic reconstruction with fast iteration and extremely low memory usage. We validated the SlingBAG algorithm in both simulation studies and in vivo animal experiments.
Researchers present SlingBAG, an iterative reconstruction algorithm for large-scale 3D photoacoustic imaging. It uses an adaptive point cloud model to achieve high-quality imaging from sparse data, substantially reducing both memory and time costs.
Introduction
Photoacoustic imaging (PAI), which uniquely combines ultrasound detection and optical absorption contrast, is the only non-invasive optical imaging method that can image centimeters deep in living tissue with superior spatial resolution compared to other optical imaging techniques. PAI has been widely employed in various biomedical studies, including both pre-clinical studies and clinical applications1–6.
In recent years, significant progress has been made in the development of 3D PAI through the use of 2D matrix arrays, demonstrating immense potential for in vivo applications such as peripheral vessel7,8, breast9,10 and small animal imaging11. Various array configurations have been explored, ranging from spherical and planar arrays12–19 to synthetic planar arrays that rely on scanning8,20. However, limited by cost and technical difficulties, it remains a great challenge to fabricate a matrix array that satisfies the spatial Nyquist sampling requirement for real-time large-scale 3D PAI, which typically requires thousands of ultrasound transducer elements as well as an equal number of parallel data acquisition channels. In practice, therefore, real-time 3D PAI systems have only sparsely distributed sensors, and it is well known that traditional back-projection reconstruction methods produce severe artifacts and a low signal-to-noise ratio (SNR) with extremely sparse arrays.
To address these challenges, researchers have employed iterative reconstruction (IR) methods to improve image quality with sparse sensor configurations. An early IR approach to PAI was presented by ref. 21, in which the differences between observed and simulated projections were iteratively minimized to improve image quality. Subsequent IR methods have incorporated various physical imaging factors. Deán-Ben et al.22 developed a model-based 3D reconstruction algorithm utilizing a sparse matrix forward model with reduced artifacts compared to back-projection methods. Wang et al.23,24 demonstrated that iterative penalized least-squares algorithms, based on discrete-to-discrete (D-D) imaging models using expansion functions in voxel grids, with either quadratic smoothing or total variation (TV) norm penalties, could enhance the performance of small animal 3D PAI systems. Similarly, ref. 25 introduced forward and backward operators based on k-space pseudospectral methods, and recently refs. 26,27 also used k-Wave28 as their forward model in model-based IR reconstruction for planar sensor arrays. Although these IR techniques have demonstrated great performance, the spatial grid and system matrix grow substantially for large-scale 3D PAI, and the resulting computational load and memory demands make these methods less feasible.
Subsequent research has therefore focused on more efficient model-based IR techniques. A notable advancement involves using semi-analytical forward models29–32, often based on integral geometry, combined with on-the-fly operator calculations within iterative solvers such as the least squares with QR factorization (LSQR) algorithm33. This strategy avoids explicit storage of the massive system matrix, making reconstructions of larger datasets computationally tractable. However, these methods have their own limitations. The speed of semi-analytical models often stems from approximations (e.g., assuming far-field conditions and substituting planar for spherical surface integrals) that can compromise reconstruction accuracy, particularly for near-field or complex geometries. Besides, their reliance on solvers like LSQR restricts the use of powerful non-smooth regularization terms, such as the L1 or TV norms, which are highly effective for noise suppression and edge preservation. Furthermore, all these methods retain a uniform voxel-grid representation, inheriting its intrinsic trade-off between spatial resolution (grid size) and computational cost, and its inefficiency in modeling sparse structures like vascular networks. These persistent limitations highlight the need for a paradigm shift away from traditional grid-based representations, motivating the exploration of alternative frameworks.
In addition to traditional IR, deep learning approaches for PAI reconstruction34–36 offer improvements in computational efficiency and image quality. However, they heavily rely on large, specialized training datasets, which can be difficult to obtain, and lack generalizability across different imaging systems or under varying conditions. This limits their broader applicability, particularly in diverse clinical settings where imaging conditions are not consistent.
Similar challenges of reconstructing 3D images from sparse views also exist in the computer vision field. To tackle them, differentiable rendering technologies such as NeRF37 and 3D Gaussian splatting (3DGS)38 were developed. Differentiable rendering39–43 enables self-supervised reconstruction of 3D scenes directly from scene observations, usually multi-view images. The photometric difference between the rendered image generated by the differentiable rendering pipeline and the actual captured image is used as a loss function, guiding the reconstruction of the entire scene through gradient back-propagation. NeRF, for instance, uses multi-layer perceptrons (MLPs) to map 3D spatial positions to colors and densities, while later methods such as InstantNGP use hash grids to store features at different spatial scales, allowing faster and more memory-efficient reconstructions44. These pioneering works have inspired new IR methods for PAI. A recent work reported NeRF-based paradigms for 2D PAI image reconstruction45. However, that method relies on dense sampling, which makes it computationally prohibitive for large-scale 3D PAI.
In contrast to NeRF, 3DGS38 effectively addresses these computational challenges in 3D computer graphics. Based on a point cloud instead of a spatial grid, 3DGS applies object-order rendering by representing scenes as Gaussian ellipsoids, which are projected and composited onto a 2D screen using the elliptical weighted average (EWA)46 for alpha blending. As a differentiable rendering strategy using explicit scene representation, 3DGS enables efficient, high-quality 3D reconstruction and real-time rendering.
Inspired by 3DGS, we propose the sliding Gaussian ball adaptive growth algorithm (SlingBAG) for 3D PAI reconstruction. Distinct from traditional voxel grids, this method stores the information of the PA source in the form of a point cloud that undergoes iterative optimization, and the 3D PA scene is modeled as a series of Gaussian-distributed spherical sources. To fulfill the differentiability condition, we constructed a "differentiable rapid radiator" model based on the fact that a PA spherical source has an analytical solution to the wave equation in an acoustically non-viscous homogeneous medium. Each point cloud PA source is composed of multiple concentric spheres, with each sphere's radius and initial PA pressure determined by the discretized Gaussian distribution. PA signals are the superposition of waves from spatially distributed Gaussian sources with specific peak intensity (pressure value), standard deviation (size), and mean (spatial position). By minimizing the discrepancies between the predicted and the actual PA signals, we iteratively refine the point cloud and ultimately realize the 3D PAI reconstruction by converting the result from a point cloud into a voxel grid.
This work introduces the SlingBAG algorithm, detailing its differentiable rapid radiator, point cloud optimization, and physics-based point cloud-to-voxel shader. Validated by both simulation and in vivo experiments, the proposed SlingBAG algorithm enables high-quality large-scale 3D PAI for broad potential applications with sparse sensor distributions.
Results
3D PA image reconstruction under an extremely sparse planar array
We first used simulated hand vessels as the target tissue to demonstrate the high performance of the SlingBAG algorithm. The simulated data, depicting human peripheral hand vessels, were used as the ground truth (Fig. 1a). At a grid point spacing of 0.2 mm, the total grid size of the imaging area is 400 × 520 × 200 (80 mm × 104 mm × 40 mm). To simulate 3D PA imaging, the initial dense planar matrix array has 122,500 sensors (350 × 350, with a pitch of 0.4 mm) uniformly placed on a plane covering a 140 mm × 140 mm area, positioned about 15 mm below the imaging region (Fig. 1g). The simulation experiments were performed using k-Wave, with a sampling frequency of 40 MHz, 4900 sampling points, a medium density of 1 × 10³ kg/m³, and a sound speed of 1500 m/s.
Fig. 1 Comparison of 3D photoacoustic reconstruction results under extremely sparse planar arrays. [Images not available. See PDF.]
a, d Top-/front-view maximum amplitude projections (MAP) and cross-sectional slices of the acoustic source and the UBP reconstruction result with 122,500 sensors. b, c, e, f Top-/front-view MAPs and cross-sectional slices of UBP, MB-PD, and SlingBAG reconstruction results with 49, 196, 576, and 4900 sensors. g Imaging setup. h PA value distribution along the yellow dashed line in the slices from the acoustic source, 576- and 4900-sensor results. (Scale bar: 10 mm).
For sparse sensor arrangements, we undersampled the original 122,500 sensors into four different sparse matrix arrays with total sensor numbers of 49, 196, 576, and 4900, respectively. The pitches between adjacent sensors in these four different configurations are 20 mm, 10 mm, 6 mm, and 2 mm, respectively. We compared the proposed SlingBAG algorithm with the universal back-projection (UBP) algorithm47, and the model-based point-detector (MB-PD) IR algorithm augmented with L2 regularization, which uses a semi-analytical forward model with integral geometry29,30.
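To make the undersampling concrete, the sketch below (our own illustration, not from the released code) keeps every k-th element of the full 350 × 350 grid along both axes, reproducing the four sparse configurations and their pitches:

```python
import numpy as np

pitch = 0.4  # mm, pitch of the full 350 x 350 planar array
coords = np.arange(350) * pitch
full_grid = np.stack(np.meshgrid(coords, coords, indexing="ij"), axis=-1)

def undersample(grid, step):
    """Keep every `step`-th element along both axes of the planar array."""
    return grid[::step, ::step].reshape(-1, 2)

# step 50 -> 7 x 7 = 49 sensors (20 mm pitch); step 25 -> 196 (10 mm pitch);
# step 15 -> 576 (6 mm pitch); step 5 -> 4900 (2 mm pitch).
for step, expected in [(50, 49), (25, 196), (15, 576), (5, 4900)]:
    assert undersample(full_grid, step).shape[0] == expected
```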
The reconstruction results are shown in Fig. 1b–f. For each vertical group of images, the three sub-images from top to bottom respectively show the top-view maximum amplitude projection (MAP), the front-view MAP, and a cross-sectional slice along the green line marked in the top-view MAP of the 3D reconstruction result. Within each sub-figure, the three groups of images from left to right represent the reconstruction results of UBP, MB-PD, and SlingBAG under the same number of sensors, respectively. More 3D reconstruction results from these algorithms with different array configurations can be found in Supplementary Movies 1–3.
Under extremely sparse sensor configurations, the UBP reconstruction results show serious artifacts (Fig. 1b, c, e). The reconstruction results of MB-PD are better than those of UBP; however, when the number of sensors is 49, 196, or 576, noticeable artifacts remain. Only when the number of sensors reaches 4900 is artifact suppression significantly improved (Fig. 1f), but at this point, the advantage of MB-PD over UBP becomes less pronounced. The SlingBAG algorithm demonstrates superior 3D reconstruction capability; it reconstructs a well-recognizable outline of the hand vasculature with only 49 sensors (Fig. 1b). As the sensor matrix becomes denser, the quality of SlingBAG's reconstruction improves rapidly. With 576 sensors (Fig. 1e), SlingBAG already achieves high-quality reconstruction; in contrast, the UBP and MB-PD results remain blurry due to reconstruction artifacts, making it difficult to discern fine vascular details (marked by the white dashed box). With a total of 4900 sensors (Fig. 1f), the SlingBAG reconstruction is nearly identical to the ground truth. Notably, for UBP, even when using all 122,500 sensors (Fig. 1d), the result still cannot match the quality of SlingBAG's result with only 4900 sensors, as can be easily observed from the corresponding slice images. This is because the finite size of the detection matrix array causes the so-called "limited-view" problem. The SlingBAG algorithm, however, performs exceptionally well under limited-view conditions, which is another advantage of our method.
In Fig. 1h, we plotted the PA value distribution along the yellow dashed line in the slice images for comparison. The green solid line represents the ground truth, the red dashed line represents the reconstruction result from 4900 sensors, and the blue dashed line represents the reconstruction result from 576 sensors. As the number of sensors increases from 576 to 4900, the intensity distribution along the yellow dashed line in the slice progressively approximates the true value. The differences between the SlingBAG reconstruction results and the ground truth at several peaks are also substantially diminished with 4900 sensors, indicating that the SlingBAG algorithm’s reconstruction can achieve high quantitative accuracy with very sparse sensor distributions.
Next, to quantitatively assess the reconstruction results of the UBP, MB-PD, and SlingBAG algorithms, we calculated the structural similarity index (SSIM), mean squared error (MSE), and peak signal-to-noise ratio (PSNR) of the top-view MAP and the 3D volume for all four sensor numbers (Table 1). SlingBAG achieved the best performance across all evaluation metrics for every sensor number. Figure 2 shows the trends and clear comparisons of all metrics for the three algorithms with respect to the number of sensors. We emphasize the following points. First, for the 3D volume reconstructions with 4900 sensors (Fig. 2e), the MSE of SlingBAG is only 0.000084, much lower than that of UBP and MB-PD, directly reflecting the high fidelity of SlingBAG's 3D reconstructions. Second, for the reconstructions with 49, 196, and 576 sensors (Fig. 2c, f), the PSNR of SlingBAG is significantly higher than that of UBP and MB-PD; for all sensor numbers, SlingBAG achieves a PSNR above 20 dB for the top-view MAP and above 30 dB for the 3D volume, with a maximum of 40.78 dB, demonstrating SlingBAG's superior capability for artifact and noise suppression with highly sparse sensor arrays. Third, the performance metrics of SlingBAG with 196 sensors even surpass those of MB-PD with 576 sensors, indicating superior reconstruction capability among IR methods.
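For reference, all three metrics can be computed with standard tools. The minimal sketch below assumes images or volumes normalized to [0, 1]; the paper does not state its exact normalization convention.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate(recon, gt):
    """Return (SSIM, MSE, PSNR) for a reconstruction against ground truth."""
    recon = (recon - recon.min()) / (recon.max() - recon.min())
    gt = (gt - gt.min()) / (gt.max() - gt.min())
    mse = float(np.mean((recon - gt) ** 2))
    psnr = peak_signal_noise_ratio(gt, recon, data_range=1.0)  # 10*log10(1/MSE)
    ssim = structural_similarity(gt, recon, data_range=1.0)
    return ssim, mse, psnr
```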
Table 1. Quantitative evaluation of reconstruction results from different algorithms
| Sensor number | Algorithm | SSIM↑ (MAP) | MSE↓ (MAP) | PSNR↑b (MAP) | SSIM↑ (3D volume) | MSE↓ (3D volume) | PSNR↑b (3D volume) |
|---|---|---|---|---|---|---|---|
| 49 | UBP | 0.0522 | 0.04986 | 13.02 | 0.3647 | 0.002893 | 25.39 |
| 49 | MB-PD | 0.0888 | 0.03443 | 14.63 | 0.5209 | 0.003560 | 24.49 |
| 49 | SlingBAG | **0.1987**a | **0.00743** | **21.29** | **0.8917** | **0.000490** | **33.10** |
| 196 | UBP | 0.0870 | 0.02530 | 15.97 | 0.3768 | 0.001868 | 27.29 |
| 196 | MB-PD | 0.1517 | 0.01323 | 18.78 | 0.5540 | 0.001930 | 27.15 |
| 196 | SlingBAG | **0.3493** | **0.00414** | **23.83** | **0.9403** | **0.000217** | **36.63** |
| 576 | UBP | 0.1306 | 0.01628 | 17.88 | 0.4174 | 0.001248 | 29.04 |
| 576 | MB-PD | 0.2088 | 0.00638 | 21.95 | 0.6410 | 0.001037 | 29.84 |
| 576 | SlingBAG | **0.4015** | **0.00251** | **26.00** | **0.9488** | **0.000149** | **38.27** |
| 4900 | UBP | 0.2693 | 0.00289 | 25.39 | 0.7209 | 0.000262 | 35.82 |
| 4900 | MB-PD | 0.4616 | 0.00190 | 27.22 | 0.8562 | 0.000245 | 36.11 |
| 4900 | SlingBAG | **0.6978** | **0.00136** | **28.67** | **0.9784** | **0.000084** | **40.78** |
aBold values indicate the best performance among compared methods.
bUnit: dB.
Fig. 2 Assessment for image quality of reconstruction results under extremely sparse planar arrays. [Images not available. See PDF.]
a–c SSIM, MSE and PSNR of the top-view MAPs. d–f SSIM, MSE, and PSNR of the 3D volumes.
We further visualized the iterative evolution of our algorithm's point cloud for the 196-sensor array in Fig. 3. We first randomly initialized a relatively dense point cloud within the reconstruction region. In the coarse reconstruction stage, both position updates and point duplication were disabled; only the peak initial pressure and standard deviation of each Gaussian ball were updated, yielding a preliminary converged result. Next, we enabled the model's position update and point cloud duplication features. The converged shape of the three finger vessels from the coarse reconstruction gradually becomes denser, with finer details emerging (Fig. 3a). Figure 3b shows the 3D visualization of the point cloud throughout the entire reconstruction process, from random initialization to the coarse reconstruction stage and finally to the fine reconstruction stage. Furthermore, through our point cloud-to-voxel grid shader, we transformed the fine reconstruction result from point cloud form into a voxel grid (Fig. 3a). More detailed visualizations of the intermediate iterative process can be found in Supplementary Fig. 5.
Fig. 3 Visualization of SlingBAG iterative results. [Images not available. See PDF.]
a Visualization of initialization, coarse reconstruction, fine reconstruction point cloud, and the final voxel grid result. b 3D visualization of random initialization, coarse reconstruction, and fine reconstruction point cloud.
3D PA image reconstruction of in vivo animal studies
We then reconstructed real in vivo animal experimental data using all three algorithms: UBP, MB-PD, and SlingBAG. The in vivo data, including mouse brain, rat kidney, and rat liver, were acquired by Professor Kim's lab using a hemispherical ultrasound (US) transducer array with 1024 elements and a radius of 60 mm48. Each US transducer element had an average center frequency of 2.02 MHz and a bandwidth of 54%. The effective field of view (FOV) was 12.8 mm × 12.8 mm × 12.8 mm, and the spatial resolution of ~380 μm was nearly isotropic in all directions when all 1024 elements were used. All in vivo experiments were conducted with a sampling frequency of 8.33 MHz and 896 sampling points. For the mouse brain, the reconstruction region was 256 × 256 × 128; for the rat kidney and rat liver, it was 256 × 256 × 256. The resolution of the reconstructed 3D volume was 0.1 mm/pixel. More details regarding the 3D PAI system and animal experiments can be found in the literature49–51.
First, we compared the three reconstruction results for a mouse brain, as shown in Fig. 4. Figure 4a presents the UBP reconstruction, including the MAP results in the XY, XZ, and YZ planes, as well as a cross-sectional slice from top to bottom. Figure 4b, c presents the counterpart results from the MB-PD and SlingBAG algorithms. Figure 4e is a snapshot of the 3D SlingBAG reconstruction, which is available online (Supplementary Movie 4). The SlingBAG reconstruction exhibits minimal artifacts and reveals much more vascular detail than UBP and MB-PD. In the UBP result, fine vessels are almost indiscernible in the bottom cross-sectional slice due to artifacts induced by sparse sensors. Although the MB-PD reconstruction reduced artifacts to some extent, small vessels are also blurred compared to the SlingBAG result. Further analysis showed that the contrast-to-noise ratio (CNR) of the XY-plane MAP was 31.06 for UBP and 41.05 for MB-PD, whereas it reached 46.39 for SlingBAG (the selection of background and signal regions for the CNR calculation can be found in Supplementary Fig. 8). We also demonstrate the SlingBAG algorithm's superior capability for in vivo mouse brain reconstruction under both limited-view and sparse-view conditions in Supplementary Figs. 2–4.
Fig. 4 3D PAI reconstruction of a mouse brain. [Images not available. See PDF.]
a–c XY, XZ, YZ MAPs, and cross-sectional slice at green dashed line in XY MAP of the UBP, MB-PD, and SlingBAG reconstruction results. (Scale: 2 mm.) d Schematic of hemispherical array imaging. e SlingBAG reconstruction result.
Next, in Figs. 5 and 6, we present the reconstruction results of a rat kidney and a rat liver, respectively. In the cross-sectional slice of the reconstructed kidney image in Fig. 5a, the UBP result shows severe arc-shaped artifacts caused by the sparse sensor configuration and limited-view issues, which significantly impair the ability to resolve fine vascular structures. The artifacts in the MB-PD result were reduced to some extent; however, obvious arc-shaped artifacts remained, impairing vascular clarity, and a slight decrease in vascular resolution was also observed (Fig. 5b). In contrast, the SlingBAG algorithm effectively suppresses these arc-shaped artifacts, producing a much clearer and cleaner vascular cross-sectional image (Fig. 5c). Similarly, the SlingBAG algorithm renders the liver vasculature with much better continuity and far fewer artifacts. The 3D reconstruction results in Figs. 5e and 6e are provided in Supplementary Movies 5 and 6. For a quantitative comparison, the CNR values of the XY-plane MAP obtained with UBP for the kidney and liver are 20.18 and 30.62, respectively; for MB-PD, they are 27.85 and 37.28; whereas for SlingBAG, the corresponding values are 30.26 and 42.01 (the selection of background and signal regions for the CNR calculation can be found in Supplementary Fig. 8). These results illustrate the exceptionally high quality of image reconstruction achieved by the SlingBAG algorithm.
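For reference, the CNR sketch below assumes the common definition CNR = (μ_signal − μ_background)/σ_background over manually selected regions; the actual regions used are shown in Supplementary Fig. 8.

```python
import numpy as np

def cnr(image, signal_mask, background_mask):
    """Contrast-to-noise ratio between a signal ROI and a background ROI."""
    mu_s = image[signal_mask].mean()
    mu_b = image[background_mask].mean()
    sigma_b = image[background_mask].std()
    return (mu_s - mu_b) / sigma_b
```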
Fig. 5 3D PAI reconstruction results of a rat kidney. [Images not available. See PDF.]
a–c XY, XZ, YZ MAPs, and cross-sectional slice at green dashed line in XY MAP of the UBP, MB-PD, and SlingBAG reconstruction results. (Scale: 2 mm.) d Imaging-area schematic. e SlingBAG reconstruction result.
Fig. 6 3D PAI reconstruction results of a rat liver. [Images not available. See PDF.]
a–c XY, XZ, YZ MAPs, and cross-sectional slice at green dashed line in XZ MAP of the UBP, MB-PD, and SlingBAG reconstruction results. (Scale: 2 mm.) d Imaging-area schematic. e SlingBAG reconstruction result.
Computing memory consumption analysis and comparison
Besides the quality of image reconstruction, the demand for computing memory usage is also key for practical implementation. Here, we compared the computing memory consumption of the SlingBAG algorithm with that of traditional IR methods under the same experimental setup.
For the PA imaging of hand vessels using a planar matrix array with 4900 sensors, the forward simulation combined with the backward gradient propagation of the SlingBAG algorithm required only 3.56 GB of GPU memory. In comparison, the theoretical memory consumption of traditional IR methods, such as k-Wave and the fast iterative shrinkage-thresholding algorithm (FISTA)52, is estimated to be as high as 19.32 GB, which exceeds the memory capacity limits of many consumer-grade GPUs. Similarly, for the in vivo animal imaging experiment using a hemispherical array, where the sensors are relatively far away from the reconstructed region, the SlingBAG algorithm required 2.99 GB of GPU memory, while the memory usage of traditional IR methods is estimated to be 85.87 GB, making it virtually impractical for most accessible consumer-level GPU resources. More details of the theoretical analysis for memory demand by traditional IR methods are provided in the Supplementary Note 4.
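A back-of-the-envelope comparison of the two scene representations, assuming single-precision (4-byte) floats, illustrates where the gap comes from; the paper's full estimates (Supplementary Note 4) additionally account for forward-model and optimizer buffers, hence the larger figures quoted above.

```python
def voxel_grid_mb(nx, ny, nz, copies=3):
    # Grid-based IR keeps several volume-sized buffers alive at once
    # (estimate, gradient, optimizer state); `copies` is an assumed factor.
    return nx * ny * nz * copies * 4 / 2**20

def point_cloud_mb(n_points, attrs=5):
    # Each Gaussian ball stores p0, a0 and (xs, ys, zs): five floats.
    return n_points * attrs * 4 / 2**20

print(voxel_grid_mb(400, 520, 200))  # ~476 MB just for three volume buffers
print(point_cloud_mb(600_000))       # ~11 MB for 600,000 Gaussian balls
```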
Discussion
Real-time large-scale 3D PA imaging using sparsely distributed sensors has garnered significant attention. However, traditional IR methods face great challenges due to their extremely high demand for GPU memory. This work proposes SlingBAG, a point cloud-based iterative 3D PAI reconstruction algorithm that conducts fast iterative reconstruction with far less memory consumption. We have demonstrated its superior performance in achieving high-quality 3D PA images under extremely sparse sensor distributions using both simulated data and in vivo experimental data. With readily accessible consumer-level GPU resources, the SlingBAG algorithm can effectively suppress reconstruction artifacts to provide high-quality 3D PA imaging.
Compared with traditional IR methods, the SlingBAG reconstruction method has two key advantages: much lower memory consumption and fast reconstruction speed. For instance, the simulated hand vessel data spans 40 mm × 104 mm × 80 mm in physical space. In a traditional voxel grid, a 0.2 mm grid interval results in a very large grid of 200 × 520 × 400 (~4 × 10⁷ voxels). At this grid scale, using 196 sensors at a 40 MHz sampling frequency (each channel requires 4096 sampling points), the data volume makes traditional voxel-based IR methods extremely difficult to handle in terms of memory usage and time consumption. The SlingBAG method, however, requires less than 3 GB of GPU memory. Therefore, although traditional IR methods with powerful tools like k-Wave have shown superior performance in 2D PA imaging, they face great challenges for large-scale 3D imaging, since computing memory usage grows cubically with the linear dimensions of the volume. Additionally, in iterative algorithms, this imposes a significant memory burden on gradient computation and optimizers, which becomes unacceptable for most research laboratories.
Moreover, concerning computational time, k-Wave, like other tools that numerically solve the PA wave equation in the time domain, requires computing the acoustic field at each time step over the entire spatial grid, which is extremely time-consuming for large-scale 3D PA imaging. In contrast, SlingBAG directly calculates the far-field PA wave for the forward model using the differentiable rapid radiator, which has been demonstrated to possess high accuracy, producing simulation results almost identical to those of k-Wave over a broad scale in ideal homogeneous media (detailed in Supplementary Note 7; in general simulation scenarios, k-Wave remains the more comprehensive solution). For instance, on a single RTX 3090 Ti GPU, in the aforementioned hand vessel case with 196 sensors and 4096 temporal sampling points, SlingBAG takes only 15 s to evaluate the forward model for a point cloud of 600,000 Gaussian-distributed spherical sources, and only 2.39 h to complete the 3D PA image reconstruction, while GPU memory overflows for k-Wave at this scale. Table 2 summarizes the reconstruction times for SlingBAG in both the finger phantom simulation and all in vivo experiments. Specifically, the in vivo 3D reconstructions of the mouse brain, rat liver, and rat kidney required 1.89, 4.75, and 2.90 h, respectively. Such reconstruction times are considerably faster than those of conventional 3D iterative reconstruction methods. For the finger phantom simulation with 4900 sensors, SlingBAG completed the reconstruction within 24 h. While this may appear lengthy, such large-scale 3D reconstructions (200 × 520 × 400) with so many sensors (4900) and sampling points (4096) are uncommon. Beyond k-Wave-based IR methods, SlingBAG is also considerably faster than the MB-PD iterative method, which utilizes precomputed integral tables (MB-PD took 34.62 h for the 4900-sensor case; detailed in Supplementary Note 8). It is worth noting that our implementation of MB-PD achieves the reconstruction speed reported in the original paper for small-scale 3D reconstructions (Supplementary Note 5), further confirming that SlingBAG offers a substantial speed advantage for large-scale 3D photoacoustic reconstruction. The significant improvement in both reconstruction range and speed is largely attributable to the adaptive point cloud representation, which inherently aligns better with the sparsity characteristics of PA sources. Considering both physical model accuracy and computational cost, the point cloud representation provides a more efficient description of PA sources than the voxel representation. Additionally, our differentiable rapid radiator ensures that both the forward simulation and the backward gradient propagation are fully differentiable, enabling rapid parameter updates and convergence.
Table 2. SlingBAG reconstruction iteration analysis
| Experiment | Coarse recona | Fine reconb | Timec (s) | Time (h) |
|---|---|---|---|---|
| **In vivo experiments** | | | | |
| Mouse brain | 240 | 360 | 6822 | 1.89 |
| Rat liver | 440 | 360 | 17117 | 4.75 |
| Rat kidney | 380 | 360 | 10455 | 2.90 |
| **Simulation experiments** | | | | |
| 4900-element | 200 | 200 | 86355 | 23.99 |
| 576-element | 240 | 120 | 14618 | 4.06 |
| 196-element | 240 | 120 | 8592 | 2.39 |
| 49-element | 290 | 120 | 7690 | 2.14 |
aCoarse reconstruction iterations.
bFine reconstruction iterations.
cTime values rounded to nearest integer (seconds) or two decimal places (hours).
It is worth emphasizing again that this performance advantage, particularly over semi-analytical methods like MB-PD, stems from a fundamental paradigm shift rather than mere implementation optimizations. Conventional model-based IR techniques that employ semi-analytical forward models and matrix-free iterative solvers like LSQR face a set of inherent trade-offs. First, their speed is often gained by introducing approximations into the forward model, such as treating spherical integrals as planar ones, which can degrade accuracy, particularly in near-field geometries. Second, their performance is sensitive to the ratio between the sampling rate and the voxel size, potentially failing at high resolutions where the signal per voxel is insufficiently sampled. Third, the reliance on LSQR-type solvers restricts regularization to smooth functions like Tikhonov (L2) and precludes the powerful non-smooth priors (e.g., L1, total variation) common in modern signal processing. In contrast, SlingBAG's core innovation, its shift to a point cloud framework, inherently sidesteps these challenges. This approach not only avoids the resolution-versus-memory trade-off of a grid but also introduces adaptive optimization of Gaussian balls (e.g., splitting and destroying), offering a distinct and potentially more robust approach to solving large-scale, ill-posed inverse problems in PAI.
SlingBAG demonstrates high quantitative reconstruction precision for both the locations and pressure values of PA sources. The 3D PAI results from SlingBAG can also be used to generate artificial temporal signals for sensors at new positions with high accuracy (Supplementary Fig. 1), which is very useful for studying sensor interpolation methods for PAI. The high reconstruction accuracy is attributable to the fact that the point cloud model allows sources to converge to finer spatial locations, because the optimization of all Gaussian balls' positions is spatially continuous. Furthermore, the computational steps are completely differentiable, facilitating easy gradient propagation and parameter updates.
It is worth noting that the multiorder spherical discretization strategy employed in our differentiable rapid radiator is not the sole option; rather, it represents a trade-off between the required accuracy of our study and the associated memory and computational costs. The parameters for the multiorder spherical discretization are determined by a Gaussian distribution in statistics (explained in Methods), which could be extended to higher orders to enhance the model’s accuracy, albeit at the cost of increased computational demand.
It is important to point out that the learning rate during the iterative process has a significant impact on the model's convergence speed and reconstruction quality. Based on our experiments, it is appropriate to set the learning rates for the peak initial pressure and the standard deviation to 0.05 and 4 × 10⁻⁵, respectively, during the coarse reconstruction phase. In the fine reconstruction phase, the suggested learning rates for the 3D coordinates, peak initial pressure, and standard deviation are 4 × 10⁻⁶, 4 × 10⁻³, and 4 × 10⁻⁶, respectively. Interestingly, during the real mouse brain reconstruction, we observed that although further reducing the learning rate could decrease the loss, the overall image quality deteriorated instead, with more noise introduced. This occurs because, under excessively small learning rates, the model tends to reconstruct the noise present in the original signal, resulting in noisier reconstructions.
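These schedules map naturally onto per-parameter-group optimizer settings. The sketch below uses PyTorch's Adam as an assumed optimizer (the paper does not name one); the learning rates are those suggested above.

```python
import torch

N = 600_000  # number of Gaussian balls (illustrative)
p0 = torch.rand(N, requires_grad=True)            # peak initial pressure
a0 = torch.full((N,), 1e-4, requires_grad=True)   # standard deviation (size)
xyz = torch.rand(N, 3, requires_grad=True)        # 3D coordinates

# Coarse stage: positions frozen; only p0 and a0 are updated.
coarse_opt = torch.optim.Adam([
    {"params": [p0], "lr": 0.05},
    {"params": [a0], "lr": 4e-5},
])

# Fine stage: all attributes updated with the suggested learning rates.
fine_opt = torch.optim.Adam([
    {"params": [xyz], "lr": 4e-6},
    {"params": [p0], "lr": 4e-3},
    {"params": [a0], "lr": 4e-6},
])
```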
Notably, the current implementation of SlingBAG does not use any explicit regularization; we employ only the fidelity term for iterative optimization. Both simulation and in vivo experiments demonstrate that SlingBAG achieves high-quality reconstruction even in the absence of explicit regularization. In traditional voxel grid-based iterative algorithms, by contrast, explicit regularization is commonly used, and usually indispensable, to constrain the iterative process and facilitate image reconstruction in ill-posed inverse problems. The reason SlingBAG does not rely on regularization as heavily as traditional iterative methods is that representing the scene with discrete Gaussian spheres naturally encourages sparse solutions, especially for vascular and similar structures; this is analogous to imposing an L0/L1-type sparsity constraint. Furthermore, the adaptive density optimization process, which removes points with very low p0 and enforces lower (destroying) and upper (splitting) bounds on a0, implicitly regularizes model complexity and helps suppress overfitting to noise. As a result, SlingBAG inherently possesses implicit sparsity regularization. It should be emphasized that SlingBAG remains compatible with explicit regularization techniques. By introducing explicit regularization terms, one can incorporate prior constraints into the image reconstruction process to achieve reconstructions with lower noise, a smoother overall appearance, or sharper edges (depending on the choice of regularization). In Supplementary Note 6, we provide a detailed analysis of SlingBAG combined with Laplacian regularization to demonstrate this compatibility. With appropriate regularization parameters, the reconstruction results of SlingBAG can be further improved (see Supplementary Fig. 7). Additionally, we examined the influence of the transducer's electrical impulse response (EIR) on SlingBAG reconstruction (Supplementary Note 9). Our study demonstrated that the influence of EIR on SlingBAG reconstruction is minor, especially for real in vivo data.
Despite these unique advantages, SlingBAG is currently valid only in an acoustically homogeneous, non-viscous medium, i.e., one with uniform sound speed and no viscous attenuation. In future work, we will first tackle the non-uniform sound speed issue. One potential approach is to use multiorder spherical harmonic (SH) functions to represent the anisotropic distribution of sound speed. Specifically, for each Gaussian-distributed spherical source, in addition to the peak value, standard deviation, and mean, we will introduce SH coefficients representing the orientation-dependent average sound speed. This orientation-dependent average sound speed will be computed using SH to model the acoustic radiation in various azimuth and elevation directions. The SH coefficients will be iteratively optimized, at the expense of a greater computational load.
Owing to its IR-based nature, the SlingBAG algorithm is highly versatile and can be easily applied to a variety of array configurations, such as planar arrays, spherical arrays, and many more customized geometries, making it a powerful tool for 3D PAI reconstruction. With its exceptional imaging quality, reduced demand for computational resources, and the significant reduction in system cost enabled by extremely sparse arrays, the SlingBAG algorithm can greatly foster large-scale 3D PAI implementations, such as real-time whole-body small animal imaging, large-scale peripheral vascular imaging, and 3D PA imaging of breast or thyroid tumors.
Methods
Differentiable rapid radiator based on multiorder spherical discretization
Inspired by differentiable rendering techniques from computer graphics, we propose the concept of "differentiable acoustic radiation" for PA imaging. Our model explicitly represents the 3D PA scene as a collection of isotropic Gaussian-distributed PA sources, stored as a point cloud (Fig. 7a). Each Gaussian source is characterized by three attributes: its peak initial pressure p0, standard deviation a0, and 3D coordinates (xs, ys, zs).
Fig. 7 The overview framework of sliding Gaussian ball adaptive growth algorithm. [Images not available. See PDF.]
a The SlingBAG pipeline. b The forward simulation of a differentiable rapid radiator. c Adaptive growth optimization based on iterative point cloud.
To build an efficient and differentiable forward model, we approximate each Gaussian source as a superposition of multiorder concentric homogeneous spheres. This discretization is based on the analytical solution for PA wave propagation from a uniform spherical source47. Specifically, we use a 10th-order spherical discretization following the 3σ rule of the Gaussian distribution (Fig. 7b). The exact radii and pressure weightings for this discretization are detailed in the Supplementary Note 10.
The analytical solution involves a non-differentiable Heaviside step function u(x). To enable gradient-based optimization, we approximate it with a high-precision, differentiable error-function surrogate, $u(x) \approx \frac{1}{2}\left[1 + \operatorname{erf}(x/\epsilon)\right]$, where the small parameter $\epsilon$ sets the transition width. This yields a fully differentiable forward propagation expression p(R, t) for a single Gaussian source, which is a linear superposition of the signals from the 10 discretized spheres:

$$p(R, t) = \sum_{i=1}^{10} \frac{p_0^{(i)}\,(R - v_s t)}{2R}\; u\!\left(a_0^{(i)} - \left|R - v_s t\right|\right) \quad (1)$$

where $a_0^{(i)}$ and $p_0^{(i)}$ are the radius and initial pressure of the i-th-order sphere (detailed in Supplementary Note 10), R is the distance from the sphere center (xs, ys, zs) to the sensor, vs is the speed of sound, and t is time. This formulation is fully differentiable with respect to all source attributes (p0, a0, xs, ys, zs), allowing us to analytically derive the gradients for end-to-end optimization via back-propagation. The complete derivations of the partial derivatives are provided in Supplementary Note 11. Both the forward and backward models are implemented with GPU acceleration using Taichi53 and CUDA.
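As an illustration of how Eq. (1) maps to code, the PyTorch-style sketch below evaluates the pressure trace of one Gaussian ball at one sensor. The per-sphere radii and pressures are placeholders (the exact values follow the discretization in Supplementary Note 10), and the transition width eps is an assumed value.

```python
import torch

def smooth_step(x, eps=1e-5):
    # Differentiable error-function surrogate for the Heaviside step u(x).
    return 0.5 * (1.0 + torch.erf(x / eps))

def rapid_radiator(p_i, a_i, src, sensor, t, vs=1500.0):
    """Pressure trace from one Gaussian ball, as a sum over its 10 spheres.

    p_i, a_i: (10,) tensors of per-sphere initial pressures and radii
    (placeholders here; see Supplementary Note 10 for the exact values).
    src, sensor: (3,) positions in meters; t: (T,) time samples in seconds.
    """
    R = torch.linalg.norm(sensor - src)   # source-to-sensor distance
    tau = R - vs * t                      # retarded coordinate R - v_s t
    # Uniform-sphere solution p = p_i * tau / (2R), gated by the smooth step.
    return sum(p * tau / (2 * R) * smooth_step(a - torch.abs(tau))
               for p, a in zip(p_i, a_i))
```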
Sliding Gaussian ball adaptive growth (SlingBAG) point cloud model
The overview of the SlingBAG reconstruction algorithm is shown in Fig. 7a. We first randomly initialize the Gaussian ball point cloud within the field of view. The model then iteratively updates all attributes by minimizing the L2 loss between the simulated signals from the differentiable rapid radiator and the experimentally acquired sensor data.
The optimization proceeds in two successive stages: a coarse reconstruction stage and a fine reconstruction stage. The coarse stage rapidly converges by updating only the standard deviation a0 and peak pressure p0. The fine stage then refines the reconstruction by optimizing all attributes, including the spatial coordinates (xs, ys, zs).
A key feature of SlingBAG is the adaptive density optimization of the point cloud (Fig. 7c). During optimization, points are destroyed (pruned) if their pressure or size is below a threshold, saving computational resources. Conversely, points are split into smaller spheres if their size (a0) is too large, and duplicated in regions with high position gradients during the fine stage. This adaptive process dynamically adjusts the density of the point cloud, allowing the model to refine details precisely where needed. This facilitates high-quality reconstructions even from sparse sensor data. The detailed pseudocode, optimization hyperparameters, and stopping criteria for the algorithm are provided in the Supplementary Alg. 1.
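A condensed sketch of one adaptive density update is shown below. The threshold values are hypothetical placeholders; the actual hyperparameters are listed in Supplementary Alg. 1.

```python
import torch

def densify_and_prune(p0, a0, xyz, pos_grad,
                      min_p0=1e-3, max_a0=5e-4, grad_th=1e-4, step=1e-4):
    """One adaptive density update of the Gaussian-ball point cloud."""
    # Destroy: prune balls whose peak pressure is negligible.
    keep = p0 > min_p0
    p0, a0, xyz, g = p0[keep], a0[keep], xyz[keep], pos_grad[keep]

    big = a0 > max_a0              # oversized balls -> split
    hot = g.norm(dim=1) > grad_th  # large position gradient -> duplicate

    # Split: shrink oversized balls and append a second half-size copy.
    a0 = torch.where(big, a0 / 2, a0)
    p0_new, a0_new, xyz_new = [p0, p0[big]], [a0, a0[big]], [xyz, xyz[big]]

    # Duplicate: clone high-gradient balls, nudged along the descent
    # direction so the cloud "slides" toward unexplained signal.
    d = g[hot] / (g[hot].norm(dim=1, keepdim=True) + 1e-12)
    p0_new.append(p0[hot])
    a0_new.append(a0[hot])
    xyz_new.append(xyz[hot] - step * d)

    return torch.cat(p0_new), torch.cat(a0_new), torch.cat(xyz_new)
```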
Physically-based point cloud-to-voxel grid shader
The iterative reconstruction yields a 3D point cloud (Fig. 8b), which is not ideal for standard visualization. We therefore developed a "point cloud-to-voxel grid shader" to convert the final optimized point cloud into a conventional 3D voxel grid, the format widely used for 3D visualization of PA imaging results.
Fig. 8 The discretization and conversion of point cloud-based Gaussian balls. [Images not available. See PDF.]
a The tenth-order spherical discretization of a Gaussian-distributed spherical PA source. b Comparison of voxel grid and point cloud-based Gaussian ball representation of a PA scene. c The conversion of the Gaussian ball from point cloud-to-voxel grid.
The shader first initializes a 3D voxel grid of size (X, Y, Z) and aligns its coordinate space with the point cloud’s coordinate space. As illustrated in Fig. 8c, the shader then iterates over every voxel (vx, vy, vz) in the grid. For each voxel, its final intensity I(vx, vy, vz) is calculated by accumulating the pressure contributions from all N Gaussian balls in the final point cloud.
The contribution from a single Gaussian ball j (with center $P_j$, peak pressure $p_0^{(j)}$, and standard deviation $a_0^{(j)}$) to a voxel at position v = (vx, vy, vz) is based on the tenth-order spherical discretization shown in Fig. 8a. The intensity is calculated as follows:

$$I(v_x, v_y, v_z) = \sum_{j=1}^{N} \sum_{k=1}^{10} w_k\, p_0^{(j)}\, \mathbf{1}\!\left\{ d_j \le s_k\, a_0^{(j)} \right\} \quad (2)$$

where $\mathbf{1}\{\cdot\}$ is the indicator function, which is 1 if the condition (the voxel center lies within the k-th sphere) holds and 0 otherwise. The distance between the voxel center and the point center is $d_j = \lVert v - P_j \rVert_2$. The parameters $w_k$ and $s_k$ represent the pressure weights and radius scales of the ten spheres. This process effectively renders the physical pressure distribution of each Gaussian ball onto the 3D voxel grid. By accumulating these contributions, we achieve a comprehensive and detailed 3D visualization that accurately reflects the optimized point cloud data. The detailed pseudocode for this shader is available in Supplementary Alg. 2.
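A direct (dense) sketch of this shader follows; `scales` and `weights` correspond to s_k and w_k. For clarity it evaluates every ball against every voxel, whereas a practical implementation would restrict each ball to nearby voxels.

```python
import torch

def shade_voxels(centers, p0, a0, grid_shape, origin, voxel_size,
                 scales, weights):
    """Accumulate Gaussian-ball contributions onto a voxel grid per Eq. (2)."""
    idx = torch.stack(torch.meshgrid(
        *[torch.arange(n) for n in grid_shape], indexing="ij"), dim=-1)
    v = origin + (idx.float() + 0.5) * voxel_size      # voxel centers (X,Y,Z,3)
    grid = torch.zeros(grid_shape)
    for j in range(len(p0)):                           # each Gaussian ball
        d = torch.linalg.norm(v - centers[j], dim=-1)  # d_j = ||v - P_j||_2
        for w_k, s_k in zip(weights, scales):
            grid += w_k * p0[j] * (d <= s_k * a0[j]).float()
    return grid
```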
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Acknowledgements
Changhui Li discloses support for the research of this work from the National Key R&D Program of China (2023YFC2411700 [C.L.] and 2017YFE0104200 [C.L.]) and the Beijing Natural Science Foundation (7232177 [C.L.]); Yao Yao discloses support from the National Natural Science Foundation of China (62472213 [Y.Y.]); Chulhong Kim discloses support from the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2020R1A6A1A03047902 [C.K.]). We would like to express our gratitude to Professor Chao Tian and Mr. Zhijian Tan from the University of Science and Technology of China for their selfless support. We also thank Professor Liangzhong Xiang and Mr. Prabodh Kumar Pandey from the University of California for their selfless assistance. The computational resources of this work were supported by the Biomedical Computing Platform of the National Biomedical Imaging Center, Peking University.
Author contributions
S.L., Y.W., and J.G. conceived and designed the study. C.K. and S.C. provided the experimental data. Y.Z. and Q.C. contributed to the design of the simulation experiments. C.L. and Y.Y. supervised the study. All of the authors contributed to writing the paper.
Peer review
Peer review information
Nature Communications thanks Xiran Cai and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. A peer review file is available.
Data availability
The data supporting the findings of this study are provided within the Article and its Supplementary Information. The raw data of the simulated hand vessel experiments have been deposited at: https://github.com/JaegerCQ/SlingBAG/tree/main/data. The in vivo animal experimental data generated by Professor Kim’s lab in this study are subject to access restrictions for confidentiality reasons. Researchers interested in accessing these data may request permission from Professor Kim under the terms of the data-sharing agreement (contact: [email protected]).
Code availability
The source code for SlingBAG, along with supplementary materials and demonstration movies, has been made publicly available in the following GitHub repository: https://github.com/JaegerCQ/SlingBAG54.
Competing interests
The authors declare no competing interests.
Supplementary information
The online version contains supplementary material available at https://doi.org/10.1038/s41467-025-66855-w.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
1. Wang, L. V. & Hu, S. Photoacoustic tomography: in vivo imaging from organelles to organs. Science 335, 1458–1462 (2012).
2. Park, J. et al. Clinical translation of photoacoustic imaging. Nat. Rev. Bioeng. 3, 193–212 (2025).
3. Assi, H. et al. A review of a strategic roadmapping exercise to advance clinical translation of photoacoustic imaging: from current barriers to future adoption. Photoacoustics 32, 100539 (2023).
4. Deán-Ben, X. L., Gottschalk, S., Mc Larney, B., Shoham, S. & Razansky, D. Advanced optoacoustic methods for multiscale imaging of in vivo dynamics. Chem. Soc. Rev. 46, 2158–2198 (2017).
5. Lin, L. & Wang, L. V. The emerging role of photoacoustic imaging in clinical oncology. Nat. Rev. Clin. Oncol. 19, 365–384 (2022).
6. Ntziachristos, V. Addressing unmet clinical need with optoacoustic imaging. Nat. Rev. Bioeng. 3, 182–184 (2024).
7. Wray, P., Lin, L., Hu, P. & Wang, L. V. Photoacoustic computed tomography of human extremities. J. Biomed. Opt. 24, 026003 (2019).
8. Li, S. et al. Photoacoustic imaging of peripheral vessels in extremities by large-scale synthetic matrix array. J. Biomed. Opt. 29, S11519 (2024).
9. Wang, M. et al. Functional photoacoustic/ultrasound imaging for the assessment of breast intraductal lesions: preliminary clinical findings. Biomed. Opt. Express 12, 1236–1246 (2021).
10. Han, T. et al. A three-dimensional modeling method for quantitative photoacoustic breast imaging with handheld probe. Photoacoustics 21, 100222 (2021).
11. Sun, Y., Wang, Y., Li, W. & Li, C. Real-time dual-modal photoacoustic and fluorescence small animal imaging. Photoacoustics 36, 100593 (2024).
12. Matsumoto, Y. et al. Label-free photoacoustic imaging of human palmar vessels: a structural morphological analysis. Sci. Rep. 8, 786 (2018).
13. Matsumoto, Y. et al. Visualising peripheral arterioles and venules through high-resolution and large-area photoacoustic imaging. Sci. Rep. 8, 14930 (2018).
14. Ivankovic, I., Merčep, E., Schmedt, C.-G., Deán-Ben, X. L. & Razansky, D. Real-time volumetric assessment of the human carotid artery: handheld multispectral optoacoustic tomography. Radiology 291, 45–50 (2019).
15. Deán-Ben, X. L. & Razansky, D. Portable spherical array probe for volumetric real-time optoacoustic imaging at centimeter-scale depths. Opt. Express 21, 28062–28071 (2013).
16. Nagae, K. et al. Real-time 3D photoacoustic visualization system with a wide field of view for imaging human limbs. F1000Res. 7, 1813 (2018).
17. Kim, W., Choi, W., Ahn, J., Lee, C. & Kim, C. Wide-field three-dimensional photoacoustic/ultrasound scanner using a two-dimensional matrix transducer array. Opt. Lett. 48, 343–346 (2023).
18. Piras, D., Xia, W., Steenbergen, W., van Leeuwen, T. G. & Manohar, S. Photoacoustic imaging of the breast using the Twente photoacoustic mammoscope: present status and future perspectives. IEEE J. Sel. Top. Quantum Electron. 16, 730–739 (2010).
19. Heijblom, M. et al. Visualizing breast cancer using the Twente photoacoustic mammoscope: what do we learn from twelve new patient measurements? Opt. Express 20, 11582–11597 (2012).
20. Wang, Y. & Li, C. Comprehensive framework of GPU-accelerated image reconstruction for photoacoustic computed tomography. J. Biomed. Opt. 29, 066006 (2024).
21. Paltauf, G., Viator, J. A., Prahl, S. A. & Jacques, S. L. Iterative reconstruction algorithm for optoacoustic imaging. J. Acoust. Soc. Am. 112, 1536–1544 (2002).
22. Deán-Ben, X. L., Buehler, A., Ntziachristos, V. & Razansky, D. Accurate model-based reconstruction algorithm for three-dimensional optoacoustic tomography. IEEE Trans. Med. Imaging 31, 1922–1928 (2012).
23. Wang, K., Su, R., Oraevsky, A. A. & Anastasio, M. A. Investigation of iterative image reconstruction in three-dimensional optoacoustic tomography. Phys. Med. Biol. 57, 5399 (2012).
24. Wang, K., Schoonover, R. W., Su, R., Oraevsky, A. & Anastasio, M. A. Discrete imaging models for three-dimensional optoacoustic tomography using radially symmetric expansion functions. IEEE Trans. Med. Imaging 33, 1180–1193 (2014).
25. Huang, C., Wang, K., Nie, L., Wang, L. V. & Anastasio, M. A. Full-wave iterative image reconstruction in photoacoustic tomography with acoustically inhomogeneous media. IEEE Trans. Med. Imaging 32, 1097–1110 (2013).
26. Zhu, J. et al. Mitigating the limited view problem in photoacoustic tomography for a planar detection geometry by regularized iterative reconstruction. IEEE Trans. Med. Imaging 42, 2603–2615 (2023).
27. Huynh, N. T. et al. A fast all-optical 3D photoacoustic scanner for clinical vascular imaging. Nat. Biomed. Eng. 9, 638–655 (2024).
28. Treeby, B. E. & Cox, B. T. k-Wave: MATLAB toolbox for the simulation and reconstruction of photoacoustic wave fields. J. Biomed. Opt. 15, 021314 (2010).
29. Ding, L., Deán-Ben, X. L. & Razansky, D. Efficient 3-D model-based reconstruction scheme for arbitrary optoacoustic acquisition geometries. IEEE Trans. Med. Imaging 36, 1858–1867 (2017).
30. Ding, L., Razansky, D. & Deán-Ben, X. L. Model-based reconstruction of large three-dimensional optoacoustic datasets. IEEE Trans. Med. Imaging 39, 2931–2940 (2020).
31. Rosenthal, A., Razansky, D. & Ntziachristos, V. Fast semi-analytical model-based acoustic inversion for quantitative optoacoustic tomography. IEEE Trans. Med. Imaging 29, 1275–1285 (2010).
32. Pandey, P. K., Wang, S., Sun, L., Xing, L. & Xiang, L. Model-based 3-D X-ray induced acoustic computerized tomography. IEEE Trans. Radiat. Plasma Med. Sci. 7, 532–543 (2023).
33. Paige, C. C. & Saunders, M. A. LSQR: an algorithm for sparse linear equations and sparse least squares. ACM Trans. Math. Softw. 8, 43–71 (1982).
34. Hauptmann, A. et al. Model-based learning for accelerated, limited-view 3-D photoacoustic tomography. IEEE Trans. Med. Imaging 37, 1382–1393 (2018).
35. Hauptmann, A. & Cox, B. Deep learning in photoacoustic tomography: current approaches and future directions. J. Biomed. Opt. 25, 112903 (2020).
36. Zheng, W. et al. Deep learning enhanced volumetric photoacoustic imaging of vasculature in human. Adv. Sci. 10, 2301277 (2023).
37. Mildenhall, B. et al. NeRF: representing scenes as neural radiance fields for view synthesis. Commun. ACM 65, 99–106 (2021).
38. Kerbl, B., Kopanas, G., Leimkühler, T. & Drettakis, G. 3D Gaussian splatting for real-time radiance field rendering. ACM Trans. Graph. 42, 139 (2023).
39. Weiss, S. & Westermann, R. Differentiable direct volume rendering. IEEE Trans. Vis. Comput. Graph. 28, 562–572 (2021).
40. Yariv, L., Gu, J., Kasten, Y. & Lipman, Y. Volume rendering of neural implicit surfaces. Adv. Neural Inf. Process. Syst. 34, 4805–4815 (2021).
41. Barron, J. T. et al. Mip-NeRF: a multiscale representation for anti-aliasing neural radiance fields. In Proc. IEEE/CVF International Conference on Computer Vision 5855–5864 (IEEE, 2021).
42. Chen, A., Xu, Z., Geiger, A., Yu, J. & Su, H. TensoRF: tensorial radiance fields. In European Conference on Computer Vision 333–350 (Springer, 2022).
43. Yao, Y. et al. NeILF: neural incident light field for physically-based material estimation. In European Conference on Computer Vision 700–716 (Springer, 2022).
44. Müller, T., Evans, A., Schied, C. & Keller, A. Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans. Graph. 41, 1–15 (2022).
45. Yao, B. et al. Sparse-view signal-domain photoacoustic tomography reconstruction method based on neural representation. Preprint at arXiv:2406.17578 (2024).
46. Zwicker, M., Pfister, H., van Baar, J. & Gross, M. EWA splatting. IEEE Trans. Vis. Comput. Graph. 8, 223–238 (2002).
47. Xu, M. & Wang, L. V. Universal back-projection algorithm for photoacoustic computed tomography. Phys. Rev. E 71, 016706 (2005).
48. Choi, S. et al. Deep learning enhances multiparametric dynamic volumetric photoacoustic computed tomography in vivo (DL-PACT). Adv. Sci. 10, 2202089 (2023).
49. Choi, W. et al. Recent advances in contrast-enhanced photoacoustic imaging: overcoming the physical and practical challenges. Chem. Rev. 123, 7379–7419 (2023).
50. Kim, J. et al. 3D multiparametric photoacoustic computed tomography of primary and metastatic tumors in living mice. ACS Nano 18, 18176–18190 (2024).
51. Kim, J. et al. Deep learning acceleration of multiscale superresolution localization photoacoustic imaging. Light Sci. Appl. 11, 131 (2022).
52. Beck, A. & Teboulle, M. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. Image Process. 18, 2419–2434 (2009).
53. Hu, Y., Li, T.-M., Anderson, L., Ragan-Kelley, J. & Durand, F. Taichi: a language for high-performance computation on spatially sparse data structures. ACM Trans. Graph. 38, 201 (2019).
54. Li, S. et al. SlingBAG: point cloud-based iterative algorithm for large-scale 3D photoacoustic imaging. Zenodo https://doi.org/10.5281/zenodo.17440447 (2025).
© The Author(s) 2025. This work is published under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (http://creativecommons.org/licenses/by-nc-nd/4.0/).