1. Introduction
Recent advancements in underwater acoustic technology have made the detection of complex non-uniform flow fields a key focus for researchers. Imaging sonar plays a crucial role in measuring underwater flow fields and forms the basis for real-time, efficient assessments of these environments [1,2,3,4]. More specifically, multibeam echosounder systems (MBESs), which feature wide coverage and high resolution, are commonly used to assess complex underwater environments [5,6,7].
MBESs employ phased array and beamforming technology to enable real-time measurements of underwater three-dimensional space [8,9,10]. The transmitting and receiving transducers utilize curved and linear arrays, respectively, enabling the scanning of multiple transmission angles and ensuring wide coverage. Beamforming technology is employed to capture time-domain signals from various receiving beam directions. These signals are subsequently processed to generate sonar images, offering an intuitive and detailed representation of the characteristics of the observed area [11].
In underwater monitoring applications, MBESs generally present two-dimensional slice maps in the horizontal and vertical directions, including time history maps and echo intensity maps at various radial distances within a single emission fan [12,13,14]. However, these two-dimensional images do not adequately illustrate the benefits of wide coverage and high precision associated with MBESs, nor do they fully capture features such as underwater turbulence [15,16,17]. To effectively monitor the water disturbance field and its dynamic changes, utilizing the acquired three-dimensional echo data to generate underwater images is essential [18]. Two-dimensional visualization of three-dimensional data facilitates a comprehensive and detailed representation of underwater images from multiple perspectives. This approach effectively captures and illustrates intricate features and dynamic variations within flow fields, enabling a deeper understanding of their complex structures and behaviors.
MBESs acquire intensity images at various radial distances corresponding to each receiving beam angle across different emission-angle beam sectors in polar coordinates. However, as the radial distance increases, the spatial resolution across the transmitting and receiving beam angles decreases, leading to reduced image quality and significant challenges in target identification at extended ranges [19]. Moreover, capturing two-dimensional slice images at consistent horizontal or vertical distances is inherently impractical in polar coordinates, further limiting multidimensional visualization and data analysis capabilities. To address these challenges, interpolation methods are often employed to facilitate data analysis [20,21,22]. These methods efficiently transform data from the polar coordinate system to the Cartesian coordinate system, ensuring consistent resolution within the same dimension while enhancing the accuracy and interpretability of sonar images. Such processing steps substantially expand the applicability of MBESs in complex, non-uniform flow field environments and provide a solid foundation for subsequent image rendering and advanced processing.
The two-dimensional visualization of three-dimensional data offers an intuitive approach for monitoring underwater flow fields. Volume rendering is a crucial technique for representing three-dimensional data as images, as it effectively captures the details and dynamic characteristics of underwater flow fields, making it particularly suitable for visualizing complex datasets [23,24,25,26]. Among volume rendering techniques, the ray-casting and ray-marching algorithms are widely utilized. In the context of ray-tracing algorithms, Liu et al. derived the relationship between the ray tracing error and the observation elevation angle and established a stochastic model for acoustic positioning, effectively reducing the impact of low-elevation-angle observations on positioning accuracy [27]. The ray-casting algorithm in particular is preferred due to its superior image quality and efficient rendering speed [28,29,30,31,32,33,34,35]. By modeling light propagation within a three-dimensional space, this algorithm effectively captures and renders fine-grained features within the data, thereby enhancing the readability and realism of the visualized images. Moreover, the ability to adjust parameters such as perspective and opacity allows for the highlighting of regions of interest or detailed information, further improving the effectiveness of data interpretation and analysis.
The transfer function plays a critical role in mapping data values to opacity and color, significantly influencing the visualization outcome. Traditional transfer functions, which rely solely on data values, face inherent limitations when dealing with complex scenarios such as intricate boundaries, multi-material interfaces, and high-dimensional data characteristics [36,37,38]. To address these challenges, histogram-based transfer functions have been employed to delineate tissue properties, enhancing the precision of overlapping tissue classification [39,40]. Moreover, multidimensional transfer functions, incorporating first- and second-order derivatives of the gradient direction, improve material classification accuracy and effectively manage complex boundaries [41,42]. A novel multidimensional transfer function was introduced for visualizing flow fields, leveraging flow field parameters such as velocity, gradient, curl, helicity, and divergence [43]. This approach enables the efficient extraction and tracking of critical flow field features. Furthermore, a semi-automatic transfer-function-creation method was developed, significantly streamlining the visualization process [44]. Additionally, the integration of ray-casting methods with GPU acceleration technology facilitates real-time extraction and visualization of dynamic and complex structures, offering enhanced capabilities for practical applications [45,46,47,48].
The original data collected by MBESs is processed using beamforming and radial layering techniques to obtain acoustic parameters, such as backscattering strength and radial flow velocity, for all measurement units within the transmission fan. During the radial layering of time-domain signals at different receiving angles, the layer thickness is determined by the pulse width of the transmitted signal. Compared to horizontal layering methods, this radial layering approach eliminates the need to consider the utilization efficiency of signals within a single layer or the overlap rate between layers.
Three-dimensional data obtained from MBESs need to be visualized in the form of images to facilitate the intuitive evaluation of water body disturbance characteristics. However, the visualization of MBES three-dimensional measurement data faces several challenges: (1) with increasing distance, the resolution between adjacent transmission and reception angles gradually decreases, leading to spatial resolution inconsistencies; (2) representation in polar coordinates struggles to accurately capture two-dimensional acoustic cross-sectional images at the same depth, horizontal distance, or vertical distance; and (3) developing a visualization method that not only comprehensively captures three-dimensional information but also effectively conveys critical data features in detail remains a significant challenge.
This paper presents a visualization method for three-dimensional non-uniform sampled data from MBESs. It aims to resolve the challenge of image representation of underwater turbulence in three-dimensional datasets. MBESs exhibit inconsistent distance resolution between the transmitting and receiving beam angles, and conventional two-dimensional images do not adequately represent three-dimensional spatial information. The proposed method effectively uses the detailed and comprehensive three-dimensional data obtained from the MBES to visualize three-dimensional information. First, beamforming algorithms combined with layered processing are applied to the original echo data received by the MBES, yielding the echo intensity of each spatial measurement block. By using the active sonar equation, it is possible to derive the backscattering strength, which represents the scattering characteristics of the measurement block. In addition, applying linear interpolation and arc length weighted interpolation algorithms addresses the problem of inconsistent distance resolution in the transmission and reception beams. Finally, we generate an opacity transfer function in the ray-casting algorithm to enable detailed image display and the analysis of complex three-dimensional water-body data. The contributions of the proposed method can be summarized as follows:
1. A method was proposed for converting three-dimensional original echo data into two-dimensional image visualization to address the limitations of conventional data display methods.
2. The integration of linear and arc length weighted interpolation enhances the clarity of sonar images and addresses the problem of inconsistent distance resolution.
3. The generated opacity transfer function of the ray-casting algorithm facilitates the visualization of complex water-body data, effectively separating the target from the background while revealing detailed information of the target region.
The remainder of this paper is organized as follows: Section 2 describes materials and methods, including the ray-casting method, the formulation of an opacity transfer function derived from data values, gradients, and second derivatives, the linear and arc length weighted interpolation algorithm, complexity analysis, preprocessing of MBESs, and algorithm description. Numerical simulations and test verification are covered in Section 3. Section 4 presents the discussion and conclusion of this paper.
2. Materials and Methods
2.1. Three-Dimensional Data Visualization Based on the Ray-Casting Algorithm
The ray-casting algorithm proposed by Levoy (a direct volume-rendering method utilizing image sequences) has been extensively employed in various domains, including medical imaging and computer vision [49,50]. The fundamental concept of this algorithm is the light-absorption model, articulated as follows:
I(a, b) = ∫_a^b c(s) exp(−∫_a^s μ(t) dt) ds (1)
where I represents the light intensity transmitted from voxel a to voxel b within the volume data, c(s) denotes the contribution of voxel light intensity in the light’s direction, and μ(t) is the attenuation function governing the intensity contribution along the light direction. In the discrete case, (1) can be viewed as the compositing operation of color contributions from all sampled points along the ray direction. Therefore, the final pixel color value obtained by forward compositing along the direction of the viewing ray can be expressed as follows:
C = Σ_{i=1}^{n} c_i α_i ∏_{j=1}^{i−1} (1 − α_j) (2)
where c_i represents the color value and α_i denotes the opacity of the ith sampling point in the light beam direction. Figure 1 shows the discrete sampling in the direction of the ray for the absorption model. For the visualization process, rendering relies on color values, while opacity governs the visibility throughout the rendering procedure. There are two ways for this algorithm to synthesize colors (front to back and back to front); the former is introduced here. Figure 2 shows the synthesis process. (2) can be expressed in a recursive form as follows:

C_i = C_{i−1} + (1 − A_{i−1}) α_i c_i (3)
A_i = A_{i−1} + (1 − A_{i−1}) α_i (4)
where C_i represents the opacity-weighted cumulative color value from the first to the ith sampling point, and A_i denotes the cumulative opacity, with initial values of C_0 = 0 and A_0 = 0.

2.2. Opacity Transfer Function Generation Based on Data Values, Gradients, and Second Derivatives
Formulation of transfer functions is a significant area of investigation in three-dimensional data visualization with the ray-casting technique. The primary purpose of the transfer function is to convert scalar values and multidimensional data into color values and opacity. In practical applications, various transfer functions are often designed to enhance data visualization, thereby emphasizing the content of interest [41,51]. The first derivative of scalar data (the gradient) indicates the direction of maximum change of the data, while its magnitude represents the local rate of change of the scalar data. Here, |∇f| denotes the magnitude of the gradient of f, with f signifying the scalar function of the data. Moreover, the second-order derivative, denoted as f″, can more precisely delineate intricate boundaries. Figure 3 illustrates the optimal boundary between homogeneous materials, showing a grayscale image of a homogeneous substance. We extract a line from the grayscale image and plot the curves of the data value, gradient, and second derivative (Figure 3). Figure 3 reveals that the gradient amplitude is comparatively small in the uniform zone but significantly larger in the variable region; at the boundary, the gradient is high, and the second derivative crosses zero.
The gradient derived by Jerrold E. [52] can be expressed as follows:
∇f = (∂f/∂x, ∂f/∂y, ∂f/∂z) (5)
f′_v = ∇f · v, f′_n = ∇f · n = |∇f|, n = ∇f/|∇f| (6)
where f′_v and f′_n represent the first derivatives in direction v and in the gradient direction n, respectively, whereas ∇f denotes the first derivative of f. The second derivative in the direction of the gradient can be expressed as:
f″ ≈ (1/|∇f|) ∇f · ∇|∇f| (7)
Alternatively, it can be derived using the Taylor expansion in the direction of the gradient n as follows:
f″ = (1/|∇f|²) ∇fᵀ H ∇f (8)
where H is the Hessian matrix of f, representing the matrix of f’s second-order partial derivatives. Alternatively, the Laplace approximation may be used to derive the following relation:

f″ ≈ ∇²f = ∂²f/∂x² + ∂²f/∂y² + ∂²f/∂z² (9)
The formulations for these three second-order derivatives possess distinct advantages and limitations. The Hessian matrix approach is the most numerically precise; however, the other methods offer better efficiency in practice while maintaining a reasonable degree of precision. The Laplacian computation is direct and efficient; however, it is susceptible to noise. The second derivative along the gradient direction represents a good compromise between precision and processing complexity.
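As a concrete illustration, the three estimators compared above can be computed on a sampled scalar field with finite differences. The following is a minimal NumPy sketch (function and variable names are ours, not the paper's):

```python
import numpy as np

def gradient_magnitude(f, spacing=1.0):
    """Gradient magnitude |grad f| of a 3-D scalar field via central differences."""
    gx, gy, gz = np.gradient(f, spacing)
    return np.sqrt(gx**2 + gy**2 + gz**2)

def laplacian(f, spacing=1.0):
    """Laplacian of f: cheap second-derivative estimate, but noise-sensitive."""
    gx, gy, gz = np.gradient(f, spacing)
    gxx = np.gradient(gx, spacing, axis=0)
    gyy = np.gradient(gy, spacing, axis=1)
    gzz = np.gradient(gz, spacing, axis=2)
    return gxx + gyy + gzz

def second_derivative_along_gradient(f, spacing=1.0, eps=1e-12):
    """Second directional derivative along the gradient: (grad f)^T H (grad f) / |grad f|^2."""
    g = np.stack(np.gradient(f, spacing))                              # (3, nx, ny, nz)
    H = np.stack([np.stack(np.gradient(g[i], spacing)) for i in range(3)])  # (3, 3, ...)
    num = np.einsum('i...,ij...,j...->...', g, H, g)
    mag2 = np.einsum('i...,i...->...', g, g)
    return num / (mag2 + eps)
```

For a quadratic field such as f = x², all three estimates agree in the interior, which provides a quick sanity check before applying them to measured data.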
The mapping from data values to boundaries can be expressed using the first and second derivatives of scalar data [44]:
p(v) = −σ² h(v) / g(v) (10)
where g(v) represents the average first-order directional derivative over all positions with data value v, while h(v) denotes the average second-order directional derivative across all v. We can determine the range of various target areas upon identifying all boundaries. This article’s research indicates that, during data visualization, it is essential to separately consider the opacity transfer functions of the background, target area, and boundary. To distinguish the target area from the background and accurately observe the intensity variations at the boundary and within the target area, the opacity transfer function must fulfill the following criteria: (1) the background opacity should be set to zero; (2) the boundary area thickness remains constant to enhance visual effects, while the boundary area opacity decreases as data values increase; and (3) the opacity of the target area decreases as the data value increases.
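A piecewise opacity function satisfying these three criteria can be sketched as follows. The thresholds and the linear fall-off are illustrative choices of ours; the paper's transfer function (12) additionally fixes the boundary thickness:

```python
import numpy as np

def opacity_tf(v, g, v_bg, v_max, g_b, alpha_b=0.8, alpha_t=0.4):
    """Illustrative piecewise opacity transfer function.

    - background (v <= v_bg): fully transparent;
    - boundary (v > v_bg and gradient magnitude g >= g_b): high opacity,
      decreasing as the data value grows;
    - target interior (v > v_bg, g < g_b): lower opacity, also decreasing.
    v and g are the normalized data value and gradient magnitude in [0, 1].
    """
    v = np.asarray(v, dtype=float)
    g = np.asarray(g, dtype=float)
    alpha = np.zeros_like(v)                 # criterion (1): background is transparent
    target = v > v_bg
    boundary = target & (g >= g_b)
    # criteria (2) and (3): opacity falls off linearly toward v_max
    fall = np.clip((v_max - v) / (v_max - v_bg), 0.0, 1.0)
    alpha[target] = alpha_t * fall[target]
    alpha[boundary] = alpha_b * fall[boundary]
    return alpha
```

Here alpha_b > alpha_t ensures that boundaries remain visible in front of the semi-transparent target interior.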
Segmented multiple opacity functions can efficiently delineate the target area and emphasize intricate details via image representation. Assuming the data value range D is partitioned into N non-overlapping subsets D_1, …, D_N, it can be expressed as:
D = D_1 ∪ D_2 ∪ … ∪ D_N, D_i ∩ D_j = ∅ (i ≠ j) (11)
In the paper, the transfer function of opacity can be expressed as:
(12)
where the parameters of (12) denote, respectively, the opacity and gradient amplitude of the chosen boundary region, a constant, the boundary value between the target region and the background, and the ratio of the established boundary; r represents the thickness of the boundary.

2.3. Meshing Based on Linear and Arc Length Weighted Interpolation Algorithms
The distance resolution of the receiving and transmitting beam angles diminishes with increasing radial distance. The examination of two-dimensional sectional images at a consistent depth, horizontal distance, and vertical distance presents challenges. Consequently, it is essential to interpolate the three-dimensional data processed by MBESs [53,54,55]. A method combining linear and arc length weighted interpolation techniques is adopted, enabling effective meshing of three-dimensional strength data. This approach facilitates the transformation from the polar coordinate system to the Cartesian coordinate system while maintaining consistent resolution within the same dimension.
The schematic diagram of the transmitting sector in the Cartesian coordinate system is shown in Figure 4. Note that the origins of the rectangular and polar coordinate systems coincide. The transmitting sector is symmetric about the central plane; i.e., the beam with a receiving angle of 0° is projected onto the y-axis. Consider a measuring block A on the transmitting sector whose geometric center lies at radial distance r. At the same radial distance, the corresponding point on the central beam, its projection onto the z-axis, and the projection of the connecting arc onto the horizontal plane are marked in Figure 4. Hence, the conversion relationship between the polar coordinate system and the Cartesian coordinate system can be formulated as:
(13)
(14)
where φ is the transmitting angle of the sector, θ is the receiving beam angle of measuring block A, h is the distance from the projected point to the origin O, and R is the radial distance of measuring block A on the transmitting sector. From (13) and (14), it can be derived that the coordinate value in Cartesian coordinates is
(15)
According to (15), the polar coordinate (φ, θ, R) can be derived from the Cartesian coordinate (x, y, z). The transmitting angles are φ_1, …, φ_M; within a given transmitting angle sector, the receiving beamforming angles are θ_1, …, θ_N; and the radial distances of all layers are R_1, …, R_L. M, N, and L represent the number of transmission angles, the number of reception beam angles, and the number of layers into which the radial distance is divided, respectively. The nearest transmitting angle, receiving beam angle, and radial distance to the query point are found, as shown in Figure 5 and Figure 6. Figure 5 shows the emission sector for a single transmitting angle. Thus, the following relations can be obtained:
(16)
(17)
(18)
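Assuming a spherical-style mapping between the Cartesian query point and the sonar's polar coordinates (the exact relations (13)–(15) depend on the array geometry in Figure 4, so the conversion below is a hypothetical sketch), the nearest sampled transmit angle, beam angle, and radial layer can be located as follows:

```python
import numpy as np

def cart_to_polar(x, y, z):
    """Illustrative spherical-style conversion; the paper's exact mapping
    (Eqs. (13)-(15)) depends on the transducer geometry and is assumed here."""
    R = np.sqrt(x**2 + y**2 + z**2)
    phi = np.arctan2(x, y)        # transmit angle about the y-axis (assumed convention)
    theta = np.arcsin(z / R)      # receive beam angle (assumed convention)
    return phi, theta, R

def nearest_indices(phi, theta, R, phis, thetas, radii):
    """Find the indices of the nearest sampled transmit angle, receive beam
    angle, and radial layer for a query point given in polar coordinates.
    phis, thetas, radii are the sorted 1-D sample grids (M, N, L values)."""
    i = int(np.argmin(np.abs(phis - phi)))
    j = int(np.argmin(np.abs(thetas - theta)))
    k = int(np.argmin(np.abs(radii - R)))
    return i, j, k
```

These indices select the eight surrounding measurement blocks that feed the interpolation step described next in the text.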
The interpolation process for the spatial three-dimensional data is as follows: first, the two neighboring emission fan surfaces are interpolated separately, and then the two interpolation results are interpolated again. The interpolation diagram for a single emission fan is shown in Figure 5. The two corresponding points of the query point lie on the sectors of the two neighboring emission angles. As can be seen from Figure 6, the value at the first corresponding point can be obtained through interpolation of the surrounding four points A, B, C, and D. From the two points B and C, the acoustic magnitude at the intermediate point can be obtained by using arc length weighted interpolation as follows:
(19)
where s_B and s_C are the acoustic quantity values at points B and C, respectively. Similarly, the acoustic value at the second intermediate point is
(20)
The acoustic value at the first corresponding point can then be obtained through linear interpolation of the two intermediate values:
(21)
Following the same process, the acoustic quantity at the corresponding point on the second emission fan can be calculated.
The spatial positions of the two corresponding points are shown in Figure 6. The acoustic quantity at the query point can be obtained from the arc length weighted interpolation of these two values as follows:
(22)
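The two building blocks of this scheme — arc length weighted interpolation between neighboring beam angles at a fixed radius, and linear interpolation along the radial direction — can be sketched as follows (a plain-Python illustration; function and variable names are ours):

```python
def arc_weighted(theta, theta_b, theta_c, s_b, s_c, R):
    """Arc length weighted interpolation between two points B and C lying at
    the same radial distance R but adjacent beam angles (angles in radians).
    Each point is weighted by the opposite arc length, so the nearer point
    contributes more."""
    l_total = R * abs(theta_c - theta_b)
    w_b = R * abs(theta_c - theta) / l_total   # arc from the query angle to C
    w_c = R * abs(theta - theta_b) / l_total   # arc from B to the query angle
    return w_b * s_b + w_c * s_c

def radial_linear(r, r1, r2, s1, s2):
    """Linear interpolation along the radial direction between two layers."""
    t = (r - r1) / (r2 - r1)
    return (1 - t) * s1 + t * s2
```

Composing these per fan and then across the two neighboring fans reproduces the overall structure of Eqs. (19)–(22).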
2.4. Complexity Analysis
For three-dimensional data, the first-order derivative and the second-order derivative along the gradient direction can be precomputed during the preprocessing stage. When the ray direction changes, the precomputed gradient values can be directly utilized to efficiently determine the opacity and color values at the sampling points, thereby accelerating the rendering process.
Assuming the resolution of the visualization image is W × H, the number of rays required for rendering is W × H. The number of sampling points along each ray is K, which is influenced by the size of the three-dimensional data and the sampling step size. As the data volume increases or the sampling step size decreases, K increases correspondingly. Since the computational complexity for each ray is O(K), the overall computational complexity is O(W × H × K).
Increasing the resolution and the number of sampling points can improve the quality of rendered images. However, this improvement comes at the cost of increased computational resource demands.
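The cost structure described above can be seen in a minimal orthographic ray-caster: one ray per pixel and up to K front-to-back compositing steps per ray, giving O(W × H × K) work. This is an illustrative sketch of the technique, not the paper's implementation; the transfer functions are passed in as callables:

```python
import numpy as np

def render(volume, alpha_tf, color_tf, width, height, K):
    """Minimal orthographic ray-caster (rays along axis 0) illustrating
    front-to-back compositing with early ray termination."""
    nx, ny, nz = volume.shape
    image = np.zeros((height, width))
    for py in range(height):
        for px in range(width):
            # map the pixel to a column of the volume (nearest-neighbour)
            iy = py * ny // height
            iz = px * nz // width
            C, A = 0.0, 0.0              # accumulated color and opacity
            for s in range(K):
                ix = s * nx // K
                v = volume[ix, iy, iz]
                a = alpha_tf(v)
                C += (1.0 - A) * a * color_tf(v)
                A += (1.0 - A) * a
                if A >= 0.99:            # early ray termination
                    break
            image[py, px] = C
    return image
```

Early ray termination is what makes the front-to-back ordering attractive in practice: once the accumulated opacity saturates, the remaining samples on the ray contribute nothing visible and can be skipped.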
2.5. Preprocessing of MBESs
An MBES employs a multi-element transmitting and receiving array to enable wide coverage and high resolution. To minimize the sampling interval between data frames and acquire relatively precise spatial real-time data, the system exclusively gathers original data during the experiment without additional data processing. Consequently, in addition to beamforming, preprocessing is essential for the original data collected by an MBES.
2.5.1. Backscattering Strength
The backscattering cross-section per unit volume is denoted as s_v, which reflects the scattering capability of all scatterers within a unit volume. The backscattering strength per unit volume can be derived as follows:
S_v = 10 lg s_v (23)
According to the active sonar equation, it can be written as
EL = SL − 2TL + S_v + 10 lg V (24)
The formula for calculating the source level (SL) is given as follows:
SL = 20 lg U − M + 20 lg d (25)
where U is the root-mean-square (RMS) voltage received by the standard hydrophone, expressed in volts (V); d denotes the distance between the sound source and the hydrophone, measured in meters (m); and M is the sensitivity of the standard hydrophone, expressed in decibels (dB) with a reference value of 1 V/μPa. TL is the transmission loss related to the transducer range r and the attenuation coefficient α. Its calculation expression is

TL = 20 lg r + αr (26)
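Assuming the conventional hydrophone calibration relation SL = 20 lg U − M + 20 lg d and a one-way transmission loss TL = 20 lg r + αr (spherical spreading plus absorption), these two computations can be sketched directly (variable names are ours):

```python
import math

def source_level(u_rms, d, sensitivity_db):
    """Source level in dB: hydrophone RMS voltage back-projected to the
    level at 1 m from the source (sensitivity is typically negative,
    e.g. -180 dB re 1 V/uPa)."""
    return 20 * math.log10(u_rms) - sensitivity_db + 20 * math.log10(d)

def transmission_loss(r, alpha_db_per_m):
    """One-way transmission loss in dB: spherical spreading plus absorption."""
    return 20 * math.log10(r) + alpha_db_per_m * r
```

In the active sonar equation the transmission loss enters twice (2TL), since the signal travels to the measurement block and back.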
Based on the reflection and projection of the scatterer, as discussed in [56], the backscattering strength per unit volume can be expressed as
(27)
where EL denotes the echo level, δ signifies the Dirac function, and s_i is the backscattering coefficient of the ith layer. V denotes the volume of the measurement block. The remaining two terms represent the directionality of the transmitting and receiving arrays, respectively.

2.5.2. Multilayered Methodology
The transmitting array of the MBES was configured as a curved array, which resulted in a transmitting sector characterized by wide coverage. Beamforming technology was used to acquire time-domain signals at various beam angles, and layering of the time-domain signals was conducted depending on the pulse width of the transmitted signal. The red dashed line in Figure 7 illustrates equal-depth layering, in which the window length increases with the absolute value of the beam angle; balancing such a scheme requires considering both the overlap rate between layers and the signal utilization rate within each layer. The equal window-length layering method is therefore used in the proposed method, as illustrated by the blue dots in Figure 7. Beam angles were processed with uniform processing-window lengths, which facilitates the thorough use of signals within each layer.
The processing-window length depends on the pulse width τ of the transmitted signal, so the layer thickness over the radial distance is cτ/2. After beamforming, the layering mode of the time-domain signal is shown in Figure 8. The two coordinate axes are time t and radial distance r, respectively, and the angle between them corresponds to the beam angle of the time-domain signal waveform. The figure also marks the interval between two adjacent transmissions and the time for the MBES to receive the echo signal from a given radial distance. Each unit layer has this thickness, with N denoting the total number of layers.
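Assuming the layer thickness is set by the transmitted pulse width as cτ/2, the radial layer boundaries can be generated with a short helper (a hypothetical sketch; parameter names are ours):

```python
def radial_layers(r_max, pulse_width, c=1500.0):
    """Equal window-length radial layering: each layer spans c*tau/2 in range,
    where tau is the transmitted pulse width and c the sound speed in water.
    Returns the layer boundary ranges [0, d, 2d, ...] up to r_max."""
    d = c * pulse_width / 2.0              # layer thickness
    n = int(r_max // d)                    # number of complete layers
    return [i * d for i in range(n + 1)]
```

Because the thickness is the same for every beam angle, each layer uses the full signal window without inter-layer overlap, which is the advantage of radial over equal-depth layering noted above.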
2.5.3. Histogram Equalization
In order to highlight the target area, histogram equalization was used to enhance image contrast [57,58]. Histogram equalization redistributes the pixel values of the original image using a specific transformation. The redistributed pixel values conform to a uniform distribution and enhance the dynamic range of gray values within the original image’s pixels.
The probability density function of a digital image over its gray level range is
p(r_k) = n_k / N (28)
where r_k denotes the kth gray level, n_k represents the number of pixels in the image whose gray level is r_k, and N signifies the total number of pixels in the image. The cumulative distribution function can be calculated as follows:

s_k = T(r_k) = Σ_{j=0}^{k} p(r_j) (29)
Each gray level r_k in the original image corresponds to an equalized gray level s_k, which can be written as:
(30)
According to probability theory, the cumulative distribution function can be written as
(31)
Therefore, the gray level of the equalized image is
(32)
2.6. Algorithm Description
In this study, an MBES is fabricated to measure water flow fields. The system employs a Mills cross-array configuration, with the transmitter array designed as an arc array and the receiver array as a linear array, as shown in Figure 9. A higher frequency results in faster signal attenuation during propagation but also provides better system resolution. In designing the MBES, a trade-off between low and high frequencies is made to balance the capability for long-range measurements against high resolution. Consequently, a center frequency of 113.6 kHz is selected.
The processing workflow of the proposed algorithm is illustrated in Figure 10. Initially, the original data collected by the MBES are processed to obtain the time-domain signals after beamforming. Preprocessing operations are then performed, including layering, backscattering strength computation, and histogram equalization. Finally, three-dimensional data are visualized using the generated opacity transfer function. Section 3 presents a detailed analysis of the simulation performance of the algorithm and the validation results based on experimental data.
3. Numerical Simulation and Test Verification
3.1. Numerical Simulation
3.1.1. Simulation Setup
In three-dimensional modeling, the space is partitioned into multiple measurement blocks, each representing a scatterer utilized to simulate the propagation and reflection properties of sound waves. This paper employs the methodology outlined in [56] to produce echo signals for each array element. The transmission signal consists of four repeated 7-bit Barker codes, comprising ten cycles at a center frequency of 113.6 kHz. According to the Nyquist sampling theorem, the sampling frequency is set to 769 kHz, and the receiving array comprises 48 elements. The parameters for both the simulation and experimental processes are configured identically, with specific details provided in Table 1. The emission angle range is subdivided into 71 segments.
As shown in Figure 11, assuming the position of the scatterers is
(33)
Depending on the position of the scatterer, the four critical values of the emission angle can be obtained, which are
(34)
(35)
Here, , and the measurement block for radial distance r contains echoes only at the two measurement blocks corresponding to the beam angles at and . When , echoes are present in the measurement blocks within the receiving beam angle range of , where and , with the radial distance of the measurement blocks containing echoes at each angle . For , echoes are present in the measurement blocks within the receiving beam angle range of , specifically at and , with radial distance for the measurement blocks containing echoes at each angle. Throughout the simulation, , , , , , .
3.1.2. Echo Model
The instantaneous backscattering strength of the signals suggested by Medwin and Clay [59] is
(36)
where I_0 represents the emission sound intensity at an equivalent distance of 1 m from the sound source and α denotes the sound absorption coefficient. Throughout the simulation process, these quantities are assigned fixed values and the absorption loss is ignored. The echo level of a microelement can be written as:
(37)
The expression for the echo signal of the ith element, given the transmitted signal s(t), is
x_i(t) = A_i s(t − τ_i) + n(t) (38)
where A_i denotes the signal amplitude of the ith element, τ_i represents the time delay of the ith element of the receiving transducer, and n(t) signifies Gaussian white noise. The echo signal of the mth element is
(39)
Δτ_m = (m − 1) d sin θ / c (40)
where Δτ_m denotes the time delay difference between the mth element and the reference element for receiving the echo signal from the ith element; d represents the distance between adjacent array elements; and c is the sound velocity in water.

3.1.3. Simulation Results
Under simulation conditions, the beam image for a single transmission angle is shown in Figure 12. Figure 12a,b represent two-dimensional images at two different transmission angles. These images are limited to expressing only two-dimensional data information.
A three-dimensional representation of the data was created as a reference image for the algorithm’s processing results in the paper—see Figure 13. The color value denotes the normalized gray level of the backscattering strength. The three-dimensional data underwent initial preprocessing, and then critical values in the background and target regions were identified via iterative thresholding. The image was ultimately rendered in a three-dimensional coordinate system. Note that only the delineated target area is shown in the image, while the background remains empty. Figure 13 illustrates that the target area is positioned on all four sides of a rectangular prism with an approximate side length of 8 , which is consistent with the parameters established in the simulation conditions.
The simulation signal was analyzed using the algorithm delineated in the paper, with the results presented in Figure 14 and Figure 15. Figure 14a–c show the joint histogram of data values, gradient amplitude versus data values, and the second derivative versus data values, respectively. The background area (A) and the target area (B) are visible in Figure 14a. It is evident from Figure 14b that, despite significant noise in the joint histogram, certain boundaries between the background and the target area (as well as internal boundaries within the target) can be seen. In region A of Figure 14a, the area with lower intensity values represents the section not assessed by the MBES during the meshing process. This section is characterized by minimal alterations and low gradient values, which corresponds to A in Figure 14b. The region with elevated intensity values represents the background area assessed by the MBES, while the area exhibiting minimal variations corresponds to B in Figure 14b. At the interface of regions with elevated and diminished intensity values in the background, the intensity values show significant variation and the gradient amplitude is pronounced, which aligns with curve F in Figure 14b. In the target area B of Figure 14a, regions exhibiting minimal variations in intensity values correspond to C, D, and E in Figure 14b. The region with pronounced changes in intensity corresponds to curve G in Figure 14b. The intensity disparity between background area A and target area B in Figure 14a is pronounced, exhibiting a substantial gradient amplitude, which aligns with the contour arc of the entire grayscale region in Figure 14b. Curves H, I, and J in the magnified image of Figure 14b exhibit comparable data values and gradient magnitudes, which can lead to erroneous partitioning during the coloring process. Moreover, the second derivative depicted in Figure 14c can adequately address this category of boundary assignment problem.
Figure 14c shows that curve F aligns with curve F in Figure 14b. At the local maximum gradient amplitude, the second derivative equals zero. The area of boundary attribution ambiguity, depicted in Figure 14b, is clearly shown in Figure 14c.
Figure 15 shows a two-dimensional representation of three-dimensional data from two distinct perspectives, with the orientation angles in Figure 15a being [1, 1, 1] and in Figure 15b being [1, 0, 16]. The results presented in Figure 15 show that the proposed algorithm effectively reconstructs the target area from multiple perspectives while successfully distinguishing the target from the background. A comparative analysis of Figure 15a,b reveals that the target possesses a geometric structure resembling a rectangular prism, with the primary distribution located along its four faces. The intensity profiles in Figure 15a,b further corroborate this observation, showing a peak intensity at the central region that gradually diminishes toward the periphery.
These findings are consistent with the theoretical framework and the three-dimensional reference image provided in Figure 13, thereby validating the algorithm’s performance. Additionally, the proposed algorithm offers a significant advantage by eliminating the need for repeated threshold adjustments. The generated two-dimensional image not only isolates the target area with high precision but also preserves the nuanced variations in the target’s contours, thereby ensuring the fidelity of the reconstructed imagery.
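The two-dimensional views above are produced by ray casting with front-to-back alpha compositing. The following minimal sketch uses hypothetical transfer functions, a toy volume, and a simple parallel projection along z; it is an illustration of the compositing principle, not the paper's exact pipeline, showing how an opacity transfer function suppresses the low-intensity background while accumulating the target's contribution:

```python
import numpy as np

def composite_ray(samples, opacity_tf, color_tf):
    """Front-to-back alpha compositing of data values along one ray."""
    C, A = 0.0, 0.0
    for f in samples:
        a = opacity_tf(f)
        C += (1.0 - A) * a * color_tf(f)
        A += (1.0 - A) * a
        if A >= 0.99:  # early termination once the ray is nearly opaque
            break
    return C, A

# Illustrative transfer functions (assumptions, not the paper's exact ones):
# low values (background) stay transparent; high values (target) turn opaque.
def opacity(f):
    return 0.0 if f < 0.4 else min(1.0, (f - 0.4) / 0.3)

def color(f):
    return f  # grayscale: color equals data value

vol = np.linspace(0.0, 1.0, 64).reshape(4, 4, 4)  # toy normalized volume
# Parallel projection: one ray per (x, y) column, marching along z.
image = np.array([[composite_ray(vol[i, j, :], opacity, color)[0]
                   for j in range(vol.shape[1])]
                  for i in range(vol.shape[0])])
```

Changing the viewing direction (the orientation angles of Figure 15) amounts to casting the rays along a different axis through the same volume.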
3.2. Test Validation
Test Scenario
To assess the efficacy of the proposed algorithm, the following experiment was devised. The experimental setup within an anechoic tank is illustrated schematically in Figure 16. The equipment used includes an MBES, a nano-microporous aeration tube, a polyurethane (PU) tube, a speed-regulating valve, an air compressor, and a mounting frame. We positioned the aeration tube and MBES within the mounting frame, as illustrated in Figure 17: the MBES was centrally located within the frame, while the aeration tube was secured along its bottom edge. The size of the mounting frame was , and it was situated at the base of the pool. The PU tube was linked to a central speed-regulating valve to modulate the size and quantity of the bubbles, and the air compressor was used to produce them. As shown in Figure 16, we positioned the mounting frame at the bottom of the pool, activated the air compressor, and adjusted the speed-regulating valve to a specified setting, whereupon the aeration tube produced high-density microbubbles.
The emission angle is . The experimental procedure is as follows: First, start the air compressor and control the bubble generation rate by adjusting the speed-regulating valve. After several minutes of operation, the quantity and velocity of the bubbles within the tank gradually stabilize, as shown in Figure 17. In this steady-state condition, the emission angle is varied to acquire three-dimensional echo data.
3.3. Test Results
The single-frame beam images obtained from the MBES are shown in Figure 18. Figure 18a,b present two-dimensional images with transmission angles of and , respectively. These images can convey only two-dimensional information.
Figure 19 shows the three-dimensional intensity maps obtained by scanning emission angles. Figure 19b–d are intensity maps obtained by splitting the grayscale levels of Figure 19a; their grayscale ranges are from 0.4 to 0.6, from 0.6 to 0.7, and from 0.7 to 1, respectively. Figure 19 shows that, along the z-axis, the range of the color blocks increases, which is consistent with the diffusion during the bubble-ascent process in the test scenario. Note that the square formed by and shows more color blocks near the edges; the closer to the edges of the square, the darker the color. Along the z-axis, the color becomes darker and the range of the color blocks increases. This is consistent with the behavior of the bubbles generated by the aeration tube.
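The grayscale splitting used for Figure 19b–d can be sketched as simple band masking on the normalized intensity volume. This is a minimal illustration on toy data, assuming half-open intervals for the band boundaries (the paper does not state how boundary values are assigned):

```python
import numpy as np

vol = np.random.default_rng(1).random((16, 16, 16))  # toy normalized volume

# The three grayscale bands of Figure 19b-d.
bands = {"0.4-0.6": (0.4, 0.6), "0.6-0.7": (0.6, 0.7), "0.7-1.0": (0.7, 1.0)}
masks = {name: (vol >= lo) & (vol < hi) for name, (lo, hi) in bands.items()}
# Voxels outside a band are blanked (NaN) before rendering that band alone.
split = {name: np.where(m, vol, np.nan) for name, m in masks.items()}
```

Rendering each masked copy separately isolates one intensity range per sub-figure, which is what makes the layered diffusion structure visible.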
The test data were analyzed using the algorithm described in this paper; see Figure 20 and Figure 21. Figure 20a–c show the joint histogram of data values, the gradient amplitude versus data values, and the second derivative versus data values, respectively. Figure 20a shows the background area (A) and the target area (B). The regions of background area A in Figure 20a that show minimal variations in intensity values correspond to A, B, and C in Figure 20b. The boundary between regions with higher and lower intensity values corresponds to curves D and E in Figure 20b. In target areas B and C of Figure 20a, the parts exhibiting substantial variations in intensity values align with curve F in Figure 20b. Furthermore, in the magnified image of Figure 20b, curves G, H, and I have relatively similar data-value ranges and gradient magnitudes, which can lead to partitioning errors during the coloring process. Figure 20c shows that curves D, E, and F correspond to curves D, E, and F in Figure 20b. At the local maximum of the gradient amplitude, the second derivative equals zero, and the boundary-attribution ambiguity in Figure 20b is clarified in Figure 20c.
Figure 21 shows the two-dimensional visualization of the three-dimensional water body data processed by the proposed method. It showcases two different perspectives of the three-dimensional data, with orientation angles of [1, 1, 1] and [1, 0, 16], respectively. From Figure 21, it is evident that the target region is successfully separated from the background. Combining Figure 21a,b, it can be inferred that the geometric shape of the target area is approximately that of a hollow rectangular prism, with the target distributed across the four adjacent faces. Furthermore, Figure 21a shows that the target region gradually expands, which is consistent with the upward motion of the generated bubbles, in accordance with their expected behavior. From Figure 21a,b, it can be seen that the intensity of the target is maximal at the central positions of the four faces, gradually decreasing towards the sides. This result is consistent with the phenomenon where bubbles, during their upward movement, simultaneously diffuse outward to the sides. During the test, two sides of the aerator tube that generate the bubble curtain produce relatively stronger airflow, resulting in the generation of a greater number of bubbles. Accordingly, in Figure 21a,b, it can be seen that two of the four faces where the target is located exhibit higher intensities. Some of the conclusions can be further validated by referring to Figure 19.
In summary, the two-dimensional images obtained using the method proposed in this paper not only successfully separate the bubble region from the water body, but also, from different viewpoints, reveal both the intensity variations and the contours within the target region.
4. Discussion and Conclusions
To advance underwater imaging for marine environment monitoring, this paper presents a visualization method for three-dimensional non-uniformly sampled data acquired from MBESs. The proposed method achieves accurate coordinate system transformation while effectively addressing the challenges of beam direction resolution inconsistencies. Additionally, the visualization of three-dimensional data suppresses background and non-interest regions while highlighting boundaries and target areas. Compared to the single-frame beam images obtained from the MBES, the proposed algorithm provides a more intuitive and comprehensive representation of flow field information within three-dimensional data. Furthermore, in contrast to three-dimensional imaging techniques, the proposed method not only captures more detailed and enriched features but also supports multi-perspective imaging capabilities. This study holds significant implications for marine exploration and environmental monitoring while also laying the foundation for further advancement of MBESs. Simulation and experimental results demonstrate that by combining data values with the first- and second-order derivatives in the gradient direction through a joint histogram analysis, the proposed algorithm effectively identifies the critical thresholds between bubble and background regions, as well as detailed features within the target area. The visualized images further enable the inference of the geometric characteristics of bubble regions, which exhibit an approximate hollow rectangular prism shape with bubbles primarily distributed along four adjacent sides. Moreover, the intensity variations reflect the dynamic processes of bubble generation, movement, and diffusion.
However, the algorithm requires traversing all voxels along each ray, increasing computational complexity. By optimizing the algorithm design to improve computational efficiency or reduce complexity, the performance and practicality of the algorithm can be enhanced, thereby better meeting the requirements of practical applications.
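One generic way to reduce the per-ray cost noted above is to replace the per-sample accumulation loop with a closed-form, vectorized compositing step. The sketch below is an illustration of that optimization idea, not the paper's implementation; it computes the front-to-back compositing result for a whole ray at once using cumulative products of the sample opacities:

```python
import numpy as np

def composite_vectorized(alphas, colors):
    """Closed-form front-to-back compositing for one ray.

    Equivalent to the per-voxel recurrence
        C += (1 - A) * a * c;  A += (1 - A) * a
    but evaluated with cumulative products instead of a Python loop.
    """
    # Transmittance remaining in front of each sample.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    C = float(np.sum(trans * alphas * colors))
    A = 1.0 - float(np.prod(1.0 - alphas))
    return C, A
```

Batching this over all rays of an image (one axis per ray, one axis per sample) moves the traversal into array operations, which is typically far cheaper than voxel-by-voxel iteration.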
Conceptualization, W.C. and S.F.; methodology, W.C. and S.F.; software, W.C. and S.F.; investigation, W.C., S.F. and C.Z.; writing—original draft preparation, W.C., M.F. and Y.Z.; writing—review and editing, W.C., C.Z., M.F. and Y.Z.; supervision, W.C., M.F., Y.Z. and H.C.; funding acquisition, S.F. and C.Z. All authors have read and agreed to the published version of the manuscript.
The raw data supporting the conclusions of this article will be made available by the authors on request.
The authors thank the anonymous reviewers for their careful reading and valuable comments.
The authors declare no conflicts of interest.
Figure 2. The synthesis process of color values along the ray direction. The colors of the blocks represent the colors of the corresponding pixel points.
Figure 3. The correlation among data values [Formula omitted. See PDF.], gradient magnitudes [Formula omitted. See PDF.], and second-order derivatives [Formula omitted. See PDF.] in relation to boundaries.
Figure 4. The transmitting sector in the Cartesian coordinate system. The green fan-shaped area represents the transmitting sector.
Figure 5. The interpolation diagram on a single transmitting sector. The fan-shaped area represents the transmitting sector with an angle of [Formula omitted. See PDF.].
Figure 6. Interpolation between different transmitting sectors. The purple and green fan-shaped areas represent the transmitting sectors with angles [Formula omitted. See PDF.] and [Formula omitted. See PDF.], respectively.
Figure 11. Three-dimensional schematic diagram of the target area during numerical simulation.
Figure 12. Single-frame beam image. (a) Transmission angle of [Formula omitted. See PDF.]. (b) Transmission angle of [Formula omitted. See PDF.].
Figure 13. Three-dimensional schematic diagram of the target area during numerical simulation.
Figure 14. Statistical chart. (a) Histogram of data values. (b) Gradient magnitude [Formula omitted. See PDF.] versus data value f. (c) Second derivative [Formula omitted. See PDF.] versus data value f.
Figure 15. Two-dimensional representation of three-dimensional data. (a) Orientation angle of [1, 1, 1]. (b) Orientation angle of [1, 0, 16].
Figure 16. Schematic representation of the experimental setup within the anechoic tank. (a) Top view. (b) Lateral view.
Figure 18. Single-frame beam image. (a) Transmission angle of [Formula omitted. See PDF.]. (b) Transmission angle of [Formula omitted. See PDF.].
Figure 19. Three-dimensional intensity maps obtained by scanning the emission angle. (a) All data. (b) Grayscale range is from 0.4 to 0.6. (c) Grayscale range is from 0.6 to 0.7. (d) Grayscale range is from 0.7 to 1.
Figure 20. Statistical chart. (a) Histogram of data values. (b) Gradient magnitude [Formula omitted. See PDF.] versus data value f. (c) Second derivative [Formula omitted. See PDF.] versus data value f.
Figure 21. Two-dimensional representation of three-dimensional data. (a) Orientation angle of [1, 1, 1]. (b) Orientation angle of [1, 0, 16].
Parameter configuration.

| Parameter | Center Frequency (kHz) | Sampling Frequency (kHz) | Pulse Width (ms) | Number of Transmit Array Elements | Number of Receive Array Elements |
|---|---|---|---|---|---|
| Value | | 769 | | 48 | 48 |
References
1. Nylund, A.T.; Arneborg, L.; Tengberg, A.; Mallast, U.; Hassellöv, I.M. In situ observations of turbulent ship wakes and their spatiotemporal extent. Ocean Sci.; 2021; 17, pp. 1285-1302. [DOI: https://dx.doi.org/10.5194/os-17-1285-2021]
2. Murino, V.; Trucco, A. Three-dimensional image generation and processing in underwater acoustic vision. Proc. IEEE; 2000; 88, pp. 1903-1946. [DOI: https://dx.doi.org/10.1109/5.899059]
3. Tian, Y.; Lan, L.; Guo, H. A review on the wavelet methods for sonar image segmentation. Int. J. Adv. Robot. Syst.; 2020; 17, 1729881420936091. [DOI: https://dx.doi.org/10.1177/1729881420936091]
4. Pailhas, Y.; Petillot, Y.; Capus, C. High-resolution sonars: What resolution do we need for target recognition?. EURASIP J. Adv. Signal Process.; 2010; 2010, 205095. [DOI: https://dx.doi.org/10.1155/2010/205095]
5. Colbo, K.; Ross, T.; Brown, C.; Weber, T. A review of oceanographic applications of water column data from multibeam echosounders. Estuar. Coast. Shelf Sci.; 2014; 145, pp. 41-56. [DOI: https://dx.doi.org/10.1016/j.ecss.2014.04.002]
6. Dong, Z.; Liu, Y.; Yang, L.; Feng, Y.; Ding, J.; Jiang, F. Artificial reef detection method for multibeam sonar imagery based on convolutional neural networks. Remote Sens.; 2022; 14, 4610. [DOI: https://dx.doi.org/10.3390/rs14184610]
7. Zhang, W.; Zhou, T.; Li, J.; Xu, C. An efficient method for detection and quantitation of underwater gas leakage based on a 300-kHz multibeam sonar. Remote Sens.; 2022; 14, 4301. [DOI: https://dx.doi.org/10.3390/rs14174301]
8. Lanzoni, J.C.; Weber, T.C. High-resolution calibration of a multibeam echo sounder. Proceedings of the OCEANS 2010 MTS/IEEE SEATTLE; Seattle, WA, USA, 20–23 September 2010; pp. 1-7.
9. Li, H.; Li, S.; Chen, B.; Xu, C.; Zhu, J.; Du, W. Research on ship wake acoustic imaging based on multi-beam sonar. Proceedings of the 2014 Oceans—St. John’s; St. John’s, NL, Canada, 14–19 September 2014; pp. 1-5.
10. Li, H. Shallow water high resolution multi-beam echo sounder. Proceedings of the OCEANS 2008-MTS/IEEE Kobe Techno-Ocean; Kobe, Japan, 8–11 April 2008; pp. 1-5.
11. Praet, N.; Collart, T.; Ollevier, A.; Roche, M.; Degrendele, K.; De Rijcke, M.; Urban, P.; Vandorpe, T. The potential of multibeam sonars as 3D turbidity and SPM monitoring tool in the North Sea. Remote Sens.; 2023; 15, 4918. [DOI: https://dx.doi.org/10.3390/rs15204918]
12. Urban, P.; Ko, K. Processing of multibeam water column image data for automated bubble/seep detection and repeated mapping. Limnol-Ocean.-Meth; 2017; 15, pp. 1-21. [DOI: https://dx.doi.org/10.1002/lom3.10138]
13. Xu, C.; Wu, M.; Zhou, T.; Li, J.; Du, W.; Zhang, W.; White, P.R. Optical flow-based detection of gas leaks from pipelines using multibeam water column images. Remote Sens.; 2020; 12, 119. [DOI: https://dx.doi.org/10.3390/rs12010119]
14. Zhao, J.; Meng, J.; Zhang, H.; Wang, S. Comprehensive detection of gas plumes from multibeam water column images with minimisation of noise interferences. Sensors; 2017; 17, 2755. [DOI: https://dx.doi.org/10.3390/s17122755]
15. Weber, T.C.; Lyons, A.P.; Bradley, D.L. An estimate of the gas transfer rate from oceanic bubbles derived from multibeam sonar observations of a ship wake. J. Geophys. Res. Oceans; 2005; 110, C04005. [DOI: https://dx.doi.org/10.1029/2004JC002666]
16. Wilson, D.S.; Leifer, I.; Maillard, E. Megaplume bubble process visualization by 3D multibeam sonar mapping. Mar. Pet. Geol.; 2015; 68, pp. 753-765. [DOI: https://dx.doi.org/10.1016/j.marpetgeo.2015.07.007]
17. Francisco, F.; Carpman, N.; Dolguntseva, I.; Sundberg, J. Use of multibeam and dual-beam sonar systems to observe cavitating flow produced by ferryboats: In a marine renewable energy perspective. J. Mar. Sci. Eng.; 2017; 5, 30. [DOI: https://dx.doi.org/10.3390/jmse5030030]
18. Tang, Q.; Li, J.; Ding, D.; Ji, X.; Li, N.; Yang, L.; Sun, W. Deep-sea seabed sediment classification using finely processed multibeam backscatter intensity data in the southwest Indian Ridge. Remote Sens.; 2022; 14, 2675. [DOI: https://dx.doi.org/10.3390/rs14112675]
19. Eleftherakis, D.; Berger, L.; Le Bouffant, N.; Pacault, A.; Augustin, J.M.; Lurton, X. Backscatter calibration of high-frequency multibeam echosounder using a reference single-beam system, on natural seafloor. Mar. Geophys. Res.; 2018; 39, pp. 55-73. [DOI: https://dx.doi.org/10.1007/s11001-018-9348-5]
20. Roszkowiak, L.; Korzynska, A.; Zak, J.; Pijanowska, D.; Swiderska-Chadaj, Z.; Markiewicz, T. Survey: Interpolation methods for whole slide image processing. J. Microsc.; 2017; 265, pp. 148-158. [DOI: https://dx.doi.org/10.1111/jmi.12477]
21. Nürnberger, G.; Zeilfelder, F. Developments in bivariate spline interpolation. J. Comput. Appl. Math.; 2000; 121, pp. 125-152. [DOI: https://dx.doi.org/10.1016/S0377-0427(00)00346-0]
22. Wang, Y.; Liu, X.; Liu, R.; Zhang, Z. Research Progress on Spatiotemporal Interpolation Methods for Meteorological Elements. Water; 2024; 16, 818. [DOI: https://dx.doi.org/10.3390/w16060818]
23. Berger, M.; Li, J.; Levine, J.A. A generative model for volume rendering. IEEE Trans. Vis. Comput. Graph.; 2019; 25, pp. 1636-1650. [DOI: https://dx.doi.org/10.1109/TVCG.2018.2816059] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29993811]
24. De Boer, M.; Hesser, J.; Gröpl, A.; Günther, T.; Poliwoda, C.; Reinhart, C.; Männer, R. Evaluation of a real-time direct volume rendering system. Comput. Graph.; 1997; 21, pp. 189-198. [DOI: https://dx.doi.org/10.1016/S0097-8493(96)00082-9]
25. Mady, A.S.; Abou El-Seoud, S. An overview of volume rendering techniques for medical imaging. Int. J. Onl. Eng.; 2020; 16, pp. 95-106. [DOI: https://dx.doi.org/10.3991/ijoe.v16i06.13627]
26. Semwal, S.K.; Swann, P.G. Applying ray casting to the flow visualization data using linear and B-spline hyperpatch interpolation. J. Visual. Comput. Animat.; 1995; 6, pp. 33-47. [DOI: https://dx.doi.org/10.1002/vis.4340060105]
27. Liu, Y.; Xue, S.; Qu, G.; Lu, X.; Qi, K. Influence of the ray elevation angle on seafloor positioning precision in the context of acoustic ray tracing algorithm. Appl. Ocean Res.; 2020; 105, 102403. [DOI: https://dx.doi.org/10.1016/j.apor.2020.102403]
28. Goel, V.; Mukherjee, A. An optimal parallel algorithm for volume ray casting. Vis. Comput.; 1996; 12, pp. 26-39. [DOI: https://dx.doi.org/10.1007/BF01782217]
29. Binyahib, R.; Peterka, T.; Larsen, M.; Ma, K.L.; Childs, H. A scalable hybrid scheme for ray-casting of unstructured volume data. IEEE Trans. Vis. Comput. Graph.; 2019; 25, pp. 2349-2361. [DOI: https://dx.doi.org/10.1109/TVCG.2018.2833113] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29994004]
30. Ke, H.R.; Chang, R.C. Sample buffer: A progressive refinement ray-casting algorithm for volume rendering. Comput. Graph.; 1993; 17, pp. 277-283. [DOI: https://dx.doi.org/10.1016/0097-8493(93)90076-L]
31. Ledergerber, C.; Guennebaud, G.; Meyer, M.; Bacher, M.; Pfister, H. Volume MLS ray casting. IEEE Trans. Vis. Comput. Graph.; 2008; 14, pp. 1372-1379. [DOI: https://dx.doi.org/10.1109/TVCG.2008.186] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/18988986]
32. Thomas, A.R.; Kotadiya, N.B.; Wang, B.; Dhakal, T.P. Physical vapor deposition simulator by graphical processor unit ray casting. J. Vac. Sci. Technol. B; 2023; 41, 064201. [DOI: https://dx.doi.org/10.1116/6.0003045]
33. Udupa, J.; Odhner, D. Shell rendering. IEEE Comput. Grap. Appl.; 1993; 13, pp. 58-67. [DOI: https://dx.doi.org/10.1109/38.252558]
34. Knoll, A.M.; Wald, I.; Hansen, C.D. Coherent multiresolution isosurface ray tracing. Vis. Comput.; 2009; 25, pp. 209-225. [DOI: https://dx.doi.org/10.1007/s00371-008-0215-2]
35. Levoy, M. Efficient ray tracing of volume data. ACM Trans. Graph.; 1990; 9, pp. 245-261. [DOI: https://dx.doi.org/10.1145/78964.78965]
36. Huang, C.; Lan, Y.; Zhang, G.; Xu, G.; Jiang, L.; Zeng, N.; Tan, J.; Ng, E.Y.K.; Cheng, Y.; Han, N. et al. A new transfer function for volume visualization of aortic stent and its application to virtual endoscopy. ACM Trans. Multimed. Comput. Commun. Appl.; 2020; 16, pp. 1-14. [DOI: https://dx.doi.org/10.1145/3373358]
37. Ljung, P.; Krüger, J.; Groller, E.; Hadwiger, M.; Hansen, C.D.; Ynnerman, A. State of the art in transfer functions for direct volume rendering. Comput. Graph. Forum; 2016; 35, pp. 669-691. [DOI: https://dx.doi.org/10.1111/cgf.12934]
38. Mehaboobathunnisa, R.; Thasneem, A.H.; Sathik, M.M. Fuzzy mutual information-based intraslice grouped ray casting. J. Intell. Syst.; 2019; 28, pp. 77-86. [DOI: https://dx.doi.org/10.1515/jisys-2016-0263]
39. Lundstrom, C.; Ljung, P.; Ynnerman, A. Local histograms for design of transfer functions in direct volume rendering. IEEE Trans. Vis. Comput. Graph.; 2006; 12, pp. 1570-1579. [DOI: https://dx.doi.org/10.1109/TVCG.2006.100]
40. Sereda, P.; Bartroli, A.; Serlie, I.; Gerritsen, F. Visualization of boundaries in volumetric data sets using LH histograms. IEEE Trans. Vis. Comput. Graph.; 2006; 12, pp. 208-218. [DOI: https://dx.doi.org/10.1109/TVCG.2006.39] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/16509380]
41. Kniss, J.; Kindlmann, G.; Hansen, C. Interactive volume rendering using multi-dimensional transfer functions and direct manipulation widgets. Proceedings of IEEE Visualization 2001 (VIS ’01); San Diego, CA, USA, 21–26 October 2001; pp. 255-262.
42. Kniss, J.; Kindlmann, G.; Hansen, C. Multidimensional transfer functions for interactive volume rendering. IEEE Trans. Vis. Comput. Graph.; 2002; 8, pp. 270-285. [DOI: https://dx.doi.org/10.1109/TVCG.2002.1021579]
43. Park, S.; Budge, B.; Linsen, L.; Hamann, B.; Joy, K. Multi-dimensional transfer functions for interactive 3D flow visualization. Proceedings of the 12th Pacific Conference on Computer Graphics and Applications; Seoul, Republic of Korea, 6–8 October 2004; pp. 177-185.
44. Kindlmann, G.; Durkin, J. Semi-automatic generation of transfer functions for direct volume rendering. Proceedings of the IEEE Symposium on Volume Visualization (Cat. No.989EX300); Research Triangle Park, NC, USA, 19–20 October 1998; pp. 79-86.
45. Bozorgi, M.; Lindseth, F. GPU-based multi-volume ray casting within VTK for medical applications. Int. J. Comput. Assist. Radiol. Surg.; 2015; 10, pp. 293-300. [DOI: https://dx.doi.org/10.1007/s11548-014-1069-x]
46. Ping, L.; Tiechang, M.; Xiangzhao, X.; Tianbao, M. A GPU parallel staircase finite difference mesh generation algorithm based on the ray casting method. Explos. Shock Waves; 2020; 40, 024201-1. [DOI: https://dx.doi.org/10.11883/bzycj-2019-0344]
47. Roessler, F.; Botchen, R.P.; Ertl, T. Dynamic shader generation for GPU-based multi-volume ray casting. IEEE Comput. Graph. Appl.; 2008; 28, pp. 66-77. [DOI: https://dx.doi.org/10.1109/MCG.2008.96] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/18753036]
48. Sweezy, J.E. A Monte Carlo volumetric-ray-casting estimator for global fluence tallies on GPUs. J. Comput. Phys.; 2018; 372, pp. 426-445. [DOI: https://dx.doi.org/10.1016/j.jcp.2018.06.032]
49. Levoy, M. Display of surfaces from volume data. IEEE Comput. Grap. Appl.; 1988; 8, pp. 29-37. [DOI: https://dx.doi.org/10.1109/38.511]
50. Levoy, M. A hybrid ray tracer for rendering polygon and volume data. IEEE Comput. Grap. Appl.; 1990; 10, pp. 33-40. [DOI: https://dx.doi.org/10.1109/38.50671]
51. Max, N. Optical models for direct volume rendering. IEEE Trans. Vis. Comput. Graph.; 1995; 1, pp. 99-108. [DOI: https://dx.doi.org/10.1109/2945.468400]
52. Marsden, J.E.; Tromba, A.J. Vector Calculus; Macmillan: New York, NY, USA, 2003.
53. Ali, I.; Amat, S.; Trillo, J.C. Point values hermite multiresolution for non-smooth noisy signals. Computing; 2006; 77, pp. 223-236. [DOI: https://dx.doi.org/10.1007/s00607-005-0159-6]
54. Plata, S.A. A review on the piecewise polynomial harmonic interpolation. Appl. Numer. Math.; 2008; 58, pp. 1168-1185. [DOI: https://dx.doi.org/10.1016/j.apnum.2007.06.001]
55. Schafer, R.; Rabiner, L. A digital signal processing approach to interpolation. Proc. IEEE; 1973; 61, pp. 692-702. [DOI: https://dx.doi.org/10.1109/PROC.1973.9150]
56. Cao, W.; Fang, S.; Gu, Z.; Feng, M.; Cao, H. Measurement of scattering properties of water body by using a multibeam echosounder system. IEEE Trans. Instrum. Meas.; 2023; 72, 6501112. [DOI: https://dx.doi.org/10.1109/TIM.2023.3246521]
57. Abdullah-Al-Wadud, M.; Kabir, M.H.; Dewan, M.A.A.; Chae, O. A dynamic histogram equalization for image contrast enhancement. IEEE Trans. Consum. Electron.; 2007; 53, pp. 593-600. [DOI: https://dx.doi.org/10.1109/TCE.2007.381734]
58. Stark, J.A. Adaptive image contrast enhancement using generalizations of histogram equalization. IEEE Trans. Image Process.; 2000; 9, pp. 889-896. [DOI: https://dx.doi.org/10.1109/83.841534]
59. Medwin, H.; Clay, C.S. Fundamentals of Acoustical Oceanography; Academic Press: San Diego, CA, USA, 1997.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Abstract
This paper proposes a method for visualizing three-dimensional non-uniformly sampled data from multibeam echosounder systems (MBESs), aimed at addressing the requirements of monitoring complex and dynamic underwater flow fields. To tackle the challenges associated with spatially non-uniform sampling, the proposed method employs linear interpolation along the radial direction and arc length weighted interpolation in the beam direction. This approach ensures consistent resolution of three-dimensional data across the same dimension. Additionally, an opacity transfer function is generated to enhance the visualization performance of the ray casting algorithm. This function leverages data values and gradient information, including the first and second directional derivatives, to suppress the rendering of background and non-interest regions while emphasizing target areas and boundary features. The simulation and experimental results demonstrate that, compared to conventional two-dimensional beam images and three-dimensional images, the proposed algorithm provides a more intuitive and accurate representation of three-dimensional data, offering significant support for the observation and analysis of spatial flow field characteristics.