1. Introduction
Outdoor object visualization under bad weather conditions, such as in the presence of rain, fog, or smoke, or under extreme background illumination normally caused by the sun's glare, is a fundamental computer vision problem still to be solved. Over the last decade, the increased efforts in the development of autonomous robots, including self-driving vehicles and Unmanned Aerial Vehicles (UAV) [1], boosted the evolution of vision system technologies used for autonomous navigation and object recognition [2]. However, one of the remaining challenges is object recognition and 3D spatial reconstruction in fog-, rain-, or smoke-rich environments [3]. In such scenarios, the performance of RGB (Red–Green–Blue)-based vision systems is limited, usually producing low-contrast images. Depending on the diameter D of the water droplets present in the scene, compared to the wavelength λ of the light to be detected, three interaction regimes have been defined: (1) if D ≪ λ, Rayleigh scattering occurs, where photons are scattered almost isotropically; (2) if D is comparable to λ, Mie scattering occurs, where the photons are scattered asymmetrically; and (3) if D ≫ λ, ray (geometric) optics applies and photons are mostly forward scattered. In this work, Rayleigh scattering will be neglected, since the typical diameters of fog and rain droplets are larger than the wavelength of the light.
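To make these regimes concrete, the small sketch below classifies a droplet of diameter D illuminated at wavelength λ through the Mie size parameter x = πD/λ; the numeric thresholds are common rules of thumb assumed for illustration, not values taken from this paper.

```python
import numpy as np

def scattering_regime(diameter_m, wavelength_m):
    """Classify the droplet/light interaction via the Mie size parameter x = pi*D/lambda.
    The thresholds (0.1 and 50) are rule-of-thumb assumptions, not values from the paper."""
    x = np.pi * diameter_m / wavelength_m
    if x < 0.1:
        return x, "Rayleigh (D << lambda): nearly isotropic scattering"
    elif x < 50.0:
        return x, "Mie (D ~ lambda): asymmetric, mostly forward scattering"
    return x, "Geometric optics (D >> lambda): strongly forward scattered"

# Example: a 3 um fog droplet illuminated at 1550 nm falls in the Mie regime.
print(scattering_regime(3e-6, 1550e-9))
```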
Enhancing the visibility in foggy conditions is an area of great interest. Various studies have been conducted, proposing solutions based on processing algorithms and on the integration of technologies operating in other spectral bands. These include "defogging" algorithms based on the physical scattering model [4,5], detection algorithms based on the ratio of wavelength residual energy [6], and deep learning algorithms [7,8]. Other solutions rely on the redundancy of multiple sensor modalities integrated with an RGB camera [9], such as Light Detection and Ranging (LIDAR) [10], Radio Detection and Ranging (RADAR) [11], Time-of-Flight (ToF) [12], or multispectral (MSI) and hyperspectral imaging technologies [13,14]. In the area of single-pixel imaging (SPI) applied to scattering scenarios, some works focused on improving the quality of 2D images [15], using high-pass filters to suppress the effects of temporal variations caused by fog. In 3D reconstruction applications based on compressive ghost imaging, random patterns and photometric stereo vision have been implemented [16].
SPI offers a high capacity of integration with other technologies, such as, for example, Time-of-Flight (ToF), and it can be adapted to operate in the NIR spectral band (800–2000 nm), which exhibits lower loss in foggy conditions [17], offering better performance than the visible spectrum. Therefore, based on the advantages provided by SPI, we propose an approach for 3D image reconstruction under foggy conditions that combines NIR-based SPI with the Shape-from-Shading (SFS) method to generate 3D information, in combination with the indirect Time-of-Flight (iToF) method applied to four reference points, the information of which is finally embedded into the generated 3D image using a mapping method. The solution proposed in this work, unlike others based on, e.g., ghost imaging (GI), which need a high number of patterns and long processing times [15], makes use of a robust 3D mesh algorithm that works with space-filling projection and CS-SRCNN, using active illumination at a wavelength of 1550 nm.
To evaluate the performance of the proposed 3D NIR-SPI imaging system, we performed three analyses. First, we developed a theoretical model to estimate the maximum distance at which different objects in a scene (under controlled and simulated laboratory conditions) can still be distinguished, yielding the maximum measurement range; the model was experimentally validated through the estimation of the extinction coefficient k. Second, we compared the figures of merit obtained for the images reconstructed under different experimental conditions. Finally, we characterized the system by evaluating the maximum image reconstruction time required when different space-filling methods are used. To summarize, the main contributions and limitations of this paper are as follows:
The work presents an experimentally validated theoretical model of the proposed Single-Pixel Imaging (SPI) system operating in foggy conditions, considering Mie scattering (in environments rich in 3 μm diameter particles) and calculating the level of irradiance reaching the photodetector and the amount of light reflected from objects for surfaces with different reflection coefficients.
Experimental validation of the presented SPI model through measurement of the extinction coefficient [18], used to calculate the maximum imaging distance and its error.
A system based on a combination of NIR-SPI and iToF methods is developed for imaging in foggy environments. We demonstrate an improvement in image recovery using different space-filling methods.
We fabricated a test chamber to generate water droplets with 3 μm average diameter and different background illumination levels.
We experimentally demonstrated the feasibility of our 3D NIR-SPI system for 3D image reconstruction. To evaluate the image reconstruction quality, the Structural Similarity Index Measure (SSIM), the Peak Signal-to-Noise Ratio (PSNR), Root Mean Square Error (RMSE), and skewness were implemented.
2. Single-Pixel Image Reconstruction
Single-pixel imaging is based on the projection of spatially structured light patterns over an object, generated by either a Spatial Light Modulator (SLM) or a Digital Micro-Mirror Device (DMD); the reflected light is focused onto a photodetector with no spatial resolution, as shown in Figure 1. The correlation between each pattern and the object O is determined by the intensity measurement provided by the photodetector, as shown in Equation (1) [19], where (x, y) denotes the spatial coordinates, y_i is the single-pixel measurement corresponding to the pattern P_i, and β is a factor that depends on the optoelectronic response of the photodetector.
$$ y_{i} = \beta \iint P_{i}(x,y)\, O(x,y)\, dx\, dy \qquad (1) $$
The image resolution, defined as the number of columns multiplied by the number of rows (an array of virtual pixels), and therefore the number of projected patterns, is M × N. Knowing the structure of the illumination patterns and the electrical signal from the single-pixel photodetector, it is possible to recover the image of the objects using several computational algorithms. One of them is expressed by Equation (2) [19], where the reconstructed image is obtained as the sum of the products of each measured single-pixel value y_i and the corresponding structured pattern P_i that originated it.
$$ O(x,y) \approx \sum_{i=1}^{M \times N} y_{i}\, P_{i}(x,y) \qquad (2) $$
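As an illustration of Equations (1) and (2), the minimal sketch below simulates single-pixel measurements for a synthetic scene and reconstructs it by the correlation sum of Equation (2); the pattern type, scene, and sizes are arbitrary choices made only for this example.

```python
import numpy as np

def spi_reconstruct(patterns, measurements):
    """Correlation-based SPI reconstruction (Equation (2)):
    the image is the measurement-weighted sum of the projected patterns.
    patterns: (K, M, N) array of illumination patterns P_i
    measurements: (K,) array of single-pixel readings y_i"""
    # Subtract the mean measurement so the constant (DC) term does not dominate.
    y = measurements - measurements.mean()
    return np.tensordot(y, patterns, axes=(0, 0))

# Synthetic example with random +/-1 patterns on a 32 x 32 scene.
rng = np.random.default_rng(0)
scene = np.zeros((32, 32)); scene[8:24, 8:24] = 1.0
patterns = rng.choice([-1.0, 1.0], size=(1024, 32, 32))
measurements = np.tensordot(patterns, scene, axes=([1, 2], [0, 1]))
image = spi_reconstruct(patterns, measurements)
```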
2.1. Generation of the Hadamard Active Illumination Pattern Sequence
To generate the illumination patterns, we employ Hadamard patterns, which are based on a square matrix H whose entries are +1 or −1 and in which any two distinct rows agree in exactly n/2 positions [21]. This matrix H satisfies the condition $HH^{T} = nI$, where $H^{T}$ is the transpose of H and I is the identity matrix. A matrix of order N can be generated using the Kronecker product defined through Equation (3).
$$ H_{2^{k}} = H_{2} \otimes H_{2^{k-1}} = \begin{bmatrix} H_{2^{k-1}} & H_{2^{k-1}} \\ H_{2^{k-1}} & -H_{2^{k-1}} \end{bmatrix} \qquad (3) $$
$$ H_{1} = \begin{bmatrix} 1 \end{bmatrix}, \qquad H_{2} = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \qquad (4) $$
The matrix size is defined as m × n, with m = 1, 2, 3, …, M, and n = 1, 2, 3, …, N; here, we consider M = N. Once the matrix H is defined, the Hadamard sequence is constructed using Sylvester's recursive matrix generation principle defined through Equations (3) and (4) [21] to obtain the final Hadamard matrix. It is important to take into consideration that if less than 20% of the required m × n Hadamard patterns is used for image reconstruction (see Figure 2a), the quality of the reconstructed image will be poor. Therefore, if the sampling rate is reduced and good image reconstruction is still required, reconstruction methods based on different space-filling curves, such as the Hilbert trajectory (see Figure 2b) [22], Zig–Zag (see Figure 2c) [23], or Spiral (see Figure 2d) [24] curves, must be implemented.
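A minimal sketch of the Sylvester construction of Equations (3) and (4) and of one possible space-filling reordering (a Zig–Zag traversal of the pattern indices) is given below; the specific traversal conventions are illustrative assumptions and not the exact orderings of [22,23,24].

```python
import numpy as np

def hadamard_sylvester(order):
    """Sylvester construction (Equations (3)-(4)): repeated Kronecker products of H_2."""
    H = np.array([[1]])
    H2 = np.array([[1, 1], [1, -1]])
    while H.shape[0] < order:
        H = np.kron(H2, H)
    return H

def zigzag_order(n):
    """Zig-zag traversal of an n x n index grid (one common space-filling ordering)."""
    idx = []
    for s in range(2 * n - 1):
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        idx.extend(diag if s % 2 else diag[::-1])
    return idx

# Each Hadamard pattern is one row of H reshaped to the virtual-pixel grid;
# the zig-zag order decides which patterns are projected first at low sampling rates.
N = 16                                 # 16 x 16 virtual pixels -> H of order 256
H = hadamard_sylvester(N * N)
order = [i * N + j for i, j in zigzag_order(N)]
patterns = H[order].reshape(-1, N, N)
```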
3. NIR-SPI System Test Architecture
In this work, we propose an NIR-SPI vision system based on the structured illumination scheme depicted in Figure 1b; however, instead of using an SLM or a DMD to generate the structured illumination patterns, an array of 8 × 8 NIR LEDs is used, emitting radiation at a wavelength of λ = 1550 nm. The NIR-SPI system architecture is divided into two stages. The first one controls the elements used to generate images by applying the single-pixel imaging principle explained above: an InGaAs photodetector (FGA015 diode @ 1550 nm), accompanied by the array of 8 × 8 NIR LEDs. Nevertheless, the spatial resolution achieved for the objects in the scene by applying the Shape-From-Shading (SFS) method [25] and the unified reflectance model [26], even with additional mesh-enhancement algorithms, is still far from the targeted value of below 10 mm at a distance of 3 m. Thus, four control spots were incorporated into the system illumination array, consisting of NIR lasers with controlled variable light intensity emulating a sinusoidal illumination signal modulated in time, together with four additional InGaAs photodiode pairs to measure the distance to the objects in the depicted scene with much higher precision using the indirect Time-of-Flight (iTOF) ranging method (see Figure 3a). The second stage of the system is responsible for processing the signals captured by the photodiode module through an analog-to-digital converter (ADC), which is controlled by a Graphics Processing Unit (GPU) (see Figure 3b). The GPU is responsible for generating the patterns and processing the data converted by the ADC. The 2D/3D image reconstruction is performed using the OMP-GPU algorithm [27].
iTOF System Architecture
The iTOF system consists of four pulsed lasers emitting at a 1550 nm peak wavelength (ThorLabs L1550P5DFB), located at 90° from each other, emitting 65 ns pulses at an optical power of 5 mW (allowed by the IEC eye-safety regulation IEC 62471 [28]). For time modulation, we use Direct Digital Synthesis (DDS) to generate a sinusoidal signal (CW-iToF). The modulation is controlled through the laser bias with an amplitude between 0 and 10 V. Each laser emits a time-modulated signal within time windows of 100 μs. The signal reflected by the objects in the scene is detected by the InGaAs photodetector using an integration time of 150 μs. The voltage signal generated by the photodetectors is then converted via an ADC into a digital signal, which is finally processed by the GPU. Table 1 shows the main figures of merit: the equivalent modulation frequency, which allows calculating the spatial resolution [29]; the Correlated Power Responsivity [29], which defines the maximum amplitude power with respect to the phase delay; the Uncorrelated Power Responsivity [29], which defines the average power density detected on the photodetector with respect to the background irradiation noise; and the Background Light Rejection Ratio (BLRR), which is the ratio between the sensor's (uncorrelated) responsivity to background light on the one side and the photodetector's responsivity to correlated, time-modulated light on the other. A high BLRR is required in order to obtain a distance error smaller than the intrinsic distance noise (the corresponding constraint is given in [29]). For our proposed system, the obtained BLRR is on the order of −50 dB; i.e., the system can operate in outdoor conditions with 40 kLux of background illumination, achieving a maximum distance of 3 m and a spatial resolution of 10 mm.
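The distance recovery from the CW-iToF phase delay can be sketched with the standard four-bucket demodulation shown below; the 4.8 MHz modulation frequency is the value from Table 1, while the sample bucket amplitudes and the function name are illustrative assumptions (this is not the actual firmware of the GPU/ADC stage).

```python
import numpy as np

C_LIGHT = 299_792_458.0          # speed of light (m/s)
F_MOD = 4.8e6                    # equivalent modulation frequency from Table 1

def itof_distance(a0, a90, a180, a270, f_mod=F_MOD):
    """Standard four-bucket CW-iToF demodulation: the phase delay of the returned
    sinusoid (sampled at 0, 90, 180, 270 degrees) is converted to distance."""
    phase = np.arctan2(a270 - a90, a0 - a180)      # phase in (-pi, pi]
    phase = np.mod(phase, 2 * np.pi)               # wrap to [0, 2*pi)
    return C_LIGHT * phase / (4 * np.pi * f_mod)

# The unambiguous range for 4.8 MHz is c / (2 * f_mod) ~ 31 m, well beyond the 3 m target.
print(itof_distance(1.2, 0.8, 0.4, 0.9))
```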
4. Fog Chamber Fabrication and Characterization
The chamber used to simulate the fog-rich environment is shown in Figure 4. The chamber has dimensions of 30 cm × 30 cm × 35 cm and includes a system that controls the droplet size, based on a piezoelectric humidifier operating at a frequency of 1.7 MHz to create water droplets with a diameter of 3 μm, following the relation given by Equation (5) [30].
$$ D = 0.34\left(\frac{8\pi\sigma}{\rho f^{2}}\right)^{1/3} \qquad (5) $$
Equation (5) describes the mean droplet diameter D as a function of the piezoelectric frequency, where σ stands for the surface tension (in N/m), ρ stands for the density of the liquid used (in kg/m³), and f is the electrical frequency applied to the piezoelectric transducer (Figure 5 shows the water droplet diameter versus the piezoelectric frequency). The scattering produced by these droplets is given by Equation (6) [31], where k_sca is the scattering coefficient, Q_sca is the Mie scattering efficiency (calculated using Matlab [32]), ρ_p is the density of particles suspended in the medium, and r is the particles' radius. The chamber allows us to properly test the NIR-SPI system prototype in a controlled environment, simulating the scattering effects of foggy conditions.
$$ k_{sca} = Q_{sca}\,\rho_{p}\,\pi r^{2} \qquad (6) $$
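As a quick numerical check of Equation (5), the sketch below uses standard water properties (assumed values, not quoted from the paper) and reproduces a mean diameter close to 3 μm at the 1.7 MHz piezoelectric frequency.

```python
import numpy as np

SIGMA_WATER = 0.0728   # surface tension of water at ~20 C (N/m), standard value
RHO_WATER = 998.0      # density of water (kg/m^3), standard value

def droplet_diameter(f_hz, sigma=SIGMA_WATER, rho=RHO_WATER):
    """Equation (5): mean droplet diameter produced by ultrasonic atomisation
    at the piezoelectric excitation frequency f (Hz)."""
    return 0.34 * (8.0 * np.pi * sigma / (rho * f_hz ** 2)) ** (1.0 / 3.0)

# At the 1.7 MHz frequency used in the chamber, the predicted mean diameter
# is close to 3 um, consistent with Figure 5.
print(droplet_diameter(1.7e6) * 1e6, "um")
```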
The light attenuation caused by a scattering medium can be modeled using the Beer–Lambert–Bouguet law [33], which defines the transmittance as T = e^{−kz}, where z is the propagation distance and k is the extinction coefficient (Figure 6 shows the change in image contrast with distance). The extinction coefficient accounts for both the absorption (k_abs) and scattering (k_sca) coefficients, i.e., k = k_abs + k_sca. The effect of absorption is neglected here, and the scattering coefficient is determined by measuring the transmittance at different distances inside the chamber by displacing a mirror.
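A minimal sketch of this extinction-coefficient estimation, assuming transmittance samples taken at several mirror positions (the values below are illustrative, not measured data):

```python
import numpy as np

def extinction_coefficient(z_m, transmittance):
    """Beer-Lambert-Bouguet law: T = exp(-k*z), so ln(T) = -k*z.
    A least-squares line through the origin gives the extinction coefficient k (1/m)."""
    z = np.asarray(z_m, dtype=float)
    lnT = np.log(np.asarray(transmittance, dtype=float))
    return -np.sum(z * lnT) / np.sum(z * z)

# Illustrative transmittance values measured by displacing a mirror inside the chamber.
z = [0.05, 0.10, 0.15, 0.20, 0.25]          # propagation distances (m)
T = [0.86, 0.74, 0.64, 0.55, 0.47]          # measured transmittance
print(extinction_coefficient(z, T), "1/m")   # ~3 1/m for these sample values
```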
5. Modeling the Visibility and Contrast
Koschmieder's law describes the radiance attenuation caused by the surrounding medium between the observer (the sensor) and the objects, and it allows us to estimate the apparent contrast of an object under different environmental conditions. The total radiance L reaching the observer after being reflected from an object at a distance z is defined by Equation (7) [34].
$$ L = L_{0}\, e^{-kz} + L_{s}\left(1 - e^{-kz}\right) \qquad (7) $$
In Equation (7), L_0 is the radiance of the object at close range, and L_s is the background radiance (noise). The term L_0 e^{−kz} corresponds to the amount of the object's reflected light still detected at a distance z, and the term L_s(1 − e^{−kz}) corresponds to the amount of scattered ambient light detected at that distance. Thus, as the distance between the observer and the depicted object increases, the observer sees less light reflected from the object and more of the scattered light, causing a loss of the image contrast C defined by Equation (8) [35], where C_0 is the contrast at close range. Since the human eye can distinguish an object down to a contrast threshold of 5%, the distance z at which this threshold contrast is reached is given by Equation (9) [36].
$$ C = C_{0}\, e^{-kz} \qquad (8) $$
$$ z_{max} = -\frac{\ln(0.05)}{k} \approx \frac{3}{k} \qquad (9) $$
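As a worked illustration of Equation (9) (the numerical extinction value is purely illustrative): for an extinction coefficient of k = 3 m⁻¹,

$$ z_{max} = -\frac{\ln(0.05)}{k} \approx \frac{3}{3\ \mathrm{m^{-1}}} = 1\ \mathrm{m}. $$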
Modeling the NIR-SPI System in the Presence of Fog
To model the NIR-SPI system performance in foggy conditions (see Appendix A.1, Algorithm A1), we need to determine the number of photons impinging on the photodetector's photoactive area, given by Equation (10) [37].
(10)
In Equation (10), the quantities involved are the photodetector's quantum efficiency, the photodetector integration time, the effective photosensitive area, the photodetector's fill factor, the f-number (defined as the ratio of the focal length of the lenses used to the aperture diameter), Planck's constant h, the measured distance z, the speed of light c, the lens transmittance, the material reflection index R, the focal aperture angle of the emitting LED array, the irradiation level of the sun's illumination received on the photoactive area of the photodetector as given by Equation (11), and the reflectivity of the photodetector surfaces.
(11)
Equation (11) describes the level of irradiation captured by the photodetector; it depends on a transversal function determined by the geometrical characteristics of the object, on the distance z, and on the backscattering contribution to the pixel signal defined by Equation (12) [31], the latter involving a conversion factor of the sensor, the effective aperture, and the effective irradiance.
(12)
To estimate the maximum theoretical operating range of the NIR-SPI system, we calculated the point of intersection between the photon signal given by Equation (10) and the overall noise floor [38], which yields the maximum distance at which the NIR-SPI system can still operate (see Table 2).
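Finding that intersection numerically reduces to locating the distance where the modelled signal drops below the noise floor. The sketch below illustrates this with a placeholder signal model (an inverse-square law with exponential fog attenuation), which is explicitly not Equation (10); the noise-floor value is also arbitrary.

```python
import numpy as np

def max_range(signal_photons, noise_floor, z_grid):
    """Largest distance at which the modelled photon signal still exceeds the
    noise floor; signal_photons is any callable S(z) evaluated on z_grid."""
    z = np.asarray(z_grid, dtype=float)
    above = signal_photons(z) >= noise_floor
    return z[above].max() if above.any() else None

# Placeholder 1/z^2 model with two-way exponential fog attenuation (NOT Equation (10)).
k_ext = 3.0                                       # extinction coefficient (1/m)
model = lambda z: 1e4 * np.exp(-2 * k_ext * z) / np.maximum(z, 1e-3) ** 2
print(max_range(model, noise_floor=5e3, z_grid=np.linspace(0.05, 1.0, 1000)))
```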
6. 3D Reconstruction Using the Unified Shape-From-Shading Model (USFSM) and iToF
For the 3D reconstruction of the object captured by the NIR-SPI system (see Figure 7a,b), we applied the unified Shape-From-Shading model (USFSM), which builds 3D images from the spatial intensity variations of the recovered 2D image [39] (see Appendix A.2, Algorithm A2). However, the obtained mesh is of insufficient quality, presenting outliers and missing parts (see Figure 7c). To improve the mesh, we map the iToF depth information onto it (see Appendix A.3, Algorithm A3), generating a new mesh that is then processed with a heat diffusion filter [40] to remove the mentioned outliers (see Figure 7d) and with the power crust algorithm [41] (the original C++ code recompiled for use from Python) (see Figure 7e) to generate an improved mesh (see Figure 7f). For mapping the iToF information onto the SFS depth points, we use a four-point iTOF system consisting of four laser modules (see Figure 8a) that measure four reference depth points of the depicted scene. These reference points allow us to create a reference depth mesh that can be combined with the NIR-SPI 2D image point cloud generated using the SFS reconstruction (see Algorithm A2); from it, an initial 3D mesh is generated using the method described in the previous subsection. To generate the final 3D mesh, a method based on the ray tracing used in ToF scanning with a laser beam [42] is applied. For this, a strategy based on voxelization [43] is followed, in which the 3D mesh generation relies on surface fragmentation and coverage. Combining the point cloud obtained by the SFS method for NIR-SPI and the scene depth information obtained from the four reference points, a semi-even 3D point distribution [44] is obtained over the original mesh, with a distance (pitch) between each pair of points within the mesh of d = 5 mm. The defined vertices of the generated 3D mesh (see Algorithm A3) are used to divide the point cloud into four regions, each corresponding to one depth reference point defined through an independent iTOF measurement (see Figure 8b); the vertices of the mesh become the iTOF reference normalized depth points, and the remaining symbols in Figure 8c define the neighboring points in the point cloud. In this manner, additional points are included in the final point cloud, namely the positions of the points covering the triangles defined by Equation (13) [44], which form an angle between the vectors defined in Equation (14) [44] that is used to reduce the number of separate triangles (i.e., to remove the remaining space between adjacent meshes). Finally, after the voxelization [45] is applied, all triangles sharing the same voxel become part of the final mesh defined in Equation (15), creating a new final 3D mesh of the scene that incorporates the iTOF-originated depth reference points (see Figure 7f). A simplified sketch of the reference-depth fusion step is given after Equation (15).
(13)
(14)
(15)
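The full procedure (Equations (13)–(15), ray tracing, and voxelization) is involved; as a much-simplified illustration of the reference-depth fusion idea only, the sketch below assigns each SFS point to the nearest of the four iToF reference spots and shifts the relative SFS depths of that region onto the measured depth. All function and variable names are hypothetical, and this is not the authors' Algorithm A3.

```python
import numpy as np

def fuse_itof_references(points, ref_xy, ref_depth):
    """Illustrative fusion of four iToF reference depths with an SFS point cloud:
    each point is assigned to its nearest reference spot (four regions, Figure 8b),
    and the relative SFS depths in that region are shifted so that the point closest
    to the reference location takes the measured iToF depth. Simplified sketch only."""
    points = np.asarray(points, dtype=float)        # (N, 3): x, y, relative SFS depth
    ref_xy = np.asarray(ref_xy, dtype=float)        # (4, 2): reference spot positions
    ref_depth = np.asarray(ref_depth, dtype=float)  # (4,):   iToF-measured depths (m)
    # Assign every point to the nearest of the four laser reference spots.
    d2 = ((points[:, None, :2] - ref_xy[None]) ** 2).sum(-1)
    region = d2.argmin(axis=1)
    fused = points.copy()
    for r in range(len(ref_xy)):
        mask = region == r
        if not mask.any():
            continue
        anchor = points[mask][d2[mask, r].argmin(), 2]      # SFS depth at the spot
        fused[mask, 2] += ref_depth[r] - anchor             # shift region onto iToF depth
    return fused
```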
7. Experimental Results
To evaluate the capabilities of the 3D NIR-SPI system, we used a semi-direct light source to simulate the background illumination of outdoor conditions [46], with an optical power between 5 and 50 kLux. The scattering is provided by water droplets of 3 μm diameter (see Figure 4). We reconstructed images of four different types of objects placed 20 cm from the camera: a sphere with a 50 mm diameter, a torus-shaped object with an external diameter of 55 mm and an internal diameter of 25 mm, a cube with dimensions of 40 mm × 40 mm × 40 mm, and a U-shaped object with dimensions of 65 mm × 40 mm × 17 mm. The objects were placed inside the test chamber (see Figure 4). The NIR-SPI images were reconstructed using the four space-filling projections discussed in Section 2.1.
We determined the extinction coefficient k and, from the contrast relation of Equation (9), the maximum distance, using three materials with different reflection coefficients (see Table 2).
-
2D reconstruction: Two-dimensional (2D) image reconstruction with the NIR-SPI camera using the four scanning orderings described in Section 2.1 (basic Hadamard, Hilbert, Zig–Zag, and Spiral), in combination with the GPU-OMP algorithm [27] and the Fast Super-Resolution Convolutional Neural Network (FSRCNN) method with four upscaling factors [47]. For the reconstruction of the 2D single-pixel images, we decided to use 100% of the projected illumination patterns. We generated the following outdoor background-light scenarios using the described test bench: (1) very cloudy conditions (5 kLux), (2) half-cloudy conditions (15 kLux), (3) midday (30 kLux), and (4) clear-sky sun glare (40–50 kLux). To evaluate the quality of the reconstructed 2D images, we used the Structural Similarity Index (SSIM) [48] and the Peak Signal-to-Noise Ratio (PSNR) [49] as a function of the background illumination (see Figure 9).
For the highest background illumination level, the best-performing scanning method still provided good reconstruction quality (see Figure 9a), reaching a PSNR of 28 dB (see Figure 9b).
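A minimal sketch of this 2D quality evaluation using the scikit-image implementations of SSIM and PSNR (the reference and reconstructed images below are synthetic placeholders, not measured data):

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def image_quality(reference, reconstructed):
    """SSIM and PSNR of a reconstructed SPI image against a reference image.
    Both images are expected as float arrays scaled to [0, 1]."""
    ref = np.clip(np.asarray(reference, dtype=float), 0.0, 1.0)
    rec = np.clip(np.asarray(reconstructed, dtype=float), 0.0, 1.0)
    ssim = structural_similarity(ref, rec, data_range=1.0)
    psnr = peak_signal_noise_ratio(ref, rec, data_range=1.0)
    return ssim, psnr

# Illustrative use with a noisy copy standing in for a fog-degraded reconstruction.
rng = np.random.default_rng(1)
ref = np.zeros((64, 64)); ref[16:48, 16:48] = 1.0
rec = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0, 1)
print(image_quality(ref, rec))
```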
-
3D reconstruction: We carried out a 3D image reconstruction from the 2D NIR-SPI images (see Figure 10) and the iTOF information using Algorithms A2 and A3, under different background illumination conditions (very cloudy conditions (5 kLux) and half-cloudy conditions (15 kLux)). The resulting 3D images are shown in Figure 11. In this test, we calculated the RMSE, defined by Equation (16), and the skewness, which characterizes the symmetry of the 3D mesh cells: a value near 0 indicates a high-quality mesh, and a value close to 1 indicates a completely degenerate mesh [50] (see Figure 12). The improvement rate of Equation (17) indicates the percentage improvement of the 3D image reconstruction in terms of RMSE (see Table 3).
$$ \mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(z_{i} - \hat{z}_{i}\right)^{2}} \qquad (16) $$
$$ I_{\mathrm{RMSE}} = \frac{\mathrm{RMSE}_{\mathrm{SFS}} - \mathrm{RMSE}_{\mathrm{SFS+iToF}}}{\mathrm{RMSE}_{\mathrm{SFS}}} \times 100\% \qquad (17) $$
We can observe an improvement in the obtained 3D mesh compared with the first 3D reconstructions carried out using the SFS method alone (see Figure 12), mostly related to surface smoothing, correction of imperfections, and removal of outlying points. At 5 kLux, the best-performing space-filling method yields an improvement factor of 29.68%, followed by 28.68% for the second-best method (see Table 3); when the background illumination reaches 15 kLux, the corresponding improvements reach 34.14% and 28.24% (see Table 3). Applying the SFS method alone, the skewness of the mesh increases in the fog scenario from 0.6–0.7 (fair cell quality, see Table 4) to 0.8–1 (poor cell quality, see Table 5); that is, the cell quality degrades (see Figure 12a–c). To improve these values, the power crust algorithm integrated with iToF is used to reach a better skewness range: for the case without fog, the obtained skewness ranged from 0.02 to 0.2 (excellent cell quality, see Table 4), which corresponds to the recommended skewness values [50]. Under fog conditions, we aim for a mesh cell quality level below 0.5, which is considered good mesh quality (see Table 5). One of the scanning methods delivered a skewness level lower than that of the other space-filling methods, suggesting that it is less sensitive to noise.
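For completeness, a small sketch of the two figures of merit of Equations (16) and (17) as they could be computed on depth maps; the numeric example is illustrative only, chosen to fall near the ~30% improvement range reported in Table 3.

```python
import numpy as np

def rmse(depth_ref, depth_est):
    """Root Mean Square Error (Equation (16)) between reference and estimated depths."""
    d = np.asarray(depth_ref, dtype=float) - np.asarray(depth_est, dtype=float)
    return np.sqrt(np.mean(d ** 2))

def improvement_pct(rmse_sfs, rmse_fused):
    """Relative RMSE improvement (Equation (17)) of the fused SFS + iToF mesh
    over the plain SFS reconstruction, in percent."""
    return 100.0 * (rmse_sfs - rmse_fused) / rmse_sfs

# Illustrative numbers only (units arbitrary): ~29.8% improvement.
print(improvement_pct(rmse_sfs=8.4, rmse_fused=5.9))
```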
-
Evaluation of the image reconstruction time: An important parameter of 3D reconstruction in vision systems is the processing time required for this task. We therefore looked for the method with the lowest reconstruction time (see Table 6), considering the trade-off between the overall image quality and the time required for its reconstruction.
Figure 9. Image reconstruction using the NIR-SPI camera with the object placed 20 cm from the lens, using different scanning techniques in foggy conditions and varying the background illumination between 5 and 50 kLux: (a) SSIM and (b) PSNR.
[Figure omitted. See PDF]
Figure 10. Reconstruction using the 2D NIR-SPI camera with active illumination at a wavelength of λ = 1550 nm and the object placed 20 cm from the camera, for different scanning techniques under foggy conditions with a particle diameter of 3 μm and background light of 5 and 15 kLux, respectively: (a) 50 mm diameter sphere, (b) cube with dimensions of 40 mm × 40 mm × 40 mm, (c) torus (ring-like object) with an external diameter of 55 mm and an internal diameter of 25 mm, and (d) U-shaped object with dimensions of 65 mm × 40 mm × 17 mm.
[Figure omitted. See PDF]
Figure 11. Improved reconstructed 3D mesh at a distance of 20 cm from the focal lens, using different scanning techniques under foggy conditions with a particle size of 3 μm and background light of 5 and 15 kLux, respectively: (a) 50 mm diameter sphere, (b) cube with dimensions of 40 mm × 40 mm × 40 mm, (c) torus (ring-like object) with an external diameter of 55 mm and an internal diameter of 25 mm, and (d) U-shaped object with dimensions of 65 mm × 40 mm × 17 mm.
[Figure omitted. See PDF]
Figure 12. Three-dimensional (3D) mesh of the sphere without/with fog: (a) mesh without fog using SFS, skewness = 0.6; (b) mesh improved with power crust and iToF, skewness = 0.09; (c) mesh with fog using SFS, skewness = 0.8; and (d) mesh improved with power crust and iToF, skewness = 0.2.
[Figure omitted. See PDF]
Table 3. Improvement rate, expressed through the RMSE of Equation (17), of the reconstructed 3D image under foggy conditions with a particle diameter of 3 μm and background light of 5 and 15 kLux, respectively, after Algorithm A3 has been applied.
Scanning Method | 5 kLux | 15 kLux
---|---|---
— | 27.58% | 9.67%
— | 27.52% | 28.24%
— | 28.68% | 19.2%
— | 29.68% | 32.14%
Table 4. Perception of the surface quality of the three-dimensional (3D) images without fog, evaluated through the skewness.
Scanning Method | SFS | SFS + power crust + iToF
---|---|---
— | 0.65 | 0.09
— | 0.52 | 0.02
— | 0.66 | 0.2
— | 0.69 | 0.12
Table 5. Perception of the surface quality of the three-dimensional (3D) images under fog conditions, evaluated through the skewness.
Scanning Method | SFS | SFS + power crust + iToF
---|---|---
— | 0.82 | 0.2
— | 0.73 | 0.11
— | 1.06 | 0.34
— | 0.81 | 0.17
Table 6. Three-dimensional (3D) image reconstruction processing time (in ms) using SFS and Algorithm A3.
Scanning Method | SFS (ms) | Algorithm A3 (ms) | Total (ms)
---|---|---|---
— | 19.83 | 147.69 | 167.53
— | 19.18 | 127.36 | 146.54
— | 21.69 | 130.89 | 152.58
— | 24.95 | 133.53 | 158.49
Finally, we calculated the 3D reconstruction time (see Table 6), first applying the SFS method and subsequently Algorithm A3 to improve the 3D mesh (see Figure 11). We then compared the reconstruction time with the 3D mesh improvement rate and the skewness of the reconstructed 3D images obtained with the different scanning methods (see Table 7). It is important to take into account that reaching a higher 3D reconstruction quality requires longer processing times: for the scanning method yielding the best performance in terms of the 3D mesh improvement rate and skewness, the required reconstruction time was on the order of 146 ms.
8. Conclusions
This paper presents an NIR-SPI system prototype capable of generating 2D/3D images of depicted scenes in the presence of coarse fog. To evaluate the performance of the built system, a theoretical model of the entire NIR-SPI system operating under foggy conditions was first developed and used to quantify the light-scattering effects of the foggy environment on the quality of the 3D images generated by the system. This model was validated in the laboratory using a test bench that simulates outdoor conditions with coarse fog composed of droplets of 3 μm diameter and variable background illumination. A maximum detection range between 18 and 30 cm was assessed, reaching spatial resolutions between 4 and 6 mm, with a measuring accuracy between 95% and 97%, depending on the reflection index of the material used.
The 3D NIR-SPI image reconstruction is based on the combination of the iToF and photometric (SFS) approaches. For this, we defined a methodology that first evaluates the 2D SPI image quality through the SSIM and PSNR parameters, using four different space-filling (scanning) methods. We showed that two of the scanning methods offered the best performance when adapted to the SFS algorithm, which is mainly due to the fact that the SFS method strongly depends on the level of background illumination present. Thus, we proposed an algorithm in which we map the measured distances of four defined test points in the depicted scene, obtained by the four implemented iToF modules, to improve the final 3D image and overcome the limitations of the SFS method. The system complements the missing points on the surface of the depicted objects through a post-processing step based on heat diffusion filtering and the power crust algorithm. By applying the described method, we reach a mesh quality of 0.2 to 0.3 in terms of skewness under fog conditions (see Table 7), which is comparable with the performance of similar vision systems operating in fog-free environments.
Finally, we evaluated the 3D reconstruction in terms of the required computational time. The results indicate that the unmodified projection ordering (the basic Hadamard sequence) yielded the worst performance and was outperformed by the reordered space-filling methods. Based on the experimental evaluation performed, we conclude that in outdoor scenarios in the presence of fog, with a variable illumination background, the NIR-SPI system built delivers quite acceptable performance when applying the different space-filling (scanning) strategies, reaching good contrast levels and acceptable 2D image spatial resolutions of <30 mm, on which the 3D reconstruction is based. Due to the scattering effects, a robust 3D reconstruction method was proposed and proven to be effective. This study opens a new field of research for SPI vision systems in outdoor scenarios, e.g., where they could be integrated into the navigation systems of Unmanned Flight Vehicles (UFVs) as a primary or redundant sensor, with applications such as surface mapping or obstacle avoidance in fog or low-visibility environments [51,52].
9. Patents
Daniel Durini Romero, Carlos Alexander Osorio Quero, José de Jesús Rangel Magdaleno, José Martínez Carranza “Sistema híbrido de creación de imágenes 3D”, File-No.: MX/a/2020/012197, Priority date: 13 November 2020.
All authors contributed to the article. C.O.Q. proposed the concept, designed the optical system, and wrote the paper under the supervision of D.D., J.R.-M., J.M.-C. and R.R.-G., who supervised the experiments, analyzed the data, reviewed the manuscript, and provided valuable suggestions. All authors have read and agreed to the published version of the manuscript.
Not applicable.
Not applicable.
Not applicable.
The first author is thankful to Consejo Nacional de Ciencia y Tecnología (CONACYT) for his scholarship with No. CVU: 661331. The authors wish to thank Humberto García Flores, Head of the Illumination and Energy Efficiency (LIEE) laboratory of INAOE for the most appreciated help provided for developing the test bench and performing the experimental testing of the NIR SPI system.
The authors declare no conflict of interest.
The following abbreviations are used in this manuscript:
ADC | Analog to Digital Converters |
BLRR | Background Light Rejection Ratio |
C_int | Equivalent integration capacitance |
CS-SRCNN | Compressive Sensing-Super-Resolution Convolutional Neural Network |
CW-iTOF | Continuous-wave-Time-of-Flight |
DDS | Direct digital synthesis |
DMD | Digital micromirror device |
Fmod-eq | Frequency modulation equivalent |
FSRCNN | Fast Super-Resolution Convolutional Neural Network |
GPU | Graphics processing unit |
iTOF | Indirect Time-of-Flight |
InGaAs | Indium Gallium Arsenide |
NIR | Near infrared imaging |
OMP | Orthogonal matching pursuit |
PSNR | Peak Signal-to-Noise Ratio |
PDE | Partial differential equation |
RGB | Red–Green–Blue |
RMSE | Root mean square error |
SSIM | Structural Similarity Index Measure |
SFS | Shape-from-Shading |
SLM | Spatial light modulator |
SNR | Signal-to-noise ratio |
SPD | Single-Pixel Detector |
SPI | Single-Pixel Imaging |
UFV | unmanned flight vehicles |
USFSM | Unified Shape-From-Shading model |
VIS | Visible wavelengths |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Figure 1. Two different configurations for SPI: (a) structured detection: the object is illuminated by a light source, and the light it reflects is directed through a lens onto an SLM and then captured by the SPD; (b) structured illumination: the SLM device projects a sequence of patterns onto the object, and the reflected light is captured by the SPD. Representation of SPI based on [20].
Figure 2. Example of a Hadamard sequence scanning scheme applying different space-filling curves: (a) basic Hadamard sequence, (b) Hilbert scan [22], (c) Zig–Zag scan [23], and (d) Spiral scan [24].
Figure 3. Proposed 2D/3D NIR-SPI camera system: (a) the sequence used for the projection of the active illumination patterns and the reconstruction of 2D/3D images using the SPI approach; (b) the proposed NIR-SPI system and its subsystems (dimensions of 11 × 12 × 13 cm, weight of 1.3 kg, and power consumption of 25 W): InGaAs photodiode module, active illumination source, InGaAs FGA015 photodetector diode, graphics processing unit (GPU), and analog-to-digital converters (ADC).
Figure 4. Experimental setup for the NIR-SPI system prototype built. The test bench has a control system to emulate fog and background illumination. The test object is placed inside the glass box.
Figure 5. The operating range (108.3 kHz to 1.7 MHz) of the piezoelectric transducer generates fog particles with mean diameters between 3 and 180 μm.
Figure 6. Simulation of image contrast attenuation (degradation) using Matlab, due to the presence of fog with two different scattering coefficients (absorption set to zero), shown as a function of the light propagation distance.
Figure 7. Three-dimensional (3D) reconstruction schematic: (a) original image of the object, (b) reconstructed 2D image obtained using the SPI NIR system prototype, (c) 3D SFS with imperfections, gaps and outliers in the surface, (d) 3D image obtained after filtering, (e) 3D mesh obtained after using the power crust algorithm, and (f) the final and improved 3D image with iToF.
Figure 8. Three-dimensional (3D) final mesh generation using the CW-iTOF reference: (a) laser array and InGaAs photodetector, (b) definition of the reference regions, and (c) method of distributing the mesh points (d is the distance (pitch) between points; the remaining symbols denote the mesh vertices and the points covering the triangles).
Table 1. Figures of merit of the proposed CW-iTOF system working at a 1550 nm peak wavelength.
Parameters | Value
---|---
Quantum efficiency | 0.8 @ 1550 nm
C (equivalent integration capacitance) | 19 fF
A (effective photosensitive area) | 235
FF (fill factor) | 0.38
T (laser pulse width) | 65 ns
F (equivalent modulation frequency) | 4.8 MHz
T (integration time) | 150 μs
— | 1 cm
— | 10°
NED | 1
PR | 11.84
SNR | 20–30 dB
BLRR | −50 dB
Table 2. Theoretically obtained maximum distance at which the measurement can still be performed vs. the experimentally obtained distance under the same conditions.
Reflection Coefficient | 0.2 | 0.5 | 0.8
---|---|---|---
Theoretically calculated maximum measurement distance (cm) | 22.4 | 35 | 44
Theoretically calculated maximum measurement distance (cm) | 18 | 27 | 30.8
Experimentally obtained maximum measurement distance (cm) | 22 | 34.2 | 43.4
Experimentally obtained maximum measurement distance (cm) | 17.6 | 26.21 | 30.18
Table 7. Three-dimensional (3D) NIR-SPI performance summary under foggy conditions.
Scanning Method | Skewness | Improvement (%) | Total reconstruction time (ms)
---|---|---|---
— | 0.2 | 19 | 167.53
— | 0.1 | 28 | 146.54
— | 0.34 | 24 | 152.58
— | 0.17 | 31 | 158.49
Appendix A
Appendix A.1. Pseudocode for Estimating the Maximum Capture Distance of NIR-SPI Vision System
From conditions radiance of the object in scene (
Algorithm A1: Estimate maximum distance of the NIR-SPI |
[Image omitted. Please see PDF.] |
Appendix A.2. 3D Reconstruction of USFSM Using Fast Sweeping Algorithm
Using the fast sweeping method that obtains the depth information for the objects depicted in a scene, from an SPI image that corresponds to the surface point of the scene, we defined a surface
Algorithm A2: Fast sweeping algorithm for the Hamilton–Jacobi (H–J) equation based on the Lax–Friedrichs method [53] |
[Image omitted. Please see PDF.] |
Appendix A.3. iTOF Algorithm
An algorithm for mapping the iTOF depth information is proposed, based on the surface scanning method proposed in [
Algorithm A3: Finding the points of contact of the iTOF ray to generate mesh [ |
Function Generation-to-Mesh — Input: vectors with distance information; Output: mesh. [Remaining pseudocode omitted. Please see PDF.]
Algorithm A4: Semi-even distribution of points on a single triangle [ |
[Image omitted. Please see PDF.] |
References
1. Moon, H.; Martinez-Carranza, J.; Cieslewski, T.; Faessler, M.; Falanga, D.; Simovic, A.; Scaramuzza, D.; Li, S.; Ozo, M.; De Wagter, C. et al. Challenges and implemented technologies used in autonomous drone racing. Intell. Serv. Robot.; 2019; 12, pp. 137-148. [DOI: https://dx.doi.org/10.1007/s11370-018-00271-6]
2. Valenti, F.; Giaquinto, D.; Musto, L.; Zinelli, A.; Bertozzi, M.; Broggi, A. Enabling Computer Vision-Based Autonomous Navigation for Unmanned Aerial Vehicles in Cluttered GPS-Denied Environments. Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC); Maui, HI, USA, 4–7 November 2018; pp. 3886-3891. [DOI: https://dx.doi.org/10.1109/ITSC.2018.8569695]
3. Fujimura, Y.; Iiyama, M.; Hashimoto, A.; Minoh, M. Photometric Stereo in Participating Media Using an Analytical Solution for Shape-Dependent Forward Scatter. IEEE Trans. Pattern Anal. Mach. Intell.; 2020; 42, pp. 708-719. [DOI: https://dx.doi.org/10.1109/TPAMI.2018.2889088] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30582528]
4. Jiang, Y.; Sun, C.; Zhao, Y.; Yang, L. Fog Density Estimation and Image Defogging Based on Surrogate Modeling for Optical Depth. IEEE Trans. Image Process.; 2017; 26, pp. 3397-3409. [DOI: https://dx.doi.org/10.1109/TIP.2017.2700720] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28475053]
5. Narasimhan, S.; Nayar, S. Removing weather effects from monochrome images. Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2001; Kauai, HI, USA, 8–14 December 2001; Volume 2, II. [DOI: https://dx.doi.org/10.1109/CVPR.2001.990956]
6. Chen, Z.; Ou, B. Visibility Detection Algorithm of Single Fog Image Based on the Ratio of Wavelength Residual Energy. Math. Probl. Eng.; 2021; 2021, 5531706. [DOI: https://dx.doi.org/10.1155/2021/5531706]
7. Liu, W.; Hou, X.; Duan, J.; Qiu, G. End-to-End Single Image Fog Removal Using Enhanced Cycle Consistent Adversarial Networks. Trans. Img. Proc.; 2020; 29, pp. 7819-7833. [DOI: https://dx.doi.org/10.1109/TIP.2020.3007844]
8. Palvanov, A.; Giyenko, A.; Cho, Y. Development of Visibility Expectation System Based on Machine Learning. Proceedings of the 17th International Conference, CISIM 2018; Olomouc, Czech Republic, 27–29 September 2018; pp. 140-153. [DOI: https://dx.doi.org/10.1007/978-3-319-99954-8_13]
9. Katyal, S.; Kumar, S.; Sakhuja, R.; Gupta, S. Object Detection in Foggy Conditions by Fusion of Saliency Map and YOLO. Proceedings of the 2018 12th International Conference on Sensing Technology (ICST); Limerick, Ireland, 4–6 December 2018; pp. 154-159. [DOI: https://dx.doi.org/10.1109/ICSensT.2018.8603632]
10. Dannheim, C.; Icking, C.; Mader, M.; Sallis, P. Weather Detection in Vehicles by Means of Camera and LIDAR Systems. Proceedings of the 2014 Sixth International Conference on Computational Intelligence, Communication Systems and Networks; Bhopal, India, 27–29 May 2014; pp. 186-191. [DOI: https://dx.doi.org/10.1109/CICSyN.2014.47]
11. Guan, J.; Madani, S.; Jog, S.; Gupta, S.; Hassanieh, H. Through Fog High-Resolution Imaging Using Millimeter Wave Radar. Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); Seattle, WA, USA, 13–19 June 2020; pp. 11461-11470. [DOI: https://dx.doi.org/10.1109/CVPR42600.2020.01148]
12. Kijima, D.; Kushida, T.; Kitajima, H.; Tanaka, K.; Kubo, H.; Funatomi, T.; Mukaigawa, Y. Time-of-flight imaging in fog using multiple time-gated exposures. Opt. Express; 2021; 29, pp. 6453-6467. [DOI: https://dx.doi.org/10.1364/OE.416365]
13. Kang, X.; Fei, Z.; Duan, P.; Li, S. Fog Model-Based Hyperspectral Image Defogging. IEEE Trans. Geosci. Remote. Sens.; 2021; 60, pp. 1-12. [DOI: https://dx.doi.org/10.1109/TGRS.2021.3101491]
14. Thornton, M.P.; Judd, K.M.; Richards, A.A.; Redman, B.J. Multispectral short-range imaging through artificial fog. Proceedings of the Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XXX; Holst, G.C.; Krapels, K.A. International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2019; Volume 11001, pp. 340-350. [DOI: https://dx.doi.org/10.1117/12.2520451]
15. Bashkansky, M.; Park, S.D.; Reintjes, J. Single pixel structured imaging through fog. Appl. Opt.; 2021; 60, pp. 4793-4797. [DOI: https://dx.doi.org/10.1364/AO.425281]
16. Soltanlou, K.; Latifi, H. Three-dimensional imaging through scattering media using a single pixel detector. Appl. Opt.; 2019; 58, pp. 7716-7726. [DOI: https://dx.doi.org/10.1364/AO.58.007716] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31674452]
17. Zeng, X.; Chu, J.; Cao, W.; Kang, W.; Zhang, R. Visible–IR transmission enhancement through fog using circularly polarized light. Appl. Opt.; 2018; 57, pp. 6817-6822. [DOI: https://dx.doi.org/10.1364/AO.57.006817]
18. Tai, H.; Zhuang, Z.; Jiang, L.; Sun, D. Visibility Measurement in an Atmospheric Environment Simulation Chamber. Curr. Opt. Photon.; 2017; 1, pp. 186-195.
19. Gibson, G.M.; Johnson, S.D.; Padgett, M.J. Single-pixel imaging 12 years on: A review. Opt. Express; 2020; 28, pp. 28190-28208. [DOI: https://dx.doi.org/10.1364/OE.403195] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32988095]
20. Osorio Quero, C.A.; Durini, D.; Rangel-Magdaleno, J.; Martinez-Carranza, J. Single-pixel imaging: An overview of different methods to be used for 3D space reconstruction in harsh environments. Rev. Sci. Instrum.; 2021; 92, 111501. [DOI: https://dx.doi.org/10.1063/5.0050358] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34852525]
21. Zhang, Z.; Wang, X.; Zheng, G.; Zhong, J. Hadamard single-pixel imaging versus Fourier single-pixel imaging. Opt. Express; 2017; 25, pp. 19619-19639. [DOI: https://dx.doi.org/10.1364/OE.25.019619]
22. Ujang, U.; Anton, F.; Azri, S.; Rahman, A.; Mioc, D. 3D Hilbert Space Filling Curves in 3D City Modeling for Faster Spatial Queries. Int. J. 3D Inf. Model. (IJ3DIM); 2014; 3, pp. 1-18. [DOI: https://dx.doi.org/10.4018/ij3dim.2014040101]
23. Ma, H.; Sang, A.; Zhou, C.; An, X.; Song, L. A zigzag scanning ordering of four-dimensional Walsh basis for single-pixel imaging. Opt. Commun.; 2019; 443, pp. 69-75. [DOI: https://dx.doi.org/10.1016/j.optcom.2019.02.041]
24. Cabreira, T.M.; Franco, C.D.; Ferreira, P.R.; Buttazzo, G.C. Energy-Aware Spiral Coverage Path Planning for UAV Photogrammetric Applications. IEEE Robot. Autom. Lett.; 2018; 3, pp. 3662-3668. [DOI: https://dx.doi.org/10.1109/LRA.2018.2854967]
25. Zhang, R.; Tsai, P.S.; Cryer, J.; Shah, M. Shape-from-shading: A survey. IEEE Trans. Pattern Anal. Mach. Intell.; 1999; 21, pp. 690-706. [DOI: https://dx.doi.org/10.1109/34.784284]
26. Wang, G.; Zhang, X.; Cheng, J. A Unified Shape-From-Shading Approach for 3D Surface Reconstruction Using Fast Eikonal Solvers. Int. J. Opt.; 2020; 2020, 6156058. [DOI: https://dx.doi.org/10.1155/2020/6156058]
27. Quero, C.O.; Durini, D.; Ramos-Garcia, R.; Rangel-Magdaleno, J.; Martinez-Carranza, J. Hardware parallel architecture proposed to accelerate the orthogonal matching pursuit compressive sensing reconstruction. Proceedings of the Computational Imaging V; Tian, L.; Petruccelli, J.C.; Preza, C. International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2020; Volume 11396, pp. 56-63. [DOI: https://dx.doi.org/10.1117/12.2558937]
28. Laser Safety Facts. Available online: https://www.lasersafetyfacts.com/laserclasses.html (accessed on 28 April 2021).
29. Perenzoni, M.; Stoppa, D. Figures of Merit for Indirect Time-of-Flight 3D Cameras: Definition and Experimental Evaluation. Remote Sens.; 2011; 3, pp. 2461-2472. [DOI: https://dx.doi.org/10.3390/rs3112461]
30. Rajan, R.; Pandit, A. Correlations to predict droplet size in ultrasonic atomisation. Ultrasonics; 2001; 39, pp. 235-255. [DOI: https://dx.doi.org/10.1016/S0041-624X(01)00054-3]
31. Oakley, J.; Satherley, B. Improving image quality in poor visibility conditions using a physical model for contrast degradation. IEEE Trans. Image Process.; 1998; 7, pp. 167-179. [DOI: https://dx.doi.org/10.1109/83.660994] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/18267391]
32. Matzler, C. MATLAB functions for Mie scattering and absorption. IAP Res. Rep.; 2002; 8, Available online: http://www.atmo.arizona.edu/students/courselinks/spring09/atmo656b/maetzler_mie_v2.pdf (accessed on 28 April 2021).
33. Lee, Z.; Shang, S. Visibility: How Applicable is the Century-Old Koschmieder Model?. J. Atmos. Sci.; 2016; 73, pp. 4573-4581. [DOI: https://dx.doi.org/10.1175/JAS-D-16-0102.1]
34. Middleton, W.E.K. Vision through the Atmosphere. Geophysik II / Geophysics II; Bartels, J. Springer: Berlin/Heidelberg, Germany, 1957; pp. 254-287. [DOI: https://dx.doi.org/10.1007/978-3-642-45881-1_3]
35. Hautière, N.; Tarel, J.P.; Didier, A.; Dumont, E. Blind Contrast Enhancement Assessment by Gradient Ratioing at Visible Edges. Image Anal. Stereol.; 2008; 27, pp. 87-95. [DOI: https://dx.doi.org/10.5566/ias.v27.p87-95]
36. International Lighting Vocabulary = Vocabulaire International de L’éclairage. 1987; p. 365 Available online: https://cie.co.at/publications/international-lighting-vocabulary (accessed on 28 April 2021).
37. Süss, A. High Performance CMOS Range Imaging: Device Technology and Systems Considerations; Devices, Circuits, and Systems; CRC Press: Boca Raton, FL, USA, 2016.
38. Osorio Quero, C.A.; Romero, D.D.; Ramos-Garcia, R.; de Jesus Rangel-Magdaleno, J.; Martinez-Carranza, J. Towards a 3D Vision System based on Single-Pixel imaging and indirect Time-of-Flight for drone applications. Proceedings of the 2020 17th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE); Mexico City, Mexico, 11–13 November 2020; pp. 1-6. [DOI: https://dx.doi.org/10.1109/CCE50788.2020.9299125]
39. Tozza, S.; Falcone, M. Analysis and Approximation of Some Shape-from-Shading Models for Non-Lambertian Surfaces. J. Math. Imaging Vis.; 2016; 55, pp. 153-178. [DOI: https://dx.doi.org/10.1007/s10851-016-0636-x]
40. Peyré, G. Numerical Mesh Processing. Course Notes; Available online: https://hal.archives-ouvertes.fr/hal-00365931 (accessed on 28 April 2021).
41. Amenta, N.; Choi, S.; Kolluri, R.K. The Power Crust. Proceedings of the Sixth ACM Symposium on Solid Modeling and Applications; Association for Computing Machinery: New York, NY, USA, 2001; pp. 249-266. [DOI: https://dx.doi.org/10.1145/376957.376986]
42. Möller, T.; Trumbore, B. Fast, Minimum Storage Ray-Triangle Intersection. J. Graph. Tools; 1997; 2, pp. 21-28. [DOI: https://dx.doi.org/10.1080/10867651.1997.10487468]
43. Kaufman, A.; Cohen, D.; Yagel, R. Volume graphics. Computer; 1993; 26, pp. 51-64. [DOI: https://dx.doi.org/10.1109/MC.1993.274942]
44. Kot, T.; Bobovský, Z.; Heczko, D.; Vysocký, A.; Virgala, I.; Prada, E. Using Virtual Scanning to Find Optimal Configuration of a 3D Scanner Turntable for Scanning of Mechanical Parts. Sensors; 2021; 21, 5343. [DOI: https://dx.doi.org/10.3390/s21165343]
45. Huang, J.; Yagel, R.; Filippov, V.; Kurzion, Y. An accurate method for voxelizing polygon meshes. Proceedings of the IEEE Symposium on Volume Visualization (Cat. No.989EX300); Research Triangle Park, NC, USA, 19–20 October 1998; pp. 119-126. [DOI: https://dx.doi.org/10.1109/SVV.1998.729593]
46. Ravi, S.; Kurian, C. White light source towards spectrum tunable lighting—A review. Proceedings of the 2014 International Conference on Advances in Energy Conversion Technologies (ICAECT); Manipal, India, 23–25 January 2014; pp. 203-208. [DOI: https://dx.doi.org/10.1109/ICAECT.2014.6757088]
47. Dong, C.; Loy, C.C.; Tang, X. Accelerating the Super-Resolution Convolutional Neural Network. Proceedings of the European Conference on Computer Vision; Munich, Germany, 8–14 September 2016.
48. Zhu, Q.; Mai, J.; Shao, L. A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior. IEEE Trans. Image Process.; 2015; 24, pp. 3522-3533. [DOI: https://dx.doi.org/10.1109/TIP.2015.2446191]
49. Chen, T.; Liu, M.; Gao, T.; Cheng, P.; Mei, S.; Li, Y. A Fusion-Based Defogging Algorithm. Remote Sens.; 2022; 14, 425. [DOI: https://dx.doi.org/10.3390/rs14020425]
50. Budd, C.J.; McRae, A.T.; Cotter, C.J. The scaling and skewness of optimally transported meshes on the sphere. J. Comput. Phys.; 2018; 375, pp. 540-564. [DOI: https://dx.doi.org/10.1016/j.jcp.2018.08.028]
51. Rojas-Perez, L.O.; Martinez-Carranza, J. Metric monocular SLAM and colour segmentation for multiple obstacle avoidance in autonomous flight. Proceedings of the 2017 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED-UAS); Linköping, Sweden, 3–5 October 2017; pp. 234-239.
52. Dionisio-Ortega, S.; Rojas-Perez, L.O.; Martinez-Carranza, J.; Cruz-Vega, I. A deep learning approach towards autonomous flight in forest environments. Proceedings of the 2018 International Conference on Electronics, Communications and Computers (CONIELECOMP); Cholula, Mexico, 21–23 February 2018; pp. 139-144.
53. Kao, C.Y.; Osher, S.; Qian, J. Lax–Friedrichs sweeping scheme for static Hamilton–Jacobi equations. J. Comput. Phys.; 2004; 196, pp. 367-391. [DOI: https://dx.doi.org/10.1016/j.jcp.2003.11.007]
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
In the last decade, vision systems have improved their capability to capture 3D images in bad weather scenarios. Several techniques currently exist for image acquisition in foggy or rainy scenarios using infrared (IR) sensors. Due to the reduced light scattering in the IR spectrum, it is possible to discriminate the objects in a scene better than in images obtained in the visible spectrum. Therefore, in this work, we propose 3D image generation in foggy conditions using the single-pixel imaging (SPI) active illumination approach in combination with the Time-of-Flight technique (ToF) at a 1550 nm wavelength. For the generation of 3D images, we make use of space-filling projection with compressed sensing (CS-SRCNN) and depth information based on ToF. To evaluate the performance, the vision system included a designed test chamber to simulate different fog and background illumination environments and to calculate the parameters related to image quality.
1 Electronics Department, Instituto Nacional de Astrofísica, Óptica y Electrónica—INAOE, Calle Luis Enrique Erro 1, Puebla 72840, Mexico;
2 Computer Science Department, Instituto Nacional de Astrofísica, Óptica y Electrónica—INAOE, Calle Luis Enrique Erro 1, Puebla 72840, Mexico;
3 Optics Department, Instituto Nacional de Astrofísica, Óptica y Electrónica—INAOE, Calle Luis Enrique Erro 1, Puebla 72840, Mexico;