Abstract
Effective imaging within volumetric scattering media is important yet challenging, especially in macroscopic applications. Recent works have demonstrated the ability to image through scattering media, or within weakly scattering volumetric media, using the spatial distribution or temporal characteristics of the scattered field. Here, we focus on imaging Lambertian objects embedded in highly scattering media, where signal photons are dramatically attenuated during propagation and strongly coupled with background photons. We address these challenges with a time-to-space boundary migration model (BMM) of the scattered field that converts scattered measurements in spectral form into scene information in the temporal domain using all of the optical signal. Experiments are conducted in two typical scattering scenarios, 2D and 3D Lambertian objects embedded in polyethylene foam and in fog, and demonstrate the effectiveness of the proposed algorithm. It outperforms related methods, including time gating, in terms of reconstruction precision and tolerable scattering strength. Even when signal photons make up only 0.75% of the total, Lambertian objects located at more than 25 transport mean free paths (TMFPs), corresponding to a round-trip scattering length of more than 50 TMFPs, can be reconstructed. The proposed method also offers low reconstruction complexity and millisecond-scale runtime, which greatly facilitates practical application.
Imaging in scattering media is challenging due to signal attenuation and the strong coupling of scattered and signal photons. The authors present a boundary migration model of the scattered field that converts scattered measurements in spectral form into scene information in the temporal domain, and use it to image Lambertian objects embedded in highly scattering media.
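To make the time-to-space conversion concrete, the sketch below is a minimal, hypothetical illustration and not the authors' BMM (which operates on the scattered field in spectral form): it simply migrates a round-trip photon-arrival histogram onto a spatial boundary grid via z = c·t/(2n). The function names, the synthetic histogram, and the refractive-index constant are assumptions introduced purely for illustration.

```python
import numpy as np

# Illustrative sketch only: a naive time-to-space "boundary migration".
# The constants, array shapes, and function names below are assumptions,
# not the paper's actual BMM formulation.

C = 3e8          # speed of light in vacuum (m/s)
N_MEDIUM = 1.0   # assumed refractive index of the scattering medium

def time_bins_to_depth(t_bins, n=N_MEDIUM):
    """Map round-trip arrival times to one-way boundary depths: z = c*t / (2n)."""
    return C * t_bins / (2.0 * n)

def migrate_histogram(histogram, t_bins, z_grid, n=N_MEDIUM):
    """Resample a photon-arrival histogram from the time axis onto a depth grid."""
    z_of_t = time_bins_to_depth(t_bins, n)
    # Linear interpolation; z_of_t is monotonically increasing with t_bins.
    return np.interp(z_grid, z_of_t, histogram, left=0.0, right=0.0)

# Example: a 10 ns capture window with 1024 time bins, migrated onto a 1.5 m grid.
t_bins = np.linspace(0.0, 10e-9, 1024)
histogram = np.exp(-((t_bins - 4e-9) / 0.5e-9) ** 2)  # synthetic return peak at 4 ns
z_grid = np.linspace(0.0, 1.5, 512)
profile = migrate_histogram(histogram, t_bins, z_grid)
print("peak boundary depth ~ %.2f m" % z_grid[profile.argmax()])  # ~0.60 m
```

In the same spirit, the paper's scattering-depth figures follow directly from the definition of the transport mean free path: an object at depth L = 25 l* sits at 25 TMFPs, and the illumination-plus-return path covers 2L = 50 l*, i.e., more than 50 TMFPs round trip.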
Details
1 Tsinghua University, Shenzhen International Graduate School, Shenzhen, China (GRID:grid.12527.33) (ISNI:0000 0001 0662 3178)
2 Tsinghua University, Department of Automation, Beijing, China (GRID:grid.12527.33) (ISNI:0000 0001 0662 3178); Tsinghua University, Institute for Brain and Cognitive Sciences, Beijing, China (GRID:grid.12527.33) (ISNI:0000 0001 0662 3178); Tsinghua University, Beijing National Research Center for Information Science and Technology, Beijing, China (GRID:grid.12527.33) (ISNI:0000 0001 0662 3178)
3 Tsinghua Innovation Center in Zhuhai, Zhuhai, China (GRID:grid.12527.33)