Abstract

In this research we discuss how modern techniques use deep neural networks (DNNs) to process low-light images. This involves denoising and exposure correction, which greatly improve the usability of such images.

Denoising and exposure correction are both well studied in the field of computer vision, and researchers have made significant progress in handling each problem separately. The ultimate goal of our work is to combine them into a single problem, since most realistic low-light images inherently contain noise. Traditionally, tailor-made solutions such as custom denoising filters were used for image enhancement. Recent applications have replaced these with DNN models due to their generalizability across different noise and exposure scenarios. However, these models need a large amount of realistic data for training, which is an issue: collecting it requires human labor, and it is infeasible to capture images that model all types of irregularities.

Therefore, our discussion extends to investigating ways of generating realistic noisy low-light images that account for both denoising and exposure-correction factors, which can then be used to train any suitable DNN model. The data-generation process requires a detailed understanding of the noise introduced by a digital camera's image-acquisition process. Our experiments conclude that, irrespective of the DNN model used, performance improves when the model is trained on our synthetic noisy dataset. We utilized an existing state-of-the-art (SOTA) DNN model named LLFlow, and its generalizability in recovering images improved greatly in terms of metrics such as PSNR (Peak Signal-to-Noise Ratio), SSIM, and LPIPS. It also avoided over-exposure during our testing, which is clearly visible in our lab images. We further tested the model on EarthCam images of New York City between June 6th and 8th, when the city was clouded with Canadian wildfire smoke. Though our model was not trained on that kind of data, it performed fairly well at what it is supposed to do, i.e., improving the illumination of the images. This demonstrates the immense importance of the data on which we train our models.
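The synthesis described above can be illustrated with a minimal sketch: darken a clean image in (approximately) linear intensity space and add signal-dependent shot noise plus signal-independent read noise, mimicking a camera's acquisition pipeline. All parameter values and the function name below are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np

def synthesize_low_light(clean, exposure=0.1, gamma=2.2,
                         read_sigma=0.01, photons=500.0, rng=None):
    """Darken a clean image in [0, 1] and add Poisson-Gaussian sensor noise.

    Parameters here (exposure, gamma, read_sigma, photons) are illustrative;
    a real pipeline would calibrate them from the target camera.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Undo display gamma to work in approximately linear intensity.
    linear = np.clip(clean, 0.0, 1.0) ** gamma
    # Simulate under-exposure by scaling down the linear signal.
    dark = linear * exposure
    # Shot noise: photon counts follow a Poisson distribution.
    shot = rng.poisson(dark * photons) / photons
    # Read noise: signal-independent Gaussian noise from the sensor.
    noisy = shot + rng.normal(0.0, read_sigma, size=dark.shape)
    # Re-apply gamma so the result resembles an ordinary sRGB image.
    return np.clip(noisy, 0.0, 1.0) ** (1.0 / gamma)

# Usage: each well-exposed image yields a (noisy input, clean target) pair
# for training a restoration model.
clean = np.random.default_rng(0).random((64, 64, 3))
noisy = synthesize_low_light(clean, rng=np.random.default_rng(1))
```

Pairing each synthesized low-light image with its clean source gives supervised training data without manual capture, which is the practical point of the approach.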

Details

Title
Image Enhancement for Unconstrained Environments
Author
Bagchi, Sougato
Publication year
2023
Publisher
ProQuest Dissertations & Theses
ISBN
9798380349611
Source type
Dissertation or Thesis
Language of publication
English
ProQuest document ID
2866002308
Copyright
Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

Supplemental files

Document includes 1 supplemental file(s).


thesis_presentation.pdf (4.06 MB)