Imaging through degraded visual environments is a challenging task in remote sensing missions. Image degradation may come from loss of contrast due to particle scattering and/or distortion due to turbulence-induced effects. The problem is especially challenging when imaging from moving platforms such as autonomous underwater vehicles. One potential approach to address these issues is to use multiple images and employ a multi-frame image fusion technique to aid in the recovery of corrupted image detail and quality. A machine learning (ML)-based image enhancement and fusion technique is investigated to restore images distorted by underwater turbulence. The main contributions include the incorporation of an ML-based image weight predictor that predicts the ideal weight maps to be used in an image fusion process. This network is trained using a generative adversarial network framework and a synthetically generated image dataset, which is created according to an analytical image degradation model. In addition, the image loss function for the weight map predictor is determined by the final fused image, resulting in a balanced fusion technique that can reduce image distortion, recover crisp image detail, and reduce the overall noise level. Another key contribution of this paper is the adoption of an image loss function that combines an innovative correntropy term with a Fourier-space term to reinforce the network in both the spatial and frequency domains. The performance of the proposed algorithm is evaluated using a synthetic validation dataset of images and several real datasets captured at the Naval Research Lab’s Simulated Turbulence and Turbidity Environment under various controlled turbulence intensities.
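The abstract describes two concrete components: a per-pixel weighted fusion of multiple frames, and a combined correntropy plus Fourier-space loss. The paper's exact formulations are not given here, so the sketch below is illustrative only; the kernel bandwidth `sigma` and the weights `alpha` and `beta` are assumed hyperparameters, and the Gaussian-kernel correntropy and FFT L1 terms are standard choices rather than the authors' confirmed definitions.

```python
import numpy as np

def fuse_frames(frames, weight_maps):
    # frames, weight_maps: arrays of shape (N, H, W).
    # Normalise the predicted weight maps per pixel so they sum to 1,
    # then blend the N frames into a single fused image.
    w = weight_maps / np.sum(weight_maps, axis=0, keepdims=True)
    return np.sum(w * frames, axis=0)

def correntropy_loss(pred, target, sigma=0.1):
    # Correntropy similarity with a Gaussian kernel; subtracting from 1
    # turns it into a loss that is 0 for identical images and robust to
    # outliers (large per-pixel errors saturate the kernel).
    diff = pred - target
    return 1.0 - np.mean(np.exp(-diff**2 / (2.0 * sigma**2)))

def fourier_loss(pred, target):
    # Mean L1 distance between 2-D FFT coefficients, penalising
    # frequency-domain errors such as lost high-frequency detail.
    return np.mean(np.abs(np.fft.fft2(pred) - np.fft.fft2(target)))

def combined_loss(pred, target, alpha=1.0, beta=0.01, sigma=0.1):
    # Weighted sum of the spatial (correntropy) and frequency (Fourier)
    # terms; alpha, beta are assumed balancing hyperparameters.
    return (alpha * correntropy_loss(pred, target, sigma)
            + beta * fourier_loss(pred, target))
```

In a training loop, `combined_loss` would be applied to the output of `fuse_frames`, so gradients reach the weight-map predictor through the fused image, matching the abstract's statement that the loss is determined by the final fused result.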
Cited by 3 scholarly publications.
Keywords: Image fusion, Turbulence, Image restoration, Image enhancement, Windows, Machine learning, Image processing