The visual quality of nighttime photographs is greatly diminished by low contrast and high noise. A robust image enhancement methodology is needed to bring such low-light images close to standard daylight images. Because of the deteriorated conditions of uneven illumination and noise, this image enhancement problem is ill-posed. This paper proposes the Densely Residual Attention Network (DRANet), an end-to-end attention-based densely residual network. The architecture of DRANet consists of two sub-modules: a convolution block (CB) and a densely residual feature - convolutional block attention module (DRF-CBAM). DRF-CBAM in turn comprises a deep residual feature block (DRFB) and a convolutional block attention module (CBAM). Building on recent results from attention- and deep-residual-based convolutional networks across a number of computer vision problems, we use the DRFB to enhance features in depth through its dense and residual skip connections, while the lightweight CBAM attention module extracts features along both the spatial and channel axes. A color balancing function implemented at the end of the proposed network balances the contrast, luminosity, and noise of the enhanced images. Furthermore, we train with a combination of the L_Lab, L_SSIM, and L_MAE loss functions to stabilize the network and recover both contextual and local details. MAE, PSNR, SSIM, MS-SSIM, FSIM, cosine similarity, and deltaE2000 serve as full-reference image quality assessment (IQA) metrics, and NIQE as a no-reference IQA metric. Experimental results show that the proposed methodology is highly effective, yielding higher full-reference and lower no-reference IQA scores. The effectiveness of our method is also demonstrated by the visual and perceptual quality of the enhanced images.
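The CBAM sub-module referenced above applies channel attention followed by spatial attention to a feature map. The following is a minimal NumPy sketch of that sequence, not the paper's implementation: the weights are random stand-ins for learned parameters, and a simple 1x1 channel-mixing step substitutes for the 7x7 convolution used in CBAM's spatial branch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, reduction=4, seed=0):
    # x: feature map of shape (C, H, W)
    C = x.shape[0]
    avg = x.mean(axis=(1, 2))  # (C,) global average pooling
    mx = x.max(axis=(1, 2))    # (C,) global max pooling
    # shared two-layer MLP (random weights here; learned in practice)
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((C // reduction, C)) * 0.1
    W2 = rng.standard_normal((C, C // reduction)) * 0.1
    mlp = lambda v: W2 @ np.maximum(W1 @ v, 0.0)
    s = sigmoid(mlp(avg) + mlp(mx))       # per-channel gate, (C,)
    return x * s[:, None, None]

def spatial_attention(x):
    # x: (C, H, W); pool across the channel axis
    avg = x.mean(axis=0, keepdims=True)   # (1, H, W)
    mx = x.max(axis=0, keepdims=True)     # (1, H, W)
    feat = np.concatenate([avg, mx], axis=0)  # (2, H, W)
    # 1x1 mixing as a stand-in for CBAM's 7x7 convolution
    w = np.array([0.5, 0.5])
    m = sigmoid(np.tensordot(w, feat, axes=1))  # per-pixel gate, (H, W)
    return x * m[None, :, :]

# toy feature map: 8 channels, 16x16 spatial resolution
x = np.random.default_rng(1).standard_normal((8, 16, 16))
y = spatial_attention(channel_attention(x))
```

Both gates are multiplicative, so the module preserves the feature map's shape while reweighting it along the channel and spatial axes in turn.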
Underwater image processing and analysis have become a research hotspot in recent years, as more emphasis has been placed on underwater monitoring and the use of marine resources. Compared with the open environment, underwater images suffer from more complicated conditions such as light absorption, scattering, turbulence, nonuniform illumination, and color diffusion. Although considerable advances in enhancement techniques have been made toward resolving these issues, existing methods treat low-frequency information equally across the entire channel, which limits the network's representational capacity. We propose a deep learning and feature-attention-based end-to-end network (FA-Net) to solve this problem. In particular, we propose a Residual Feature Attention Block (RFAB) containing channel attention, pixel attention, and a residual learning mechanism with long and short skip connections. RFAB allows the network to focus on learning high-frequency information while low-frequency information bypasses it through the skip connections. The channel and pixel attention mechanisms account for each channel's different features and the uneven distribution of haze over different pixels in the image. Experimental results show that the proposed FA-Net achieves higher accuracy, both quantitatively and qualitatively, and outperforms previous state-of-the-art methods.
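The RFAB described above combines a per-channel gate, a per-pixel gate, and a residual skip. The sketch below illustrates that structure in NumPy under simplifying assumptions, not the paper's implementation: random weights stand in for learned ones, and 1x1 channel mixing replaces the block's convolutions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rfab(x, reduction=2, seed=0):
    # x: feature map of shape (C, H, W)
    C = x.shape[0]
    rng = np.random.default_rng(seed)
    # channel attention: global average pool -> small MLP -> per-channel gate
    W1 = rng.standard_normal((C // reduction, C)) * 0.1
    W2 = rng.standard_normal((C, C // reduction)) * 0.1
    g = sigmoid(W2 @ np.maximum(W1 @ x.mean(axis=(1, 2)), 0.0))  # (C,)
    ca = x * g[:, None, None]
    # pixel attention: 1x1 channel mixing -> per-pixel gate
    w = rng.standard_normal(C) * 0.1
    m = sigmoid(np.tensordot(w, ca, axes=1))  # (H, W)
    pa = ca * m[None, :, :]
    # short residual skip: low-frequency content bypasses the attention path
    return x + pa

# toy feature map: 8 channels, 16x16 spatial resolution
x = np.random.default_rng(1).standard_normal((8, 16, 16))
y = rfab(x)
```

The final addition is the short skip connection: the attention branch only needs to model the high-frequency residual, while the input passes through unchanged.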