Due to the complex nature of the underwater environment, underwater images often suffer from degradation such as low contrast, blurring, and color distortion. Obtaining clear underwater images is crucial for advancing marine development. Although existing convolution-based methods for underwater image enhancement have achieved noticeable improvements in visual quality, they still fall short in two key respects: a limited ability to capture contextual information, and information redundancy during image reconstruction. In this work, we propose MAGAN-UIE, a novel generative adversarial network for underwater image enhancement. MAGAN-UIE combines dilated convolutions and depth-wise convolutions to efficiently extract both local features and contextual information, and it incorporates multiple attention mechanisms to mitigate information redundancy. Extensive experiments demonstrate that the proposed method achieves significant improvements in underwater image enhancement, as evidenced by both visual inspection and quantitative evaluation metrics.
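The abstract does not specify the exact layer configuration of MAGAN-UIE, so the following is only an illustrative sketch of how the named components (a dilated convolution for context, a depth-wise convolution for efficient local features, and a channel-attention branch to suppress redundant channels) might be combined in a single generator block; the module name, channel sizes, and attention design here are assumptions, not the authors' implementation.

```python
# Illustrative sketch only: layer sizes and the ContextBlock name are assumptions,
# not the actual MAGAN-UIE architecture described in the paper.
import torch
import torch.nn as nn


class ContextBlock(nn.Module):
    """Hypothetical feature-extraction block pairing dilated and depth-wise
    convolutions with channel attention."""

    def __init__(self, channels: int, dilation: int = 2):
        super().__init__()
        # Dilated convolution enlarges the receptive field for contextual cues.
        self.dilated = nn.Conv2d(channels, channels, kernel_size=3,
                                 padding=dilation, dilation=dilation)
        # Depth-wise + point-wise pair extracts local features cheaply.
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=1, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1)
        # Squeeze-and-excitation-style channel attention (assumed design).
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        context = self.act(self.dilated(x))          # contextual information
        local = self.act(self.pointwise(self.depthwise(x)))  # local features
        fused = context + local
        # Re-weight channels to reduce redundant information, then add a skip.
        return x + fused * self.attention(fused)


if __name__ == "__main__":
    block = ContextBlock(channels=64)
    out = block(torch.randn(1, 64, 128, 128))
    print(out.shape)  # torch.Size([1, 64, 128, 128])
```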