Single-image super-resolution (SISR) in unconstrained environments is challenging due to varied illumination, occlusions, and complex scenes. With the development of deep learning in computer vision, recent research has achieved remarkable progress on super-resolution tasks. We aim to improve the performance of deep-learning-based super-resolution across different scenarios. Existing works are inherently limited by defective transfer structures, such as lossy pooling and one-way information transfer between convolutional layers. To address these limitations, we propose a dense U-Net with a shuffle pooling method, a scalable down-sampling method for extracting features from shallow layers. The modified U-Net with dense blocks for SISR introduces multi-way information transfer between high-to-low and low-to-high convolutional paths. Meanwhile, an innovative shuffle pooling module is designed to retain all information in the feature map of the previous layer; it is effective and easy to port. To jointly penalize intensity, structural, and gradient errors, a hybrid loss function is developed that combines mean square error, the structural similarity index, and mean gradient error. The proposed method achieves superior accuracy over previous state-of-the-art methods on three benchmark datasets: Set14, the Berkeley Segmentation Dataset 300, and the International Conference on Document Analysis and Recognition (ICDAR) 2003 dataset.
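The abstract names two concrete components: a shuffle pooling module that down-samples without discarding activations, and a hybrid loss combining MSE, SSIM, and mean gradient error. The sketch below is a minimal NumPy illustration of both ideas, assuming shuffle pooling is a space-to-depth rearrangement and using a simplified single-window SSIM; the function names and loss weights are illustrative, not the paper's actual implementation.

```python
import numpy as np

def shuffle_pool(x, r=2):
    """Space-to-depth down-sampling: each r x r spatial block is moved
    into the channel dimension, so no activations are discarded
    (unlike max or average pooling)."""
    c, h, w = x.shape
    x = x.reshape(c, h // r, r, w // r, r)
    x = x.transpose(0, 2, 4, 1, 3)             # (C, r, r, H/r, W/r)
    return x.reshape(c * r * r, h // r, w // r)

def mse(a, b):
    return np.mean((a - b) ** 2)

def mean_gradient_error(a, b):
    # Mean absolute difference of finite-difference gradients on both axes.
    gx = np.abs(np.diff(a, axis=-1) - np.diff(b, axis=-1))
    gy = np.abs(np.diff(a, axis=-2) - np.diff(b, axis=-2))
    return gx.mean() + gy.mean()

def ssim_global(a, b, c1=0.01 ** 2, c2=0.03 ** 2):
    # Single-window SSIM over the whole image, a simplification of the
    # usual sliding-window SSIM index.
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def hybrid_loss(pred, target, w_ssim=0.1, w_grad=0.1):
    # Weights are illustrative; the abstract does not give the weighting.
    return (mse(pred, target)
            + w_ssim * (1.0 - ssim_global(pred, target))
            + w_grad * mean_gradient_error(pred, target))

# Down-sampling a (2, 4, 4) feature map keeps all 32 values:
x = np.arange(2 * 4 * 4, dtype=np.float64).reshape(2, 4, 4)
pooled = shuffle_pool(x)
print(pooled.shape)  # (8, 2, 2)
```

Because the spatial resolution halves while the channel count quadruples, a multiset comparison of `pooled` and `x` shows the module is lossless, which is the stated motivation for replacing standard pooling.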
Keywords: super resolution; convolution; image quality; image restoration; image segmentation; network architectures; RGB color model