KEYWORDS: Cameras, Education and training, Data modeling, Visualization, Displays, Super resolution, Visual process modeling, Color, Photography, Image processing algorithms and systems
In real-world applications, there are many scenarios where people need to capture digital screens. While the quality of digital images captured by cameras and mobile phones is constantly improving, taking high-resolution photographs of digital screens remains challenging. Beyond the camera sensor, the display screen introduces additional, more complicated degradations, such as noise and color distortion. However, few studies of single image super-resolution (SISR) have focused on camera-screen degradation. In this paper, we build the first camera-screen degraded dataset (CamScreenSR), where the HR images are the original ground truths from the DIV2K dataset and the corresponding LR images are camera-captured versions of the HR images displayed on a screen. Moreover, we propose a joint two-stage model consisting of a downsampling degradation GAN (DD-GAN) and a dual residual channel attention network (DuRCAN). Specifically, DD-GAN first learns the real degradation to synthesize more varied LR images, and DuRCAN then learns to recover the mixed real and synthetic LR images, supervised with the paired HR ground truths. We also apply a Laplacian loss to sharpen high-frequency edges. Extensive experiments validate that our proposed method outperforms existing SOTA models on both synthetic and real-degraded datasets. Moreover, on real captured photographs, our model also delivers the best visual quality, with sharper edges, fewer artifacts, and, in particular, appropriate color enhancement, which previous methods have not achieved.
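The Laplacian loss mentioned above can be sketched as the mean absolute difference between Laplacian-filtered versions of the super-resolved and ground-truth images, which penalizes mismatched high-frequency edges. The 3x3 kernel and the L1 comparison below are common choices, not necessarily the exact formulation used in the paper; a minimal numpy sketch:

```python
import numpy as np

# 3x3 discrete Laplacian kernel (a common choice; the paper's exact
# kernel and weighting are assumptions here).
LAPLACIAN_KERNEL = np.array([[0,  1, 0],
                             [1, -4, 1],
                             [0,  1, 0]], dtype=np.float64)

def laplacian_filter(img):
    """Convolve a 2-D image with the Laplacian kernel (zero padding)."""
    padded = np.pad(img, 1, mode="constant")
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float64)
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN_KERNEL[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

def laplacian_loss(sr, hr):
    """Mean absolute difference between Laplacian responses of SR and HR.

    High responses occur at edges, so this term complements a plain
    pixel-wise loss, which tends to produce over-smooth results.
    """
    return np.mean(np.abs(laplacian_filter(sr) - laplacian_filter(hr)))
```

In a training loop this term would typically be added, with a small weight, to the usual pixel-wise reconstruction loss.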
Human gait, as a soft biometric, enables people to be recognized by the way they walk. To further improve recognition performance under cross-view conditions, we propose Joint Bayesian to model the view variance. We evaluated our proposed method on the largest population (OULP) dataset, which makes our results statistically reliable. As a result, we confirmed that our proposed method significantly outperforms state-of-the-art approaches for both identification and verification tasks. Finally, a sensitivity analysis on the number of training subjects was conducted; we find that Joint Bayesian achieves competitive results even with a small subset of training subjects (100 subjects). For further comparison, the experimental results, learned models, and test code are available.
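The Joint Bayesian model scores a pair of features by a log-likelihood ratio: each feature is modeled as x = mu + eps, with identity component mu ~ N(0, S_mu) and within-identity variation eps ~ N(0, S_eps) (here, view change). A sketch of the verification score, assuming the two covariance matrices have already been estimated from training gait features (the variable names are illustrative):

```python
import numpy as np

def joint_bayesian_score(x1, x2, S_mu, S_eps):
    """Log-likelihood ratio: log P(x1,x2 | same) - log P(x1,x2 | different).

    If the pair shares an identity, the joint covariance has off-diagonal
    block S_mu (the shared identity component correlates the features);
    otherwise the two features are independent.
    """
    d = len(x1)
    z = np.concatenate([x1, x2])
    zero = np.zeros((d, d))
    Sigma_same = np.block([[S_mu + S_eps, S_mu],
                           [S_mu, S_mu + S_eps]])
    Sigma_diff = np.block([[S_mu + S_eps, zero],
                           [zero, S_mu + S_eps]])

    def log_gauss(v, Sigma):
        # Zero-mean multivariate normal log-density.
        _, logdet = np.linalg.slogdet(Sigma)
        return -0.5 * (v @ np.linalg.solve(Sigma, v)
                       + logdet + len(v) * np.log(2 * np.pi))

    return log_gauss(z, Sigma_same) - log_gauss(z, Sigma_diff)
```

A positive score favors the same-subject hypothesis; in practice the block-matrix inverses are precomputed in closed form rather than solved per pair.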