Yi lacquerware patterns are rich in content and exquisite in design, with strong regional character and cultural connotations. Through field investigation, literature review, and study of physical artifacts, this study analyzes and extracts the pattern themes, morphological structures, and typical colors of Yi lacquerware from a design perspective. We redesign the extracted basic patterns using a morphological displacement method and combine them with arrangement rules to create derivative patterns. The derivative patterns are then applied to a modern women's leather bag design, with modeling and simulated presentation carried out in 3D digital software. Finally, we evaluate the design results through eye-tracking experiments. The results show that the proposed design method can produce leather bag products with ethnic characteristics that also meet the aesthetic expectations of modern women. The cross-domain migration of Yi lacquerware patterns to leather bags brings new artistic connotations and cultural meanings to contemporary leather products.
Single-image super-resolution (SISR) in unconstrained environments is challenging due to varying illumination, occlusion, and complex scenes. Recent research has achieved remarkable progress on super-resolution tasks with the development of deep learning in computer vision. We aim to improve the performance of deep-learning-based super-resolution across different scenarios. Existing works are limited by defective transfer structures, such as lossy pooling methods and one-way information transfer between convolutional layers. To address these limitations, we propose a dense U-Net with a shuffle pooling method, a scalable down-sampling approach for extracting features from shallow layers. The modified U-Net with dense blocks introduces multi-way information transfer between high-to-low and low-to-high convolutional paths. Meanwhile, an innovative shuffle pooling module is designed to retain all information in the feature map of the previous layer; it is effective and easy to port. To minimize structural dissimilarity and gradient error, we develop a hybrid loss function that combines mean square error, the structural similarity index, and mean gradient error. The proposed method achieves superior accuracy over previous state-of-the-art methods on three benchmark datasets: Set14, the Berkeley Segmentation Dataset 300, and the ICDAR 2003 dataset.
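The abstract does not give the shuffle pooling formulation, but its stated goal, down-sampling while retaining all information from the previous layer's feature map, matches a space-to-depth rearrangement. The sketch below is an illustrative assumption, not the paper's implementation: the function name `shuffle_pool`, the factor `r`, and the NumPy channel-last layout are all choices made here to show how a lossless down-sampling step can work.

```python
import numpy as np

def shuffle_pool(x, r=2):
    """Downsample an (H, W, C) feature map by factor r without discarding
    any values: each r x r spatial block is folded into the channel
    dimension (space-to-depth), so the output is (H/r, W/r, r*r*C)."""
    h, w, c = x.shape
    assert h % r == 0 and w % r == 0, "spatial dims must be divisible by r"
    x = x.reshape(h // r, r, w // r, r, c)
    x = x.transpose(0, 2, 1, 3, 4)          # group each r x r block together
    return x.reshape(h // r, w // r, r * r * c)
```

Unlike max or average pooling, every input value survives in the output, which is consistent with the abstract's claim that the module "retains all information" from the previous layer.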
Deep-learning-based approaches to depth estimation are advancing rapidly, offering superior performance over existing methods. To estimate depth in real-world scenarios, depth estimation models must be robust to various noise environments. We propose a pyramid frequency network (PFN) with a spatial attention residual refinement module (SARRM) to address the weak robustness of existing deep-learning methods. To reconstruct depth maps with accurate details, the SARRM uses a residual fusion method with an attention mechanism to refine the blurred depth. A frequency-division strategy is designed, and the pyramid frequency network is developed to extract features from multiple frequency bands. With this strategy, PFN achieves better visual accuracy than state-of-the-art methods in both indoor and outdoor scenes on the Make3D, KITTI depth, and NYUv2 datasets. Additional experiments on a noisy NYUv2 dataset demonstrate that PFN is more reliable than existing deep-learning methods in high-noise scenes.
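The frequency-division strategy is only named in the abstract, so its exact form is unknown; one plausible reading is splitting the input into frequency bands before per-band feature extraction. The sketch below is an assumption made for illustration (the function name `frequency_bands`, the radial FFT masks, and the cutoff values are not from the paper): it partitions a 2-D signal into low, mid, and high bands that sum back to the original.

```python
import numpy as np

def frequency_bands(img, cutoffs=(0.15, 0.45)):
    """Partition a 2-D image into low/mid/high frequency bands using
    radial masks in the FFT domain. The masks are disjoint and cover the
    whole spectrum, so the bands sum back to the original image."""
    h, w = img.shape
    spec = np.fft.fftshift(np.fft.fft2(img))
    yy, xx = np.mgrid[:h, :w]
    # normalised distance from the spectrum centre (zero frequency)
    r = np.sqrt(((yy - h // 2) / (h / 2)) ** 2 + ((xx - w // 2) / (w / 2)) ** 2)
    lo, hi = cutoffs
    masks = [r <= lo, (r > lo) & (r <= hi), r > hi]
    return [np.real(np.fft.ifft2(np.fft.ifftshift(spec * m))) for m in masks]
```

Because the three masks form a partition of the spectrum, no information is lost in the split, and each band can be fed to its own branch of a pyramid network.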
Monocular depth estimation, which provides a critical method for understanding 3D scene geometry, is an ill-posed problem. Recent studies have achieved significant progress through reliable network architectures and optimized constraints, such as spatial propagation networks and depth metrics. We propose an effective generative adversarial network for fast and accurate monocular depth estimation. Our approach applies a densely connected U-Net to reduce information loss during transmission and then refines the blurred depth with a high-order convolutional spatial propagation network (CSPN). Furthermore, we modify the discriminator loss by adding a correlation loss that measures the similarity between real and generated labels. Compared with the original CSPN, the high-order CSPN reduces computational complexity and accelerates convergence of the generator network by increasing the kernel order, which emphasizes the weight of the kernels in the update formula. With these modifications, our generative adversarial second-order convolutional spatial propagation network (GA-CSPN) achieves more accurate results than state-of-the-art methods in both indoor and outdoor scenes on the Make3D, KITTI 2015, and NYUv2 datasets.
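The abstract describes a correlation loss that measures the similarity of real and generated labels but does not give its formula. A common choice for such a term is one minus the Pearson correlation coefficient; the sketch below uses that form as an assumption (the function name `correlation_loss` and the `eps` stabilizer are likewise illustrative, not the paper's definition).

```python
import numpy as np

def correlation_loss(real, fake, eps=1e-8):
    """1 - Pearson correlation between real and generated depth maps
    (assumed form of the correlation term). Perfectly correlated outputs
    give a loss near 0; anti-correlated outputs give a loss near 2."""
    r = real.ravel() - real.mean()
    f = fake.ravel() - fake.mean()
    corr = (r * f).sum() / (np.sqrt((r ** 2).sum() * (f ** 2).sum()) + eps)
    return 1.0 - corr
```

Added to the discriminator objective, a term like this penalizes generated depth maps whose spatial structure does not co-vary with the ground truth, complementing pointwise error terms.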