Image-to-image style transfer with generative adversarial networks is a popular problem in computer vision, and numerous approaches have been proposed. However, typical research on neural-network-based style transfer improves output quality by modifying the model itself, while the intermediate process of the transfer has received little attention. In this paper, building on a CycleGAN implementation of style translation, we compute and extract the gradients of non-leaf nodes using PyTorch's autograd mechanism, and then use these intermediate-layer gradients to dynamically generate face images aged to different degrees. Experimental results on the AGFW-v2 dataset show that the method meets the initial research goal, and that presenting the intermediate stages of style transfer enriches the depiction of age progression. The intermediate-layer feature processing in this study can also be applied to tasks such as photo enhancement, image dehazing, and face swapping to make model outputs controllable.
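The extraction of non-leaf gradients mentioned above relies on the fact that PyTorch's autograd discards gradients of intermediate (non-leaf) tensors by default. A minimal sketch of the mechanism, using a hypothetical two-layer network rather than the paper's CycleGAN, might look like this (`net1`, `net2`, and the tensor shapes are illustrative assumptions, not the authors' architecture):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for an encoder/decoder pair; the paper uses CycleGAN.
net1 = nn.Linear(4, 8)
net2 = nn.Linear(8, 1)

x = torch.randn(2, 4)
hidden = net1(x)       # non-leaf tensor: its .grad is normally freed by autograd
hidden.retain_grad()   # ask autograd to keep this intermediate gradient
out = net2(hidden).sum()
out.backward()

# hidden.grad now holds the gradient of the loss w.r.t. the intermediate
# features, which is the kind of middle-layer signal the paper manipulates.
print(hidden.grad.shape)
```

Alternatively, `Tensor.register_hook` can capture the same gradient during the backward pass without storing it on the tensor.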