User Experience Design (UX Design) focuses on how products actually affect the user's experience. In particular, the design of multi-modal interfaces for blind people makes products and services more flexible and natural to use, and improves interaction for blind people by overcoming the constraints associated with any single interaction modality. There have been various attempts to help visually impaired people appreciate visual artwork, including multi-modal associations. However, these methods can provide only general information, such as edges and patterns recognized by touch, and are restrained by the limited availability and number of specially developed artworks. We propose a novel method that explains visual artworks through image caption generation using artificial intelligence (AI) to improve artwork accessibility. This method can objectively describe any impressionist artwork, serving either as a standalone description of art interpretation for blind people or as an aid to tactile-based methods. Based on end-to-end learning with a deep neural network, an encoder-decoder architecture model is adopted, and comprehensive experiments are performed to confirm the stability of the generated image captions on MS-COCO datasets stylized with impressionism.
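
To illustrate the encoder-decoder captioning approach named above, the following is a minimal sketch, assuming a PyTorch implementation. The paper does not specify its exact backbone or decoder, so the ResNet-18 encoder, single-layer LSTM decoder, and all dimensions and names here are illustrative assumptions, not the authors' actual architecture.

import torch
import torch.nn as nn
import torchvision.models as models

class Encoder(nn.Module):
    """CNN encoder: maps an artwork image to a fixed-length feature vector."""
    def __init__(self, embed_dim):
        super().__init__()
        backbone = models.resnet18(weights=None)  # backbone choice is assumed
        # Drop the final classification layer, keep the feature extractor.
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])
        self.fc = nn.Linear(backbone.fc.in_features, embed_dim)

    def forward(self, images):               # images: (B, 3, H, W)
        feats = self.cnn(images).flatten(1)  # (B, 512)
        return self.fc(feats)                # (B, embed_dim)

class Decoder(nn.Module):
    """LSTM decoder: generates a caption conditioned on the image feature."""
    def __init__(self, vocab_size, embed_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_feat, captions):  # captions: (B, T) token ids
        tokens = self.embed(captions)          # (B, T, embed_dim)
        # Prepend the image feature as the first step of the input sequence,
        # so every caption token is predicted conditioned on the image.
        inputs = torch.cat([image_feat.unsqueeze(1), tokens], dim=1)
        hidden, _ = self.lstm(inputs)
        return self.out(hidden)                # (B, T+1, vocab_size)

# End-to-end training step: cross-entropy over next-token prediction.
# The batch below is random stand-in data, not the stylized MS-COCO set.
encoder = Encoder(embed_dim=256)
decoder = Decoder(vocab_size=10000, embed_dim=256, hidden_dim=512)
images = torch.randn(4, 3, 224, 224)
captions = torch.randint(0, 10000, (4, 20))
logits = decoder(encoder(images), captions[:, :-1])   # (4, 20, 10000)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 10000), captions.reshape(-1))
loss.backward()  # gradients flow through decoder and encoder jointly

Because the image feature is fed into the same LSTM as the word embeddings and the loss backpropagates through both modules, the encoder and decoder are trained jointly end to end, which is the property the abstract refers to.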