This paper presents an intention understanding method for an intelligent chemical experiment system. The main innovations of the method are: (a) constructing an intention database based on standardized chemical experiment operations; (b) quantifying and fusing behavior information from different modalities, building mathematical models, and computing the experimenter's behavior intention probabilities in real time; and (c) having the machine carry out active human-machine cooperation according to the predicted intention. The method effectively extracts rich intention information from the experimenter's operating behavior, which helps establish a natural human-computer interaction environment and realize active cooperative work between human and machine. The method has been evaluated and verified in an intelligent chemistry experiment system built on Unity.
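The abstract does not specify how the per-modality behavior information is fused into an intention probability; a minimal sketch, assuming a naive-Bayes style product rule over independent modalities and a uniform prior (all names, modalities, and values below are hypothetical, not the paper's actual model):

```python
def fuse_intention_probabilities(modality_scores):
    """Fuse per-modality intention likelihoods into a posterior.

    modality_scores: list of dicts, one per modality (e.g. gaze, hand
    pose), each mapping intention -> P(observation | intention).
    Assumes conditional independence between modalities and a uniform
    prior over intentions; returns a normalized posterior.
    """
    intentions = modality_scores[0].keys()
    fused = {}
    for intent in intentions:
        p = 1.0
        for scores in modality_scores:
            p *= scores[intent]  # product of likelihoods across modalities
        fused[intent] = p
    total = sum(fused.values())
    return {k: v / total for k, v in fused.items()}

# Example: two hypothetical modalities scoring three candidate
# operations from an intention database.
gaze = {"pour": 0.6, "stir": 0.3, "heat": 0.1}
hand = {"pour": 0.7, "stir": 0.2, "heat": 0.1}
posterior = fuse_intention_probabilities([gaze, hand])
```

The machine would then trigger cooperative behavior for whichever intention's posterior exceeds a confidence threshold.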
Image retrieval based on deep hashing has made great progress: hashing greatly increases retrieval speed while saving storage space. However, problems remain: the distinctiveness of image features still needs improvement, and some features are redundant. We propose a new deep-learning-to-hash method, deep multiscale divergence hashing, which produces diverse yet compact image features for retrieval. Discriminative features from deep neural networks are identified using the importance criterion from network-pruning techniques, reducing feature redundancy. The selected features from different layers are then fused in a fixed proportion to increase the diversity of features for retrieval. We also present a hybrid loss function in hash space, consisting of a weighted pairwise cross-entropy loss and a KL-divergence term; it minimizes the Hamming distance between similar images and maximizes the Hamming distance between dissimilar images, which improves accuracy. Extensive experiments show that our method achieves better feature distinguishability and higher image retrieval accuracy, surpassing HashNet by 11.46%, 7.58%, and 13.86% on the MS COCO, NUS-WIDE, and CIFAR-10 datasets, respectively.
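The abstract names the two terms of the hybrid loss but not their exact form; a minimal sketch of one plausible realization, assuming a sigmoid of the code inner product as pair similarity and a KL term over bit-activation distributions (function names, weights, and the exact formulation are assumptions, not the paper's definition):

```python
import math

def pairwise_hash_loss(h_i, h_j, similar, pos_weight=1.0, neg_weight=1.0):
    """Weighted pairwise cross-entropy on relaxed hash codes.

    A large inner product <h_i, h_j> corresponds to a small Hamming
    distance after binarization, so this term pulls similar pairs
    together and pushes dissimilar pairs apart in hash space.
    """
    dot = sum(a * b for a, b in zip(h_i, h_j))
    p = 1.0 / (1.0 + math.exp(-dot))  # sigmoid of code similarity
    eps = 1e-12
    if similar:
        return -pos_weight * math.log(p + eps)
    return -neg_weight * math.log(1.0 - p + eps)

def kl_divergence(p, q):
    """KL(p || q) between two discrete distributions,
    e.g. over hash bit activations."""
    eps = 1e-12
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def hybrid_loss(h_i, h_j, similar, p, q, kl_weight=0.1):
    """Cross-entropy pair term plus a KL regularizer, combined with a
    hypothetical weighting kl_weight."""
    return pairwise_hash_loss(h_i, h_j, similar) + kl_weight * kl_divergence(p, q)
```

Under this formulation a well-aligned similar pair incurs a small loss, while the same aligned codes labeled dissimilar incur a large one, which is the behavior the abstract describes.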