Knowledge graph representation learning aims to encode the semantic information of entities and relations as dense, low-dimensional, real-valued vectors in a shared embedding space. Existing methods often focus on single-modal textual information and ignore the image modality, so the entity features carried by images go unused. Moreover, most knowledge graphs contain entity-related descriptions that current multimodal knowledge representation learning methods do not exploit well. To address this, a multimodal knowledge representation learning method incorporating description information is proposed. The method builds a knowledge representation learning model from multimodal (image and text) data and combines each entity's brief description to improve the multimodal representation. Experimental results show that the method performs well on triple classification and link prediction tasks on the constructed WI-D dataset.
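The abstract does not specify the fusion or scoring functions, so the following is only a hypothetical sketch of such a model: image and description features are projected into the structural embedding space, fused additively, and triples are scored with a TransE-style translation. The class and parameter names are illustrative, not taken from the paper.

import torch
import torch.nn as nn

class MultimodalKGEmbedding(nn.Module):
    # Fuses structural, image, and description embeddings into one entity
    # vector and scores triples with a TransE-style translation ||h + r - t||.
    def __init__(self, n_entities: int, n_relations: int, dim: int,
                 img_dim: int, desc_dim: int):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)    # structural embedding
        self.rel = nn.Embedding(n_relations, dim)   # relation embedding
        self.img_proj = nn.Linear(img_dim, dim)     # project image features
        self.desc_proj = nn.Linear(desc_dim, dim)   # project description features

    def entity_vec(self, idx, img_feat, desc_feat):
        # Additive fusion of the three modalities (one of several plausible choices).
        return self.ent(idx) + self.img_proj(img_feat) + self.desc_proj(desc_feat)

    def score(self, h, r, t, h_img, h_desc, t_img, t_desc):
        # Lower score = more plausible triple, as in TransE.
        head = self.entity_vec(h, h_img, h_desc)
        tail = self.entity_vec(t, t_img, t_desc)
        return torch.norm(head + self.rel(r) - tail, p=1, dim=-1)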
Event extraction is a key research direction in information extraction. To improve extraction performance and address the inability of general event extraction methods to fully exploit textual feature information, an event extraction method integrating trigger-word features is proposed. A remote trigger lexicon is built to provide additional feature information for the event type classification model and to strengthen its ability to discover event trigger words. The event argument extraction model then integrates event type and trigger distance features to improve its representation learning. Finally, the event type classification model and the event argument extraction model are connected in series to complete event extraction. Experiments on the DuEE dataset show that our model outperforms the compared models.
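A minimal sketch of the two-stage pipeline the abstract outlines, assuming a token-level trigger lexicon and callable stage models; TRIGGER_LEXICON, type_model, and arg_model are illustrative placeholders, not names from the paper.

from typing import Callable, Dict, List, Tuple

# Illustrative remote trigger lexicon: trigger word -> event type.
TRIGGER_LEXICON = {"acquire": "acquisition", "resign": "resignation"}

def trigger_match_features(tokens: List[str]) -> List[int]:
    # Binary per-token feature: 1 if the token matches a lexicon trigger,
    # providing the extra signal for the type classification stage.
    return [1 if tok in TRIGGER_LEXICON else 0 for tok in tokens]

def trigger_distance_features(tokens: List[str], trigger_idx: int) -> List[int]:
    # Signed distance of each token from the detected trigger, consumed
    # by the argument-extraction stage.
    return [i - trigger_idx for i in range(len(tokens))]

def extract_events(
    tokens: List[str],
    type_model: Callable[[List[str], List[int]], Tuple[str, int]],
    arg_model: Callable[[List[str], str, List[int]], Dict[str, str]],
) -> Dict[str, str]:
    # Stage 1: classify the event type using trigger-lexicon features
    # and locate the trigger token.
    match_feats = trigger_match_features(tokens)
    event_type, trigger_idx = type_model(tokens, match_feats)
    # Stage 2: extract arguments conditioned on the predicted event type
    # and each token's distance to the trigger (the series connection).
    dist_feats = trigger_distance_features(tokens, trigger_idx)
    return arg_model(tokens, event_type, dist_feats)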