Extracting Multiple Relations in One-Pass with Pre-Trained Transformers
Anthology ID: P19-1132
Volume: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2019
Address: Florence, Italy
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 1371–1377
DOI: 10.18653/v1/P19-1132
Bibkey: wang-etal-2019-extracting
Cite (ACL): Haoyu Wang, Ming Tan, Mo Yu, Shiyu Chang, Dakuo Wang, Kun Xu, Xiaoxiao Guo, and Saloni Potdar. 2019. Extracting Multiple Relations in One-Pass with Pre-Trained Transformers. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1371–1377, Florence, Italy. Association for Computational Linguistics.

Abstract: Many approaches to extracting multiple relations from a paragraph require multiple passes over the paragraph. In practice, multiple passes are computationally expensive, which makes it difficult to scale to longer paragraphs and larger text corpora. In this work, we focus on the task of extracting multiple relations while encoding the paragraph only once. We build our solution upon pre-trained self-attentive models (Transformers): we first add a structured prediction layer to handle extraction between multiple entity pairs, then enhance the paragraph embedding with entity-aware attention to capture the multiple relational signals associated with each entity. We show that our approach is not only scalable but also achieves state-of-the-art results on the standard ACE 2005 benchmark.
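The abstract describes the core idea at a high level: run the pre-trained Transformer over the paragraph once, then score every entity pair from that single shared encoding instead of re-encoding per pair. The sketch below is a minimal illustration of that one-pass pattern, not the paper's implementation; the model name, the mean-pooling of entity spans, and the simple concatenation-based classifier are assumptions standing in for the paper's structured prediction layer and entity-aware attention.

```python
import torch
import torch.nn as nn
from transformers import BertModel


class OnePassRelationExtractor(nn.Module):
    """Sketch: encode the paragraph once, classify all entity pairs
    from the shared token embeddings. Simplified relative to the paper."""

    def __init__(self, num_relations, model_name="bert-base-uncased"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Assumption: a plain linear scorer over concatenated (head, tail)
        # embeddings, in place of the paper's structured prediction layer.
        self.classifier = nn.Linear(2 * hidden, num_relations)

    def forward(self, input_ids, attention_mask, entity_spans, pairs):
        # Single forward pass over the paragraph -- the point of the paper.
        hidden = self.encoder(
            input_ids, attention_mask=attention_mask
        ).last_hidden_state  # (1, seq_len, hidden); batch size 1 for clarity
        # Mean-pool each entity's token embeddings: (num_entities, hidden).
        ents = torch.stack([hidden[0, s:e].mean(dim=0) for s, e in entity_spans])
        # Score every requested (head, tail) pair from the same encoding.
        heads = ents[[h for h, _ in pairs]]
        tails = ents[[t for _, t in pairs]]
        return self.classifier(torch.cat([heads, tails], dim=-1))
```

With this layout, adding another entity pair costs only one extra row in the pair-scoring step, not another encoder pass, which is what makes the one-pass formulation scale to longer paragraphs with many entities.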