Conference paper, 2020

Knowledge Base Embedding By Cooperative Knowledge Distillation

Abstract

Knowledge bases are increasingly exploited as gold-standard data sources which benefit various knowledge-driven NLP tasks. In this paper, we explore a new research direction to perform knowledge base (KB) representation learning grounded in the recent theoretical framework of knowledge distillation over neural networks. Given a set of KBs, our proposed approach, KD-MKB, learns KB embeddings by mutually and jointly distilling knowledge within a dynamic teacher-student setting. Experimental results on two standard datasets show that knowledge distillation between KBs through entity and relation inference is actually observed. We also show that cooperative learning significantly outperforms the two proposed baselines, namely traditional and sequential distillation.
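To make the idea of cooperative distillation concrete, the sketch below shows two KB embedding models alternating teacher and student roles over a shared set of candidate entities, combining a supervised link-prediction loss with a KL-based distillation loss. This is a minimal illustration, not the paper's exact formulation: the TransE-style scoring, the loss weighting, and names such as KBEmbedding and distill_step are assumptions introduced here for clarity.

```python
# Minimal sketch of cooperative (mutual) knowledge distillation between two
# KB embedding models. Assumed, not taken from the paper: TransE-style
# scoring, KL-divergence distillation over shared candidates, and all names.
import torch
import torch.nn as nn
import torch.nn.functional as F


class KBEmbedding(nn.Module):
    """One KB's embedding model (TransE-style scoring, assumed here)."""

    def __init__(self, n_entities, n_relations, dim=128):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)

    def score_tails(self, h, r, candidate_tails):
        # Higher score = more plausible tail (negative TransE distance).
        q = self.ent(h) + self.rel(r)          # (batch, dim)
        c = self.ent(candidate_tails)          # (n_cand, dim)
        return -torch.cdist(q, c)              # (batch, n_cand)


def distill_step(student, teacher, h, r, t_idx, candidates,
                 temperature=1.0, alpha=0.5):
    """One student update: supervised link prediction + distillation.

    `candidates` are entities shared by both KBs (aligned indices assumed);
    `t_idx` is the position of the gold tail within `candidates`.
    """
    s_logits = student.score_tails(h, r, candidates) / temperature
    with torch.no_grad():
        t_logits = teacher.score_tails(h, r, candidates) / temperature

    sup = F.cross_entropy(s_logits, t_idx)            # fit the student's own KB
    kd = F.kl_div(F.log_softmax(s_logits, dim=-1),    # mimic the teacher's soft scores
                  F.softmax(t_logits, dim=-1),
                  reduction="batchmean")
    return alpha * sup + (1 - alpha) * kd


# Cooperative training loop: the two models alternate teacher/student roles.
kb_a, kb_b = KBEmbedding(1000, 20), KBEmbedding(1000, 20)
opt_a = torch.optim.Adam(kb_a.parameters(), lr=1e-3)
opt_b = torch.optim.Adam(kb_b.parameters(), lr=1e-3)

h = torch.randint(0, 1000, (32,))
r = torch.randint(0, 20, (32,))
candidates = torch.arange(1000)
t_idx = torch.randint(0, 1000, (32,))

for student, teacher, opt in ((kb_a, kb_b, opt_a), (kb_b, kb_a, opt_b)):
    loss = distill_step(student, teacher, h, r, t_idx, candidates)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this toy setup the distillation term lets each model benefit from relational regularities learned by the other KB's embeddings, which is the intuition behind the cooperative (rather than one-way or sequential) setting described in the abstract.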
Main file
2020.coling-main.489.pdf (1.23 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03172074, version 1 (17-03-2021)

License

Attribution - NonCommercial - NoDerivatives


Cite

Raphaël Sourty, Jose G. Moreno, Francois-Paul Servant, Lynda Tamine. Knowledge Base Embedding By Cooperative Knowledge Distillation. International Conference on Computational Linguistics (COLING 2020), Dec 2020, Barcelona (online), Spain. pp. 5579-5590, ⟨10.18653/v1/2020.coling-main.489⟩. ⟨hal-03172074⟩