Conference papers

A Data-Efficient End-to-End Spoken Language Understanding Architecture

Abstract : End-to-end architectures have recently been proposed for spoken language understanding (SLU) and semantic parsing. Given a large amount of data, these models jointly learn acoustic and linguistic-sequential features. While such architectures give very good results for domain, intent and slot detection, their application to more complex semantic chunking and tagging tasks is less straightforward. In many such cases, models are combined with an external language model to enhance their performance. In this paper we introduce a data-efficient system which is trained end-to-end, with no additional pre-trained external module. One key feature of our approach is an incremental training procedure where acoustic, language and semantic models are trained sequentially, one after the other. The proposed model has a reasonable size and achieves competitive results with respect to the state of the art while using a small training dataset. In particular, we reach 24.02% Concept Error Rate (CER) on MEDIA/test while training on MEDIA/train without any additional data.
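The incremental training procedure mentioned in the abstract (acoustic, language and semantic models trained sequentially, one after the other) can be sketched as follows. All class names, the freezing policy and the stage bookkeeping are illustrative assumptions, not the authors' actual implementation:

```python
# Minimal sketch of a sequential/incremental training schedule, where
# each module is trained in turn and then kept fixed while the next
# stage is trained. Module names and the freeze-after-training policy
# are assumptions for illustration, not the paper's actual code.

class Module:
    def __init__(self, name):
        self.name = name
        self.trained = False
        self.frozen = False

    def train_stage(self):
        # Placeholder for the real optimization loop of this stage.
        self.trained = True

    def freeze(self):
        # Earlier stages are held fixed while later stages train.
        self.frozen = True


def incremental_training(modules):
    """Train each module in order, freezing it before the next stage."""
    schedule = []
    for module in modules:
        module.train_stage()
        schedule.append(module.name)
        module.freeze()
    return schedule


pipeline = [Module("acoustic"), Module("language"), Module("semantic")]
print(incremental_training(pipeline))  # stages run in sequence
```

In a real system each `train_stage` call would run its own optimization loop on the corresponding objective (e.g. an acoustic loss, then a language loss, then the semantic tagging loss), with earlier parameters frozen or fine-tuned at a reduced learning rate.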
Contributor: Marco Dinarelli
Submitted on: Monday, January 4, 2021 - 2:25:49 PM
Last modification on: Sunday, January 17, 2021 - 3:17:05 AM




  • HAL Id : hal-03094850, version 1


Marco Dinarelli, Nikita Kapoor, Bassam Jabaian, Laurent Besacier. A Data-Efficient End-to-End Spoken Language Understanding Architecture. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), May 2020, Barcelona, Spain. ⟨hal-03094850⟩


