Exploiting Text Matching Techniques for Knowledge-Grounded Conversation

Information

Title Exploiting Text Matching Techniques for Knowledge-Grounded Conversation
Authors Yeonchan Ahn, Sang-goo Lee, Jaehui Park
Year July 2020
Keywords Dialogue, knowledge-grounded conversation, knowledge selection, text matching
Acknowledgement Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education under Grant NRF-2018R1D1A1B07044633
Publication Type International Journal
Publication IEEE Access, Volume 8, pp. 126201-126214

Abstract

Knowledge-grounded conversation models aim to generate informative responses for a given dialogue context based on external knowledge. To generate an informative and context-coherent response, it is important to combine the dialogue context and external knowledge in a balanced manner. However, existing studies have paid less attention to finding appropriate knowledge sentences in external knowledge sources than to generating proper sentences with correct dialogue acts. In this paper, we propose two knowledge selection strategies, 1) Reduce-Match and 2) Match-Reduce, and explore several neural knowledge-grounded conversation models based on each strategy. Models based on the Reduce-Match strategy first distill the whole dialogue context into a single vector that preserves salient features and then compare this context vector with the representations of candidate knowledge sentences to predict the relevant one. Models based on the Match-Reduce strategy first match every turn of the context against the knowledge sentences to capture fine-grained interactions, and then aggregate these matching signals, while minimizing information loss, to predict the relevant knowledge sentence. Experimental results show that conversation models using each of our knowledge selection strategies outperform competitive baselines not only in knowledge selection accuracy but also in response generation performance. Our best Match-Reduce model outperforms the baselines on the Wizard of Wikipedia dataset, and our best Reduce-Match model outperforms them on the CMU Document Grounded Conversations dataset.
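To make the contrast between the two strategies concrete, below is a minimal PyTorch sketch of the two scoring orders, not the paper's actual architecture: the pre-encoded turn and knowledge embeddings, the GRU reducer, the dot-product matching, and the max-pool aggregation are all illustrative assumptions.

# Illustrative sketch only; encoders, pooling, and dimensions are assumptions,
# not the models from the paper.
import torch
import torch.nn as nn

class ReduceMatchScorer(nn.Module):
    """Reduce-Match: first distill the whole dialogue context into a single
    vector, then match that vector against each candidate knowledge sentence."""
    def __init__(self, dim=128):
        super().__init__()
        self.context_reducer = nn.GRU(dim, dim, batch_first=True)

    def forward(self, turn_vecs, knowledge_vecs):
        # turn_vecs:      (num_turns, dim) pre-encoded context turns (assumed given)
        # knowledge_vecs: (num_candidates, dim) pre-encoded knowledge sentences
        _, h = self.context_reducer(turn_vecs.unsqueeze(0))  # reduce turns to one state
        context_vec = h.squeeze(0).squeeze(0)                # (dim,)
        return knowledge_vecs @ context_vec                  # one score per candidate

class MatchReduceScorer(nn.Module):
    """Match-Reduce: first match every context turn against every knowledge
    sentence, then aggregate the fine-grained match signals into one score."""
    def forward(self, turn_vecs, knowledge_vecs):
        sim = knowledge_vecs @ turn_vecs.T   # (num_candidates, num_turns) match matrix
        return sim.max(dim=1).values         # aggregate over turns (max-pool here)

turns = torch.randn(4, 128)        # 4 dialogue turns, toy embeddings
candidates = torch.randn(10, 128)  # 10 candidate knowledge sentences

print(ReduceMatchScorer()(turns, candidates).argmax())  # index of selected sentence
print(MatchReduceScorer()(turns, candidates).argmax())

The key design difference is the order of operations: Reduce-Match compresses the context before any comparison, so per-turn detail can be lost but scoring is cheap, while Match-Reduce compares at turn level first and defers the reduction, keeping fine-grained interactions at higher matching cost.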