Contrastive Representation Learning for Conversational Question Answering over Knowledge Graphs
Publication date
2022-10-17
Document type
Conference paper
Author
Organisational unit
Universität Siegen
Conference
31st ACM International Conference on Information and Knowledge Management (CIKM 2022) : Atlanta, GA, USA, October 17 - 21, 2022
Book title
CIKM '22: Proceedings of the 31st ACM International Conference on Information & Knowledge Management
First page
925
Last page
934
Part of the university bibliography
No
Keyword
Contrastive learning
Conversations
Knowledge graphs
Question answering
Abstract
This paper addresses the task of conversational question answering (ConvQA) over knowledge graphs (KGs). The majority of existing ConvQA methods rely on full supervision signals with a strict assumption of the availability of gold logical forms of queries to extract answers from the KG. However, creating such gold logical forms is not viable for every potential question in a real-world scenario. Hence, in the case of missing gold logical forms, the existing information retrieval-based approaches use weak supervision via heuristics or reinforcement learning, formulating ConvQA as a KG path ranking problem. Despite missing gold logical forms, an abundance of conversational context, such as the entire dialog history with fluent responses and domain information, can be incorporated to effectively reach the correct KG path. This work proposes a contrastive representation learning-based approach to rank KG paths effectively. Our approach addresses two key challenges. First, it allows weak supervision-based learning that omits the necessity of gold annotations. Second, it incorporates the conversational context (entire dialog history and domain information) to jointly learn its homogeneous representation with KG paths to improve contrastive representations for effective path ranking. We evaluate our approach on standard datasets for ConvQA, on which it significantly outperforms existing baselines across all domains and overall. Specifically, in some cases, the Mean Reciprocal Rank (MRR) and Hit@5 ranking metrics improve by an absolute 10 and 18 points, respectively, compared to the state-of-the-art performance. © 2022 ACM.
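The contrastive path-ranking idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the encoder, the InfoNCE-style loss, and all function names (`info_nce_loss`, `rank_paths`) are illustrative assumptions. The sketch assumes the conversational context and each candidate KG path have already been embedded into a shared vector space; the contrastive objective then pulls the context embedding toward the gold path and pushes it away from negative paths, and ranking is done by cosine similarity.

```python
import numpy as np

def info_nce_loss(ctx, paths, pos_idx, temperature=0.1):
    """InfoNCE-style contrastive loss (illustrative, not the paper's exact
    objective): the context embedding should score the gold KG path
    (at pos_idx) above the negative candidate paths."""
    ctx = ctx / np.linalg.norm(ctx)
    paths = paths / np.linalg.norm(paths, axis=1, keepdims=True)
    logits = paths @ ctx / temperature   # scaled cosine similarities
    logits = logits - logits.max()       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[pos_idx])

def rank_paths(ctx, paths):
    """Rank candidate KG paths by cosine similarity to the context."""
    ctx = ctx / np.linalg.norm(ctx)
    paths = paths / np.linalg.norm(paths, axis=1, keepdims=True)
    scores = paths @ ctx
    return np.argsort(-scores)           # best-scoring path first

# Toy example: 2-d embeddings for one context and two candidate paths.
context = np.array([1.0, 0.0])
candidates = np.array([[0.9, 0.1],      # gold path, similar to context
                       [0.1, 0.9]])     # negative path
ranking = rank_paths(context, candidates)
loss_good = info_nce_loss(context, candidates, pos_idx=0)
loss_bad = info_nce_loss(context, candidates, pos_idx=1)
```

In training, minimizing the loss would adjust the encoders so that the gold path's similarity grows relative to the negatives; here the toy embeddings are fixed, so the loss simply confirms that the aligned path scores lower loss than the misaligned one.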
Version
Published version
Access right on openHSU
Metadata only access