Abstract

Direct decoding for task-oriented dialogue is known to suffer from the explaining-away effect, manifested in models that prefer short and generic responses. Here we argue for the use of Bayes’ theorem to factorize the dialogue task into two models: the distribution of the context given the response, and the prior for the response itself. This approach, an instantiation of the noisy channel model, both mitigates the explaining-away effect and allows the principled incorporation of large pretrained models for the response prior. We present extensive experiments showing that a noisy channel model decodes better responses compared to direct decoding and that a two-stage pretraining strategy, employing both open-domain and task-oriented dialogue data, improves over randomly initialized models.
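The factorization the abstract refers to is the standard noisy channel decomposition. As a minimal sketch, with c denoting the dialogue context and r a candidate response (symbols chosen here for illustration, not taken from the paper):

\hat{r} \;=\; \arg\max_{r} \, p(r \mid c) \;=\; \arg\max_{r} \, \frac{p(c \mid r)\, p(r)}{p(c)} \;\propto\; \arg\max_{r} \, p(c \mid r)\, p(r)

Here p(c | r) is the channel model (context given response) and p(r) is the response prior, the term into which a large pretrained language model can be incorporated, as the abstract notes.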

Details

Title
Pretraining the Noisy Channel Model for Task-Oriented Dialogue
Author
Liu, Qi; Yu, Lei; Rimell, Laura; Blunsom, Phil
Pages
657-674
Publication year
2021
Publication date
2021
Publisher
The MIT Press
ISSN
2307-387X
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2893885766
Copyright
© 2021. This work is published under the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/legalcode).