Abstract

We present a memory-augmented approach to condition an autoregressive language model on a knowledge graph. We represent the graph as a collection of relation triples and retrieve relevant relations for a given context to improve text generation. Experiments on WikiText-103, WMT19, and enwik8 English datasets demonstrate that our approach produces a better language model in terms of perplexity and bits per character. We also show that relational memory improves coherence, is complementary to token-based memory, and enables causal interventions. Our model provides a simple yet effective way to combine an autoregressive language model and a knowledge graph for more coherent and logical generation.
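The abstract describes conditioning the language model on relations retrieved from a knowledge graph represented as triples. As a rough illustration only (not the authors' implementation, which uses learned retrieval and a neural memory), the following Python sketch shows the general idea: a graph stored as (subject, relation, object) triples, a toy overlap-based retriever, and the retrieved relations supplied as extra context to a generator. All names here are hypothetical.

# Minimal sketch, assuming a knowledge graph given as (subject, relation, object)
# triples and a simple word-overlap retriever standing in for the paper's retriever.

from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    subject: str
    relation: str
    obj: str

    def as_text(self) -> str:
        # Linearize the triple so it can be passed to a language model as context.
        return f"{self.subject} {self.relation} {self.obj}"

def retrieve(context: str, graph: list[Triple], top_k: int = 3) -> list[Triple]:
    """Score each triple by word overlap with the context and keep the top-k."""
    context_words = set(context.lower().split())

    def score(t: Triple) -> int:
        return len(context_words & set(t.as_text().lower().split()))

    ranked = sorted(graph, key=score, reverse=True)
    return [t for t in ranked[:top_k] if score(t) > 0]

if __name__ == "__main__":
    graph = [
        Triple("Barack Obama", "born in", "Honolulu"),
        Triple("Honolulu", "located in", "Hawaii"),
        Triple("Paris", "capital of", "France"),
    ]
    context = "Barack Obama was born in"
    memory = retrieve(context, graph)
    # In the paper the retrieved relations are encoded and attended to by the
    # language model; here we simply prepend their linearized form to the prompt.
    prompt = " . ".join(t.as_text() for t in memory) + " . " + context
    print(prompt)

In the actual model the retrieved triples populate a relational memory that the autoregressive decoder attends to, rather than being concatenated into the prompt as above.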

Details

Title
Relational Memory-Augmented Language Models
Author
Liu, Qi; Yogatama, Dani; Blunsom, Phil
Pages
555-572
Publication year
2022
Publication date
2022
Publisher
The MIT Press
ISSN
2307-387X
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2893948256
Copyright
© 2022. This work is published under https://creativecommons.org/licenses/by/4.0/legalcode (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.