Semantic Transformations across Natural Languages and Abstract Meaning Representation

Cai, Deng.   The Chinese University of Hong Kong (Hong Kong) ProQuest Dissertations Publishing,  2022. 30292087.

Abstract (summary)

As the carrier of information, language is the principal and most natural communication system used by humans. However, language is not directly understandable or executable by machines. A key task in artificial intelligence is therefore to build a mapping between natural language text and machine-interpretable meaning representations. Abstract Meaning Representation (AMR) is one such representation with a wide range of applications. AMR encodes the meaning of a natural language sentence as a rooted, directed, labeled graph, where nodes represent concepts and edges represent relations.

In the first part of the thesis, we present an algorithm for automatically transforming natural language text into AMR (i.e., AMR parsing). The task is challenging because it encompasses a rich set of traditional tasks, including Named Entity Recognition (NER), Semantic Role Labeling (SRL), Word Sense Disambiguation (WSD), and Coreference Resolution. Our method constructs the parse graph incrementally in a top-down fashion, adding nodes in order of their distance to the root. This follows a core-semantic-first principle: first grasp the main ideas of a sentence, then dig into the details. Experiments show that our parser is especially good at capturing core semantics. We then further enhance the method with an iterative inference design, explicitly characterizing each expansion step as answering two questions: which part of the input sequence to abstract next, and where in the partially constructed graph to attach the new concept. The iterative process yields better answers to both questions, leading to greatly improved parsing accuracy.
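The root-first, breadth-first expansion described above can be illustrated with a minimal sketch. The function names, the dictionary-based graph encoding, and the two predictor interfaces are illustrative assumptions, not the thesis implementation:

```python
# Schematic sketch of top-down, root-first graph construction.
# predict_concept and predict_attachment stand in for learned models.
from collections import deque

def parse_top_down(tokens, predict_concept, predict_attachment):
    """Grow a parse graph breadth-first, so nodes closer to the root
    (the core semantics) are decided before finer details."""
    graph = {"root": []}        # node -> list of (relation, child) edges
    frontier = deque(["root"])  # nodes whose children are still undecided
    while frontier:
        parent = frontier.popleft()
        # Q1: which part of the input to abstract into the next concept?
        concept = predict_concept(tokens, graph, parent)
        if concept is None:     # no more children under this node
            continue
        # Q2: where to attach it, and with what relation?
        relation = predict_attachment(graph, parent, concept)
        graph.setdefault(parent, []).append((relation, concept))
        frontier.append(concept)
        frontier.append(parent)  # a node may receive several children
    return graph
```

With stub predictors that read from a fixed plan, the sketch reproduces the intended order: the root's child is decided before any grandchild.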

In the second part of the thesis, we present an algorithm for mapping AMR to natural language text (i.e., AMR-to-text generation). The algorithm uses a new neural network architecture, named Graph Transformer, for graph representation learning. Unlike traditional graph neural networks, which restrict information exchange to immediate neighborhoods, Graph Transformer uses explicit relation encoding and allows direct communication between distant nodes, providing a more efficient way to model global graph structure. Experiments show that our method substantially outperforms previous state-of-the-art methods for AMR-to-text generation. We also show that the algorithm can be used to improve syntax-based machine translation.
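A minimal sketch of the pairwise, relation-biased attention this design implies is shown below. How the relation bias is produced (e.g., from relation paths between nodes) and all other layer details are assumptions for illustration:

```python
import numpy as np

def relation_aware_attention(node_states, relation_bias):
    """One attention step in the spirit of Graph Transformer: every node
    attends to every other node directly, with the score shifted by an
    explicit encoding of their pairwise relation.

    node_states:   (n, d) array of node vectors
    relation_bias: (n, n) array, one scalar bias per node pair
    """
    d = node_states.shape[1]
    scores = node_states @ node_states.T / np.sqrt(d) + relation_bias
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ node_states                   # updated node states
```

Because the score matrix is dense, two nodes far apart in the graph can exchange information in a single layer, rather than over as many layers as their graph distance.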

In the third part of the thesis, we explore multilingual AMR parsing. To date, most AMR resources are in English, and annotating AMR for other languages is very expensive, which makes developing a multilingual AMR parser challenging. We tackle this problem from the perspective of knowledge distillation: an existing English parser serves as the teacher from which a multilingual AMR parser is learned and improved. The complete training process consists of multiple pre-training and fine-tuning stages. As a result, we obtain a single multilingual AMR parser whose performance surpasses all previously published results on four languages (German, Spanish, Italian, and Chinese) by large margins.

This thesis also discusses semantic transformations between different natural languages (i.e., machine translation). We propose a new framework for neural machine translation (NMT) that uses monolingual data in the target language as translation memory (TM) and performs learnable cross-lingual memory retrieval. First, the cross-lingual retriever allows abundant monolingual data to serve as TM. Second, the retriever and the NMT model can be jointly optimized for the ultimate translation goal. Experiments show that the proposed model outperforms even strong TM-augmented NMT baselines that use bilingual TM. The framework is also effective in low-resource and domain adaptation scenarios.
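The cross-lingual retrieval step might look like the following dot-product sketch, where a source-language query and target-language memory entries are encoded into a shared space and the relevance scores remain differentiable so the retriever can be trained jointly with the NMT model. The encoders, scoring function, and names here are assumptions:

```python
import numpy as np

def retrieve_memory(src_vec, memory_vecs, k=2):
    """Score a source-language query vector against target-language
    memory vectors and return the top-k entries with normalized weights.

    src_vec:     (d,) query embedding of the source sentence
    memory_vecs: (m, d) embeddings of monolingual target-language TM
    """
    scores = memory_vecs @ src_vec            # dot-product relevance
    top = np.argsort(-scores)[:k]             # indices of the k best entries
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()                  # attention over retrieved TM
    return top, weights
```

The retrieved entries and their weights would then condition the decoder; because the weights are a smooth function of the scores, translation loss gradients can flow back into the retriever.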

Alternate abstract:


In the first part of the thesis, we propose an algorithm for automatically transforming natural language text into AMR graphs (i.e., AMR parsing). The task is highly challenging because it subsumes many traditional tasks, such as Named Entity Recognition (NER), Semantic Role Labeling (SRL), Word Sense Disambiguation (WSD), and Coreference Resolution. Our method constructs the semantic graph incrementally in a top-down fashion, following a core-semantic-first principle of first grasping the main ideas and then digging into the details. This principle emphasizes capturing the main ideas of a sentence, which is of great practical value. Experiments show that our parser is particularly good at capturing core semantics. We further improve parsing accuracy with a concept-relation iterative inference algorithm, which explicitly decomposes each parsing step into two questions: (1) which part of the input text should be abstracted into which concept at this point, and (2) which vacancy in the semantic graph should be filled and how the new node relates to the existing ones. The answers to the two questions depend on each other. Experiments show that the iterative process yields better answers to both questions and thus greatly improves parsing accuracy.

In the second part of the thesis, we propose an algorithm for mapping AMR graphs to natural language text (i.e., AMR-to-text generation). The algorithm uses a new neural network architecture, named Graph Transformer, for graph representation learning. Unlike traditional graph neural networks, which restrict information exchange to immediate neighborhoods, Graph Transformer uses explicit relation encoding and allows distant nodes to communicate directly, providing a more efficient way to model global graph structure. Experiments show that our method substantially outperforms previous state-of-the-art methods for AMR-to-text generation. We also find that the algorithm can be used to improve syntax-based machine translation.

In the third part of the thesis, we explore multilingual AMR parsing. To date, most AMR resources are in English, and annotating AMR for other languages is very expensive, which makes developing a multilingual AMR parser highly challenging. We approach the problem from the perspective of knowledge distillation: an existing English parser serves as the teacher, and the multilingual AMR parser is the student that it improves. The complete training process consists of multiple pre-training and fine-tuning stages. Experimental results show that our single multilingual AMR parser can parse multiple languages and greatly surpasses the previously best reported results on four languages (German, Spanish, Italian, and Chinese).


Indexing (details)

Business indexing term: Systems science
Classification: 0454: Management; 0537: Engineering; 0790: Systems science
Identifier / keyword: Communication systems; Abstract Meaning Representation; Named Entity Recognition
Title: Semantic Transformations across Natural Languages and Abstract Meaning Representation
Author: Cai, Deng
Advisor: Lam, Wai
Committee member: Meng, Mei Ling Helen
University/institution: The Chinese University of Hong Kong (Hong Kong)
University location: Hong Kong
Place of publication: Ann Arbor
Country of publication: United States
Source: DAI-A 84/6(E), Dissertation Abstracts International
Source type: Dissertation or Thesis
Dissertation/thesis number: 30292087
Publication year: 2022
Copyright: ProQuest Dissertations Publishing 2022