Are Pre-trained Language Models Aware of Phrases? Simple but Strong Baselines for Grammar Induction

Information

Title: Are Pre-trained Language Models Aware of Phrases? Simple but Strong Baselines for Grammar Induction
Authors: Taeuk Kim, Jihun Choi, Daniel Edmiston, Sang-goo Lee
Year: 2020 / 4
Keywords: natural language processing, grammar induction, unsupervised parsing, pre-trained language models, Transformer
Acknowledgement: HPC
Publication Type: International Conference
Publication: International Conference on Learning Representations 2020 (ICLR 2020)
Link: url

Abstract

With the recent success and popularity of pre-trained language models (LMs) in natural language processing, there has been a rise in efforts to understand their inner workings. In line with such interest, we propose a novel method that assists us in investigating the extent to which pre-trained LMs capture the syntactic notion of constituency. Our method provides an effective way of extracting constituency trees from the pre-trained LMs without training. In addition, we report intriguing findings in the induced trees, including the fact that some pre-trained LMs outperform other approaches in correctly demarcating adverb phrases in sentences.
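The core idea is distance-based tree induction: score how loosely each pair of adjacent words belongs together using the pre-trained LM's representations, then recursively split the sentence at the weakest link. Below is a minimal sketch of that idea, assuming the Hugging Face transformers library and a bert-base-uncased checkpoint; the cosine-distance-over-hidden-states measure and the greedy top-down split are illustrative choices, not necessarily the exact configuration evaluated in the paper.

```python
# Illustrative sketch of distance-based constituency tree induction from a
# pre-trained LM, without any parser training. Not the authors' exact recipe.
import torch
from transformers import AutoModel, AutoTokenizer


def adjacent_distances(words, model, tokenizer, layer=-1):
    """Cosine distance between hidden states of adjacent words (first sub-token of each)."""
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc, output_hidden_states=True).hidden_states[layer][0]
    # Map each word to the hidden state of its first sub-token.
    first_tok = {}
    for idx, wid in enumerate(enc.word_ids(0)):
        if wid is not None and wid not in first_tok:
            first_tok[wid] = idx
    vecs = torch.stack([hidden[first_tok[i]] for i in range(len(words))])
    sims = torch.nn.functional.cosine_similarity(vecs[:-1], vecs[1:], dim=-1)
    return (1.0 - sims).tolist()  # one distance per adjacent word pair


def distances_to_tree(words, dists):
    """Greedily split at the largest adjacent distance to build an unlabeled binary tree."""
    if len(words) == 1:
        return words[0]
    split = max(range(len(dists)), key=dists.__getitem__) + 1
    left = distances_to_tree(words[:split], dists[:split - 1])
    right = distances_to_tree(words[split:], dists[split:])
    return (left, right)


if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")
    sentence = "the quick brown fox jumps over the lazy dog".split()
    dists = adjacent_distances(sentence, model, tokenizer)
    print(distances_to_tree(sentence, dists))
```

Running the sketch prints a nested tuple, i.e., an unlabeled binary bracketing of the sentence, which could then be scored against gold constituency trees (e.g., with unlabeled F1) to measure how much phrase structure the pre-trained LM encodes.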