Enhancing Out-of-Distribution Detection in Natural Language Understanding via Implicit Layer Ensemble

Information

Title Enhancing Out-of-Distribution Detection in Natural Language Understanding via Implicit Layer Ensemble
Authors
Hyunsoo Cho, Choonghyun Park, Jaewook Kang, Kang Min Yoo, Taeuk Kim, Sang-goo Lee
Year 2022 / 10
Keywords NLP, machine learning, anomaly detection, contrastive learning
Acknowledgement SNU-Naver Hyperscale AI Center, AIIH
Publication Type International Conference
Publication Findings of the Association for Computational Linguistics (Findings of EMNLP 2022)
Link url

Abstract

Out-of-distribution (OOD) detection aims to discern outliers from the intended data distribution, which is crucial to maintaining high reliability and a good user experience. Most recent studies in OOD detection use the information from a single representation residing in the penultimate layer to determine whether the input is anomalous. Although such a method is straightforward, it overlooks the potential of the diverse information in the intermediate layers. In this paper, we propose a novel framework based on contrastive learning that encourages intermediate features to learn layer-specialized representations and assembles them implicitly into a single representation to absorb the rich information in the pre-trained language model. Extensive experiments on various intent classification and OOD datasets demonstrate that our approach is significantly more effective than competing methods.
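To make the abstract's core idea concrete, here is a minimal NumPy sketch of pooling intermediate-layer features into a single "implicitly ensembled" representation and scoring inputs against in-distribution class means. The learnable scalar layer weights and the cosine-similarity score are illustrative assumptions, not the paper's exact formulation or training objective.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    z = np.exp(x - x.max())
    return z / z.sum()

def ensemble_representation(layer_feats, layer_logits):
    """Weighted sum of per-layer features into one vector.

    layer_feats:  (num_layers, hidden_dim) array, one [CLS]-style
                  vector per intermediate layer.
    layer_logits: (num_layers,) learnable scores; softmax turns them
                  into mixing weights that sum to 1.
    """
    w = softmax(layer_logits)
    return w @ layer_feats  # shape: (hidden_dim,)

def ood_score(z, class_means):
    """Max cosine similarity to in-distribution class means.

    Lower scores suggest the input is out-of-distribution.
    """
    z = z / np.linalg.norm(z)
    m = class_means / np.linalg.norm(class_means, axis=1, keepdims=True)
    return float((m @ z).max())

# Toy usage with random features standing in for encoder outputs.
rng = np.random.default_rng(0)
feats = rng.normal(size=(12, 768))   # e.g. 12 transformer layers
logits = np.zeros(12)                # uniform ensemble to start
z = ensemble_representation(feats, logits)
means = rng.normal(size=(5, 768))    # 5 hypothetical intent classes
score = ood_score(z, means)
```

In the paper's actual framework, the layer-specialized representations are shaped by a contrastive objective during training; this sketch only shows the inference-time pooling and scoring step.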