CELDA: Leveraging Black-box Language Model as Enhanced Classifier without Labels

Information

Title CELDA: Leveraging Black-box Language Model as Enhanced Classifier without Labels
Authors
Hyunsoo Cho, Youna Kim, Sang-goo Lee
Year 2023 / 7
Keywords NLP, machine learning, language models, ICL
Publication Type International Conference
Publication The 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023)

Abstract

Utilizing language models (LMs) without internal access is becoming an attractive paradigm in NLP, as many cutting-edge LMs are released through APIs and boast a massive scale. The de facto method in this black-box scenario is prompting, which has shown progressive performance improvements in situations where data labels are scarce or unavailable. Despite its efficacy, prompting still falls short of fully supervised counterparts and is generally brittle to slight modifications. In this paper, we propose Clustering-enhanced Linear Discriminative Analysis (CELDA), a novel approach that improves text classification accuracy with a very weak supervision signal (i.e., the names of the labels). Our framework draws a precise decision boundary without accessing the weights or gradients of the LM or any data labels. The core ideas of CELDA are twofold: (1) extracting a refined pseudo-labeled dataset from an unlabeled dataset, and (2) training a lightweight and robust model on top of the LM that learns an accurate decision boundary from the extracted noisy dataset. Through in-depth investigations on various datasets, we demonstrate that CELDA reaches a new state of the art in weakly supervised text classification and narrows the gap with fully supervised models. Additionally, our proposed methodology can be applied universally to any LM and has the potential to scale to larger models, making it a more viable option for utilizing large LMs.
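
The sketch below is a rough, hedged illustration of the two core ideas in the abstract, not the authors' implementation: it assumes a hypothetical black-box `embed(texts)` API that returns fixed-size LM embeddings, uses label-name embeddings as the only supervision, pseudo-labels the unlabeled data via clustering, keeps a refined subset near the cluster centroids, and fits a lightweight linear discriminant classifier on top. The clustering and refinement heuristics here are simplified stand-ins for the paper's procedure.

```python
# Minimal sketch of a CELDA-style pipeline (assumptions noted above; not the authors' code).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis


def celda_sketch(embed, unlabeled_texts, label_names, keep_ratio=0.5):
    # LM features obtained without any access to weights or gradients.
    X = np.asarray(embed(unlabeled_texts))
    # Label-name embeddings serve as the weak supervision signal.
    anchors = np.asarray(embed(label_names))

    # (1) Pseudo-label by clustering, then map each cluster to its nearest label anchor.
    km = KMeans(n_clusters=len(label_names), n_init=10, random_state=0).fit(X)
    cluster_to_label = {
        c: int(np.argmin(np.linalg.norm(anchors - km.cluster_centers_[c], axis=1)))
        for c in range(len(label_names))
    }
    pseudo = np.array([cluster_to_label[c] for c in km.labels_])

    # Refine: keep only examples close to their cluster centroid (a noisy-but-cleaner subset).
    dists = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    keep = dists <= np.quantile(dists, keep_ratio)

    # (2) Train a lightweight linear-discriminant model on the refined pseudo-labeled data.
    clf = LinearDiscriminantAnalysis().fit(X[keep], pseudo[keep])
    return clf
```

Because the classifier sits entirely on top of frozen embeddings, the same sketch applies unchanged to any LM that exposes an embedding endpoint, which mirrors the abstract's claim of universality and scalability.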