Unsupervised Domain Clusters in Pretrained Language Models

Roee Aharoni, Yoav Goldberg


Abstract
The notion of “in-domain data” in NLP is often over-simplistic and vague, as textual data varies in many nuanced linguistic aspects such as topic, style or level of formality. In addition, domain labels are often unavailable, making it challenging to build domain-specific systems. We show that massive pre-trained language models implicitly learn sentence representations that cluster by domains without supervision – suggesting a simple data-driven definition of domains in textual data. We harness this property and propose domain data selection methods based on such models, which require only a small set of in-domain monolingual data. We evaluate our data selection methods for neural machine translation across five diverse domains, where they outperform an established approach as measured by both BLEU and precision and recall with respect to an oracle selection.
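
The sketch below is a rough illustration of the idea described in the abstract, not the authors' released implementation: embed sentences with a pretrained masked language model, mean-pool the hidden states into sentence vectors, and cluster them without supervision to see whether the clusters line up with domains. The `bert-base-uncased` checkpoint, the pooling choice, the toy sentences, the number of clusters, and the use of the `transformers` and `scikit-learn` packages are all illustrative assumptions.

# A minimal sketch of the idea in the abstract (assumed dependencies: torch,
# transformers, scikit-learn). All modeling choices here are illustrative.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.mixture import GaussianMixture

sentences = [
    "The patient was administered 50 mg of the drug twice daily.",       # medical-like
    "Adverse reactions were reported in 3% of the treated subjects.",    # medical-like
    "This Agreement shall be governed by the laws of the State.",        # legal-like
    "The licensee shall not sublicense the software to third parties.",  # legal-like
    "Press the power button for three seconds to reset the router.",     # IT-like
    "Update the driver and restart the computer to apply the change.",   # IT-like
]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

with torch.no_grad():
    enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    hidden = model(**enc).last_hidden_state        # (batch, seq_len, dim)
    mask = enc["attention_mask"].unsqueeze(-1)     # zero out padding positions
    emb = (hidden * mask).sum(1) / mask.sum(1)     # mean-pooled sentence vectors

# Fit a Gaussian mixture with as many components as presumed domains and read
# off the unsupervised cluster assignment of each sentence.
gmm = GaussianMixture(n_components=3, random_state=0)
labels = gmm.fit_predict(emb.numpy())
print(list(zip(labels.tolist(), sentences)))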
Anthology ID: 2020.acl-main.692
Volume: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2020
Address: Online
Editors: Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 7747–7763
URL: https://aclanthology.org/2020.acl-main.692
DOI: 10.18653/v1/2020.acl-main.692
Cite (ACL): Roee Aharoni and Yoav Goldberg. 2020. Unsupervised Domain Clusters in Pretrained Language Models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7747–7763, Online. Association for Computational Linguistics.
Cite (Informal): Unsupervised Domain Clusters in Pretrained Language Models (Aharoni & Goldberg, ACL 2020)
PDF: https://aclanthology.org/2020.acl-main.692.pdf
Video: http://slideslive.com/38929165
Code: roeeaharoni/unsupervised-domain-clusters