Unraveling Antonym’s Word Vectors through a Siamese-like Network

Mathias Etcheverry, Dina Wonsever


Abstract
Discriminating antonyms from synonyms is an important NLP task, made difficult by the fact that antonyms and synonyms carry similar distributional information. Consequently, pairs of antonyms and pairs of synonyms may have similar word vectors. We present an approach to unravel antonymy and synonymy from word vectors, inspired by siamese networks. The model consists of a two-phase training of the same base network: a pre-training phase following a siamese model supervised by synonyms, and a training phase on antonyms through a siamese-like model that supports the antitransitivity present in antonymy. The approach exploits the observation that the antonyms shared by a word tend to be synonyms of one another. We show that our approach outperforms distributional and pattern-based approaches, relying on a simple feed-forward network as the base network of both training phases.
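The abstract describes a two-phase training scheme over a shared base network: siamese pre-training on synonym pairs, followed by a siamese-like phase on antonym pairs that accommodates antitransitivity. The PyTorch sketch below is only an illustration of that general idea under stated assumptions; the encoder sizes, the loss functions, and the asymmetric head (here called `flip`) are hypothetical choices and do not reproduce the authors' exact architecture or hyperparameters.

```python
# Illustrative sketch only; not the paper's exact model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Simple feed-forward base network shared by both training phases."""
    def __init__(self, dim_in=300, dim_hidden=300, dim_out=300):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_in, dim_hidden), nn.ReLU(),
            nn.Linear(dim_hidden, dim_out),
        )

    def forward(self, x):
        return self.net(x)

encoder = Encoder()

# Phase 1 (siamese pre-training on synonyms): pull synonym pairs together.
def synonym_loss(v1, v2):
    z1, z2 = encoder(v1), encoder(v2)
    return (1.0 - F.cosine_similarity(z1, z2)).mean()

# Phase 2 (siamese-like training on antonyms): an extra transformation on one
# branch breaks the symmetry, so antonymy need not behave transitively in the
# learned space (a hypothetical mechanism chosen for illustration).
flip = nn.Linear(300, 300)  # asymmetric head applied to one side only

def antonym_loss(v1, v2, margin=0.5):
    z1, z2 = flip(encoder(v1)), encoder(v2)
    # Bring the transformed first word close to its antonym...
    pos = (1.0 - F.cosine_similarity(z1, z2)).mean()
    # ...while keeping the untransformed encodings apart (margin-based).
    neg = F.relu(F.cosine_similarity(encoder(v1), encoder(v2)) - margin).mean()
    return pos + neg

# Toy usage with random tensors standing in for pre-trained word vectors.
syn_a, syn_b = torch.randn(8, 300), torch.randn(8, 300)
ant_a, ant_b = torch.randn(8, 300), torch.randn(8, 300)

# Phase 1: pre-train the shared encoder on synonym pairs.
opt1 = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for _ in range(3):
    opt1.zero_grad()
    loss = synonym_loss(syn_a, syn_b)
    loss.backward()
    opt1.step()

# Phase 2: train the siamese-like antonym model (encoder + asymmetric head).
opt2 = torch.optim.Adam(list(encoder.parameters()) + list(flip.parameters()), lr=1e-3)
for _ in range(3):
    opt2.zero_grad()
    loss = antonym_loss(ant_a, ant_b)
    loss.backward()
    opt2.step()
```

In this sketch the shared encoder plays the role of the simple feed-forward base network mentioned in the abstract, and the second phase reuses its weights, mirroring the two-phase training of the same base network.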
Anthology ID:
P19-1319
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
3297–3307
URL:
https://aclanthology.org/P19-1319
DOI:
10.18653/v1/P19-1319
Cite (ACL):
Mathias Etcheverry and Dina Wonsever. 2019. Unraveling Antonym’s Word Vectors through a Siamese-like Network. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3297–3307, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Unraveling Antonym’s Word Vectors through a Siamese-like Network (Etcheverry & Wonsever, ACL 2019)
PDF:
https://aclanthology.org/P19-1319.pdf