Approximate Dynamic Oracle for Dependency Parsing with Reinforcement Learning

Xiang Yu, Ngoc Thang Vu, Jonas Kuhn


Abstract
We present a general reinforcement learning (RL) approach for approximating dynamic oracles in transition systems where exact dynamic oracles are difficult to derive. We treat oracle parsing as a reinforcement learning problem, design a reward function inspired by the classical dynamic oracle, and use Deep Q-Learning (DQN) techniques to train the oracle with gold trees as features. This combination of a priori knowledge and data-driven methods yields an efficient dynamic oracle that improves parser performance over static oracles in several transition systems.
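
To make the abstract's reward idea concrete, the sketch below (hypothetical code, not from the paper) illustrates the kind of signal an RL oracle could be trained on for an arc-standard transition system: each transition is scored by how many gold arcs it renders unreachable, mirroring the arc-reachability loss behind classical dynamic oracles (Goldberg and Nivre, 2012). All names (State, lost_arcs, reward) are illustrative, and the unreachability test is a deliberate simplification, a necessary condition only; the exact loss is hard to compute for some systems, which is precisely what motivates learning an approximate oracle. The paper's actual reward design and DQN training details are in the PDF linked below.

    SHIFT, LEFT_ARC, RIGHT_ARC = "SH", "LA", "RA"

    class State:
        """Arc-standard parser state: stack, buffer, and arcs built so far."""
        def __init__(self, n_words):
            self.stack = [0]                      # position 0 is the artificial root
            self.buffer = list(range(1, n_words + 1))
            self.arcs = set()                     # (head, dependent) pairs

        def apply(self, action):
            if action == SHIFT:
                self.stack.append(self.buffer.pop(0))
            elif action == LEFT_ARC:              # s0 -> s1, pop s1 (second-top)
                s0, s1 = self.stack[-1], self.stack[-2]
                self.arcs.add((s0, s1))
                del self.stack[-2]
            elif action == RIGHT_ARC:             # s1 -> s0, pop s0 (top)
                s0, s1 = self.stack[-1], self.stack[-2]
                self.arcs.add((s1, s0))
                self.stack.pop()

    def lost_arcs(state, gold_arcs):
        # In arc-standard, an arc can only be built while both endpoints are
        # still in the stack or buffer, so a gold arc is irrecoverable once
        # either endpoint has been reduced without the arc existing. This is
        # a necessary condition, not the exact reachability loss.
        alive = set(state.stack) | set(state.buffer)
        return {(h, d) for (h, d) in gold_arcs
                if (h, d) not in state.arcs and (h not in alive or d not in alive)}

    def reward(state, action, gold_arcs):
        # Apply `action` and return the negative number of gold arcs it newly
        # rendered unreachable: 0 for a harmless transition, negative otherwise.
        before = len(lost_arcs(state, gold_arcs))
        state.apply(action)
        return before - len(lost_arcs(state, gold_arcs))

    if __name__ == "__main__":
        gold = {(0, 2), (2, 1), (2, 3)}           # gold tree: 0 -> 2, 2 -> 1, 2 -> 3
        s = State(3)
        s.apply(SHIFT)                            # stack [0, 1], buffer [2, 3]
        print(reward(s, SHIFT, gold))             #  0: nothing lost
        print(reward(s, LEFT_ARC, gold))          #  0: builds gold arc 2 -> 1
        print(reward(s, LEFT_ARC, gold))          # -1: attaches root under 2, losing 0 -> 2

Under this framing, a DQN-style oracle would learn Q-values over transitions from states that include the gold tree as features, with the above per-step loss shaping the reward.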
Anthology ID:
W18-6021
Volume:
Proceedings of the Second Workshop on Universal Dependencies (UDW 2018)
Month:
November
Year:
2018
Address:
Brussels, Belgium
Editors:
Marie-Catherine de Marneffe, Teresa Lynn, Sebastian Schuster
Venue:
UDW
Publisher:
Association for Computational Linguistics
Pages:
183–191
URL:
https://aclanthology.org/W18-6021
DOI:
10.18653/v1/W18-6021
Cite (ACL):
Xiang Yu, Ngoc Thang Vu, and Jonas Kuhn. 2018. Approximate Dynamic Oracle for Dependency Parsing with Reinforcement Learning. In Proceedings of the Second Workshop on Universal Dependencies (UDW 2018), pages 183–191, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Approximate Dynamic Oracle for Dependency Parsing with Reinforcement Learning (Yu et al., UDW 2018)
PDF:
https://aclanthology.org/W18-6021.pdf