%0 Conference Proceedings
%T Improving Neural Machine Translation by Incorporating Hierarchical Subword Features
%A Morishita, Makoto
%A Suzuki, Jun
%A Nagata, Masaaki
%Y Bender, Emily M.
%Y Derczynski, Leon
%Y Isabelle, Pierre
%S Proceedings of the 27th International Conference on Computational Linguistics
%D 2018
%8 August
%I Association for Computational Linguistics
%C Santa Fe, New Mexico, USA
%F morishita-etal-2018-improving
%X This paper focuses on subword-based Neural Machine Translation (NMT). We hypothesize that the appropriate subword units may differ across the following three modules (layers) of the NMT model: (1) the encoder embedding layer, (2) the decoder embedding layer, and (3) the decoder output layer. We observe that the subword segmentation of Sennrich et al. (2016) has the property that a large vocabulary is a superset of a small vocabulary, and we modify the NMT model so that several different subword units can be incorporated into a single embedding layer. We refer to these small subword features as hierarchical subword features. To empirically investigate our hypothesis, we compare the performance of several different subword units and hierarchical subword features for both the encoder and decoder embedding layers. We confirm that incorporating hierarchical subword features into the encoder consistently improves BLEU scores on the IWSLT evaluation datasets.
%U https://aclanthology.org/C18-1052
%P 618-629