Hitoshi Manabe


2018

Neural Tensor Networks with Diagonal Slice Matrices
Takahiro Ishihara | Katsuhiko Hayashi | Hitoshi Manabe | Masashi Shimbo | Masaaki Nagata
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)

Although neural tensor networks (NTNs) have been successful in many NLP tasks, they require a large number of parameters to be estimated, which often leads to overfitting and long training times. We address these issues by applying eigendecomposition to each slice matrix of a tensor to reduce the number of parameters. First, we evaluate our proposed NTN models on knowledge graph completion. Second, we extend the models to recursive NTNs (RNTNs) and evaluate them on logical reasoning tasks. These experiments show that our proposed models learn better and faster than the original (R)NTNs.
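As a rough illustration of where the parameter saving comes from (not the paper's exact formulation), the sketch below scores an entity pair with an NTN-style layer whose slice matrices are restricted to diagonals, so the bilinear term needs k·d parameters instead of k·d² per relation. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def ntn_score_diag(e1, e2, D, V, b, u):
    """Score an entity pair with an NTN-style layer whose slice matrices are diagonal.

    e1, e2 : entity embeddings, shape (d,)
    D      : diagonals of the k slice matrices, shape (k, d)
             (each slice W_i is approximated by diag(D[i]), so the bilinear
             term e1^T W_i e2 reduces to sum(D[i] * e1 * e2))
    V      : standard linear weights, shape (k, 2d)
    b      : bias, shape (k,)
    u      : output weights, shape (k,)
    """
    bilinear = D @ (e1 * e2)                # k*d parameters instead of k*d*d
    linear = V @ np.concatenate([e1, e2])   # usual NTN linear term
    hidden = np.tanh(bilinear + linear + b)
    return float(u @ hidden)

# Minimal usage with random parameters (dimensions chosen arbitrarily).
d, k = 4, 2
rng = np.random.default_rng(0)
score = ntn_score_diag(rng.normal(size=d), rng.normal(size=d),
                       rng.normal(size=(k, d)), rng.normal(size=(k, 2 * d)),
                       rng.normal(size=k), rng.normal(size=k))
```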

Reduction of Parameter Redundancy in Biaffine Classifiers with Symmetric and Circulant Weight Matrices
Tomoki Matsuno | Katsuhiko Hayashi | Takahiro Ishihara | Hitoshi Manabe | Yuji Matsumoto
Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation

2017

Adversarial Training for Cross-Domain Universal Dependency Parsing
Motoki Sato | Hitoshi Manabe | Hiroshi Noji | Yuji Matsumoto
Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies

We describe our submission to the CoNLL 2017 shared task, which exploits the knowledge of a language shared across different domains via a domain adaptation technique. Our approach extends the recently proposed adversarial training technique for domain adaptation, which we apply on top of a graph-based neural dependency parser built on bidirectional LSTMs. In our experiments, we find that our baseline graph-based parser already outperforms the official baseline model (UDPipe) by a large margin. Further, by applying our technique to treebanks of the same language from different domains, we observe an additional gain in performance, in particular for domains with less training data.
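A common way to realize this kind of adversarial domain adaptation is a gradient reversal layer placed between the shared encoder and a domain classifier; the abstract does not spell out the mechanism, so the PyTorch sketch below is only an assumed, minimal illustration of that idea, with hypothetical module and parameter names.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) the gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradient pushes the shared encoder toward domain-invariant features.
        return -ctx.lambd * grad_output, None

class DomainDiscriminator(nn.Module):
    """Predicts the source domain of a sentence representation through a reversed gradient."""
    def __init__(self, hidden_dim, num_domains, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.classifier = nn.Linear(hidden_dim, num_domains)

    def forward(self, sentence_repr):
        reversed_repr = GradReverse.apply(sentence_repr, self.lambd)
        return self.classifier(reversed_repr)
```

In such a setup the parser loss and the domain-classification loss are trained jointly; because the gradient is reversed before reaching the shared BiLSTM encoder, the encoder is encouraged to produce features that the domain classifier cannot distinguish.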