Ahmad Aghaebrahimian


2019

Towards Integration of Statistical Hypothesis Tests into Deep Neural Networks
Ahmad Aghaebrahimian | Mark Cieliebak
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

We report ongoing work on a new deep architecture working in tandem with a statistical test procedure for jointly training on texts and their label descriptions for multi-label and multi-class classification tasks. A statistical hypothesis testing method is used to extract the most informative words for each given class. These words are used as a class description for more label-aware text classification. The intuition is to help the model concentrate on more informative words rather than on more frequent ones. The model leverages the label descriptions in addition to the input text to enhance classification performance. Our method is entirely data-driven, has no dependency on sources of information other than the training data, and is adaptable to different classification problems by providing appropriate training data without major hyper-parameter tuning. We trained and tested our system on several publicly available datasets, where we improved the state of the art on one dataset by a large margin and obtained competitive results on all others.
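The abstract does not name the specific hypothesis test, so the sketch below uses a chi-squared test (via scikit-learn) purely to illustrate the idea of extracting the most informative words per class as a label description; the function, variable, and example data are hypothetical, not the authors' code.

# Minimal sketch, assuming a chi-squared test as the statistical hypothesis
# test (the abstract does not specify which test is used) and scikit-learn's
# vectorizer/chi2 utilities; names here are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2
import numpy as np

def class_descriptions(texts, labels, top_k=10):
    """Return the top_k most class-informative words for each label."""
    vectorizer = CountVectorizer(lowercase=True)
    X = vectorizer.fit_transform(texts)            # term counts per document
    vocab = np.array(vectorizer.get_feature_names_out())
    descriptions = {}
    for label in sorted(set(labels)):
        y = np.array([1 if l == label else 0 for l in labels])
        scores, _ = chi2(X, y)                     # chi-squared score per word
        top = np.argsort(scores)[::-1][:top_k]     # highest-scoring words first
        descriptions[label] = vocab[top].tolist()
    return descriptions

# Toy usage: the resulting label descriptions would then be encoded alongside
# the input text by the classification network.
docs = ["the team won the match", "stocks fell sharply today", "coach praised the players"]
tags = ["sports", "finance", "sports"]
print(class_descriptions(docs, tags, top_k=3))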

2018

Deep Neural Networks at the Service of Multilingual Parallel Sentence Extraction
Ahmad Aghaebrahimian
Proceedings of the 27th International Conference on Computational Linguistics

Wikipedia provides an invaluable source of parallel multilingual data, which is in high demand for many kinds of linguistic inquiry, both theoretical and practical. We introduce a novel end-to-end neural model for large-scale parallel data harvesting from Wikipedia. Our model is language-independent, robust, and highly scalable. We use our system to collect parallel German-English, French-English, and Persian-English sentences. Human evaluations show the strong performance of this model in collecting high-quality parallel data. We also propose a statistical framework which extends the results of our human evaluation to other language pairs. Our model also obtained a state-of-the-art result on the German-English dataset of the BUCC 2017 shared task on parallel sentence extraction from comparable corpora.
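As a rough illustration of the task (not the paper's end-to-end neural architecture), the sketch below mines candidate parallel pairs by cosine similarity between precomputed cross-lingual sentence embeddings; the embeddings, threshold, and function names are assumptions for the example.

# Minimal numpy sketch of one common way to mine parallel sentences:
# score candidate cross-lingual sentence pairs by cosine similarity and
# keep pairs above a threshold. Assumes precomputed bilingual embeddings.
import numpy as np

def mine_parallel(src_emb, tgt_emb, threshold=0.8):
    """src_emb: (n, d) and tgt_emb: (m, d) sentence-embedding matrices."""
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sims = src @ tgt.T                        # cosine similarity matrix
    pairs = []
    for i in range(sims.shape[0]):
        j = int(np.argmax(sims[i]))           # best target for each source
        if sims[i, j] >= threshold:
            pairs.append((i, j, float(sims[i, j])))
    return pairs

# Toy usage with random "embeddings"; a real system would embed the German
# and English sentences drawn from aligned Wikipedia articles.
rng = np.random.default_rng(0)
de, en = rng.normal(size=(5, 16)), rng.normal(size=(7, 16))
print(mine_parallel(de, en, threshold=0.1))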

Linguistically-Based Deep Unstructured Question Answering
Ahmad Aghaebrahimian
Proceedings of the 22nd Conference on Computational Natural Language Learning

In this paper, we propose a new linguistically-based approach to answering non-factoid open-domain questions from unstructured data. First, we describe an architecture for textual encoding and, building on it, introduce a deep end-to-end neural model. This architecture benefits from a bilateral attention mechanism which helps the model focus on the question and the answer sentence at the same time for phrasal answer extraction. Second, we feed the output of a constituency parser directly into the model and integrate linguistic constituents into the network, helping it concentrate on chunks of an answer rather than on individual words and generate more natural output. By optimizing this architecture, we obtain near-human performance on SQuAD and results competitive with a state-of-the-art system on MS-MARCO.
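The sketch below illustrates a generic two-way (bilateral) attention step between question and answer-sentence token encodings: a shared similarity matrix is normalized along both axes to let each side attend to the other. It is a simplified illustration of such a mechanism, not the paper's exact architecture; all shapes and names are assumptions.

# Minimal numpy sketch of bilateral (two-way) attention; illustrative only.
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def bilateral_attention(question, sentence):
    """question: (q_len, d), sentence: (s_len, d) token encodings."""
    sim = question @ sentence.T                   # (q_len, s_len) similarities
    q2s = softmax(sim, axis=1) @ sentence         # question attends to sentence
    s2q = softmax(sim, axis=0).T @ question       # sentence attends to question
    return q2s, s2q                               # fed to answer-extraction layers

q = np.random.normal(size=(6, 32))    # e.g. a 6-token question
s = np.random.normal(size=(20, 32))   # e.g. a 20-token candidate sentence
q2s, s2q = bilateral_attention(q, s)
print(q2s.shape, s2q.shape)           # (6, 32) (20, 32)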

2016

Open-domain Factoid Question Answering via Knowledge Graph Search
Ahmad Aghaebrahimian | Filip Jurčíček
Proceedings of the Workshop on Human-Computer Question Answering