Angelina Ivanova


2016

Towards Comparability of Linguistic Graph Banks for Semantic Parsing
Stephan Oepen | Marco Kuhlmann | Yusuke Miyao | Daniel Zeman | Silvie Cinková | Dan Flickinger | Jan Hajič | Angelina Ivanova | Zdeňka Urešová
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

We announce a new language resource for research on semantic parsing, a large, carefully curated collection of semantic dependency graphs representing multiple linguistic traditions. This resource, called SDP 2016, provides an update and extension to previous versions used as Semantic Dependency Parsing target representations in the 2014 and 2015 Semantic Evaluation Exercises. For a common core of English text, this third edition comprises semantic dependency graphs from four distinct frameworks, packaged in a unified abstract format and aligned at the sentence and token levels. SDP 2016 is the first general release of this resource and is available for licensing from the Linguistic Data Consortium as of May 2016. The data is accompanied by an open-source SDP utility toolkit and by system results from previous contrastive parsing evaluations against these target representations.

2015

Proceedings of the ACL-IJCNLP 2015 Student Research Workshop
Kuan-Yu Chen | Angelina Ivanova | Ellie Pavlick | Emily Bender | Chin-Yew Lin | Stephan Oepen
Proceedings of the ACL-IJCNLP 2015 Student Research Workshop

2014

SemEval 2014 Task 8: Broad-Coverage Semantic Dependency Parsing
Stephan Oepen | Marco Kuhlmann | Yusuke Miyao | Daniel Zeman | Dan Flickinger | Jan Hajič | Angelina Ivanova | Yi Zhang
Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014)

Treelet Probabilities for HPSG Parsing and Error Correction
Angelina Ivanova | Gertjan van Noord
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

Most state-of-the-art parsers are designed to produce an analysis for any input, even in the presence of errors. However, small grammatical mistakes in a sentence often cause the parser to fail to build a correct syntactic tree. Applications that can identify and correct mistakes during parsing are particularly interesting for processing user-generated noisy content. Such systems could potentially take advantage of the linguistic depth of broad-coverage precision grammars. In order to choose the best correction for an utterance, the probabilities of parse trees of different sentences should be comparable, which is not supported by the discriminative methods underlying parsing software for deep grammars. In the present work we assess the treelet model for determining generative probabilities for HPSG parsing with error correction. In the first experiment, the treelet model is applied to the parse selection task and achieves higher exact match accuracy than both the baseline and a PCFG. In the second experiment, it is tested for its ability to score the parse tree of the correct sentence higher than the constituency tree of the original version of the sentence containing a grammatical error.

2013

On Different Approaches to Syntactic Analysis Into Bi-Lexical Dependencies. An Empirical Comparison of Direct, PCFG-Based, and HPSG-Based Parsers
Angelina Ivanova | Stephan Oepen | Rebecca Dridan | Dan Flickinger | Lilja Øvrelid
Proceedings of the 13th International Conference on Parsing Technologies (IWPT 2013)

Survey on parsing three dependency representations for English
Angelina Ivanova | Stephan Oepen | Lilja Øvrelid
51st Annual Meeting of the Association for Computational Linguistics Proceedings of the Student Research Workshop

2012

Who Did What to Whom? A Contrastive Study of Syntacto-Semantic Dependencies
Angelina Ivanova | Stephan Oepen | Lilja Øvrelid | Dan Flickinger
Proceedings of the Sixth Linguistic Annotation Workshop

Extracting Context-Rich Entailment Rules from Wikipedia Revision History
Elena Cabrio | Bernardo Magnini | Angelina Ivanova
Proceedings of the 3rd Workshop on the People’s Web Meets NLP: Collaboratively Constructed Semantic Resources and their Applications to NLP