Latest revision as of 08:31, 29 May 2017
Textual entailment systems rely on many different types of NLP resources, including term banks, paraphrase lists, parsers, named-entity recognizers, etc. With so many resources being continuously released and improved, it can be difficult to know which particular resource to use when developing a system.
In response, the Recognizing Textual Entailment (RTE) shared task community initiated a new activity for building this Textual Entailment Resource Pool. RTE participants and any other member of the NLP community are encouraged to contribute to the pool.
In an effort to determine the relative impact of these resources, RTE participants are strongly encouraged to report, whenever possible, the contribution of each utilized resource to overall system performance. Formal qualitative and quantitative results should be included in a separate section of the system report, as well as posted on the talk pages of this Textual Entailment Resource Pool.
Adding a new resource is very easy. See how to use existing templates to do this in Help:Using Templates.
Complete RTE Systems
- VENSES (from Ca' Foscari University of Venice, Italy)
- Nutcracker (available for download)
- Entailment Demo (from the University of Illinois at Urbana-Champaign) - INACTIVE (as of 2010-12-22)
- EDITS - Edit Distance Textual Entailment Suite (open source software developed by Human Language Technology (HLT) group at FBK-Irst)
- BIUTEE - Bar Ilan University Textual Entailment Engine (open source)
- EXCITEMENT Open Platform (EOP) - A generic multi-lingual platform for textual inference made available to the scientific and technological communities by the EU project EXCITEMENT
- TIFMO (from National Institute of Informatics, Japan)
RTE data sets
Past campaigns data sets
- RTE1 dataset - provided by PASCAL
- RTE2 dataset - provided by PASCAL
- RTE3 dataset - provided by PASCAL
- RTE4 dataset - provided by NIST - freely available upon request. For details see TAC User Agreements
- RTE5 dataset - provided by NIST - freely available upon request. For details see TAC User Agreements
- RTE6 dataset - provided by NIST - freely available upon request. For details see TAC User Agreements
- RTE7 dataset - provided by NIST - freely available upon request. For details see TAC User Agreements
- The Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge at SemEval 2013
- The Multi-Genre NLI Corpus (MultiNLI; 433k examples, used in the RepEval 2017 Shared Task)
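The RTE challenge data sets above share a simple XML format: each pair carries a text (t), a hypothesis (h), and a gold entailment label. A minimal, self-contained sketch of reading that format (the sample pair and attribute names below follow the common RTE layout but are invented for illustration; label and attribute conventions vary slightly across the challenges):

```python
# Parse RTE-style pairs: <pair id=".." entailment=".."><t>..</t><h>..</h></pair>.
# The sample XML is invented for illustration; real files come from the
# challenge downloads listed above.
import xml.etree.ElementTree as ET

SAMPLE = """<entailment-corpus>
  <pair id="1" entailment="YES" task="IR">
    <t>A cat sat on the mat.</t>
    <h>There was a cat on the mat.</h>
  </pair>
</entailment-corpus>"""

def read_pairs(xml_text):
    """Yield (id, text, hypothesis, label) tuples from an RTE-style XML string."""
    root = ET.fromstring(xml_text)
    for pair in root.iter("pair"):
        yield (pair.get("id"),
               pair.findtext("t").strip(),
               pair.findtext("h").strip(),
               pair.get("entailment"))

for pid, text, hyp, label in read_pairs(SAMPLE):
    print(pid, label, "|", text, "=>", hyp)
```

Note that RTE-1 used a value="TRUE"/"FALSE" attribute rather than entailment="YES"/"NO", so a reader covering all challenges should check both.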
RTE data sets translated into other languages
- RTE3 dataset translated into German - provided by EXCITEMENT
- RTE3 dataset translated into Italian - provided by EXCITEMENT
Other data sets
- The Stanford Natural Language Inference (SNLI) corpus, a 570k-example, manually annotated TE dataset with an accompanying leaderboard.
- FrameNet manually annotated RTE 2006 Test Set. Provided by SALSA project, Saarland University.
- Manually Word Aligned RTE 2006 Data Sets. Provided by the Natural Language Processing Group, Microsoft Research.
- RTE data sets annotated for a 3-way decision: entails, contradicts, unknown. Provided by Stanford NLP Group.
- BPI RTE data set - 250 pairs, focusing on world knowledge. Provided jointly by Boeing, Princeton, and ISI.
- Textual Entailment Specialized Data Sets - 90 RTE-5 Test Set pairs annotated with linguistic phenomena + 203 monothematic pairs (i.e. pairs where only one linguistic phenomenon is relevant to the entailment relation) created from the 90 annotated pairs. Provided jointly by FBK-Irst and CELCT.
- RTE-5 Search Pilot Data Set annotated with anaphora and coreference information - the RTE-5 Search Data Set annotated with anaphora/coreference information, plus an Augmented RTE-5 Search Data Set in which every referring expression that needs to be resolved in the entailing sentences is replaced by an explicit expression, based on the anaphora/coreference annotation. Provided by CELCT and distributed by NIST on the Past TAC Data web page (2009 Search Pilot, annotated test/dev data).
- RTE-3-Expanded, RTE-4-Expanded, RTE-5-Expanded - RTE data sets expanded for the two-way and three-way tasks, with at least 2,000 pairs in each data set.
- Explanation-Based Analysis annotation of an RTE-5 Main Task subset, described in this ACL 2010 paper
- Wiki Entailment Corpus - an RTE-like set of entailment pairs extracted from Wikipedia revisions, described in this paper
- The Guardian Headlines Entailment Training Dataset - an automatically generated dataset of 32,000 pairs similar to the RTE-1 dataset.
- Answer Validation Exercise at CLEF 2006 (AVE 2006)
- The Textual Entailment Task for Italian at EVALITA 2009 An evaluation exercise on TE for Italian.
- Cross-Lingual Textual Entailment for Content Synchronization The Cross-Lingual Textual Entailment task at SemEval 2012.
- Cross-Lingual Textual Entailment for Content Synchronization The Cross-Lingual Textual Entailment task at SemEval 2013.
- ASSIN - a shared task on TE for Portuguese with 10,000 pairs.
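SNLI (and MultiNLI, listed above) are distributed as JSON Lines: one JSON object per example, with the premise, hypothesis, and gold label under the keys sentence1, sentence2, and gold_label. A minimal sketch of loading that format (the two sample lines are invented for illustration):

```python
# SNLI/MultiNLI are JSON Lines files; each line is one example with (among
# other fields) "sentence1", "sentence2", and "gold_label". Examples on which
# annotators did not agree carry the label "-" and are conventionally skipped.
import json

SAMPLE_JSONL = """\
{"gold_label": "entailment", "sentence1": "A dog runs in a park.", "sentence2": "An animal is outdoors."}
{"gold_label": "contradiction", "sentence1": "A dog runs in a park.", "sentence2": "The dog is asleep."}"""

def load_nli(lines):
    """Return (premise, hypothesis, label) triples, skipping unlabeled examples."""
    triples = []
    for line in lines.splitlines():
        ex = json.loads(line)
        if ex["gold_label"] == "-":   # no annotator consensus
            continue
        triples.append((ex["sentence1"], ex["sentence2"], ex["gold_label"]))
    return triples

print(load_nli(SAMPLE_JSONL))
```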
Knowledge Resources
The RTE Knowledge Resources page presents:
- a call for resources, inviting system developers to share the resources used by their own TE engines, to both help improve the TE technology and further test and evaluate such resources;
- the ablation tests carried out in the RTE challenges to evaluate the impact of knowledge resources and tools on TE system performance;
- lists of knowledge resources, both publicly available and unpublished, used by systems participating in the last RTE challenges.
Projects
- CoSyne EU project: Multilingual Content Synchronization with Wikis.
- EXCITEMENT EU project: EXploring Customer Interactions through Textual EntailMENT.
- QALL-ME EU project: Question Answering Learning technologies in a multiLingual and Multimodal Environment.
Tools
Parsers
- C&C parser for Combinatory Categorial Grammar
- Minipar
- Shallow Parser - from the University of Illinois at Urbana-Champaign, see a web demo of this tool
Role Labelling
- ASSERT
- Shalmaneser
- Semantic Role Labeler - from the University of Illinois at Urbana-Champaign, see a web demo of this tool
Entity Recognition Tools
- Illinois Named Entity Tagger - see a web demo of this tool
- Illinois Multi-lingual Named Entity Discovery Tool - see a web demo of this tool
Similarity / Relatedness Tools
- UKB: open-source WordNet-based similarity/relatedness tool; also includes pre-computed semantic vectors for all words
Corpus Readers
- NLTK provides a corpus reader for the data from RTE Challenges 1, 2, and 3 - see the Corpus Readers Guide for more information.
Related Libraries
- PyPES general purpose library containing evaluation environment for RTE and McPIET text inference engine based on the ERG (English Resource Grammar)
Text Normalizers
- Java number normalizer (Beta) - a tool for converting textual representations of numbers to a standard numerical string.
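The normalizer above is a Java tool; to illustrate the kind of conversion such a tool performs, here is a minimal Python sketch. The mapping-based approach below is my own illustrative implementation, not the tool's actual algorithm, and it only covers simple English number phrases:

```python
# Minimal sketch of textual-number normalization: convert an English number
# phrase like "two hundred forty five" to the digit string "245".
# Illustrative only; real normalizers handle ordinals, decimals, ranges, etc.
UNITS = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
         "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10,
         "eleven": 11, "twelve": 12, "thirteen": 13, "fourteen": 14,
         "fifteen": 15, "sixteen": 16, "seventeen": 17, "eighteen": 18,
         "nineteen": 19}
TENS = {"twenty": 20, "thirty": 30, "forty": 40, "fifty": 50,
        "sixty": 60, "seventy": 70, "eighty": 80, "ninety": 90}
SCALES = {"thousand": 1000, "million": 10**6}

def normalize_number(phrase):
    """Convert a simple English number phrase to a numeric string."""
    current = total = 0
    for word in phrase.lower().replace("-", " ").split():
        if word in UNITS:
            current += UNITS[word]
        elif word in TENS:
            current += TENS[word]
        elif word == "hundred":
            current *= 100
        elif word in SCALES:          # close out the current group
            total += current * SCALES[word]
            current = 0
        elif word != "and":
            raise ValueError(f"unrecognized token: {word}")
    return str(total + current)

print(normalize_number("two hundred forty five"))   # -> 245
```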