Textual Entailment Resource Pool

From ACL Wiki
* [http://www-nlp.stanford.edu/projects/contradiction/ RTE data sets annotated for a 3-way decision: entails, contradicts, unknown.] Provided by Stanford NLP Group.

* [http://www.cs.utexas.edu/~pclark/bpi-test-suite/ BPI RTE data set] - 250 pairs, focusing on world knowledge. Provided jointly by [http://www.boeing.com/phantom/math_ct/index.html Boeing], [http://wordnet.cs.princeton.edu/ Princeton], and [http://www.isi.edu ISI].

* [http://hlt.fbk.eu/en/TE_Specialized_Data Textual Entailment Specialized Data Sets] - 90 RTE-5 Test Set pairs annotated with linguistic phenomena, plus 203 monothematic pairs (i.e. pairs where only one linguistic phenomenon is relevant to the entailment relation) created from the 90 annotated pairs. Provided jointly by [http://hlt.fbk.eu/en/home FBK-Irst] and [http://www.celct.it/ CELCT].
Revision as of 08:47, 28 April 2010

Textual entailment systems rely on many different types of NLP resources, including term banks, paraphrase lists, parsers, named-entity recognizers, etc. With so many resources being continuously released and improved, it can be difficult to know which particular resource to use when developing a system.

In response, the Recognizing Textual Entailment (RTE) shared task community initiated a new activity for building this Textual Entailment Resource Pool. RTE participants and all other members of the NLP community are encouraged to contribute to the pool.

In an effort to determine the relative impact of these resources, RTE participants are strongly encouraged to report, whenever possible, the contribution of each utilized resource to the system's overall performance. Formal qualitative and quantitative results should be included in a separate section of the system report, as well as posted on the talk pages of this Textual Entailment Resource Pool.

Adding a new resource is easy: see Help:Using Templates for how to use the existing templates.

== Complete RTE Systems ==

== RTE data sets ==

== Knowledge Resources ==

=== RTE Knowledge Resources ===

== Tools ==

=== Parsers ===

=== Role Labelling ===

=== Entity Recognition Tools ===

== Corpus Readers ==

* NLTK provides a corpus reader for the data from RTE Challenges 1, 2, and 3 - see the Corpus Readers Guide for more information.

== Related Libraries ==

* PyPES - a general-purpose library containing an evaluation environment for RTE and the McPIET text inference engine, which is based on the ERG (English Resource Grammar).

== Links ==