WordNet - RTE Users
Revision as of 08:15, 26 November 2009
Unless otherwise specified, the data about version, usage, and evaluation of the resource were provided by the participants themselves.
| Participants* | Campaign | Version | Specific usage description | Evaluations / Comments |
|---|---|---|---|---|
| AUEB | RTE5 | | During the calculation of the similarity measures, words from T and H that are synonyms according to WordNet are treated as identical. | Ablation test performed. Negative impact of the resource: -2% accuracy on the two-way task, -2.67% on the three-way task. |
| BIU | RTE5 | 3.0 | Synonyms, hyponyms (up to 2 levels away from the original term), the hyponym_instance relation, and derivations. | Ablation test performed. Positive impact of the resource: +2.5% accuracy on the two-way task. |
| AUEB | RTE4 | | | Data taken from the RTE4 proceedings. Participants are encouraged to add further information. |
| BIU | RTE4 | 3.0 | Lexical entailment rules derived on the fly, using synonyms, hypernyms (up to two levels), and derivations. Also used as part of our novel lexical-syntactic resource. | 0.8% improvement in an ablation test on RTE-4. The potential contribution is higher, since this resource partially overlaps with the novel lexical-syntactic rule base. |
| Boeing | RTE4 | 2.0 | Semantic relations between words | No formal evaluation. Plays a role in most entailments found. |
| Cambridge | RTE4 | 3.0 | Meaning postulates from WordNet noun hyponymy, e.g. forall x: cat(x) -> animal(x) | No systematic evaluation. |
| CERES | RTE4 | 3.0 | Hypernyms, antonyms, indexWords (N, V, Adj, Adv) | Used, but no evaluation performed. |
| DFKI | RTE4 | 3.0 | Semantic relations between words | No separate evaluation. |
| DLSIUAES | RTE4 | | | Data taken from the RTE4 proceedings. Participants are encouraged to add further information. |
| EMORY | RTE4 | | | Data taken from the RTE4 proceedings. Participants are encouraged to add further information. |
| FbkIrst | RTE4 | 3.0 | Lexical similarity | No precise evaluation of the resource has been carried out. In our second run we used a combined system (EDITSneg + EDITSallbutneg) and obtained a 0.6% improvement in accuracy over the first run, in which only EDITSneg was used. EDITSallbutneg exploits lexical similarity (WordNet similarity), but we cannot say with certainty that the improvement is due solely to the use of WordNet. |
| FSC | RTE4 | | | Data taken from the RTE4 proceedings. Participants are encouraged to add further information. |
| IIT | RTE4 | | | Data taken from the RTE4 proceedings. Participants are encouraged to add further information. |
| IPD | RTE4 | | | Data taken from the RTE4 proceedings. Participants are encouraged to add further information. |
| OAQA | RTE4 | | | Data taken from the RTE4 proceedings. Participants are encouraged to add further information. |
| QUANTA | RTE4 | | | Data taken from the RTE4 proceedings. Participants are encouraged to add further information. |
| SAGAN | RTE4 | | | Data taken from the RTE4 proceedings. Participants are encouraged to add further information. |
| Stanford | RTE4 | | | Data taken from the RTE4 proceedings. Participants are encouraged to add further information. |
| UAIC | RTE4 | | | Data taken from the RTE4 proceedings. Participants are encouraged to add further information. |
| UMD | RTE4 | | | Data taken from the RTE4 proceedings. Participants are encouraged to add further information. |
| UNED | RTE4 | | | Data taken from the RTE4 proceedings. Participants are encouraged to add further information. |
| Uoeltg | RTE4 | | | Data taken from the RTE4 proceedings. Participants are encouraged to add further information. |
| UPC | RTE4 | | | Data taken from the RTE4 proceedings. Participants are encouraged to add further information. |
| AUEB | RTE3 | 2.1 | Synonymy resolution | Replacing the words of H with their synonyms in T: 2% improvement on the RTE3 data sets. |
| UIUC | RTE3 | | Semantic distance between words | |
| VENSES | RTE3 | 3.0 | Semantic relations between words | No evaluation of the resource. |
| New user | | | | Participants are encouraged to contribute. |
Total: 24
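Several of the usage descriptions above (e.g. BIU's entailment rules from synonyms and hypernyms "up to two levels", or AUEB's treatment of WordNet synonyms as identical) boil down to the same lookup: does a text word entail a hypothesis word via shared synsets or a short hypernym chain? The sketch below illustrates that logic over a small hand-coded fragment of the WordNet hierarchy; the actual systems queried full WordNet 2.0/3.0 through various APIs, so the data and function names here are illustrative only.

```python
# Toy fragment of WordNet: word -> synset id, and child -> parent hypernym links.
# (Hand-coded for illustration; real systems used the full WordNet database.)
SYNSETS = {
    "car": "car.n.01", "automobile": "car.n.01",   # synonyms share a synset
    "cat": "cat.n.01", "feline": "feline.n.01",
    "carnivore": "carnivore.n.01", "animal": "animal.n.01",
    "dog": "dog.n.01",
}
HYPERNYM = {
    "cat.n.01": "feline.n.01",
    "feline.n.01": "carnivore.n.01",
    "carnivore.n.01": "animal.n.01",
    "dog.n.01": "animal.n.01",
}

def entails(word_t, word_h, max_levels=2):
    """True if word_t entails word_h via synonymy or via hypernymy
    within max_levels steps (cf. the 'up to two levels' rules above)."""
    synset = SYNSETS.get(word_t)
    target = SYNSETS.get(word_h)
    if synset is None or target is None:
        return False          # unknown word: no rule can fire
    if synset == target:
        return True           # synonyms: same synset
    for _ in range(max_levels):
        synset = HYPERNYM.get(synset)
        if synset is None:
            return False      # reached the top of the hierarchy
        if synset == target:
            return True       # hypernym found within the level cap
    return False

print(entails("car", "automobile"))  # True  (synonyms)
print(entails("cat", "feline"))      # True  (1 hypernym level)
print(entails("cat", "animal"))      # False (3 levels away, beyond the cap)
```

The level cap matters: raising `max_levels` to 3 would let "cat" entail "animal", at the cost of noisier rules, which is the precision/recall trade-off the participants' level limits address.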
[*] For further information about participants, see RTE Challenges - Data about participants.
Return to RTE Knowledge Resources