RTE5 - Ablation Tests


Revision as of 09:00, 24 November 2009

{| border="1" cellpadding="3" cellspacing="0"
|- align="left"
! Ablated Resource
! Team Run
! Relative accuracy - 2way
! Relative accuracy - 3way
! Resource Usage Description
|- bgcolor="#ECECEC" align="left"
| Acronym guide
| Siel_093.3way
| style="text-align: right;"| 0
| style="text-align: right;"| 0
| Acronym Resolution
|- bgcolor="#ECECEC" align="left"
| Acronym guide + <br>UAIC_Acronym_rules
| UAIC20091.3way
| style="text-align: right;"| 0.0017
| style="text-align: right;"| 0.0016
| We start from the acronym guide, but additionally we use a rule that considers, for expressions of the form Xaaaa Ybbbb Zcccc, the acronym XYZ, regardless of the length of the text with this form (see the first sketch below the table).
|- bgcolor="#ECECEC" align="left"
| DIRT
| BIU1.2way
| style="text-align: right;"| 0.0133
| style="text-align: right;"|
| Inference rules
|- bgcolor="#ECECEC" align="left"
| DIRT
| Boeing3.3way
| style="text-align: right;"| -0.0117
| style="text-align: right;"| 0
|
|- bgcolor="#ECECEC" align="left"
| DIRT
| UAIC20091.3way
| style="text-align: right;"| 0.0017
| style="text-align: right;"| 0.0033
| We transform the text and the hypothesis into dependency trees with MINIPAR; DIRT relations are used to map verbs in T to verbs in H.
|- bgcolor="#ECECEC" align="left"
| Framenet
| DLSIUAES1.2way
| style="text-align: right;"| 0.0116
| style="text-align: right;"|
| frame-to-frame similarity metric
|- bgcolor="#ECECEC" align="left"
| Framenet
| DLSIUAES1.3way
| style="text-align: right;"| -0.0017
| style="text-align: right;"| -0.0017
| frame-to-frame similarity metric
|- bgcolor="#ECECEC" align="left"
| Framenet
| UB.dmirg3.2way
| style="text-align: right;"| 0
| style="text-align: right;"|
|
|- bgcolor="#ECECEC" align="left"
| Grady Ward’s MOBY Thesaurus + <br>Roget's Thesaurus
| VensesTeam2.2way
| style="text-align: right;"| 0.0283
| style="text-align: right;"|
| Semantic fields are used for semantic similarity matching in all cases of non-identical lemmas.
|- bgcolor="#ECECEC" align="left"
| MontyLingua Tool
| Siel_093.3way
| style="text-align: right;"| 0
| style="text-align: right;"| 0
| For VerbOcean, the verbs have to be in base form. We used the MontyLingua tool to convert the verbs into their base form.
|- bgcolor="#ECECEC" align="left"
| NEGATION_rules by UAIC
| UAIC20091.3way
| style="text-align: right;"| 0
| style="text-align: right;"| -0.0134
| Negation rules check the branches descending from verbs in the dependency trees to see whether word categories that change the meaning are present (see the second sketch below the table).
|- bgcolor="#ECECEC" align="left"
| NER
| UI_ccg1.2way
| style="text-align: right;"| 0.0483
| style="text-align: right;"|
| Named Entity recognition/comparison
|- bgcolor="#ECECEC" align="left"
| PropBank
| cswhu1.3way
| style="text-align: right;"| 0.0200
| style="text-align: right;"| 0.0317
| syntactic and semantic parsing
|- bgcolor="#ECECEC" align="left"
| Stanford NER
| QUANTA1.2way
| style="text-align: right;"| 0.0067
| style="text-align: right;"|
| We use Named Entity similarity as a feature.
|- bgcolor="#ECECEC" align="left"
| Stopword list
| FBKirst1.2way
| style="text-align: right;"| 0.0150
| style="text-align: right;"| -0.1028
|
|- bgcolor="#ECECEC" align="left"
| Training data from RTE1, 2, 3
| PeMoZa3.2way
| style="text-align: right;"| 0
| style="text-align: right;"|
|
|- bgcolor="#ECECEC" align="left"
| Training data from RTE1, 2, 3
| PeMoZa3.2way
| style="text-align: right;"| 0
| style="text-align: right;"|
|
|- bgcolor="#ECECEC" align="left"
| Training data from RTE2
| PeMoZa3.2way
| style="text-align: right;"| 0.0066
| style="text-align: right;"|
|
|- bgcolor="#ECECEC" align="left"
| Training data from RTE2, 3
| PeMoZa3.2way
| style="text-align: right;"| 0
| style="text-align: right;"|
|
|- bgcolor="#ECECEC" align="left"
| VerbOcean
| DFKI1.3way
| style="text-align: right;"| 0
| style="text-align: right;"| 0.0017
|
|- bgcolor="#ECECEC" align="left"
| VerbOcean
| DFKI2.3way
| style="text-align: right;"| 0.0033
| style="text-align: right;"| 0.0050
|
|- bgcolor="#ECECEC" align="left"
| VerbOcean
| DFKI3.3way
| style="text-align: right;"| 0.0017
| style="text-align: right;"| 0.0017
|
|- bgcolor="#ECECEC" align="left"
| VerbOcean
| FBKirst1.2way
| style="text-align: right;"| -0.0016
| style="text-align: right;"| -0.1028
| Rules extracted from VerbOcean
|- bgcolor="#ECECEC" align="left"
| VerbOcean
| QUANTA1.2way
| style="text-align: right;"| 0
| style="text-align: right;"|
| We use the "opposite-of" relation in VerbOcean as a feature.
|- bgcolor="#ECECEC" align="left"
| VerbOcean
| Siel_093.3way
| style="text-align: right;"| 0
| style="text-align: right;"| 0
| Similarity/antonymy/unrelatedness between verbs
|- bgcolor="#ECECEC" align="left"
| Wikipedia
| BIU1.2way
| style="text-align: right;"| -0.0100
| style="text-align: right;"|
| Lexical rules extracted from Wikipedia definition sentences, title parentheses, and redirect and hyperlink relations
|- bgcolor="#ECECEC" align="left"
| Wikipedia
| cswhu1.3way
| style="text-align: right;"| 0.0133
| style="text-align: right;"| 0.0334
| Lexical semantic rules
|- bgcolor="#ECECEC" align="left"
| Wikipedia
| FBKirst1.2way
| style="text-align: right;"| 0.0100
| style="text-align: right;"|
| Rules extracted from Wikipedia using Latent Semantic Analysis (LSA)
|- bgcolor="#ECECEC" align="left"
| Wikipedia
| UAIC20091.3way
| style="text-align: right;"| 0.0117
| style="text-align: right;"| 0.0150
| Relations between named entities
|- bgcolor="#ECECEC" align="left"
| Wikipedia + <br>NERs (LingPipe, GATE) + <br>Perl patterns
| UAIC20091.3way
| style="text-align: right;"| 0.0617
| style="text-align: right;"| 0.0500
| NE module: NERs to identify Persons, Locations, Jobs, Languages, etc.; Perl patterns built by us for RTE4 to identify numbers and dates; our own resources extracted from Wikipedia to identify a "distance" between a named entity from the hypothesis and the named entities from the text.
|}
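The UAIC_Acronym_rules entry above describes a simple pattern rule: an expression of the form Xaaaa Ybbbb Zcccc is associated with the acronym XYZ built from the word initials, regardless of the length of the individual words. The sketch below is only a minimal illustration of that idea written for this page, not the UAIC system's actual code; the function name, regular expression, and example sentence are assumptions.

<pre>
import re

def candidate_acronyms(text):
    # Illustrative sketch only: collect sequences of two or more capitalized
    # words, e.g. "Xaaaa Ybbbb Zcccc", and map the acronym built from their
    # initials ("XYZ") to the matched expression.
    acronyms = {}
    for match in re.finditer(r'\b(?:[A-Z][a-z]+\s+)+[A-Z][a-z]+\b', text):
        words = match.group(0).split()
        acronyms[''.join(word[0] for word in words)] = match.group(0)
    return acronyms

print(candidate_acronyms("Researchers at the World Health Organization released the data."))
# {'WHO': 'World Health Organization'}
</pre>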
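The NEGATION_rules entry describes inspecting the branches that descend from a verb in the dependency tree for words that change the meaning. The following sketch shows the general idea under assumed data structures (a node is a dict with "word" and "children" keys); the negation word list and function name are assumptions, not the UAIC resources.

<pre>
# Illustrative sketch only: check the branches descending from a verb node
# for words that change the meaning (negation markers).
NEGATION_WORDS = {"not", "no", "never", "without", "n't"}

def verb_is_negated(verb_node):
    # Depth-first walk over everything below the verb node.
    stack = list(verb_node["children"])
    while stack:
        node = stack.pop()
        if node["word"].lower() in NEGATION_WORDS:
            return True
        stack.extend(node["children"])
    return False

# Toy dependency fragment for "did not arrive", rooted at the verb.
tree = {"word": "arrive",
        "children": [{"word": "did", "children": []},
                     {"word": "not", "children": []}]}
print(verb_is_negated(tree))  # True
</pre>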