Temporal Information Extraction (State of the art)

==Data sets==

==Performance measures==

==Results==

The following results refer to the TempEval-3 challenge, the most recent evaluation exercise.

===Task A: Temporal expression extraction and normalisation===

The table shows the best result for each system. Different runs per system are not shown.

{| class="wikitable"
|-
! rowspan="3" | System name (best run)
! rowspan="3" | Short description
! rowspan="3" | Main publication
! colspan="6" | Identification
! colspan="2" | Normalisation
! rowspan="3" | Overall score
! rowspan="3" | Software
! rowspan="3" | License
|-
! colspan="3" | Strict matching
! colspan="3" | Lenient matching
! colspan="2" | Accuracy
|-
! Pre. !! Rec. !! F1 !! Pre. !! Rec. !! F1 !! Type !! Value
|-
| HeidelTime (t) || rule-based || Strötgen et al., 2013 || 83.85 || 78.99 || 81.34 || 93.08 || 87.68 || 90.30 || 90.91 || 85.95 || 77.61 || Download || GNU GPL v3
|-
| NavyTime (1,2) || rule-based || Chambers, 2013 || 78.72 || 80.43 || 79.57 || 89.36 || 91.30 || 90.32 || 88.90 || 78.58 || 70.97 || - || -
|-
| ManTIME (4) || CRF, probabilistic post-processing pipeline, rule-based normaliser || Filannino et al., 2013 || 78.86 || 70.29 || 74.33 || 95.12 || 84.78 || 89.66 || 86.31 || 76.92 || 68.97 || Demo & Download || GNU GPL v2
|-
| SUTime || deterministic rule-based || Chang et al., 2013 || 78.72 || 80.43 || 79.57 || 89.36 || 91.30 || 90.32 || 88.90 || 74.60 || 67.38 || Demo & Download || GNU GPL v2
|-
| ATT (2) || MaxEnt, third party normalisers || Jung et al., 2013 || '''90.57''' || 69.57 || 78.69 || 98.11 || 75.36 || 85.25 || 91.34 || 76.91 || 65.57 || - || -
|-
| ClearTK (1,2) || SVM, Logistic Regression, third party normaliser || Bethard, 2013 || 85.94 || 79.71 || 82.71 || 93.75 || 86.96 || 90.23 || 93.33 || 71.66 || 64.66 || Download || BSD-3 Clause
|-
| JU-CSE || CRF, rule-based normaliser || Kolya et al., 2013 || 81.51 || 70.29 || 75.49 || 93.28 || 80.43 || 86.38 || 87.39 || 73.87 || 63.81 || - || -
|-
| KUL (2) || Logistic regression, post-processing, rule-based normaliser || Kolomiyets et al., 2013 || 76.99 || 63.04 || 69.32 || 92.92 || 76.09 || 83.67 || 88.56 || 75.24 || 62.95 || - || -
|-
| FSS-TimEx || rule-based || Zavarella et al., 2013 || 52.03 || 46.38 || 49.04 || 90.24 || 80.43 || 85.06 || 81.08 || 68.47 || 58.24 || - || -
|}
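The identification columns above report precision, recall and F1 under two matching regimes: strict matching requires exact span boundaries, while lenient matching credits any overlap with a gold expression. The sketch below illustrates how such scores can be computed from gold and system character spans; the function and variable names are illustrative, and this is not the official TempEval-3 scorer.

<pre>
def span_f1(gold_spans, system_spans, strict=True):
    """Precision/recall/F1 over (start, end) character spans.

    strict=True  -> a system span counts only if it exactly matches a gold span
    strict=False -> lenient matching: any overlap with a gold span counts
    """
    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]

    def matched(span, targets):
        return any(span == t if strict else overlaps(span, t) for t in targets)

    tp_sys = sum(1 for s in system_spans if matched(s, gold_spans))
    tp_gold = sum(1 for g in gold_spans if matched(g, system_spans))

    precision = tp_sys / len(system_spans) if system_spans else 0.0
    recall = tp_gold / len(gold_spans) if gold_spans else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return precision, recall, f1


# Example: one exact match, one overlapping match, one missed gold span.
gold = [(0, 9), (15, 25), (40, 48)]
system = [(0, 9), (16, 24)]
print(span_f1(gold, system, strict=True))   # strict (exact span) matching
print(span_f1(gold, system, strict=False))  # lenient (overlap) matching
</pre>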

===Task B: Event extraction and classification===

{| class="wikitable"
|-
! rowspan="3" | System name (best run)
! rowspan="3" | Short description
! rowspan="3" | Main publication
! colspan="3" | Identification
! colspan="3" | Attributes
! rowspan="3" | Overall score
! rowspan="3" | Software
! rowspan="3" | License
|-
! colspan="3" | Strict matching
! colspan="3" | Accuracy
|-
! Pre. !! Rec. !! F1 !! Class !! Tense !! Aspect
|-
| ATT (1) || || Jung et al., 2013 || 81.44 || 80.67 || 81.05 || 88.69 || 73.37 || 90.68 || 71.88 || ||
|-
| KUL (2) || || Kolomiyets et al., 2013 || 80.69 || 77.99 || 79.32 || 88.46 || - || - || 70.17 || ||
|-
| ClearTK (4) || || Bethard, 2013 || 81.40 || 76.38 || 78.81 || 86.12 || 78.20 || 90.86 || 67.87 || [https://code.google.com/p/cleartk/ Download] || [http://opensource.org/licenses/BSD-3-Clause BSD-3 Clause]
|-
| NavyTime (1) || || Chambers, 2013 || 80.73 || 79.87 || 80.30 || 84.03 || 75.79 || 91.26 || 67.48 || ||
|-
| Temp: (ESAfeature) || || X, 2013 || 78.33 || 61.61 || 68.97 || 79.09 || - || - || 54.55 || ||
|-
| JU_CSE || || Kolya et al., 2013 || 80.85 || 76.51 || 78.62 || 67.02 || 74.56 || 91.76 || 52.69 || ||
|-
| FSS-TimEx || || Zavarella et al., 2013 || 63.13 || 67.11 || 65.06 || 66.00 || - || - || 42.94 || ||
|}
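The attribute columns report the accuracy of the class, tense and aspect values over events whose extents the system identified. A minimal illustrative sketch follows; the data layout and names are hypothetical, and the official scorer may use a different denominator.

<pre>
def attribute_accuracy(gold_events, system_events, attribute):
    """Accuracy of one event attribute (e.g. 'class', 'tense', 'aspect'),
    computed over events whose spans the system identified.

    gold_events, system_events: dicts mapping an event span (start, end)
    to a dict of attribute values.
    """
    matched = [span for span in system_events if span in gold_events]
    if not matched:
        return 0.0
    correct = sum(
        1 for span in matched
        if system_events[span].get(attribute) == gold_events[span].get(attribute)
    )
    return correct / len(matched)


gold = {(3, 9): {"class": "OCCURRENCE", "tense": "PAST", "aspect": "NONE"}}
system = {(3, 9): {"class": "OCCURRENCE", "tense": "PRESENT", "aspect": "NONE"}}
print(attribute_accuracy(gold, system, "class"))  # 1.0
print(attribute_accuracy(gold, system, "tense"))  # 0.0
</pre>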

===Task C: Annotating relations given gold entities===

===Task C relation only: Annotating relations given gold entities and related pairs===

===Task ABC: Temporal awareness evaluation===
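The temporal awareness measure used for the end-to-end evaluation (UzZaman et al., 2013) scores temporal relations against the closed temporal graphs: precision verifies system relations against the closure of the reference annotation, and recall verifies reference relations against the closure of the system output. The sketch below is a simplified illustration, not the official scorer: the official metric is computed on reduced graphs, and the toy closure handles BEFORE relations only.

<pre>
def temporal_awareness(reference, system, closure):
    """Simplified temporal awareness F1 (after UzZaman et al., 2013).

    reference, system: sets of (source, relation, target) triples.
    closure: a function returning the transitive closure of such a set.
    Note: the official metric uses reduced graphs as denominators; this
    sketch skips the reduction step for brevity.
    """
    ref_closed = closure(reference)
    sys_closed = closure(system)

    precision = len(system & ref_closed) / len(system) if system else 0.0
    recall = len(reference & sys_closed) / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return precision, recall, f1


def before_closure(relations):
    """Toy closure: transitively compose BEFORE relations only."""
    closed = set(relations)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(closed):
            for (c, r2, d) in list(closed):
                if r1 == r2 == "BEFORE" and b == c and (a, "BEFORE", d) not in closed:
                    closed.add((a, "BEFORE", d))
                    changed = True
    return closed


ref = {("e1", "BEFORE", "e2"), ("e2", "BEFORE", "e3")}
sys = {("e1", "BEFORE", "e2"), ("e1", "BEFORE", "e3")}
print(temporal_awareness(ref, sys, before_closure))  # (1.0, 0.5, 0.666...)
</pre>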

==Challenges==

* TempEval, Temporal Relation Identification, 2007: web page
* TempEval-2, Evaluating Events, Time Expressions, and Temporal Relations, 2010: web page
* TempEval-3, Evaluating Time Expressions, Events, and Temporal Relations, 2013: web page

==References==

* UzZaman, N., Llorens, H., Derczynski, L., Allen, J., Verhagen, M., and Pustejovsky, J. SemEval-2013 Task 1: TempEval-3: Evaluating time expressions, events, and temporal relations. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013) (Atlanta, Georgia, USA, June 2013), Association for Computational Linguistics, pp. 1–9.
* Bethard, S. ClearTK-TimeML: A minimalist approach to TempEval 2013. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013) (Atlanta, Georgia, USA, June 2013), Association for Computational Linguistics, pp. 10–14.
* Strötgen, J., Zell, J., and Gertz, M. HeidelTime: Tuning English and developing Spanish resources for TempEval-3. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013) (Atlanta, Georgia, USA, June 2013), Association for Computational Linguistics, pp. 15–19.
* Jung, H., and Stent, A. ATT1: Temporal annotation using big windows and rich syntactic and semantic features. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013) (Atlanta, Georgia, USA, June 2013), Association for Computational Linguistics, pp. 20–24.
* Filannino, M., Brown, G., and Nenadic, G. ManTIME: Temporal expression identification and normalization in the TempEval-3 challenge. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013) (Atlanta, Georgia, USA, June 2013), Association for Computational Linguistics, pp. 53–57.
* Zavarella, V., and Tanev, H. FSS-TimEx for TempEval-3: Extracting temporal information from text. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013) (Atlanta, Georgia, USA, June 2013), Association for Computational Linguistics, pp. 58–63.
* Kolya, A. K., Kundu, A., Gupta, R., Ekbal, A., and Bandyopadhyay, S. JU_CSE: A CRF based approach to annotation of temporal expression, event and temporal relations. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013) (Atlanta, Georgia, USA, June 2013), Association for Computational Linguistics, pp. 64–72.
* Chambers, N. NavyTime: Event and time ordering from raw text. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013) (Atlanta, Georgia, USA, June 2013), Association for Computational Linguistics, pp. 73–77.
* Chang, A., and Manning, C. D. SUTime: Evaluation in TempEval-3. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013) (Atlanta, Georgia, USA, June 2013), Association for Computational Linguistics, pp. 78–82.
* Kolomiyets, O., and Moens, M.-F. KUL: Data-driven approach to temporal parsing of newswire articles. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013) (Atlanta, Georgia, USA, June 2013), Association for Computational Linguistics, pp. 83–87.
* Laokulrat, N., Miwa, M., Tsuruoka, Y., and Chikayama, T. UTTime: Temporal relation classification using deep syntactic features. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013) (Atlanta, Georgia, USA, June 2013), Association for Computational Linguistics, pp. 88–92.

==See also==

==External links==