POS Tagging (State of the art)

Test collections

  • Performance measure: per-token accuracy. (The convention is for this to be measured on all tokens, including punctuation tokens and other unambiguous tokens; a minimal computation is sketched below this list.)
  • English
    • Penn Treebank Wall Street Journal (WSJ) release 3 (LDC99T42). Unlike for parsing, the data splits for this task were not standardized early on, and early work uses various splits defined by token counts or by sections. Most work from 2002 on adopts the following splits, introduced by Collins (2002):
      • Training data: sections 0-18
      • Development test data: sections 19-21
      • Testing data: sections 22-24
  • French
    • French TreeBank (FTB, Abeillé et al., 2003), Le Monde, December 2007 version, 28-tag tagset (CC tagset, Crabbé and Candito, 2008). Classical data split (10-10-80):
      • Training data: sentences 2471 to 12351
      • Development test data: sentences 1236 to 2470
      • Testing data: sentences 1 to 1235
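
The accuracy computation is simple but worth pinning down. The Python sketch below is only an illustration: the sentence format, the names WSJ_SPLITS and per_token_accuracy, and the toy data are hypothetical, not taken from any system in the tables. It computes per-token accuracy over all tokens, punctuation included, and records the Collins (2002) WSJ section splits as constants.

    # Per-token accuracy, measured over ALL tokens (punctuation and other
    # unambiguous tokens included), plus the Collins (2002) WSJ splits.
    # The (token, tag) sentence format here is a hypothetical convention.

    WSJ_SPLITS = {
        "train": range(0, 19),   # sections 00-18
        "dev": range(19, 22),    # sections 19-21
        "test": range(22, 25),   # sections 22-24
    }

    def per_token_accuracy(gold_sentences, predicted_sentences):
        correct = total = 0
        for gold, pred in zip(gold_sentences, predicted_sentences):
            assert len(gold) == len(pred), "sentences must align token for token"
            for (_, gold_tag), (_, pred_tag) in zip(gold, pred):
                correct += (gold_tag == pred_tag)
                total += 1
        return correct / total

    # Toy example (not real corpus data): 3 of 4 tags match, so 75%.
    gold = [[("The", "DT"), ("cat", "NN"), ("sat", "VBD"), (".", ".")]]
    pred = [[("The", "DT"), ("cat", "NN"), ("sat", "VBN"), (".", ".")]]
    print(f"{per_token_accuracy(gold, pred):.2%}")  # 75.00%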


Tables of results

WSJ

System name | Short description | Main publication | Software | Extra data?*** | All tokens | Unknown words | License
TnT* | Hidden Markov model | Brants (2000) | TnT | No | 96.46% | 85.86% | Academic/research use only
MElt | Maximum entropy Markov model with external lexical information | Denis and Sagot (2009) | Alpage linguistic workbench | No | 96.96% | 91.29% | CeCILL-C
GENiA Tagger** | Maximum entropy cyclic dependency network | Tsuruoka et al. (2005) | GENiA | No | 97.05% | Not available | Gratis for non-commercial usage
Averaged Perceptron | Averaged perceptron | Collins (2002) | Not available | No | 97.11% | Not available | Unknown
Maxent easiest-first | Maximum entropy bidirectional easiest-first inference | Tsuruoka and Tsujii (2005) | Easiest-first | No | 97.15% | Not available | Unknown
SVMTool | SVM-based tagger and tagger generator | Giménez and Márquez (2004) | SVMTool | No | 97.16% | 89.01% | LGPL 2.1
LAPOS | Perceptron-based training with lookahead | Tsuruoka, Miyao, and Kazama (2011) | LAPOS | No | 97.22% | Not available | MIT
Morče/COMPOST | Averaged perceptron | Spoustová et al. (2009) | COMPOST | No | 97.23% | Not available | Non-free (academic-only)
Morče/COMPOST | Averaged perceptron | Spoustová et al. (2009) | COMPOST | Yes | 97.44% | Not available | Unknown
Stanford Tagger 1.0 | Maximum entropy cyclic dependency network | Toutanova et al. (2003) | Stanford Tagger | No | 97.24% | 89.04% | GPL v2+
Stanford Tagger 2.0 | Maximum entropy cyclic dependency network | Manning (2011) | Stanford Tagger | No | 97.29% | 89.70% | GPL v2+
Stanford Tagger 2.0 | Maximum entropy cyclic dependency network | Manning (2011) | Stanford Tagger | Yes | 97.32% | 90.79% | GPL v2+
CharWNN | MLP with neural character embeddings | dos Santos and Zadrozny (2014) | Not available | No | 97.32% | 89.86% | Unknown
LTAG-spinal | Bidirectional perceptron learning | Shen et al. (2007) | LTAG-spinal | No | 97.33% | Not available | Unknown
structReg | CRF with structure regularization | Sun (2014) | Not available | No | 97.36% | Not available | Unknown
SCCN | Semi-supervised condensed nearest neighbor | Søgaard (2011) | SCCN | Yes | 97.50% | Not available | Unknown
BI-LSTM-CRF | Bidirectional LSTM-CRF | Huang et al. (2015) | Not available | No | 97.55% | Not available | Unknown
NLP4J | Dynamic feature induction | Choi (2016) | NLP4J | Yes | 97.64% | 92.03% | Apache 2
Flair | Bidirectional LSTM-CRF with contextual string embeddings | Akbik et al. (2018) | Flair | Yes | 97.85% | Not available | MIT

(*) TnT: Accuracy is as reported by Giménez and Márquez (2004) for the given test collection. Brants (2000) reports 96.7% token accuracy and 85.5% unknown word accuracy on a 10-fold cross-validation of the Penn WSJ corpus.

(**) GENiA: Results are for models trained and tested on the given corpora (to be comparable to other results). The distributed GENiA tagger is trained on a mixed training corpus and scores 96.94% on WSJ and 98.26% on GENiA biomedical English.

(***) Extra data: Whether system training exploited (usually large amounts of) extra unlabeled text beyond the standard supervised training data, e.g. via semi-supervised learning, self-training, or distributional similarity features.
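
Several of the strongest pre-neural entries above (Collins 2002, LAPOS, Morče/COMPOST) are perceptron-family taggers. As a rough illustration of the model family only, here is a minimal averaged perceptron tagger in Python. It is a sketch under simplifying assumptions: the feature template is deliberately tiny and hypothetical, and decoding is greedy left-to-right, whereas Collins (2002) decodes whole sequences with Viterbi and real systems use far richer features.

    from collections import defaultdict

    def feats(words, i, prev_tag):
        # Hypothetical, deliberately tiny feature template.
        return [f"w={words[i]}", f"suf3={words[i][-3:]}", f"prev={prev_tag}"]

    class AveragedPerceptronTagger:
        def __init__(self, tags):
            self.tags = list(tags)
            self.w = defaultdict(float)       # current weights
            self.totals = defaultdict(float)  # running sums for averaging
            self.stamps = defaultdict(int)    # time of each weight's last change
            self.t = 0                        # total update steps seen

        def _score(self, fs, tag):
            return sum(self.w[(f, tag)] for f in fs)

        def tag(self, words):
            tags, prev = [], "<s>"
            for i in range(len(words)):
                fs = feats(words, i, prev)
                prev = max(self.tags, key=lambda tg: self._score(fs, tg))
                tags.append(prev)
            return tags

        def _bump(self, f, tag, delta):
            key = (f, tag)
            # Lazy averaging: bank the weight for the steps it stayed constant.
            self.totals[key] += (self.t - self.stamps[key]) * self.w[key]
            self.stamps[key] = self.t
            self.w[key] += delta

        def train(self, sentences, epochs=5):
            for _ in range(epochs):
                for words, gold in sentences:
                    pred, prev = self.tag(words), "<s>"
                    for i, (g, p) in enumerate(zip(gold, pred)):
                        self.t += 1
                        if g != p:  # mistake-driven update
                            for f in feats(words, i, prev):
                                self._bump(f, g, +1.0)
                                self._bump(f, p, -1.0)
                        prev = g  # updates condition on the gold history
            # Averaging the weights over all steps is what distinguishes the
            # averaged perceptron from the plain perceptron.
            for key in list(self.w):
                self.totals[key] += (self.t - self.stamps[key]) * self.w[key]
                self.w[key] = self.totals[key] / max(self.t, 1)

    # Toy usage: after a few epochs the tagger should recover the gold tags.
    tagger = AveragedPerceptronTagger(tags={"DT", "NN", "VBD", "."})
    tagger.train([(["The", "cat", "sat", "."], ["DT", "NN", "VBD", "."])])
    print(tagger.tag(["The", "cat", "sat", "."]))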

FTB

System name | Short description | Main publication | Software | Extra data?*** | All tokens | Unknown words | License
Morfette | Perceptron with external lexical information* | Chrupała et al. (2008), Seddah et al. (2010) | Morfette | No | 97.68% | 90.52% | New BSD
SEM | CRF with external lexical information* | Constant et al. (2011) | SEM | No | 97.7% | Not available | "GNU"(?)
MElt | MEMM with external lexical information* | Denis and Sagot (2009) | Alpage linguistic workbench | No | 97.80% | 91.77% | CeCILL-C

(*) External lexical information from the Lefff lexicon (Sagot 2010, Alexina project)
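
All three FTB systems share this design: alongside features learned from the treebank, the tagger receives features derived from an external wide-coverage lexicon, which helps precisely where treebank statistics are thinnest (rare and unknown words). Below is a hedged sketch of the idea; the LEXICON entries and tag names are a toy stand-in, not real Lefff data.

    # Toy stand-in for an external lexicon such as Lefff: word -> allowed tags.
    LEXICON = {"la": {"DET", "CLO"}, "pomme": {"NC"}, "mange": {"V"}}

    def lexicon_features(word):
        allowed = sorted(LEXICON.get(word.lower(), {"<unk>"}))
        # One feature per lexicon-allowed tag; the statistical model then
        # learns from the treebank how much to trust the lexicon per context.
        return [f"lex={t}" for t in allowed]

    print(lexicon_features("La"))  # ['lex=CLO', 'lex=DET']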

References

  • Akbik, Alan, Duncan Blythe and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. Proceedings of COLING 2018. http://www.aclweb.org/anthology/C18-1139
  • Brants, Thorsten. 2000. TnT -- A Statistical Part-of-Speech Tagger. Proceedings of the 6th Applied Natural Language Processing Conference. http://acl.ldc.upenn.edu/A/A00/A00-1031.pdf
  • Choi, Jinho D. 2016. Dynamic Feature Induction: The Last Gist to the State-of-the-Art. Proceedings of NAACL 2016. https://aclweb.org/anthology/N/N16/N16-1031.pdf
  • dos Santos, Cicero and Bianca Zadrozny. 2014. Learning character-level representations for part-of-speech tagging. Proceedings of the 31st International Conference on Machine Learning (ICML), JMLR: W&CP volume 32. http://jmlr.org/proceedings/papers/v32/santos14.pdf
  • Huang, Zhiheng, Wei Xu and Kai Yu. 2015. Bidirectional LSTM-CRF Models for Sequence Tagging. arXiv:1508.01991. http://arxiv.org/abs/1508.01991
  • Manning, Christopher D. 2011. Part-of-Speech Tagging from 97% to 100%: Is It Time for Some Linguistics? In Alexander Gelbukh (ed.), Computational Linguistics and Intelligent Text Processing, 12th International Conference, CICLing 2011, Proceedings, Part I. Lecture Notes in Computer Science 6608, pp. 171-189. Springer.
  • Søgaard, Anders. 2011. Semi-supervised condensed nearest neighbor for part-of-speech tagging. Proceedings of ACL-HLT 2011, Portland, Oregon.
  • Spoustová, Drahomíra "Johanka", Jan Hajič, Jan Raab and Miroslav Spousta. 2009. Semi-supervised Training for the Averaged Perceptron POS Tagger. Proceedings of the 12th EACL, pp. 763-771.
  • Sun, Xu. 2014. Structure Regularization for Structured Prediction. Advances in Neural Information Processing Systems (NIPS), pp. 2402-2410. http://papers.nips.cc/paper/5643-structure-regularization-for-structured-prediction.pdf
  • Tsuruoka, Yoshimasa and Jun'ichi Tsujii. 2005. Bidirectional Inference with the Easiest-First Strategy for Tagging Sequence Data. Proceedings of HLT/EMNLP 2005, pp. 467-474. http://www-tsujii.is.s.u-tokyo.ac.jp/~tsuruoka/papers/emnlp05bidir.pdf

See also