MUC-7 (State of the art)

From ACL Wiki
Revision as of 19:01, 31 July 2007

  • Performance measure: F = 2 * Precision * Recall / (Precision + Recall)
  • Precision: percentage of named entities found by the system that are correct
  • Recall: percentage of named entities defined in the corpus that were found by the system
  • The exact calculation of precision and recall is explained in the MUC scoring software
  • Training data: Training section of MUC-7 dataset
  • Testing data: Formal section of MUC-7 dataset
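The F-measure above can be computed directly from entity counts. The sketch below is a minimal illustration of the formula, not the official MUC scoring software; the counts it takes (entities found, correct, and defined in the corpus) are hypothetical inputs.

```python
def f_measure(num_correct, num_found, num_defined):
    """Balanced F-measure from hypothetical entity counts.

    num_correct: entities found by the system that are correct
    num_found:   total entities found by the system
    num_defined: entities defined in the corpus (gold standard)
    """
    precision = num_correct / num_found    # fraction of system output that is correct
    recall = num_correct / num_defined     # fraction of gold entities that were found
    return 2 * precision * recall / (precision + recall)
```

For example, a system that finds 90 entities, 80 of them correct, against a corpus with 100 defined entities, scores F = 160/190 ≈ 84.21%.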


Table of results

System name | Short description       | Main publications                | Software | Results (F)
Human       | Human annotator         | MUC-7 proceedings                |          | 97.60%
LTG         | Best MUC-7 participant  | Mikheev, Grover and Moens (1998) |          | 93.39%


References

Mikheev, A., Grover, C., and Moens, M. (1998). Description of the LTG system used for MUC-7. In Proceedings of the Seventh Message Understanding Conference (MUC-7). Fairfax, Virginia.

See also

  • Named Entity Recognition (State of the art)
  • State of the art