SAT Analogy Questions (State of the art)

  • SAT = Scholastic Aptitude Test
  • 374 multiple-choice analogy questions; 5 choices per question
  • SAT questions collected by Michael Littman, available from Peter Turney
  • introduced in Turney et al. (2003) as a way of evaluating algorithms for measuring relational similarity (see the evaluation sketch after this list)
  • Algorithm = name of algorithm
  • Reference for algorithm = where to find out more about given algorithm for measuring similarity
  • Reference for experiment = where to find out more about evaluation of given algorithm with SAT questions
  • Type = general type of algorithm: corpus-based, lexicon-based, hybrid
  • Correct = percent of 374 questions that given algorithm answered correctly
  • 95% confidence = confidence interval calculated using the Binomial Exact Test (see the sketch after the table below)
  • table rows sorted in order of increasing percent correct
  • VSM = Vector Space Model
  • LRA = Latent Relational Analysis


Algorithm | Reference for algorithm | Reference for experiment | Type | Correct | 95% confidence
KNOW-BEST | Veale (2004) | Veale (2004) | Lexicon-based | 43.0% | 38.0-48.2%
VSM | Turney and Littman (2005) | Turney and Littman (2005) | Corpus-based | 47.1% | 42.2-52.5%
PERT | Turney (2006a) | Turney (2006a) | Corpus-based | 53.5% | 48.5-58.9%
LRA | Turney (2006b) | Turney (2006b) | Corpus-based | 56.1% | 51.0-61.2%
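
The 95% confidence intervals in the table can be recomputed with the Binomial Exact (Clopper-Pearson) method. A minimal sketch using SciPy follows; the count of 161 correct out of 374 is inferred from the rounded 43.0% figure for KNOW-BEST, not reported directly in the sources.

  # Clopper-Pearson ("binomial exact") confidence interval for k correct
  # answers out of n questions.
  from scipy.stats import beta

  def exact_ci(k: int, n: int, alpha: float = 0.05):
      """Two-sided (1 - alpha) Clopper-Pearson interval for k successes in n trials."""
      lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
      upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
      return lower, upper

  # 161 of 374 is about 43.0%; the interval should come out close to the
  # 38.0-48.2% reported for KNOW-BEST in the table above.
  low, high = exact_ci(161, 374)
  print(f"{161 / 374:.1%}  95% CI: {low:.1%} to {high:.1%}")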


Turney, P.D., Littman, M.L., Bigham, J., and Shnayder, V. (2003). Combining independent modules to solve multiple-choice synonym and analogy problems. Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP-03), Borovets, Bulgaria, pp. 482-489.

Turney, P.D., and Littman, M.L. (2005). Corpus-based learning of analogies and semantic relations. Machine Learning, 60 (1-3), 251-278.

Turney, P.D. (2006a). Expressing implicit semantic relations without supervision. Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (Coling/ACL-06), Sydney, Australia, pp. 313-320.

Turney, P.D. (2006b). Similarity of semantic relations. Computational Linguistics, 32 (3), 379-416.

Veale, T. (2004). WordNet sits the SAT: A knowledge-based approach to lexical analogy. Proceedings of the 16th European Conference on Artificial Intelligence (ECAI 2004), Valencia, Spain, pp. 606-612.