<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.aclweb.org/aclwiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Pasky</id>
	<title>ACL Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.aclweb.org/aclwiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Pasky"/>
	<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/Special:Contributions/Pasky"/>
	<updated>2026-05-02T07:05:14Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.6</generator>
	<entry>
		<id>https://www.aclweb.org/aclwiki/index.php?title=Question_Answering_(State_of_the_art)&amp;diff=11398</id>
		<title>Question Answering (State of the art)</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/index.php?title=Question_Answering_(State_of_the_art)&amp;diff=11398"/>
		<updated>2016-02-11T00:27:50Z</updated>

		<summary type="html">&lt;p&gt;Pasky: +two 2015 papers (that contain results of three algorithms)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Answer Sentence Selection ==&lt;br /&gt;
&lt;br /&gt;
Answer sentence selection is a component task of open-domain question answering: given a question and a set of candidate sentences, the system must choose a sentence that contains the exact answer and provides sufficient context to support that answer. &lt;br /&gt;
&lt;br /&gt;
* [http://cs.stanford.edu/people/mengqiu/data/qg-emnlp07-data.tgz QA Answer Sentence Selection Dataset]: labeled sentences using TREC QA track data, provided by [http://cs.stanford.edu/people/mengqiu/ Mengqiu Wang] and first used in [http://www.aclweb.org/anthology/D/D07/D07-1003.pdf Wang et al. (2007)].  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Algorithm&lt;br /&gt;
! Reference&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_average_precision MAP]&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_reciprocal_rank MRR]&lt;br /&gt;
|-&lt;br /&gt;
| Punyakanok (2004)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.419&lt;br /&gt;
| 0.494&lt;br /&gt;
|-&lt;br /&gt;
| Cui (2005)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.427&lt;br /&gt;
| 0.526&lt;br /&gt;
|-&lt;br /&gt;
| Wang (2007)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.603&lt;br /&gt;
| 0.685&lt;br /&gt;
|-&lt;br /&gt;
| H&amp;amp;S (2010)&lt;br /&gt;
| Heilman and Smith (2010)&lt;br /&gt;
| 0.609&lt;br /&gt;
| 0.692&lt;br /&gt;
|-&lt;br /&gt;
| W&amp;amp;M (2010)&lt;br /&gt;
| Wang and Manning (2010)&lt;br /&gt;
| 0.595&lt;br /&gt;
| 0.695&lt;br /&gt;
|-&lt;br /&gt;
| Yao (2013)&lt;br /&gt;
| Yao et al. (2013)&lt;br /&gt;
| 0.631&lt;br /&gt;
| 0.748&lt;br /&gt;
|-&lt;br /&gt;
| S&amp;amp;M (2013)&lt;br /&gt;
| Severyn and Moschitti (2013)&lt;br /&gt;
| 0.678&lt;br /&gt;
| 0.736&lt;br /&gt;
|-&lt;br /&gt;
| Shnarch (2013) - Backward &lt;br /&gt;
| Shnarch (2013)&lt;br /&gt;
| 0.686&lt;br /&gt;
| 0.754&lt;br /&gt;
|-&lt;br /&gt;
| Yih (2013) - LCLR&lt;br /&gt;
| Yih et al. (2013)&lt;br /&gt;
| 0.709&lt;br /&gt;
| 0.770&lt;br /&gt;
|-&lt;br /&gt;
| Yu (2014) - TRAIN-ALL bigram+count&lt;br /&gt;
| Yu et al. (2014)&lt;br /&gt;
| 0.711&lt;br /&gt;
| 0.785&lt;br /&gt;
|-&lt;br /&gt;
| W&amp;amp;N (2015) - Three-Layer BLSTM+BM25&lt;br /&gt;
| Wang and Nyberg (2015)&lt;br /&gt;
| 0.713&lt;br /&gt;
| 0.791&lt;br /&gt;
|-&lt;br /&gt;
| Feng (2015) - Architecture-II&lt;br /&gt;
| Feng et al. (2015)&lt;br /&gt;
| 0.711&lt;br /&gt;
| 0.800&lt;br /&gt;
|-&lt;br /&gt;
| S&amp;amp;M (2015)&lt;br /&gt;
| Severyn and Moschitti (2015)&lt;br /&gt;
| 0.746&lt;br /&gt;
| 0.808&lt;br /&gt;
|-&lt;br /&gt;
| W&amp;amp;I (2015)&lt;br /&gt;
| Wang and Ittycheriah (2015)&lt;br /&gt;
| 0.746&lt;br /&gt;
| 0.820&lt;br /&gt;
|-&lt;br /&gt;
| Tan (2015) - QA-LSTM/CNN+attention &lt;br /&gt;
| Tan et al. (2015)&lt;br /&gt;
| 0.728&lt;br /&gt;
| 0.832&lt;br /&gt;
|}&lt;br /&gt;
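The MAP and MRR columns above can be reproduced from a system's ranked output. A minimal sketch (not from any of the cited papers): each question's candidate list is reduced to a list of 0/1 relevance labels in ranked order, and the two metrics are averaged over questions.&lt;br /&gt;

```python
# Sketch of MAP/MRR computation for answer sentence selection.
# labels[i] is 1 if the i-th ranked candidate for a question is a
# correct answer sentence, 0 otherwise.

def average_precision(labels):
    """Average precision for one ranked candidate list."""
    hits = 0
    precisions = []
    for rank, rel in enumerate(labels, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)  # precision at this hit
    return sum(precisions) / hits if hits else 0.0

def reciprocal_rank(labels):
    """Reciprocal rank of the first correct candidate."""
    for rank, rel in enumerate(labels, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

def map_mrr(ranked_labels_per_question):
    """Mean average precision and mean reciprocal rank over all questions."""
    n = len(ranked_labels_per_question)
    mean_ap = sum(average_precision(l) for l in ranked_labels_per_question) / n
    mean_rr = sum(reciprocal_rank(l) for l in ranked_labels_per_question) / n
    return mean_ap, mean_rr
```

Note that questions with no correct candidate contribute 0 to both averages; evaluation scripts for this dataset conventionally drop such questions before scoring.&lt;br /&gt;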
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Vasin Punyakanok, Dan Roth, and Wen-Tau Yih. 2004. [http://cogcomp.cs.illinois.edu/papers/PunyakanokRoYi04a.pdf Mapping dependencies trees: An application to question answering]. In Proceedings of the 8th International Symposium on Artificial Intelligence and Mathematics, Fort Lauderdale, FL, USA.&lt;br /&gt;
* Hang Cui, Renxu Sun, Keya Li, Min-Yen Kan, and Tat-Seng Chua. 2005. [http://ws.csie.ncku.edu.tw/login/upload/2005/paper/Question%20answering%20Question%20answering%20passage%20retrieval%20using%20dependency%20relations.pdf Question answering passage retrieval using dependency relations]. In Proceedings of the 28th ACM-SIGIR International Conference on Research and Development in Information Retrieval, Salvador, Brazil.&lt;br /&gt;
* Mengqiu Wang, Noah A. Smith, and Teruko Mitamura. 2007. [http://www.aclweb.org/anthology/D/D07/D07-1003.pdf What is the Jeopardy Model? A Quasi-Synchronous Grammar for QA]. In EMNLP-CoNLL 2007.&lt;br /&gt;
* Michael Heilman and Noah A. Smith. 2010. [http://www.aclweb.org/anthology/N10-1145 Tree Edit Models for Recognizing Textual Entailments, Paraphrases, and Answers to Questions]. In NAACL-HLT 2010.&lt;br /&gt;
* Mengqiu Wang and Christopher Manning. 2010. [http://aclweb.org/anthology//C/C10/C10-1131.pdf Probabilistic Tree-Edit Models with Structured Latent Variables for Textual Entailment and Question Answering]. In COLING 2010.&lt;br /&gt;
* E. Shnarch. 2013. Probabilistic Models for Lexical Inference. Ph.D. thesis, Bar Ilan University.&lt;br /&gt;
* Xuchen Yao, Benjamin Van Durme, Chris Callison-Burch, and Peter Clark. 2013. [http://www.aclweb.org/anthology/N13-1106.pdf Answer Extraction as Sequence Tagging with Tree Edit Distance]. In NAACL-HLT 2013.&lt;br /&gt;
* Wen-tau Yih, Ming-Wei Chang, Christopher Meek, and Andrzej Pastusiak. 2013. [http://research.microsoft.com/pubs/192357/QA-SentSel-Updated-PostACL.pdf Question Answering Using Enhanced Lexical Semantic Models]. In ACL 2013.&lt;br /&gt;
* Aliaksei Severyn and Alessandro Moschitti. 2013. [http://www.aclweb.org/anthology/D13-1044.pdf Automatic Feature Engineering for Answer Selection and Extraction]. In EMNLP 2013.&lt;br /&gt;
* Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. [http://arxiv.org/pdf/1412.1632v1.pdf Deep Learning for Answer Sentence Selection]. In NIPS deep learning workshop.&lt;br /&gt;
* Di Wang and Eric Nyberg. 2015. [http://www.aclweb.org/anthology/P15-2116 A Long Short-Term Memory Model for Answer Sentence Selection in Question Answering]. In ACL 2015.&lt;br /&gt;
* Minwei Feng, Bing Xiang, Michael R. Glass, Lidan Wang, and Bowen Zhou. 2015. [http://arxiv.org/abs/1508.01585 Applying deep learning to answer selection: A study and an open task]. In ASRU 2015.&lt;br /&gt;
* Aliaksei Severyn and Alessandro Moschitti. 2015. [http://disi.unitn.it/~severyn/papers/sigir-2015-long.pdf Learning to Rank Short Text Pairs with Convolutional Deep Neural Networks]. In SIGIR 2015.&lt;br /&gt;
* Zhiguo Wang and Abraham Ittycheriah. 2015. [http://arxiv.org/abs/1507.02628 FAQ-based Question Answering via Word Alignment]. In eprint arXiv:1507.02628.&lt;br /&gt;
* Ming Tan, Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2015. [http://arxiv.org/abs/1511.04108 LSTM-Based Deep Learning Models for Nonfactoid Answer Selection]. In eprint arXiv:1511.04108.&lt;br /&gt;
&lt;br /&gt;
[[Category:State of the art]]&lt;/div&gt;</summary>
		<author><name>Pasky</name></author>
	</entry>
</feed>