<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.aclweb.org/aclwiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Zhiguo+Wang</id>
	<title>ACL Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.aclweb.org/aclwiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Zhiguo+Wang"/>
	<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/Special:Contributions/Zhiguo_Wang"/>
	<updated>2026-04-29T16:40:31Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.6</generator>
	<entry>
		<id>https://www.aclweb.org/aclwiki/index.php?title=Question_Answering_(State_of_the_art)&amp;diff=11781</id>
		<title>Question Answering (State of the art)</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/index.php?title=Question_Answering_(State_of_the_art)&amp;diff=11781"/>
		<updated>2017-02-14T02:28:54Z</updated>

		<summary type="html">&lt;p&gt;Zhiguo Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Answer Sentence Selection ==&lt;br /&gt;
&lt;br /&gt;
The task of answer sentence selection is designed for the open-domain question answering setting: given a question and a set of candidate sentences, the goal is to select a sentence that contains the exact answer and can sufficiently support that answer choice. &lt;br /&gt;
&lt;br /&gt;
* [http://cs.stanford.edu/people/mengqiu/data/qg-emnlp07-data.tgz QA Answer Sentence Selection Dataset]: labeled sentences using TREC QA track data, provided by [http://cs.stanford.edu/people/mengqiu/ Mengqiu Wang] and first used in [http://www.aclweb.org/anthology/D/D07/D07-1003.pdf Wang et al. (2007)]. &lt;br /&gt;
* Over time, the original dataset diverged into two versions due to different pre-processing in recent publications: both share the same training set, but their development and test sets differ. The Raw version has 82 questions in the development set and 100 questions in the test set; the Clean version (Wang and Ittycheriah 2015, Tan et al. 2015, dos Santos et al. 2016, Wang et al. 2016) removed questions with no answer sentences or with only positive (or only negative) answer sentences, and thus has only 65 questions in the development set and 68 questions in the test set. &lt;br /&gt;
* Note: MAP/MRR scores on the two versions of TREC QA data (Clean vs Raw) are not comparable according to [https://dl.acm.org/authorize.cfm?key=N27026 Rao et al. (2016)]. &lt;br /&gt;
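The tables below rank systems by these two metrics. As a quick, self-contained illustration (a sketch of the standard metric definitions, not code from any cited system; all names are hypothetical), MAP and MRR over a set of questions can be computed from each question's ranked 0/1 relevance labels:

```python
# Hedged sketch: standard MAP/MRR over ranked candidate lists.
# Each question contributes one list of 0/1 labels, ordered by system score.

def average_precision(labels):
    """Average precision of one ranked list of 0/1 relevance labels
    (precision at each correct candidate, averaged over correct candidates)."""
    hits, precisions = 0, []
    for rank, rel in enumerate(labels, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / hits if hits else 0.0

def reciprocal_rank(labels):
    """1/rank of the first correct candidate, or 0 if none is present."""
    for rank, rel in enumerate(labels, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

def map_mrr(ranked_runs):
    """Mean AP and mean RR over a list of per-question label lists."""
    n = len(ranked_runs)
    mean_ap = sum(average_precision(r) for r in ranked_runs) / n
    mean_rr = sum(reciprocal_rank(r) for r in ranked_runs) / n
    return mean_ap, mean_rr

# Two toy questions: the first has its only answer at rank 2,
# the second has answers at ranks 1 and 3.
runs = [[0, 1, 0], [1, 0, 1]]
```

For these toy runs, MAP = (0.5 + 5/6)/2 and MRR = (0.5 + 1.0)/2 = 0.75; official TREC scoring handles ties and judged-only pools, so this is only the core idea.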
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Algorithm - Raw Version of TREC QA&lt;br /&gt;
! Reference&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_average_precision MAP]&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_reciprocal_rank MRR]&lt;br /&gt;
|-&lt;br /&gt;
| Punyakanok (2004)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.419&lt;br /&gt;
| 0.494&lt;br /&gt;
|-&lt;br /&gt;
| Cui (2005)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.427&lt;br /&gt;
| 0.526&lt;br /&gt;
|-&lt;br /&gt;
| Wang (2007)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.603&lt;br /&gt;
| 0.685&lt;br /&gt;
|-&lt;br /&gt;
| H&amp;amp;S (2010)&lt;br /&gt;
| Heilman and Smith (2010)&lt;br /&gt;
| 0.609&lt;br /&gt;
| 0.692&lt;br /&gt;
|-&lt;br /&gt;
| W&amp;amp;M (2010)&lt;br /&gt;
| Wang and Manning (2010)&lt;br /&gt;
| 0.595&lt;br /&gt;
| 0.695&lt;br /&gt;
|-&lt;br /&gt;
| Yao (2013)&lt;br /&gt;
| Yao et al. (2013)&lt;br /&gt;
| 0.631&lt;br /&gt;
| 0.748&lt;br /&gt;
|-&lt;br /&gt;
| S&amp;amp;M (2013)&lt;br /&gt;
| Severyn and Moschitti (2013)&lt;br /&gt;
| 0.678&lt;br /&gt;
| 0.736&lt;br /&gt;
|-&lt;br /&gt;
| Shnarch (2013) - Backward &lt;br /&gt;
| Shnarch (2013)&lt;br /&gt;
| 0.686&lt;br /&gt;
| 0.754&lt;br /&gt;
|-&lt;br /&gt;
| Yih (2013) - LCLR&lt;br /&gt;
| Yih et al. (2013)&lt;br /&gt;
| 0.709&lt;br /&gt;
| 0.770&lt;br /&gt;
|-&lt;br /&gt;
| Yu (2014) - TRAIN-ALL bigram+count&lt;br /&gt;
| Yu et al. (2014)&lt;br /&gt;
| 0.711&lt;br /&gt;
| 0.785&lt;br /&gt;
|-&lt;br /&gt;
| W&amp;amp;N (2015) - Three-Layer BLSTM+BM25&lt;br /&gt;
| Wang and Nyberg (2015)&lt;br /&gt;
| 0.713&lt;br /&gt;
| 0.791&lt;br /&gt;
|-&lt;br /&gt;
| Feng (2015) - Architecture-II&lt;br /&gt;
| Tan et al. (2015)&lt;br /&gt;
| 0.711&lt;br /&gt;
| 0.800&lt;br /&gt;
|-&lt;br /&gt;
| S&amp;amp;M (2015)&lt;br /&gt;
| Severyn and Moschitti (2015)&lt;br /&gt;
| 0.746&lt;br /&gt;
| 0.808&lt;br /&gt;
|-&lt;br /&gt;
| Yang (2016) - Attention-Based Neural Matching Model&lt;br /&gt;
| Yang et al. (2016)&lt;br /&gt;
| 0.750&lt;br /&gt;
| 0.811&lt;br /&gt;
|-&lt;br /&gt;
| H&amp;amp;L (2016) - Pairwise Word Interaction Modelling&lt;br /&gt;
| He and Lin (2016)&lt;br /&gt;
| 0.758&lt;br /&gt;
| 0.822&lt;br /&gt;
|-&lt;br /&gt;
| H&amp;amp;L (2015) - Multi-Perspective CNN&lt;br /&gt;
| He and Lin (2015)&lt;br /&gt;
| 0.762&lt;br /&gt;
| 0.830&lt;br /&gt;
|-&lt;br /&gt;
| Rao (2016) - PairwiseRank + Multi-Perspective CNN&lt;br /&gt;
| Rao et al. (2016)&lt;br /&gt;
| 0.780&lt;br /&gt;
| 0.834&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Algorithm - Clean Version of TREC QA&lt;br /&gt;
! Reference&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_average_precision MAP]&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_reciprocal_rank MRR]&lt;br /&gt;
|-&lt;br /&gt;
| W&amp;amp;I (2015)&lt;br /&gt;
| Wang and Ittycheriah (2015)&lt;br /&gt;
| 0.746&lt;br /&gt;
| 0.820&lt;br /&gt;
|-&lt;br /&gt;
| Tan (2015) - QA-LSTM/CNN+attention &lt;br /&gt;
| Tan et al. (2015)&lt;br /&gt;
| 0.728&lt;br /&gt;
| 0.832&lt;br /&gt;
|-&lt;br /&gt;
| dos Santos (2016) - Attentive Pooling CNN &lt;br /&gt;
| dos Santos et al. (2016)&lt;br /&gt;
| 0.753&lt;br /&gt;
| 0.851&lt;br /&gt;
|-&lt;br /&gt;
| Wang et al. (2016) - L.D.C Model&lt;br /&gt;
| Wang et al. (2016)&lt;br /&gt;
| 0.771&lt;br /&gt;
| 0.845&lt;br /&gt;
|-&lt;br /&gt;
| H&amp;amp;L (2015) - Multi-Perspective CNN&lt;br /&gt;
| He and Lin (2015)&lt;br /&gt;
| 0.777&lt;br /&gt;
| 0.836&lt;br /&gt;
|-&lt;br /&gt;
| Rao et al. (2016) - PairwiseRank + Multi-Perspective CNN&lt;br /&gt;
| Rao et al. (2016)&lt;br /&gt;
| 0.801&lt;br /&gt;
| 0.877&lt;br /&gt;
|-&lt;br /&gt;
| Wang et al. (2017) - BiMPM&lt;br /&gt;
| Wang et al. (2017)&lt;br /&gt;
| 0.802&lt;br /&gt;
| 0.875&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Vasin Punyakanok, Dan Roth, and Wen-Tau Yih. 2004. [http://cogcomp.cs.illinois.edu/papers/PunyakanokRoYi04a.pdf Mapping dependency trees: An application to question answering]. In Proceedings of the 8th International Symposium on Artificial Intelligence and Mathematics, Fort Lauderdale, FL, USA.&lt;br /&gt;
* Hang Cui, Renxu Sun, Keya Li, Min-Yen Kan, and Tat-Seng Chua. 2005. [http://ws.csie.ncku.edu.tw/login/upload/2005/paper/Question%20answering%20Question%20answering%20passage%20retrieval%20using%20dependency%20relations.pdf Question answering passage retrieval using dependency relations]. In Proceedings of the 28th ACM-SIGIR International Conference on Research and Development in Information Retrieval, Salvador, Brazil.&lt;br /&gt;
* Wang, Mengqiu and Smith, Noah A. and Mitamura, Teruko. 2007. [http://www.aclweb.org/anthology/D/D07/D07-1003.pdf What is the Jeopardy Model? A Quasi-Synchronous Grammar for QA]. In EMNLP-CoNLL 2007.&lt;br /&gt;
* Heilman, Michael and Smith, Noah A. 2010. [http://www.aclweb.org/anthology/N10-1145 Tree Edit Models for Recognizing Textual Entailments, Paraphrases, and Answers to Questions]. In NAACL-HLT 2010.&lt;br /&gt;
* Wang, Mengqiu and Manning, Christopher. 2010. [http://aclweb.org/anthology//C/C10/C10-1131.pdf Probabilistic Tree-Edit Models with Structured Latent Variables for Textual Entailment and Question Answering]. In COLING 2010.&lt;br /&gt;
* E. Shnarch. 2013. Probabilistic Models for Lexical Inference. Ph.D. thesis, Bar Ilan University.&lt;br /&gt;
* Yao, Xuchen and Van Durme, Benjamin and Callison-Burch, Chris and Clark, Peter. 2013. [http://www.aclweb.org/anthology/N13-1106.pdf Answer Extraction as Sequence Tagging with Tree Edit Distance]. In NAACL-HLT 2013.&lt;br /&gt;
* Yih, Wen-tau and Chang, Ming-Wei and Meek, Christopher and Pastusiak, Andrzej. 2013. [http://research.microsoft.com/pubs/192357/QA-SentSel-Updated-PostACL.pdf Question Answering Using Enhanced Lexical Semantic Models]. In ACL 2013.&lt;br /&gt;
* Severyn, Aliaksei and Moschitti, Alessandro. 2013. [http://www.aclweb.org/anthology/D13-1044.pdf Automatic Feature Engineering for Answer Selection and Extraction]. In EMNLP 2013.&lt;br /&gt;
* Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. [http://arxiv.org/pdf/1412.1632v1.pdf Deep Learning for Answer Sentence Selection]. In NIPS deep learning workshop.&lt;br /&gt;
* Di Wang and Eric Nyberg. 2015. [http://www.aclweb.org/anthology/P15-2116 A Long Short-Term Memory Model for Answer Sentence Selection in Question Answering]. In ACL 2015.&lt;br /&gt;
* Minwei Feng, Bing Xiang, Michael R. Glass, Lidan Wang, Bowen Zhou. 2015. [http://arxiv.org/abs/1508.01585 Applying deep learning to answer selection: A study and an open task]. In ASRU 2015.&lt;br /&gt;
* Aliaksei Severyn and Alessandro Moschitti. 2015. [http://disi.unitn.it/~severyn/papers/sigir-2015-long.pdf Learning to Rank Short Text Pairs with Convolutional Deep Neural Networks]. In SIGIR 2015.&lt;br /&gt;
* Zhiguo Wang and Abraham Ittycheriah. 2015. [http://arxiv.org/abs/1507.02628 FAQ-based Question Answering via Word Alignment]. In eprint arXiv:1507.02628.&lt;br /&gt;
* Ming Tan, Cicero dos Santos, Bing Xiang &amp;amp; Bowen Zhou. 2015. [http://arxiv.org/abs/1511.04108 LSTM-Based Deep Learning Models for Nonfactoid Answer Selection]. In eprint arXiv:1511.04108.&lt;br /&gt;
* Cicero dos Santos, Ming Tan, Bing Xiang &amp;amp; Bowen Zhou. 2016. [http://arxiv.org/abs/1602.03609 Attentive Pooling Networks]. In eprint arXiv:1602.03609.&lt;br /&gt;
* Zhiguo Wang, Haitao Mi and Abraham Ittycheriah. 2016. [http://arxiv.org/pdf/1602.07019v1.pdf Sentence Similarity Learning by Lexical Decomposition and Composition]. In COLING 2016.&lt;br /&gt;
* Hua He, Kevin Gimpel and Jimmy Lin. 2015. [http://aclweb.org/anthology/D/D15/D15-1181.pdf Multi-Perspective Sentence Similarity Modeling with Convolutional Neural Networks]. In EMNLP 2015.&lt;br /&gt;
* Hua He and Jimmy Lin. 2016. [https://cs.uwaterloo.ca/~jimmylin/publications/He_etal_NAACL-HTL2016.pdf Pairwise Word Interaction Modeling with Deep Neural Networks for Semantic Similarity Measurement]. In NAACL 2016.&lt;br /&gt;
* Liu Yang, Qingyao Ai, Jiafeng Guo, W. Bruce Croft. 2016. [http://maroo.cs.umass.edu/pub/web/getpdf.php?id=1240 aNMM: Ranking Short Answer Texts with Attention-Based Neural Matching Model]. In CIKM 2016.&lt;br /&gt;
* Jinfeng Rao, Hua He and Jimmy Lin. 2016. [https://dl.acm.org/authorize.cfm?key=N27026 Noise-Contrastive Estimation for Answer Selection with Deep Neural Networks]. In CIKM 2016.&lt;br /&gt;
* Zhiguo Wang, Wael Hamza and Radu Florian. 2017. [https://arxiv.org/pdf/1702.03814.pdf Bilateral Multi-Perspective Matching for Natural Language Sentences]. In eprint arXiv:1702.03814.&lt;br /&gt;
[[Category:State of the art]]&lt;/div&gt;</summary>
		<author><name>Zhiguo Wang</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/aclwiki/index.php?title=Paraphrase_Identification_(State_of_the_art)&amp;diff=11697</id>
		<title>Paraphrase Identification (State of the art)</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/index.php?title=Paraphrase_Identification_(State_of_the_art)&amp;diff=11697"/>
		<updated>2016-11-26T13:02:14Z</updated>

		<summary type="html">&lt;p&gt;Zhiguo Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* &#039;&#039;&#039;source&#039;&#039;&#039;: [http://research.microsoft.com/en-us/downloads/607D14D9-20CD-47E3-85BC-A2F65CD28042/default.aspx Microsoft Research Paraphrase Corpus] (MSRP)&lt;br /&gt;
* &#039;&#039;&#039;task&#039;&#039;&#039;: given a pair of sentences, classify them as paraphrases or not paraphrases&lt;br /&gt;
* &#039;&#039;&#039;see&#039;&#039;&#039;: Dolan et al. (2004)&lt;br /&gt;
* &#039;&#039;&#039;train&#039;&#039;&#039;: 4,076 sentence pairs (2,753 positive: 67.5%)&lt;br /&gt;
* &#039;&#039;&#039;test&#039;&#039;&#039;: 1,725 sentence pairs (1,147 positive: 66.5%)&lt;br /&gt;
* &#039;&#039;&#039;see also:&#039;&#039;&#039; [[Similarity (State of the art)]]&lt;br /&gt;
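Since 66.5% of the test pairs are positive, the trivial majority-class baseline already sets a floor for the results table. A minimal sketch (my illustration from the split statistics above, not from any cited paper) of that baseline's accuracy and F score:

```python
# Majority-class ("label every pair a paraphrase") baseline on the MSRP
# test split: 1,147 positive pairs out of 1,725 (figures from the dataset
# description above). Recall is 1.0, so accuracy equals the positive rate
# and precision does too; F follows directly.

pos, total = 1147, 1725
accuracy = pos / total                 # equals the positive rate, ~0.665
precision, recall = pos / total, 1.0
f_score = 2 * precision * recall / (precision + recall)
```

This yields accuracy of roughly 66.5% and F of roughly 79.9%, which puts the unsupervised entries near the top of the table in context.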
&lt;br /&gt;
&lt;br /&gt;
== Sample data ==&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Sentence 1&#039;&#039;&#039;: Amrozi accused his brother, whom he called &amp;quot;the witness&amp;quot;, of deliberately distorting his evidence.&lt;br /&gt;
* &#039;&#039;&#039;Sentence 2&#039;&#039;&#039;: Referring to him as only &amp;quot;the witness&amp;quot;, Amrozi accused his brother of deliberately distorting his evidence.&lt;br /&gt;
* &#039;&#039;&#039;Class&#039;&#039;&#039;: 1 (true paraphrase)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Table of results ==&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Listed in order of increasing F score.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;1&amp;quot; width=&amp;quot;100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Algorithm&lt;br /&gt;
! Reference&lt;br /&gt;
! Description&lt;br /&gt;
! Supervision&lt;br /&gt;
! Accuracy&lt;br /&gt;
! F&lt;br /&gt;
|-&lt;br /&gt;
| Vector Based Similarity (Baseline)&lt;br /&gt;
| Mihalcea et al. (2006)&lt;br /&gt;
| cosine similarity with tf-idf weighting&lt;br /&gt;
| unsupervised&lt;br /&gt;
| 65.4%&lt;br /&gt;
| 75.3%&lt;br /&gt;
|-&lt;br /&gt;
| ESA&lt;br /&gt;
| Hassan (2011)&lt;br /&gt;
| explicit semantic space&lt;br /&gt;
| unsupervised&lt;br /&gt;
| 67.0%&lt;br /&gt;
| 79.3%&lt;br /&gt;
|-&lt;br /&gt;
| KM&lt;br /&gt;
| Kozareva and Montoyo (2006)&lt;br /&gt;
| combination of lexical and semantic features&lt;br /&gt;
| supervised&lt;br /&gt;
| 76.6%&lt;br /&gt;
| 79.6%&lt;br /&gt;
|-&lt;br /&gt;
| LSA&lt;br /&gt;
| Hassan (2011)&lt;br /&gt;
| latent semantic space&lt;br /&gt;
| unsupervised&lt;br /&gt;
| 68.8%&lt;br /&gt;
| 79.9%&lt;br /&gt;
|-&lt;br /&gt;
| RMLMG&lt;br /&gt;
| Rus et al. (2008)&lt;br /&gt;
| graph subsumption&lt;br /&gt;
| unsupervised&lt;br /&gt;
| 70.6%&lt;br /&gt;
| 80.5%&lt;br /&gt;
|-&lt;br /&gt;
| MCS&lt;br /&gt;
| Mihalcea et al. (2006)&lt;br /&gt;
| combination of several word similarity measures&lt;br /&gt;
| unsupervised&lt;br /&gt;
| 70.3%&lt;br /&gt;
| 81.3%&lt;br /&gt;
|-&lt;br /&gt;
| STS&lt;br /&gt;
| Islam and Inkpen (2007)&lt;br /&gt;
| combination of semantic and string similarity&lt;br /&gt;
| unsupervised&lt;br /&gt;
| 72.6%&lt;br /&gt;
| 81.3%&lt;br /&gt;
|-&lt;br /&gt;
| SSA&lt;br /&gt;
| Hassan (2011)&lt;br /&gt;
| salient semantic space&lt;br /&gt;
| unsupervised&lt;br /&gt;
| 72.5%&lt;br /&gt;
| 81.4%&lt;br /&gt;
|-&lt;br /&gt;
| QKC&lt;br /&gt;
| Qiu et al. (2006)&lt;br /&gt;
| sentence dissimilarity classification&lt;br /&gt;
| supervised&lt;br /&gt;
| 72.0%&lt;br /&gt;
| 81.6%&lt;br /&gt;
|-&lt;br /&gt;
| ParaDetect&lt;br /&gt;
| Zia and Wasif (2012)&lt;br /&gt;
| PI using semantic heuristic features&lt;br /&gt;
| supervised&lt;br /&gt;
| 74.7%&lt;br /&gt;
| 81.8%&lt;br /&gt;
|-&lt;br /&gt;
| Vector-based similarity&lt;br /&gt;
| Milajevs et al. (2014)&lt;br /&gt;
| Additive composition of vectors and cosine distance&lt;br /&gt;
| unsupervised&lt;br /&gt;
| 73.0%&lt;br /&gt;
| 82.0%&lt;br /&gt;
|-&lt;br /&gt;
| SDS&lt;br /&gt;
| Blacoe and Lapata (2012)&lt;br /&gt;
| simple distributional semantic space&lt;br /&gt;
| supervised&lt;br /&gt;
| 73.0%&lt;br /&gt;
| 82.3%&lt;br /&gt;
|-&lt;br /&gt;
| matrixJcn&lt;br /&gt;
| Fernando and Stevenson (2008)&lt;br /&gt;
| JCN WordNet similarity with matrix&lt;br /&gt;
| unsupervised&lt;br /&gt;
| 74.1%&lt;br /&gt;
| 82.4%&lt;br /&gt;
|-&lt;br /&gt;
| FHS&lt;br /&gt;
| Finch et al. (2005)&lt;br /&gt;
| combination of MT evaluation measures as features&lt;br /&gt;
| supervised&lt;br /&gt;
| 75.0%&lt;br /&gt;
| 82.7%&lt;br /&gt;
|-&lt;br /&gt;
| PE&lt;br /&gt;
| Das and Smith (2009)&lt;br /&gt;
| product of experts&lt;br /&gt;
| supervised&lt;br /&gt;
| 76.1%&lt;br /&gt;
| 82.7%&lt;br /&gt;
|-&lt;br /&gt;
| WDDP&lt;br /&gt;
| Wan et al. (2006)&lt;br /&gt;
| dependency-based features&lt;br /&gt;
| supervised&lt;br /&gt;
| 75.6%&lt;br /&gt;
| 83.0%&lt;br /&gt;
|-&lt;br /&gt;
| SHPNM&lt;br /&gt;
| Socher et al. (2011)&lt;br /&gt;
| recursive autoencoder with dynamic pooling&lt;br /&gt;
| supervised&lt;br /&gt;
| 76.8%&lt;br /&gt;
| 83.6%&lt;br /&gt;
|-&lt;br /&gt;
| MTMETRICS&lt;br /&gt;
| Madnani et al. (2012)&lt;br /&gt;
| combination of eight machine translation metrics&lt;br /&gt;
| supervised&lt;br /&gt;
| 77.4%&lt;br /&gt;
| 84.1%&lt;br /&gt;
|-&lt;br /&gt;
| L.D.C Model&lt;br /&gt;
| Wang et al. (2016)&lt;br /&gt;
| Sentence Similarity Learning by Lexical Decomposition and Composition&lt;br /&gt;
| supervised&lt;br /&gt;
| 78.4%&lt;br /&gt;
| 84.7%&lt;br /&gt;
|-&lt;br /&gt;
| Multi-Perspective CNN&lt;br /&gt;
| He et al. (2015)&lt;br /&gt;
| Multi-perspective Convolutional NNs and structured similarity layer&lt;br /&gt;
| supervised&lt;br /&gt;
| 78.6%&lt;br /&gt;
| 84.7%&lt;br /&gt;
|-&lt;br /&gt;
| SAMS-RecNN&lt;br /&gt;
| Cheng and Kartsaklis (2015)&lt;br /&gt;
| Recursive NNs using syntax-aware multi-sense word embeddings&lt;br /&gt;
| supervised&lt;br /&gt;
| 78.6%&lt;br /&gt;
| 85.3%&lt;br /&gt;
|-&lt;br /&gt;
| TF-KLD&lt;br /&gt;
| Ji and Eisenstein (2013)&lt;br /&gt;
| Matrix factorization with supervised reweighting&lt;br /&gt;
| supervised&lt;br /&gt;
| 80.4%&lt;br /&gt;
| 85.9%&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Listed alphabetically.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Blacoe, W. and Lapata, M. (2012). [http://newdesign.aclweb.org/anthology/D/D12/D12-1050.pdf A comparison of vector-based representations for semantic composition], &#039;&#039;Proceedings of EMNLP&#039;&#039;, Jeju Island, Korea, pp. 546-556.&lt;br /&gt;
&lt;br /&gt;
Cheng, J. and Kartsaklis, D. (2015). [http://www.aclweb.org/anthology/D/D15/D15-1177.pdf Syntax-Aware Multi-Sense Word Embeddings for Deep Compositional Models of Meaning], &#039;&#039;Proceedings of EMNLP 2015&#039;&#039;, Lisbon, Portugal, pp. 1531-1542.&lt;br /&gt;
&lt;br /&gt;
Das, D., and Smith, N. (2009). [http://www.aclweb.org/anthology-new/P/P09/P09-1053.pdf Paraphrase identification as probabilistic quasi-synchronous recognition]. &#039;&#039;Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP&#039;&#039;, pp. 468-476, Suntec, Singapore.&lt;br /&gt;
&lt;br /&gt;
Dolan, B., Quirk, C., and Brockett, C. (2004). [http://acl.ldc.upenn.edu/C/C04/C04-1051.pdf Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources], &#039;&#039;Proceedings of the 20th international conference on Computational Linguistics (COLING 2004)&#039;&#039;, Geneva, Switzerland, pp. 350-356.&lt;br /&gt;
&lt;br /&gt;
Fernando, S., and Stevenson, M. (2008). [http://staffwww.dcs.shef.ac.uk/people/S.Fernando/pubs/clukPaper.pdf A semantic similarity approach to paraphrase detection], &#039;&#039;Computational Linguistics UK (CLUK 2008) 11th Annual Research Colloquium&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Finch, A., Hwang, Y.S., and Sumita, E. (2005). [http://aclweb.org/anthology/I/I05/I05-5003.pdf Using machine translation evaluation techniques to determine sentence-level semantic equivalence], &#039;&#039;Proceedings of the Third International Workshop on Paraphrasing (IWP 2005)&#039;&#039;, Jeju Island, South Korea, pp. 17-24.&lt;br /&gt;
&lt;br /&gt;
Hassan, S. (2011). [http://samerhassan.com/images/0/01/Dissertation.pdf Measuring Semantic Relatedness Using Salient Encyclopedic Concepts]. Ph.D. dissertation, August 2011.&lt;br /&gt;
&lt;br /&gt;
He, Hua, Gimpel K. and Lin J. (2015). [http://aclweb.org/anthology/D/D15/D15-1181.pdf Multi-Perspective Sentence Similarity Modeling with Convolutional Neural Networks], &#039;&#039;Proceedings of EMNLP 2015&#039;&#039;, Lisbon, Portugal, pp. 1576-1586.&lt;br /&gt;
&lt;br /&gt;
Islam, A., and Inkpen, D. (2007). [http://www.site.uottawa.ca/~diana/publications/ranlp_2007_textsim_camera_ready.pdf Semantic similarity of short texts], &#039;&#039;Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2007)&#039;&#039;, Borovets, Bulgaria, pp. 291-297.&lt;br /&gt;
&lt;br /&gt;
Ji, Y. and Eisenstein, J. (2013). [http://www.aclweb.org/anthology/D/D13/D13-1090.pdf Discriminative Improvements to Distributional Sentence Similarity], &#039;&#039;Proceedings of Empirical Methods in Natural Language Processing (EMNLP 2013)&#039;&#039;, Seattle, Washington, USA, pp. 891-896.&lt;br /&gt;
&lt;br /&gt;
Kozareva, Z., and Montoyo, A. (2006). [http://www.dlsi.ua.es/~zkozareva/papers/fintalKozareva.pdf Paraphrase identification on the basis of supervised machine learning techniques], &#039;&#039;Advances in Natural Language Processing: 5th International Conference on NLP (FinTAL 2006)&#039;&#039;, Turku, Finland, 524-533.&lt;br /&gt;
&lt;br /&gt;
Madnani, N., Tetreault, J., and Chodorow, M. (2012). [http://www.aclweb.org/anthology-new/N/N12/N12-1019.pdf Re-examining Machine Translation Metrics for Paraphrase Identification], &#039;&#039;Proceedings of 2012 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2012)&#039;&#039;, pp. 182-190.&lt;br /&gt;
&lt;br /&gt;
Mihalcea, R., Corley, C., and Strapparava, C. (2006). [http://www.cse.unt.edu/~rada/papers/mihalcea.aaai06.pdf Corpus-based and knowledge-based measures of text semantic similarity], &#039;&#039;Proceedings of the National Conference on Artificial Intelligence (AAAI 2006)&#039;&#039;, Boston, Massachusetts, pp. 775-780.&lt;br /&gt;
&lt;br /&gt;
Milajevs, D., Kartsaklis, D., Sadrzadeh, M. and Purver, M. (2014). [https://aclweb.org/anthology/D/D14/D14-1079.pdf Evaluating Neural Word Representations in Tensor-Based Compositional Settings], &#039;&#039;Proceedings of EMNLP 2014&#039;&#039;, Doha, Qatar, pp. 708–719.&lt;br /&gt;
&lt;br /&gt;
Qiu, L. and Kan, M.Y. and Chua, T.S. (2006). [http://acl.ldc.upenn.edu/W/W06/W06-1603.pdf Paraphrase recognition via dissimilarity significance classification], &#039;&#039;Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP 2006)&#039;&#039;, pp. 18-26.&lt;br /&gt;
&lt;br /&gt;
Rus, V. and McCarthy, P.M. and Lintean, M.C. and McNamara, D.S. and Graesser, A.C. (2008). [http://csep.psyc.memphis.edu/McNamara/pdf/Paraphrase_Identification.pdf Paraphrase identification with lexico-syntactic graph subsumption], &#039;&#039;FLAIRS 2008&#039;&#039;, pp. 201-206.&lt;br /&gt;
&lt;br /&gt;
Socher, R. and Huang, E.H., and Pennington, J. and Ng, A.Y., and Manning, C.D. (2011). [http://www.socher.org/uploads/Main/SocherHuangPenningtonNgManning_NIPS2011.pdf Dynamic pooling and unfolding recursive autoencoders for paraphrase detection], &amp;quot;Advances in Neural Information Processing Systems 24&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Wan, S., Dras, M., Dale, R., and Paris, C. (2006). [http://www.alta.asn.au/events/altw2006/proceedings/swan-final.pdf Using dependency-based features to take the &amp;quot;para-farce&amp;quot; out of paraphrase], &#039;&#039;Proceedings of the Australasian Language Technology Workshop (ALTW 2006)&#039;&#039;, pp. 131-138.&lt;br /&gt;
&lt;br /&gt;
Zia Ul-Qayyum and Wasif Altaf (2012). [http://maxwellsci.com/print/rjaset/v4-4894-4904.pdf Paraphrase Identification using Semantic Heuristic Features], &#039;&#039;Research Journal of Applied Sciences, Engineering and Technology&#039;&#039;, 4(22): 4894-4904.&lt;br /&gt;
&lt;br /&gt;
Zhiguo Wang, Haitao Mi and Abraham Ittycheriah. 2016. [http://arxiv.org/pdf/1602.07019v1.pdf Sentence Similarity Learning by Lexical Decomposition and Composition]. In COLING 2016.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Please keep this list in alphabetical order --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:State of the art]]&lt;br /&gt;
[[Category:Similarity]]&lt;/div&gt;</summary>
		<author><name>Zhiguo Wang</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/aclwiki/index.php?title=Paraphrase_Identification_(State_of_the_art)&amp;diff=11696</id>
		<title>Paraphrase Identification (State of the art)</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/index.php?title=Paraphrase_Identification_(State_of_the_art)&amp;diff=11696"/>
		<updated>2016-11-26T13:00:39Z</updated>

		<summary type="html">&lt;p&gt;Zhiguo Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* &#039;&#039;&#039;source&#039;&#039;&#039;: [http://research.microsoft.com/en-us/downloads/607D14D9-20CD-47E3-85BC-A2F65CD28042/default.aspx Microsoft Research Paraphrase Corpus] (MSRP)&lt;br /&gt;
* &#039;&#039;&#039;task&#039;&#039;&#039;: given a pair of sentences, classify them as paraphrases or not paraphrases&lt;br /&gt;
* &#039;&#039;&#039;see&#039;&#039;&#039;: Dolan et al. (2004)&lt;br /&gt;
* &#039;&#039;&#039;train&#039;&#039;&#039;: 4,076 sentence pairs (2,753 positive: 67.5%)&lt;br /&gt;
* &#039;&#039;&#039;test&#039;&#039;&#039;: 1,725 sentence pairs (1,147 positive: 66.5%)&lt;br /&gt;
* &#039;&#039;&#039;see also:&#039;&#039;&#039; [[Similarity (State of the art)]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Sample data ==&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Sentence 1&#039;&#039;&#039;: Amrozi accused his brother, whom he called &amp;quot;the witness&amp;quot;, of deliberately distorting his evidence.&lt;br /&gt;
* &#039;&#039;&#039;Sentence 2&#039;&#039;&#039;: Referring to him as only &amp;quot;the witness&amp;quot;, Amrozi accused his brother of deliberately distorting his evidence.&lt;br /&gt;
* &#039;&#039;&#039;Class&#039;&#039;&#039;: 1 (true paraphrase)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Table of results ==&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Listed in order of increasing F score.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;1&amp;quot; width=&amp;quot;100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Algorithm&lt;br /&gt;
! Reference&lt;br /&gt;
! Description&lt;br /&gt;
! Supervision&lt;br /&gt;
! Accuracy&lt;br /&gt;
! F&lt;br /&gt;
|-&lt;br /&gt;
| Vector Based Similarity (Baseline)&lt;br /&gt;
| Mihalcea et al. (2006)&lt;br /&gt;
| cosine similarity with tf-idf weighting&lt;br /&gt;
| unsupervised&lt;br /&gt;
| 65.4%&lt;br /&gt;
| 75.3%&lt;br /&gt;
|-&lt;br /&gt;
| ESA&lt;br /&gt;
| Hassan (2011)&lt;br /&gt;
| explicit semantic space&lt;br /&gt;
| unsupervised&lt;br /&gt;
| 67.0%&lt;br /&gt;
| 79.3%&lt;br /&gt;
|-&lt;br /&gt;
| KM&lt;br /&gt;
| Kozareva and Montoyo (2006)&lt;br /&gt;
| combination of lexical and semantic features&lt;br /&gt;
| supervised&lt;br /&gt;
| 76.6%&lt;br /&gt;
| 79.6%&lt;br /&gt;
|-&lt;br /&gt;
| LSA&lt;br /&gt;
| Hassan (2011)&lt;br /&gt;
| latent semantic space&lt;br /&gt;
| unsupervised&lt;br /&gt;
| 68.8%&lt;br /&gt;
| 79.9%&lt;br /&gt;
|-&lt;br /&gt;
| RMLMG&lt;br /&gt;
| Rus et al. (2008)&lt;br /&gt;
| graph subsumption&lt;br /&gt;
| unsupervised&lt;br /&gt;
| 70.6%&lt;br /&gt;
| 80.5%&lt;br /&gt;
|-&lt;br /&gt;
| MCS&lt;br /&gt;
| Mihalcea et al. (2006)&lt;br /&gt;
| combination of several word similarity measures&lt;br /&gt;
| unsupervised&lt;br /&gt;
| 70.3%&lt;br /&gt;
| 81.3%&lt;br /&gt;
|-&lt;br /&gt;
| STS&lt;br /&gt;
| Islam and Inkpen (2007)&lt;br /&gt;
| combination of semantic and string similarity&lt;br /&gt;
| unsupervised&lt;br /&gt;
| 72.6%&lt;br /&gt;
| 81.3%&lt;br /&gt;
|-&lt;br /&gt;
| SSA&lt;br /&gt;
| Hassan (2011)&lt;br /&gt;
| salient semantic space&lt;br /&gt;
| unsupervised&lt;br /&gt;
| 72.5%&lt;br /&gt;
| 81.4%&lt;br /&gt;
|-&lt;br /&gt;
| QKC&lt;br /&gt;
| Qiu et al. (2006)&lt;br /&gt;
| sentence dissimilarity classification&lt;br /&gt;
| supervised&lt;br /&gt;
| 72.0%&lt;br /&gt;
| 81.6%&lt;br /&gt;
|-&lt;br /&gt;
| ParaDetect&lt;br /&gt;
| Zia and Wasif (2012)&lt;br /&gt;
| PI using semantic heuristic features&lt;br /&gt;
| supervised&lt;br /&gt;
| 74.7%&lt;br /&gt;
| 81.8%&lt;br /&gt;
|-&lt;br /&gt;
| Vector-based similarity&lt;br /&gt;
| Milajevs et al. (2014)&lt;br /&gt;
| Additive composition of vectors and cosine distance&lt;br /&gt;
| unsupervised&lt;br /&gt;
| 73.0%&lt;br /&gt;
| 82.0%&lt;br /&gt;
|-&lt;br /&gt;
| SDS&lt;br /&gt;
| Blacoe and Lapata (2012)&lt;br /&gt;
| simple distributional semantic space&lt;br /&gt;
| supervised&lt;br /&gt;
| 73.0%&lt;br /&gt;
| 82.3%&lt;br /&gt;
|-&lt;br /&gt;
| matrixJcn&lt;br /&gt;
| Fernando and Stevenson (2008)&lt;br /&gt;
| JCN WordNet similarity with matrix&lt;br /&gt;
| unsupervised&lt;br /&gt;
| 74.1%&lt;br /&gt;
| 82.4%&lt;br /&gt;
|-&lt;br /&gt;
| FHS&lt;br /&gt;
| Finch et al. (2005)&lt;br /&gt;
| combination of MT evaluation measures as features&lt;br /&gt;
| supervised&lt;br /&gt;
| 75.0%&lt;br /&gt;
| 82.7%&lt;br /&gt;
|-&lt;br /&gt;
| PE&lt;br /&gt;
| Das and Smith (2009)&lt;br /&gt;
| product of experts&lt;br /&gt;
| supervised&lt;br /&gt;
| 76.1%&lt;br /&gt;
| 82.7%&lt;br /&gt;
|-&lt;br /&gt;
| WDDP&lt;br /&gt;
| Wan et al. (2006)&lt;br /&gt;
| dependency-based features&lt;br /&gt;
| supervised&lt;br /&gt;
| 75.6%&lt;br /&gt;
| 83.0%&lt;br /&gt;
|-&lt;br /&gt;
| SHPNM&lt;br /&gt;
| Socher et al. (2011)&lt;br /&gt;
| recursive autoencoder with dynamic pooling&lt;br /&gt;
| supervised&lt;br /&gt;
| 76.8%&lt;br /&gt;
| 83.6%&lt;br /&gt;
|-&lt;br /&gt;
| MTMETRICS&lt;br /&gt;
| Madnani et al. (2012)&lt;br /&gt;
| combination of eight machine translation metrics&lt;br /&gt;
| supervised&lt;br /&gt;
| 77.4%&lt;br /&gt;
| 84.1%&lt;br /&gt;
|-&lt;br /&gt;
| L.D.C Model&lt;br /&gt;
| Wang et al. (2016)&lt;br /&gt;
| Sentence Similarity Learning by Lexical Decomposition and Composition&lt;br /&gt;
| supervised&lt;br /&gt;
| 78.4%&lt;br /&gt;
| 84.7%&lt;br /&gt;
|-&lt;br /&gt;
| Multi-Perspective CNN&lt;br /&gt;
| He et al. (2015)&lt;br /&gt;
| Multi-perspective Convolutional NNs and structured similarity layer&lt;br /&gt;
| supervised&lt;br /&gt;
| 78.6%&lt;br /&gt;
| 84.7%&lt;br /&gt;
|-&lt;br /&gt;
| SAMS-RecNN&lt;br /&gt;
| Cheng and Kartsaklis (2015)&lt;br /&gt;
| Recursive NNs using syntax-aware multi-sense word embeddings&lt;br /&gt;
| supervised&lt;br /&gt;
| 78.6%&lt;br /&gt;
| 85.3%&lt;br /&gt;
|-&lt;br /&gt;
| TF-KLD&lt;br /&gt;
| Ji and Eisenstein (2013)&lt;br /&gt;
| Matrix factorization with supervised reweighting&lt;br /&gt;
| supervised&lt;br /&gt;
| 80.4%&lt;br /&gt;
| 85.9%&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Listed alphabetically.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Blacoe, W. and Lapata, M. (2012). [http://newdesign.aclweb.org/anthology/D/D12/D12-1050.pdf A comparison of vector-based representations for semantic composition], &#039;&#039;Proceedings of EMNLP&#039;&#039;, Jeju Island, Korea, pp. 546-556.&lt;br /&gt;
&lt;br /&gt;
Cheng, J. and Kartsaklis, D. (2015). [http://www.aclweb.org/anthology/D/D15/D15-1177.pdf Syntax-Aware Multi-Sense Word Embeddings for Deep Compositional Models of Meaning], &#039;&#039;Proceedings of EMNLP 2015&#039;&#039;, Lisbon, Portugal, pp. 1531-1542.&lt;br /&gt;
&lt;br /&gt;
Das, D., and Smith, N. (2009). [http://www.aclweb.org/anthology-new/P/P09/P09-1053.pdf Paraphrase identification as probabilistic quasi-synchronous recognition]. &#039;&#039;Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP&#039;&#039;, pp. 468-476, Suntec, Singapore.&lt;br /&gt;
&lt;br /&gt;
Dolan, B., Quirk, C., and Brockett, C. (2004). [http://acl.ldc.upenn.edu/C/C04/C04-1051.pdf Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources], &#039;&#039;Proceedings of the 20th international conference on Computational Linguistics (COLING 2004)&#039;&#039;, Geneva, Switzerland, pp. 350-356.&lt;br /&gt;
&lt;br /&gt;
Fernando, S., and Stevenson, M. (2008). [http://staffwww.dcs.shef.ac.uk/people/S.Fernando/pubs/clukPaper.pdf A semantic similarity approach to paraphrase detection], &#039;&#039;Computational Linguistics UK (CLUK 2008) 11th Annual Research Colloquium&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Finch, A., Hwang, Y.S., and Sumita, E. (2005). [http://aclweb.org/anthology/I/I05/I05-5003.pdf Using machine translation evaluation techniques to determine sentence-level semantic equivalence], &#039;&#039;Proceedings of the Third International Workshop on Paraphrasing (IWP 2005)&#039;&#039;, Jeju Island, South Korea, pp. 17-24.&lt;br /&gt;
&lt;br /&gt;
Hassan, S. (2011). [http://samerhassan.com/images/0/01/Dissertation.pdf Measuring Semantic Relatedness Using Salient Encyclopedic Concepts]. Ph.D. thesis, August 2011.&lt;br /&gt;
&lt;br /&gt;
He, H., Gimpel, K., and Lin, J. (2015). [http://aclweb.org/anthology/D/D15/D15-1181.pdf Multi-Perspective Sentence Similarity Modeling with Convolutional Neural Networks], &#039;&#039;Proceedings of EMNLP 2015&#039;&#039;, Lisbon, Portugal, pp. 1576-1586.&lt;br /&gt;
&lt;br /&gt;
Islam, A., and Inkpen, D. (2007). [http://www.site.uottawa.ca/~diana/publications/ranlp_2007_textsim_camera_ready.pdf Semantic similarity of short texts], &#039;&#039;Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2007)&#039;&#039;, Borovets, Bulgaria, pp. 291-297.&lt;br /&gt;
&lt;br /&gt;
Ji, Y. and Eisenstein, J. (2013). [http://www.aclweb.org/anthology/D/D13/D13-1090.pdf Discriminative Improvements to Distributional Sentence Similarity], &#039;&#039;Proceedings of Empirical Methods in Natural Language Processing (EMNLP 2013)&#039;&#039;, Seattle, Washington, USA, pp. 891-896.&lt;br /&gt;
&lt;br /&gt;
Kozareva, Z., and Montoyo, A. (2006). [http://www.dlsi.ua.es/~zkozareva/papers/fintalKozareva.pdf Paraphrase identification on the basis of supervised machine learning techniques], &#039;&#039;Advances in Natural Language Processing: 5th International Conference on NLP (FinTAL 2006)&#039;&#039;, Turku, Finland, 524-533.&lt;br /&gt;
&lt;br /&gt;
Madnani, N., Tetreault, J., and Chodorow, M. (2012). [http://www.aclweb.org/anthology-new/N/N12/N12-1019.pdf Re-examining Machine Translation Metrics for Paraphrase Identification], &#039;&#039;Proceedings of 2012 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2012)&#039;&#039;, pp. 182-190.&lt;br /&gt;
&lt;br /&gt;
Mihalcea, R., Corley, C., and Strapparava, C. (2006). [http://www.cse.unt.edu/~rada/papers/mihalcea.aaai06.pdf Corpus-based and knowledge-based measures of text semantic similarity], &#039;&#039;Proceedings of the National Conference on Artificial Intelligence (AAAI 2006)&#039;&#039;, Boston, Massachusetts, pp. 775-780.&lt;br /&gt;
&lt;br /&gt;
Milajevs, D., Kartsaklis, D., Sadrzadeh, M. and Purver, M. (2014). [https://aclweb.org/anthology/D/D14/D14-1079.pdf Evaluating Neural Word Representations in Tensor-Based Compositional Settings], &#039;&#039;Proceedings of EMNLP 2014&#039;&#039;, Doha, Qatar, pp. 708–719.&lt;br /&gt;
&lt;br /&gt;
Qiu, L. and Kan, M.Y. and Chua, T.S. (2006). [http://acl.ldc.upenn.edu/W/W06/W06-1603.pdf Paraphrase recognition via dissimilarity significance classification], &#039;&#039;Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP 2006)&#039;&#039;, pp. 18-26.&lt;br /&gt;
&lt;br /&gt;
Rus, V. and McCarthy, P.M. and Lintean, M.C. and McNamara, D.S. and Graesser, A.C. (2008). [http://csep.psyc.memphis.edu/McNamara/pdf/Paraphrase_Identification.pdf Paraphrase identification with lexico-syntactic graph subsumption], &#039;&#039;FLAIRS 2008&#039;&#039;, pp. 201-206.&lt;br /&gt;
&lt;br /&gt;
Socher, R., Huang, E.H., Pennington, J., Ng, A.Y., and Manning, C.D. (2011). [http://www.socher.org/uploads/Main/SocherHuangPenningtonNgManning_NIPS2011.pdf Dynamic pooling and unfolding recursive autoencoders for paraphrase detection], &#039;&#039;Advances in Neural Information Processing Systems 24&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Wan, S., Dras, M., Dale, R., and Paris, C. (2006). [http://www.alta.asn.au/events/altw2006/proceedings/swan-final.pdf Using dependency-based features to take the &amp;quot;para-farce&amp;quot; out of paraphrase], &#039;&#039;Proceedings of the Australasian Language Technology Workshop (ALTW 2006)&#039;&#039;, pp. 131-138.&lt;br /&gt;
&lt;br /&gt;
Ul-Qayyum, Z. and Altaf, W. (2012). [http://maxwellsci.com/print/rjaset/v4-4894-4904.pdf Paraphrase Identification using Semantic Heuristic Features], &#039;&#039;Research Journal of Applied Sciences, Engineering and Technology&#039;&#039;, 4(22): 4894-4904.&lt;br /&gt;
&lt;br /&gt;
Zhiguo Wang, Haitao Mi and Abraham Ittycheriah. 2016. [http://arxiv.org/pdf/1602.07019v1.pdf Sentence Similarity Learning by Lexical Decomposition and Composition]. In COLING 2016.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Please keep this list in alphabetical order --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:State of the art]]&lt;br /&gt;
[[Category:Similarity]]&lt;/div&gt;</summary>
		<author><name>Zhiguo Wang</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/aclwiki/index.php?title=Question_Answering_(State_of_the_art)&amp;diff=11694</id>
		<title>Question Answering (State of the art)</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/index.php?title=Question_Answering_(State_of_the_art)&amp;diff=11694"/>
		<updated>2016-11-20T15:43:09Z</updated>

		<summary type="html">&lt;p&gt;Zhiguo Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Answer Sentence Selection ==&lt;br /&gt;
&lt;br /&gt;
The task of answer sentence selection is designed for the open-domain question answering setting. Given a question and a set of candidate sentences, the task is to choose the correct sentence that contains the exact answer and can sufficiently support the answer choice. &lt;br /&gt;
&lt;br /&gt;
* [http://cs.stanford.edu/people/mengqiu/data/qg-emnlp07-data.tgz QA Answer Sentence Selection Dataset]: labeled sentences using TREC QA track data, provided by [http://cs.stanford.edu/people/mengqiu/ Mengqiu Wang] and first used in [http://www.aclweb.org/anthology/D/D07/D07-1003.pdf Wang et al. (2007)]. &lt;br /&gt;
* Over time, the original dataset diverged into two versions owing to different pre-processing in recent publications: both share the same training set, but their development and test sets differ. The Raw version has 82 questions in the development set and 100 questions in the test set; the Clean version (Wang and Ittycheriah 2015, Tan et al. 2015, dos Santos et al. 2016, Wang et al. 2016) removed questions with no answers or with only positive/negative answers, leaving only 65 questions in the development set and 68 questions in the test set. &lt;br /&gt;
* Note: MAP/MRR scores on the two versions of TREC QA data (Clean vs Raw) are not comparable according to [http://www.cs.umd.edu/~jinfeng/publications/PairwiseNeuralNetwork_CIKM2016.pdf Rao et al. (2016)]. &lt;br /&gt;
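The Clean-version filtering described above can be sketched as follows. This is an illustrative sketch only: the function name and the question-id-to-labels mapping are assumptions, not part of the released dataset tooling.

```python
def clean_split(questions):
    """Apply the 'Clean' filtering: drop questions with no answer
    candidates, or whose candidates are all positive or all negative.

    `questions` maps a question id to a list of 0/1 relevance labels,
    one per candidate sentence (an assumed representation).
    """
    return {
        qid: labels
        for qid, labels in questions.items()
        # keep only questions with at least one positive AND one negative
        if any(labels) and not all(labels)
    }
```

Under this rule a question with candidates labeled [1, 0] survives, while [0, 0] (no correct answer) and [1, 1] (only positives) are removed.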
&lt;br /&gt;
&lt;br /&gt;
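The MAP and MRR columns in the tables below can be computed from ranked candidate lists as in this minimal sketch (binary relevance labels already sorted by model score; this is not the official trec_eval implementation):

```python
def average_precision(labels):
    """AP for one question: labels[i] is 1 if the sentence ranked
    (i+1)-th is a correct answer, else 0."""
    hits, precisions = 0, []
    for rank, y in enumerate(labels, start=1):
        if y:
            hits += 1
            precisions.append(hits / rank)  # precision at each relevant rank
    return sum(precisions) / hits if hits else 0.0

def reciprocal_rank(labels):
    """1 / rank of the first correct answer, or 0 if none."""
    for rank, y in enumerate(labels, start=1):
        if y:
            return 1.0 / rank
    return 0.0

def map_mrr(ranked_lists):
    """Mean of AP and of RR over all questions."""
    n = len(ranked_lists)
    mean_ap = sum(average_precision(l) for l in ranked_lists) / n
    mean_rr = sum(reciprocal_rank(l) for l in ranked_lists) / n
    return mean_ap, mean_rr
```

Because AP and RR are averaged per question, removing questions (as the Clean version does) changes both scores, which is why Clean and Raw numbers are not comparable.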
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Algorithm - Raw Version of TREC QA&lt;br /&gt;
! Reference&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_average_precision MAP]&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_reciprocal_rank MRR]&lt;br /&gt;
|-&lt;br /&gt;
| Punyakanok (2004)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.419&lt;br /&gt;
| 0.494&lt;br /&gt;
|-&lt;br /&gt;
| Cui (2005)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.427&lt;br /&gt;
| 0.526&lt;br /&gt;
|-&lt;br /&gt;
| Wang (2007)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.603&lt;br /&gt;
| 0.685&lt;br /&gt;
|-&lt;br /&gt;
| H&amp;amp;S (2010)&lt;br /&gt;
| Heilman and Smith (2010)&lt;br /&gt;
| 0.609&lt;br /&gt;
| 0.692&lt;br /&gt;
|-&lt;br /&gt;
| W&amp;amp;M (2010)&lt;br /&gt;
| Wang and Manning (2010)&lt;br /&gt;
| 0.595&lt;br /&gt;
| 0.695&lt;br /&gt;
|-&lt;br /&gt;
| Yao (2013)&lt;br /&gt;
| Yao et al. (2013)&lt;br /&gt;
| 0.631&lt;br /&gt;
| 0.748&lt;br /&gt;
|-&lt;br /&gt;
| S&amp;amp;M (2013)&lt;br /&gt;
| Severyn and Moschitti (2013)&lt;br /&gt;
| 0.678&lt;br /&gt;
| 0.736&lt;br /&gt;
|-&lt;br /&gt;
| Shnarch (2013) - Backward &lt;br /&gt;
| Shnarch (2013)&lt;br /&gt;
| 0.686&lt;br /&gt;
| 0.754&lt;br /&gt;
|-&lt;br /&gt;
| Yih (2013) - LCLR&lt;br /&gt;
| Yih et al. (2013)&lt;br /&gt;
| 0.709&lt;br /&gt;
| 0.770&lt;br /&gt;
|-&lt;br /&gt;
| Yu (2014) - TRAIN-ALL bigram+count&lt;br /&gt;
| Yu et al. (2014)&lt;br /&gt;
| 0.711&lt;br /&gt;
| 0.785&lt;br /&gt;
|-&lt;br /&gt;
| W&amp;amp;N (2015) - Three-Layer BLSTM+BM25&lt;br /&gt;
| Wang and Nyberg (2015)&lt;br /&gt;
| 0.713&lt;br /&gt;
| 0.791&lt;br /&gt;
|-&lt;br /&gt;
| Feng (2015) - Architecture-II&lt;br /&gt;
| Tan et al. (2015)&lt;br /&gt;
| 0.711&lt;br /&gt;
| 0.800&lt;br /&gt;
|-&lt;br /&gt;
| S&amp;amp;M (2015)&lt;br /&gt;
| Severyn and Moschitti (2015)&lt;br /&gt;
| 0.746&lt;br /&gt;
| 0.808&lt;br /&gt;
|-&lt;br /&gt;
| H&amp;amp;L (2016) - Pairwise Word Interaction Modelling&lt;br /&gt;
| He and Lin (2016)&lt;br /&gt;
| 0.758&lt;br /&gt;
| 0.822&lt;br /&gt;
|-&lt;br /&gt;
| H&amp;amp;L (2015) - Multi-Perspective CNN&lt;br /&gt;
| He and Lin (2015)&lt;br /&gt;
| 0.762&lt;br /&gt;
| 0.830&lt;br /&gt;
|-&lt;br /&gt;
| Rao (2016) - PairwiseRank + Multi-Perspective CNN&lt;br /&gt;
| Rao et al. (2016)&lt;br /&gt;
| 0.780&lt;br /&gt;
| 0.834&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Algorithm - Clean Version of TREC QA&lt;br /&gt;
! Reference&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_average_precision MAP]&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_reciprocal_rank MRR]&lt;br /&gt;
|-&lt;br /&gt;
| W&amp;amp;I (2015)&lt;br /&gt;
| Wang and Ittycheriah (2015)&lt;br /&gt;
| 0.746&lt;br /&gt;
| 0.820&lt;br /&gt;
|-&lt;br /&gt;
| Tan (2015) - QA-LSTM/CNN+attention &lt;br /&gt;
| Tan et al. (2015)&lt;br /&gt;
| 0.728&lt;br /&gt;
| 0.832&lt;br /&gt;
|-&lt;br /&gt;
| dos Santos (2016) - Attentive Pooling CNN &lt;br /&gt;
| dos Santos et al. (2016)&lt;br /&gt;
| 0.753&lt;br /&gt;
| 0.851&lt;br /&gt;
|-&lt;br /&gt;
| Wang et al. (2016) - L.D.C Model&lt;br /&gt;
| Wang et al. (2016)&lt;br /&gt;
| 0.771&lt;br /&gt;
| 0.845&lt;br /&gt;
|-&lt;br /&gt;
| H&amp;amp;L (2015) - Multi-Perspective CNN&lt;br /&gt;
| He and Lin (2015)&lt;br /&gt;
| 0.777&lt;br /&gt;
| 0.836&lt;br /&gt;
|-&lt;br /&gt;
| Rao et al. (2016) - PairwiseRank + Multi-Perspective CNN&lt;br /&gt;
| Rao et al. (2016)&lt;br /&gt;
| 0.801&lt;br /&gt;
| 0.877&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Vasin Punyakanok, Dan Roth, and Wen-Tau Yih. 2004. [http://cogcomp.cs.illinois.edu/papers/PunyakanokRoYi04a.pdf Mapping dependencies trees: An application to question answering]. In Proceedings of the 8th International Symposium on Artificial Intelligence and Mathematics, Fort Lauderdale, FL, USA.&lt;br /&gt;
* Hang Cui, Renxu Sun, Keya Li, Min-Yen Kan, and Tat-Seng Chua. 2005. [http://ws.csie.ncku.edu.tw/login/upload/2005/paper/Question%20answering%20Question%20answering%20passage%20retrieval%20using%20dependency%20relations.pdf Question answering passage retrieval using dependency relations]. In Proceedings of the 28th ACM-SIGIR International Conference on Research and Development in Information Retrieval, Salvador, Brazil.&lt;br /&gt;
* Wang, Mengqiu and Smith, Noah A. and Mitamura, Teruko. 2007. [http://www.aclweb.org/anthology/D/D07/D07-1003.pdf What is the Jeopardy Model? A Quasi-Synchronous Grammar for QA]. In EMNLP-CoNLL 2007.&lt;br /&gt;
* Heilman, Michael and Smith, Noah A. 2010. [http://www.aclweb.org/anthology/N10-1145 Tree Edit Models for Recognizing Textual Entailments, Paraphrases, and Answers to Questions]. In NAACL-HLT 2010.&lt;br /&gt;
* Wang, Mengqiu and Manning, Christopher. 2010. [http://aclweb.org/anthology//C/C10/C10-1131.pdf Probabilistic Tree-Edit Models with Structured Latent Variables for Textual Entailment and Question Answering]. In COLING 2010.&lt;br /&gt;
* E. Shnarch. 2013. Probabilistic Models for Lexical Inference. Ph.D. thesis, Bar Ilan University.&lt;br /&gt;
* Yao, Xuchen and Van Durme, Benjamin and Callison-Burch, Chris and Clark, Peter. 2013. [http://www.aclweb.org/anthology/N13-1106.pdf Answer Extraction as Sequence Tagging with Tree Edit Distance]. In NAACL-HLT 2013.&lt;br /&gt;
* Yih, Wen-tau and Chang, Ming-Wei and Meek, Christopher and Pastusiak, Andrzej. 2013. [http://research.microsoft.com/pubs/192357/QA-SentSel-Updated-PostACL.pdf Question Answering Using Enhanced Lexical Semantic Models]. In ACL 2013.&lt;br /&gt;
* Severyn, Aliaksei and Moschitti, Alessandro. 2013. [http://www.aclweb.org/anthology/D13-1044.pdf Automatic Feature Engineering for Answer Selection and Extraction]. In EMNLP 2013.&lt;br /&gt;
* Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. [http://arxiv.org/pdf/1412.1632v1.pdf Deep Learning for Answer Sentence Selection]. In NIPS deep learning workshop.&lt;br /&gt;
* Di Wang and Eric Nyberg. 2015. [http://www.aclweb.org/anthology/P15-2116 A Long Short-Term Memory Model for Answer Sentence Selection in Question Answering]. In ACL 2015.&lt;br /&gt;
* Minwei Feng, Bing Xiang, Michael R. Glass, Lidan Wang, Bowen Zhou. 2015. [http://arxiv.org/abs/1508.01585 Applying deep learning to answer selection: A study and an open task]. In ASRU 2015.&lt;br /&gt;
* Aliaksei Severyn and Alessandro Moschitti. 2015. [http://disi.unitn.it/~severyn/papers/sigir-2015-long.pdf Learning to Rank Short Text Pairs with Convolutional Deep Neural Networks]. In SIGIR 2015.&lt;br /&gt;
* Zhiguo Wang and Abraham Ittycheriah. 2015. [http://arxiv.org/abs/1507.02628 FAQ-based Question Answering via Word Alignment]. In eprint arXiv:1507.02628.&lt;br /&gt;
* Ming Tan, Cicero dos Santos, Bing Xiang &amp;amp; Bowen Zhou. 2015. [http://arxiv.org/abs/1511.04108 LSTM-Based Deep Learning Models for Nonfactoid Answer Selection]. In eprint arXiv:1511.04108.&lt;br /&gt;
* Cicero dos Santos, Ming Tan, Bing Xiang &amp;amp; Bowen Zhou. 2016. [http://arxiv.org/abs/1602.03609 Attentive Pooling Networks]. In eprint arXiv:1602.03609.&lt;br /&gt;
* Zhiguo Wang, Haitao Mi and Abraham Ittycheriah. 2016. [http://arxiv.org/pdf/1602.07019v1.pdf Sentence Similarity Learning by Lexical Decomposition and Composition]. In COLING 2016.&lt;br /&gt;
* Hua He, Kevin Gimpel and Jimmy Lin. 2015. [http://aclweb.org/anthology/D/D15/D15-1181.pdf Multi-Perspective Sentence Similarity Modeling with Convolutional Neural Networks]. In EMNLP 2015.&lt;br /&gt;
* Hua He and Jimmy Lin. 2016. [https://cs.uwaterloo.ca/~jimmylin/publications/He_etal_NAACL-HTL2016.pdf Pairwise Word Interaction Modeling with Deep Neural Networks for Semantic Similarity Measurement]. In NAACL 2016.&lt;br /&gt;
* Jinfeng Rao, Hua He and Jimmy Lin. 2016. [http://www.cs.umd.edu/~jinfeng/publications/PairwiseNeuralNetwork_CIKM2016.pdf Noise-Contrastive Estimation for Answer Selection with Deep Neural Networks]. In CIKM 2016.&lt;br /&gt;
[[Category:State of the art]]&lt;/div&gt;</summary>
		<author><name>Zhiguo Wang</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/aclwiki/index.php?title=Question_Answering_(State_of_the_art)&amp;diff=11419</id>
		<title>Question Answering (State of the art)</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/index.php?title=Question_Answering_(State_of_the_art)&amp;diff=11419"/>
		<updated>2016-02-24T15:36:34Z</updated>

		<summary type="html">&lt;p&gt;Zhiguo Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Answer Sentence Selection ==&lt;br /&gt;
&lt;br /&gt;
The task of answer sentence selection is designed for the open-domain question answering setting. Given a question and a set of candidate sentences, the task is to choose the correct sentence that contains the exact answer and can sufficiently support the answer choice. &lt;br /&gt;
&lt;br /&gt;
* [http://cs.stanford.edu/people/mengqiu/data/qg-emnlp07-data.tgz QA Answer Sentence Selection Dataset]: labeled sentences using TREC QA track data, provided by [http://cs.stanford.edu/people/mengqiu/ Mengqiu Wang] and first used in [http://www.aclweb.org/anthology/D/D07/D07-1003.pdf Wang et al. (2007)].  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Algorithm&lt;br /&gt;
! Reference&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_average_precision MAP]&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_reciprocal_rank MRR]&lt;br /&gt;
|-&lt;br /&gt;
| Punyakanok (2004)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.419&lt;br /&gt;
| 0.494&lt;br /&gt;
|-&lt;br /&gt;
| Cui (2005)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.427&lt;br /&gt;
| 0.526&lt;br /&gt;
|-&lt;br /&gt;
| Wang (2007)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.603&lt;br /&gt;
| 0.685&lt;br /&gt;
|-&lt;br /&gt;
| H&amp;amp;S (2010)&lt;br /&gt;
| Heilman and Smith (2010)&lt;br /&gt;
| 0.609&lt;br /&gt;
| 0.692&lt;br /&gt;
|-&lt;br /&gt;
| W&amp;amp;M (2010)&lt;br /&gt;
| Wang and Manning (2010)&lt;br /&gt;
| 0.595&lt;br /&gt;
| 0.695&lt;br /&gt;
|-&lt;br /&gt;
| Yao (2013)&lt;br /&gt;
| Yao et al. (2013)&lt;br /&gt;
| 0.631&lt;br /&gt;
| 0.748&lt;br /&gt;
|-&lt;br /&gt;
| S&amp;amp;M (2013)&lt;br /&gt;
| Severyn and Moschitti (2013)&lt;br /&gt;
| 0.678&lt;br /&gt;
| 0.736&lt;br /&gt;
|-&lt;br /&gt;
| Shnarch (2013) - Backward &lt;br /&gt;
| Shnarch (2013)&lt;br /&gt;
| 0.686&lt;br /&gt;
| 0.754&lt;br /&gt;
|-&lt;br /&gt;
| Yih (2013) - LCLR&lt;br /&gt;
| Yih et al. (2013)&lt;br /&gt;
| 0.709&lt;br /&gt;
| 0.770&lt;br /&gt;
|-&lt;br /&gt;
| Yu (2014) - TRAIN-ALL bigram+count&lt;br /&gt;
| Yu et al. (2014)&lt;br /&gt;
| 0.711&lt;br /&gt;
| 0.785&lt;br /&gt;
|-&lt;br /&gt;
| W&amp;amp;N (2015) - Three-Layer BLSTM+BM25&lt;br /&gt;
| Wang and Nyberg (2015)&lt;br /&gt;
| 0.713&lt;br /&gt;
| 0.791&lt;br /&gt;
|-&lt;br /&gt;
| Feng (2015) - Architecture-II&lt;br /&gt;
| Tan et al. (2015)&lt;br /&gt;
| 0.711&lt;br /&gt;
| 0.800&lt;br /&gt;
|-&lt;br /&gt;
| S&amp;amp;M (2015)&lt;br /&gt;
| Severyn and Moschitti (2015)&lt;br /&gt;
| 0.746&lt;br /&gt;
| 0.808&lt;br /&gt;
|-&lt;br /&gt;
| W&amp;amp;I (2015)&lt;br /&gt;
| Wang and Ittycheriah (2015)&lt;br /&gt;
| 0.746&lt;br /&gt;
| 0.820&lt;br /&gt;
|-&lt;br /&gt;
| Tan (2015) - QA-LSTM/CNN+attention &lt;br /&gt;
| Tan et al. (2015)&lt;br /&gt;
| 0.728&lt;br /&gt;
| 0.832&lt;br /&gt;
|-&lt;br /&gt;
| dos Santos (2016) - Attentive Pooling CNN &lt;br /&gt;
| dos Santos et al. (2016)&lt;br /&gt;
| 0.753&lt;br /&gt;
| 0.851&lt;br /&gt;
|-&lt;br /&gt;
| Wang et al. (2016) - Lexical Decomposition and Composition&lt;br /&gt;
| Wang et al. (2016)&lt;br /&gt;
| 0.771&lt;br /&gt;
| 0.845&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Vasin Punyakanok, Dan Roth, and Wen-Tau Yih. 2004. [http://cogcomp.cs.illinois.edu/papers/PunyakanokRoYi04a.pdf Mapping dependencies trees: An application to question answering]. In Proceedings of the 8th International Symposium on Artificial Intelligence and Mathematics, Fort Lauderdale, FL, USA.&lt;br /&gt;
* Hang Cui, Renxu Sun, Keya Li, Min-Yen Kan, and Tat-Seng Chua. 2005. [http://ws.csie.ncku.edu.tw/login/upload/2005/paper/Question%20answering%20Question%20answering%20passage%20retrieval%20using%20dependency%20relations.pdf Question answering passage retrieval using dependency relations]. In Proceedings of the 28th ACM-SIGIR International Conference on Research and Development in Information Retrieval, Salvador, Brazil.&lt;br /&gt;
* Wang, Mengqiu and Smith, Noah A. and Mitamura, Teruko. 2007. [http://www.aclweb.org/anthology/D/D07/D07-1003.pdf What is the Jeopardy Model? A Quasi-Synchronous Grammar for QA]. In EMNLP-CoNLL 2007.&lt;br /&gt;
* Heilman, Michael and Smith, Noah A. 2010. [http://www.aclweb.org/anthology/N10-1145 Tree Edit Models for Recognizing Textual Entailments, Paraphrases, and Answers to Questions]. In NAACL-HLT 2010.&lt;br /&gt;
* Wang, Mengqiu and Manning, Christopher. 2010. [http://aclweb.org/anthology//C/C10/C10-1131.pdf Probabilistic Tree-Edit Models with Structured Latent Variables for Textual Entailment and Question Answering]. In COLING 2010.&lt;br /&gt;
* E. Shnarch. 2013. Probabilistic Models for Lexical Inference. Ph.D. thesis, Bar Ilan University.&lt;br /&gt;
* Yao, Xuchen and Van Durme, Benjamin and Callison-Burch, Chris and Clark, Peter. 2013. [http://www.aclweb.org/anthology/N13-1106.pdf Answer Extraction as Sequence Tagging with Tree Edit Distance]. In NAACL-HLT 2013.&lt;br /&gt;
* Yih, Wen-tau and Chang, Ming-Wei and Meek, Christopher and Pastusiak, Andrzej. 2013. [http://research.microsoft.com/pubs/192357/QA-SentSel-Updated-PostACL.pdf Question Answering Using Enhanced Lexical Semantic Models]. In ACL 2013.&lt;br /&gt;
* Severyn, Aliaksei and Moschitti, Alessandro. 2013. [http://www.aclweb.org/anthology/D13-1044.pdf Automatic Feature Engineering for Answer Selection and Extraction]. In EMNLP 2013.&lt;br /&gt;
* Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. [http://arxiv.org/pdf/1412.1632v1.pdf Deep Learning for Answer Sentence Selection]. In NIPS deep learning workshop.&lt;br /&gt;
* Di Wang and Eric Nyberg. 2015. [http://www.aclweb.org/anthology/P15-2116 A Long Short-Term Memory Model for Answer Sentence Selection in Question Answering]. In ACL 2015.&lt;br /&gt;
* Minwei Feng, Bing Xiang, Michael R. Glass, Lidan Wang, Bowen Zhou. 2015. [http://arxiv.org/abs/1508.01585 Applying deep learning to answer selection: A study and an open task]. In ASRU 2015.&lt;br /&gt;
* Aliaksei Severyn and Alessandro Moschitti. 2015. [http://disi.unitn.it/~severyn/papers/sigir-2015-long.pdf Learning to Rank Short Text Pairs with Convolutional Deep Neural Networks]. In SIGIR 2015.&lt;br /&gt;
* Zhiguo Wang and Abraham Ittycheriah. 2015. [http://arxiv.org/abs/1507.02628 FAQ-based Question Answering via Word Alignment]. In eprint arXiv:1507.02628.&lt;br /&gt;
* Ming Tan, Cicero dos Santos, Bing Xiang &amp;amp; Bowen Zhou. 2015. [http://arxiv.org/abs/1511.04108 LSTM-Based Deep Learning Models for Nonfactoid Answer Selection]. In eprint arXiv:1511.04108.&lt;br /&gt;
* Cicero dos Santos, Ming Tan, Bing Xiang &amp;amp; Bowen Zhou. 2016. [http://arxiv.org/abs/1602.03609 Attentive Pooling Networks]. In eprint arXiv:1602.03609.&lt;br /&gt;
* Zhiguo Wang, Haitao Mi and Abraham Ittycheriah. 2016. [http://arxiv.org/pdf/1602.07019v1.pdf Sentence Similarity Learning by Lexical Decomposition and Composition]. In eprint arXiv:1602.07019.&lt;br /&gt;
[[Category:State of the art]]&lt;/div&gt;</summary>
		<author><name>Zhiguo Wang</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/aclwiki/index.php?title=Question_Answering_(State_of_the_art)&amp;diff=11371</id>
		<title>Question Answering (State of the art)</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/index.php?title=Question_Answering_(State_of_the_art)&amp;diff=11371"/>
		<updated>2016-01-21T16:01:44Z</updated>

		<summary type="html">&lt;p&gt;Zhiguo Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Answer Sentence Selection ==&lt;br /&gt;
&lt;br /&gt;
The task of answer sentence selection is designed for the open-domain question answering setting. Given a question and a set of candidate sentences, the task is to choose the correct sentence that contains the exact answer and can sufficiently support the answer choice. &lt;br /&gt;
&lt;br /&gt;
* [http://cs.stanford.edu/people/mengqiu/data/qg-emnlp07-data.tgz QA Answer Sentence Selection Dataset]: labeled sentences using TREC QA track data, provided by [http://cs.stanford.edu/people/mengqiu/ Mengqiu Wang] and first used in [http://www.aclweb.org/anthology/D/D07/D07-1003.pdf Wang et al. (2007)].  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Algorithm&lt;br /&gt;
! Reference&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_average_precision MAP]&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_reciprocal_rank MRR]&lt;br /&gt;
|-&lt;br /&gt;
| Punyakanok (2004)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.419&lt;br /&gt;
| 0.494&lt;br /&gt;
|-&lt;br /&gt;
| Cui (2005)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.427&lt;br /&gt;
| 0.526&lt;br /&gt;
|-&lt;br /&gt;
| Wang (2007)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.603&lt;br /&gt;
| 0.685&lt;br /&gt;
|-&lt;br /&gt;
| H&amp;amp;S (2010)&lt;br /&gt;
| Heilman and Smith (2010)&lt;br /&gt;
| 0.609&lt;br /&gt;
| 0.692&lt;br /&gt;
|-&lt;br /&gt;
| W&amp;amp;M (2010)&lt;br /&gt;
| Wang and Manning (2010)&lt;br /&gt;
| 0.595&lt;br /&gt;
| 0.695&lt;br /&gt;
|-&lt;br /&gt;
| Yao (2013)&lt;br /&gt;
| Yao et al. (2013)&lt;br /&gt;
| 0.631&lt;br /&gt;
| 0.748&lt;br /&gt;
|-&lt;br /&gt;
| S&amp;amp;M (2013)&lt;br /&gt;
| Severyn and Moschitti (2013)&lt;br /&gt;
| 0.678&lt;br /&gt;
| 0.736&lt;br /&gt;
|-&lt;br /&gt;
| Shnarch (2013) - Backward &lt;br /&gt;
| Shnarch (2013)&lt;br /&gt;
| 0.686&lt;br /&gt;
| 0.754&lt;br /&gt;
|-&lt;br /&gt;
| Yih (2013) - LCLR&lt;br /&gt;
| Yih et al. (2013)&lt;br /&gt;
| 0.709&lt;br /&gt;
| 0.770&lt;br /&gt;
|-&lt;br /&gt;
| Yu (2014) - TRAIN-ALL bigram+count&lt;br /&gt;
| Yu et al. (2014)&lt;br /&gt;
| 0.711&lt;br /&gt;
| 0.785&lt;br /&gt;
|-&lt;br /&gt;
| S&amp;amp;M (2015)&lt;br /&gt;
| Severyn and Moschitti (2015)&lt;br /&gt;
| 0.746&lt;br /&gt;
| 0.808&lt;br /&gt;
|-&lt;br /&gt;
| W&amp;amp;I (2015)&lt;br /&gt;
| Wang and Ittycheriah (2015)&lt;br /&gt;
| 0.746&lt;br /&gt;
| 0.820&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Vasin Punyakanok, Dan Roth, and Wen-Tau Yih. 2004. [http://cogcomp.cs.illinois.edu/papers/PunyakanokRoYi04a.pdf Mapping dependencies trees: An application to question answering]. In Proceedings of the 8th International Symposium on Artificial Intelligence and Mathematics, Fort Lauderdale, FL, USA.&lt;br /&gt;
* Hang Cui, Renxu Sun, Keya Li, Min-Yen Kan, and Tat-Seng Chua. 2005. [http://ws.csie.ncku.edu.tw/login/upload/2005/paper/Question%20answering%20Question%20answering%20passage%20retrieval%20using%20dependency%20relations.pdf Question answering passage retrieval using dependency relations]. In Proceedings of the 28th ACM-SIGIR International Conference on Research and Development in Information Retrieval, Salvador, Brazil.&lt;br /&gt;
* Wang, Mengqiu and Smith, Noah A. and Mitamura, Teruko. 2007. [http://www.aclweb.org/anthology/D/D07/D07-1003.pdf What is the Jeopardy Model? A Quasi-Synchronous Grammar for QA]. In EMNLP-CoNLL 2007.&lt;br /&gt;
* Heilman, Michael and Smith, Noah A. 2010. [http://www.aclweb.org/anthology/N10-1145 Tree Edit Models for Recognizing Textual Entailments, Paraphrases, and Answers to Questions]. In NAACL-HLT 2010.&lt;br /&gt;
* Wang, Mengqiu and Manning, Christopher. 2010. [http://aclweb.org/anthology//C/C10/C10-1131.pdf Probabilistic Tree-Edit Models with Structured Latent Variables for Textual Entailment and Question Answering]. In COLING 2010.&lt;br /&gt;
* E. Shnarch. 2013. Probabilistic Models for Lexical Inference. Ph.D. thesis, Bar Ilan University.&lt;br /&gt;
* Yao, Xuchen and Van Durme, Benjamin and Callison-Burch, Chris and Clark, Peter. 2013. [http://www.aclweb.org/anthology/N13-1106.pdf Answer Extraction as Sequence Tagging with Tree Edit Distance]. In NAACL-HLT 2013.&lt;br /&gt;
* Yih, Wen-tau and Chang, Ming-Wei and Meek, Christopher and Pastusiak, Andrzej. 2013. [http://research.microsoft.com/pubs/192357/QA-SentSel-Updated-PostACL.pdf Question Answering Using Enhanced Lexical Semantic Models]. In ACL 2013.&lt;br /&gt;
* Severyn, Aliaksei and Moschitti, Alessandro. 2013. [http://www.aclweb.org/anthology/D13-1044.pdf Automatic Feature Engineering for Answer Selection and Extraction]. In EMNLP 2013.&lt;br /&gt;
* Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. [http://arxiv.org/pdf/1412.1632v1.pdf Deep Learning for Answer Sentence Selection]. In NIPS deep learning workshop.&lt;br /&gt;
* Zhiguo Wang and Abraham Ittycheriah. 2015. [http://arxiv.org/abs/1507.02628 FAQ-based Question Answering via Word Alignment]. In eprint arXiv:1507.02628.&lt;br /&gt;
* Aliaksei Severyn and Alessandro Moschitti. 2015. [http://disi.unitn.it/~severyn/papers/sigir-2015-long.pdf Learning to Rank Short Text Pairs with Convolutional Deep Neural Networks]. In SIGIR 2015.&lt;br /&gt;
[[Category:State of the art]]&lt;/div&gt;</summary>
		<author><name>Zhiguo Wang</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/aclwiki/index.php?title=Question_Answering_(State_of_the_art)&amp;diff=11370</id>
		<title>Question Answering (State of the art)</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/index.php?title=Question_Answering_(State_of_the_art)&amp;diff=11370"/>
		<updated>2016-01-21T16:01:05Z</updated>

		<summary type="html">&lt;p&gt;Zhiguo Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Answer Sentence Selection ==&lt;br /&gt;
&lt;br /&gt;
The task of answer sentence selection is designed for the open-domain question answering setting. Given a question and a set of candidate sentences, the task is to choose the correct sentence that contains the exact answer and can sufficiently support the answer choice. &lt;br /&gt;
&lt;br /&gt;
* [http://cs.stanford.edu/people/mengqiu/data/qg-emnlp07-data.tgz QA Answer Sentence Selection Dataset]: labeled sentences using TREC QA track data, provided by [http://cs.stanford.edu/people/mengqiu/ Mengqiu Wang] and first used in [http://www.aclweb.org/anthology/D/D07/D07-1003.pdf Wang et al. (2007)].  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Algorithm&lt;br /&gt;
! Reference&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_average_precision MAP]&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_reciprocal_rank MRR]&lt;br /&gt;
|-&lt;br /&gt;
| Punyakanok (2004)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.419&lt;br /&gt;
| 0.494&lt;br /&gt;
|-&lt;br /&gt;
| Cui (2005)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.427&lt;br /&gt;
| 0.526&lt;br /&gt;
|-&lt;br /&gt;
| Wang (2007)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.603&lt;br /&gt;
| 0.685&lt;br /&gt;
|-&lt;br /&gt;
| H&amp;amp;S (2010)&lt;br /&gt;
| Heilman and Smith (2010)&lt;br /&gt;
| 0.609&lt;br /&gt;
| 0.692&lt;br /&gt;
|-&lt;br /&gt;
| W&amp;amp;M (2010)&lt;br /&gt;
| Wang and Manning (2010)&lt;br /&gt;
| 0.595&lt;br /&gt;
| 0.695&lt;br /&gt;
|-&lt;br /&gt;
| Yao (2013)&lt;br /&gt;
| Yao et al. (2013)&lt;br /&gt;
| 0.631&lt;br /&gt;
| 0.748&lt;br /&gt;
|-&lt;br /&gt;
| S&amp;amp;M (2013)&lt;br /&gt;
| Severyn and Moschitti (2013)&lt;br /&gt;
| 0.678&lt;br /&gt;
| 0.736&lt;br /&gt;
|-&lt;br /&gt;
| Shnarch (2013) - Backward &lt;br /&gt;
| Shnarch (2013)&lt;br /&gt;
| 0.686&lt;br /&gt;
| 0.754&lt;br /&gt;
|-&lt;br /&gt;
| Yih (2013) - LCLR&lt;br /&gt;
| Yih et al. (2013)&lt;br /&gt;
| 0.709&lt;br /&gt;
| 0.770&lt;br /&gt;
|-&lt;br /&gt;
| Yu (2014) - TRAIN-ALL bigram+count&lt;br /&gt;
| Yu et al. (2014)&lt;br /&gt;
| 0.711&lt;br /&gt;
| 0.785&lt;br /&gt;
|-&lt;br /&gt;
| S&amp;amp;M (2015)&lt;br /&gt;
| Severyn and Moschitti (2015)&lt;br /&gt;
| 0.746&lt;br /&gt;
| 0.808&lt;br /&gt;
|-&lt;br /&gt;
| W&amp;amp;I (2015)&lt;br /&gt;
| Wang and Ittycheriah (2015)&lt;br /&gt;
| 0.746&lt;br /&gt;
| 0.820&lt;br /&gt;
|}&lt;br /&gt;
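The MAP and MRR scores above are averages over per-question ranked candidate lists. A minimal sketch of how they are computed (function names are illustrative, not from any cited system; each question is a 0/1 relevance list in ranked order):&lt;br /&gt;

```python
def average_precision(labels):
    """AP for one question: labels are 0/1 relevance of candidates in ranked order."""
    hits, precisions = 0, []
    for rank, rel in enumerate(labels, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)  # precision at each relevant hit
    return sum(precisions) / max(hits, 1)   # 0.0 if the question has no relevant candidate

def reciprocal_rank(labels):
    """1/rank of the first relevant candidate, or 0.0 if none is relevant."""
    for rank, rel in enumerate(labels, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

def map_mrr(questions):
    """Mean AP and mean RR over a list of per-question ranked label lists."""
    n = len(questions)
    mean_ap = sum(average_precision(q) for q in questions) / n
    mean_rr = sum(reciprocal_rank(q) for q in questions) / n
    return mean_ap, mean_rr
```

Note that published numbers also depend on which questions are kept in the evaluation set (see the Raw vs. Clean split discussion above), so scores are only comparable within the same version of the dataset.&lt;br /&gt;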
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Vasin Punyakanok, Dan Roth, and Wen-Tau Yih. 2004. [http://cogcomp.cs.illinois.edu/papers/PunyakanokRoYi04a.pdf Mapping dependencies trees: An application to question answering]. In Proceedings of the 8th International Symposium on Artificial Intelligence and Mathematics, Fort Lauderdale, FL, USA.&lt;br /&gt;
* Hang Cui, Renxu Sun, Keya Li, Min-Yen Kan, and Tat-Seng Chua. 2005. [http://ws.csie.ncku.edu.tw/login/upload/2005/paper/Question%20answering%20Question%20answering%20passage%20retrieval%20using%20dependency%20relations.pdf Question answering passage retrieval using dependency relations]. In Proceedings of the 28th ACM-SIGIR International Conference on Research and Development in Information Retrieval, Salvador, Brazil.&lt;br /&gt;
* Wang, Mengqiu and Smith, Noah A. and Mitamura, Teruko. 2007. [http://www.aclweb.org/anthology/D/D07/D07-1003.pdf What is the Jeopardy Model? A Quasi-Synchronous Grammar for QA]. In EMNLP-CoNLL 2007.&lt;br /&gt;
* Heilman, Michael and Smith, Noah A. 2010. [http://www.aclweb.org/anthology/N10-1145 Tree Edit Models for Recognizing Textual Entailments, Paraphrases, and Answers to Questions]. In NAACL-HLT 2010.&lt;br /&gt;
* Wang, Mengqiu and Manning, Christopher. 2010. [http://aclweb.org/anthology//C/C10/C10-1131.pdf Probabilistic Tree-Edit Models with Structured Latent Variables for Textual Entailment and Question Answering]. In COLING 2010.&lt;br /&gt;
* E. Shnarch. 2013. Probabilistic Models for Lexical Inference. Ph.D. thesis, Bar Ilan University.&lt;br /&gt;
* Yao, Xuchen and Van Durme, Benjamin and Callison-Burch, Chris and Clark, Peter. 2013. [http://www.aclweb.org/anthology/N13-1106.pdf Answer Extraction as Sequence Tagging with Tree Edit Distance]. In NAACL-HLT 2013.&lt;br /&gt;
* Yih, Wen-tau and Chang, Ming-Wei and Meek, Christopher and Pastusiak, Andrzej. 2013. [http://research.microsoft.com/pubs/192357/QA-SentSel-Updated-PostACL.pdf Question Answering Using Enhanced Lexical Semantic Models]. In ACL 2013.&lt;br /&gt;
* Severyn, Aliaksei and Moschitti, Alessandro. 2013. [http://www.aclweb.org/anthology/D13-1044.pdf Automatic Feature Engineering for Answer Selection and Extraction]. In EMNLP 2013.&lt;br /&gt;
* Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. [http://arxiv.org/pdf/1412.1632v1.pdf Deep Learning for Answer Sentence Selection]. In NIPS deep learning workshop.&lt;br /&gt;
* Zhiguo Wang and Abraham Ittycheriah. 2015. [http://arxiv.org/abs/1507.02628 FAQ-based Question Answering via Word Alignment]. arXiv preprint arXiv:1507.02628.&lt;br /&gt;
* Aliaksei Severyn and Alessandro Moschitti. 2015. [http://disi.unitn.it/~severyn/papers/sigir-2015-long.pdf Learning to Rank Short Text Pairs with Convolutional Deep Neural Networks]. In SIGIR 2015.&lt;br /&gt;
[[Category:State of the art]]&lt;/div&gt;</summary>
		<author><name>Zhiguo Wang</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/aclwiki/index.php?title=Question_Answering_(State_of_the_art)&amp;diff=11166</id>
		<title>Question Answering (State of the art)</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/index.php?title=Question_Answering_(State_of_the_art)&amp;diff=11166"/>
		<updated>2015-07-10T13:24:43Z</updated>

		<summary type="html">&lt;p&gt;Zhiguo Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Answer Sentence Selection ==&lt;br /&gt;
&lt;br /&gt;
The task of answer sentence selection is designed for the open-domain question answering setting. Given a question and a set of candidate sentences, the task is to choose the correct sentence that contains the exact answer and can sufficiently support the answer choice. &lt;br /&gt;
&lt;br /&gt;
* [http://cs.stanford.edu/people/mengqiu/data/qg-emnlp07-data.tgz QA Answer Sentence Selection Dataset]: labeled sentences using TREC QA track data, provided by [http://cs.stanford.edu/people/mengqiu/ Mengqiu Wang] and first used in [http://www.aclweb.org/anthology/D/D07/D07-1003.pdf Wang et al. (2007)].  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Algorithm&lt;br /&gt;
! Reference&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_average_precision MAP]&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_reciprocal_rank MRR]&lt;br /&gt;
|-&lt;br /&gt;
| Punyakanok (2004)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.419&lt;br /&gt;
| 0.494&lt;br /&gt;
|-&lt;br /&gt;
| Cui (2005)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.427&lt;br /&gt;
| 0.526&lt;br /&gt;
|-&lt;br /&gt;
| Wang (2007)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.603&lt;br /&gt;
| 0.685&lt;br /&gt;
|-&lt;br /&gt;
| H&amp;amp;S (2010)&lt;br /&gt;
| Heilman and Smith (2010)&lt;br /&gt;
| 0.609&lt;br /&gt;
| 0.692&lt;br /&gt;
|-&lt;br /&gt;
| W&amp;amp;M (2010)&lt;br /&gt;
| Wang and Manning (2010)&lt;br /&gt;
| 0.595&lt;br /&gt;
| 0.695&lt;br /&gt;
|-&lt;br /&gt;
| Yao (2013)&lt;br /&gt;
| Yao et al. (2013)&lt;br /&gt;
| 0.631&lt;br /&gt;
| 0.748&lt;br /&gt;
|-&lt;br /&gt;
| S&amp;amp;M (2013)&lt;br /&gt;
| Severyn and Moschitti (2013)&lt;br /&gt;
| 0.678&lt;br /&gt;
| 0.736&lt;br /&gt;
|-&lt;br /&gt;
| Shnarch (2013) - Backward &lt;br /&gt;
| Shnarch (2013)&lt;br /&gt;
| 0.686&lt;br /&gt;
| 0.754&lt;br /&gt;
|-&lt;br /&gt;
| Yih (2013) - LCLR&lt;br /&gt;
| Yih et al. (2013)&lt;br /&gt;
| 0.709&lt;br /&gt;
| 0.770&lt;br /&gt;
|-&lt;br /&gt;
| Yu (2014) - TRAIN-ALL bigram+count&lt;br /&gt;
| Yu et al. (2014)&lt;br /&gt;
| 0.711&lt;br /&gt;
| 0.785&lt;br /&gt;
|-&lt;br /&gt;
| W&amp;amp;I (2015)&lt;br /&gt;
| Wang and Ittycheriah (2015)&lt;br /&gt;
| 0.746&lt;br /&gt;
| 0.820&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Vasin Punyakanok, Dan Roth, and Wen-Tau Yih. 2004. [http://cogcomp.cs.illinois.edu/papers/PunyakanokRoYi04a.pdf Mapping dependencies trees: An application to question answering]. In Proceedings of the 8th International Symposium on Artificial Intelligence and Mathematics, Fort Lauderdale, FL, USA.&lt;br /&gt;
* Hang Cui, Renxu Sun, Keya Li, Min-Yen Kan, and Tat-Seng Chua. 2005. [http://ws.csie.ncku.edu.tw/login/upload/2005/paper/Question%20answering%20Question%20answering%20passage%20retrieval%20using%20dependency%20relations.pdf Question answering passage retrieval using dependency relations]. In Proceedings of the 28th ACM-SIGIR International Conference on Research and Development in Information Retrieval, Salvador, Brazil.&lt;br /&gt;
* Wang, Mengqiu and Smith, Noah A. and Mitamura, Teruko. 2007. [http://www.aclweb.org/anthology/D/D07/D07-1003.pdf What is the Jeopardy Model? A Quasi-Synchronous Grammar for QA]. In EMNLP-CoNLL 2007.&lt;br /&gt;
* Heilman, Michael and Smith, Noah A. 2010. [http://www.aclweb.org/anthology/N10-1145 Tree Edit Models for Recognizing Textual Entailments, Paraphrases, and Answers to Questions]. In NAACL-HLT 2010.&lt;br /&gt;
* Wang, Mengqiu and Manning, Christopher. 2010. [http://aclweb.org/anthology//C/C10/C10-1131.pdf Probabilistic Tree-Edit Models with Structured Latent Variables for Textual Entailment and Question Answering]. In COLING 2010.&lt;br /&gt;
* E. Shnarch. 2013. Probabilistic Models for Lexical Inference. Ph.D. thesis, Bar Ilan University.&lt;br /&gt;
* Yao, Xuchen and Van Durme, Benjamin and Callison-Burch, Chris and Clark, Peter. 2013. [http://www.aclweb.org/anthology/N13-1106.pdf Answer Extraction as Sequence Tagging with Tree Edit Distance]. In NAACL-HLT 2013.&lt;br /&gt;
* Yih, Wen-tau and Chang, Ming-Wei and Meek, Christopher and Pastusiak, Andrzej. 2013. [http://research.microsoft.com/pubs/192357/QA-SentSel-Updated-PostACL.pdf Question Answering Using Enhanced Lexical Semantic Models]. In ACL 2013.&lt;br /&gt;
* Severyn, Aliaksei and Moschitti, Alessandro. 2013. [http://www.aclweb.org/anthology/D13-1044.pdf Automatic Feature Engineering for Answer Selection and Extraction]. In EMNLP 2013.&lt;br /&gt;
* Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. [http://arxiv.org/pdf/1412.1632v1.pdf Deep Learning for Answer Sentence Selection]. In NIPS deep learning workshop.&lt;br /&gt;
* Zhiguo Wang and Abraham Ittycheriah. 2015. [http://arxiv.org/abs/1507.02628 FAQ-based Question Answering via Word Alignment]. arXiv preprint arXiv:1507.02628.&lt;br /&gt;
[[Category:State of the art]]&lt;/div&gt;</summary>
		<author><name>Zhiguo Wang</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/aclwiki/index.php?title=Question_Answering_(State_of_the_art)&amp;diff=11165</id>
		<title>Question Answering (State of the art)</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/index.php?title=Question_Answering_(State_of_the_art)&amp;diff=11165"/>
		<updated>2015-07-10T12:52:08Z</updated>

		<summary type="html">&lt;p&gt;Zhiguo Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Answer Sentence Selection ==&lt;br /&gt;
&lt;br /&gt;
The task of answer sentence selection is designed for the open-domain question answering setting. Given a question and a set of candidate sentences, the task is to choose the correct sentence that contains the exact answer and can sufficiently support the answer choice. &lt;br /&gt;
&lt;br /&gt;
* [http://cs.stanford.edu/people/mengqiu/data/qg-emnlp07-data.tgz QA Answer Sentence Selection Dataset]: labeled sentences using TREC QA track data, provided by [http://cs.stanford.edu/people/mengqiu/ Mengqiu Wang] and first used in [http://www.aclweb.org/anthology/D/D07/D07-1003.pdf Wang et al. (2007)].  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Algorithm&lt;br /&gt;
! Reference&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_average_precision MAP]&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_reciprocal_rank MRR]&lt;br /&gt;
|-&lt;br /&gt;
| Punyakanok (2004)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.419&lt;br /&gt;
| 0.494&lt;br /&gt;
|-&lt;br /&gt;
| Cui (2005)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.427&lt;br /&gt;
| 0.526&lt;br /&gt;
|-&lt;br /&gt;
| Wang (2007)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.603&lt;br /&gt;
| 0.685&lt;br /&gt;
|-&lt;br /&gt;
| H&amp;amp;S (2010)&lt;br /&gt;
| Heilman and Smith (2010)&lt;br /&gt;
| 0.609&lt;br /&gt;
| 0.692&lt;br /&gt;
|-&lt;br /&gt;
| W&amp;amp;M (2010)&lt;br /&gt;
| Wang and Manning (2010)&lt;br /&gt;
| 0.595&lt;br /&gt;
| 0.695&lt;br /&gt;
|-&lt;br /&gt;
| Yao (2013)&lt;br /&gt;
| Yao et al. (2013)&lt;br /&gt;
| 0.631&lt;br /&gt;
| 0.748&lt;br /&gt;
|-&lt;br /&gt;
| S&amp;amp;M (2013)&lt;br /&gt;
| Severyn and Moschitti (2013)&lt;br /&gt;
| 0.678&lt;br /&gt;
| 0.736&lt;br /&gt;
|-&lt;br /&gt;
| Shnarch (2013) - Backward &lt;br /&gt;
| Shnarch (2013)&lt;br /&gt;
| 0.686&lt;br /&gt;
| 0.754&lt;br /&gt;
|-&lt;br /&gt;
| Yih (2013) - LCLR&lt;br /&gt;
| Yih et al. (2013)&lt;br /&gt;
| 0.709&lt;br /&gt;
| 0.770&lt;br /&gt;
|-&lt;br /&gt;
| Yu (2014) - TRAIN-ALL bigram+count&lt;br /&gt;
| Yu et al. (2014)&lt;br /&gt;
| 0.711&lt;br /&gt;
| 0.785&lt;br /&gt;
|-&lt;br /&gt;
| W&amp;amp;I (2015)&lt;br /&gt;
| Wang and Ittycheriah (2015)&lt;br /&gt;
| 0.746&lt;br /&gt;
| 0.820&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Vasin Punyakanok, Dan Roth, and Wen-Tau Yih. 2004. [http://cogcomp.cs.illinois.edu/papers/PunyakanokRoYi04a.pdf Mapping dependencies trees: An application to question answering]. In Proceedings of the 8th International Symposium on Artificial Intelligence and Mathematics, Fort Lauderdale, FL, USA.&lt;br /&gt;
* Hang Cui, Renxu Sun, Keya Li, Min-Yen Kan, and Tat-Seng Chua. 2005. [http://ws.csie.ncku.edu.tw/login/upload/2005/paper/Question%20answering%20Question%20answering%20passage%20retrieval%20using%20dependency%20relations.pdf Question answering passage retrieval using dependency relations]. In Proceedings of the 28th ACM-SIGIR International Conference on Research and Development in Information Retrieval, Salvador, Brazil.&lt;br /&gt;
* Wang, Mengqiu and Smith, Noah A. and Mitamura, Teruko. 2007. [http://www.aclweb.org/anthology/D/D07/D07-1003.pdf What is the Jeopardy Model? A Quasi-Synchronous Grammar for QA]. In EMNLP-CoNLL 2007.&lt;br /&gt;
* Heilman, Michael and Smith, Noah A. 2010. [http://www.aclweb.org/anthology/N10-1145 Tree Edit Models for Recognizing Textual Entailments, Paraphrases, and Answers to Questions]. In NAACL-HLT 2010.&lt;br /&gt;
* Wang, Mengqiu and Manning, Christopher. 2010. [http://aclweb.org/anthology//C/C10/C10-1131.pdf Probabilistic Tree-Edit Models with Structured Latent Variables for Textual Entailment and Question Answering]. In COLING 2010.&lt;br /&gt;
* E. Shnarch. 2013. Probabilistic Models for Lexical Inference. Ph.D. thesis, Bar Ilan University.&lt;br /&gt;
* Yao, Xuchen and Van Durme, Benjamin and Callison-Burch, Chris and Clark, Peter. 2013. [http://www.aclweb.org/anthology/N13-1106.pdf Answer Extraction as Sequence Tagging with Tree Edit Distance]. In NAACL-HLT 2013.&lt;br /&gt;
* Yih, Wen-tau and Chang, Ming-Wei and Meek, Christopher and Pastusiak, Andrzej. 2013. [http://research.microsoft.com/pubs/192357/QA-SentSel-Updated-PostACL.pdf Question Answering Using Enhanced Lexical Semantic Models]. In ACL 2013.&lt;br /&gt;
* Severyn, Aliaksei and Moschitti, Alessandro. 2013. [http://www.aclweb.org/anthology/D13-1044.pdf Automatic Feature Engineering for Answer Selection and Extraction]. In EMNLP 2013.&lt;br /&gt;
* Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. [http://arxiv.org/pdf/1412.1632v1.pdf Deep Learning for Answer Sentence Selection]. In NIPS deep learning workshop.&lt;br /&gt;
* Zhiguo Wang and Abraham Ittycheriah. 2015. [http://arxiv.org/abs/1507.02628 FAQ-based Question Answering via Word Alignment]. arXiv preprint arXiv:1507.02628.&lt;br /&gt;
[[Category:State of the art]]&lt;/div&gt;</summary>
		<author><name>Zhiguo Wang</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/aclwiki/index.php?title=User:Zhiguo_Wang&amp;diff=11055</id>
		<title>User:Zhiguo Wang</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/index.php?title=User:Zhiguo_Wang&amp;diff=11055"/>
		<updated>2015-06-04T14:42:11Z</updated>

		<summary type="html">&lt;p&gt;Zhiguo Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;I&#039;m a research staff member at IBM Watson research center. My research interests include statistical parsing, question answering and machine learning.&lt;/div&gt;</summary>
		<author><name>Zhiguo Wang</name></author>
	</entry>
</feed>