<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.aclweb.org/aclwiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Lyangumass</id>
	<title>ACL Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.aclweb.org/aclwiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Lyangumass"/>
	<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/Special:Contributions/Lyangumass"/>
	<updated>2026-04-09T00:07:26Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.6</generator>
	<entry>
		<id>https://www.aclweb.org/aclwiki/index.php?title=Question_Answering_(State_of_the_art)&amp;diff=11700</id>
		<title>Question Answering (State of the art)</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/index.php?title=Question_Answering_(State_of_the_art)&amp;diff=11700"/>
		<updated>2016-11-27T23:37:04Z</updated>

		<summary type="html">&lt;p&gt;Lyangumass: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Answer Sentence Selection ==&lt;br /&gt;
&lt;br /&gt;
The task of answer sentence selection is defined for the open-domain question answering setting: given a question and a set of candidate sentences, the goal is to choose the sentence that contains the exact answer and can sufficiently support the answer choice. &lt;br /&gt;
&lt;br /&gt;
* [http://cs.stanford.edu/people/mengqiu/data/qg-emnlp07-data.tgz QA Answer Sentence Selection Dataset]: labeled sentences using TREC QA track data, provided by [http://cs.stanford.edu/people/mengqiu/ Mengqiu Wang] and first used in [http://www.aclweb.org/anthology/D/D07/D07-1003.pdf Wang et al. (2007)]. &lt;br /&gt;
* Over time, the original dataset diverged into two versions due to different pre-processing in recent publications: both share the same training set, but their development and test sets differ. The Raw version has 82 questions in the development set and 100 in the test set; the Clean version (Wang and Ittycheriah 2015, Tan et al. 2015, dos Santos et al. 2016, Wang et al. 2016) removed questions with no answers or with only positive or only negative answers, and thus has only 65 questions in the development set and 68 in the test set. &lt;br /&gt;
* Note: MAP/MRR scores on the two versions of TREC QA data (Clean vs Raw) are not comparable according to [http://www.cs.umd.edu/~jinfeng/publications/PairwiseNeuralNetwork_CIKM2016.pdf Rao et al. (2016)]. &lt;br /&gt;
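For readers unfamiliar with the two metrics linked above, here is a minimal Python sketch (using toy 0/1 relevance labels, not the actual TREC QA files) of how MAP and MRR are computed over ranked candidate lists:

```python
# Hedged sketch: MAP and MRR over ranked candidate lists.
# 'rankings' holds one list per question: 0/1 relevance labels of the
# candidate sentences in ranked order (hypothetical toy data).

def average_precision(labels):
    # Mean of precision@k taken at each relevant position.
    hits, total = 0, 0.0
    for i, rel in enumerate(labels, start=1):
        if rel:
            hits += 1
            total += hits / i
    return total / max(1, sum(labels))

def reciprocal_rank(labels):
    # 1 / rank of the first relevant candidate; 0 if none is relevant.
    for i, rel in enumerate(labels, start=1):
        if rel:
            return 1.0 / i
    return 0.0

rankings = [[0, 1, 1], [1, 0, 0]]
map_score = sum(average_precision(q) for q in rankings) / len(rankings)
mrr_score = sum(reciprocal_rank(q) for q in rankings) / len(rankings)
```

MAP averages the per-question average precision, while MRR only rewards the rank of the first correct sentence, which is why the two columns in the tables below can order systems differently.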
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Algorithm - Raw Version of TREC QA&lt;br /&gt;
! Reference&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_average_precision MAP]&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_reciprocal_rank MRR]&lt;br /&gt;
|-&lt;br /&gt;
| Punyakanok (2004)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.419&lt;br /&gt;
| 0.494&lt;br /&gt;
|-&lt;br /&gt;
| Cui (2005)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.427&lt;br /&gt;
| 0.526&lt;br /&gt;
|-&lt;br /&gt;
| Wang (2007)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.603&lt;br /&gt;
| 0.685&lt;br /&gt;
|-&lt;br /&gt;
| H&amp;amp;S (2010)&lt;br /&gt;
| Heilman and Smith (2010)&lt;br /&gt;
| 0.609&lt;br /&gt;
| 0.692&lt;br /&gt;
|-&lt;br /&gt;
| W&amp;amp;M (2010)&lt;br /&gt;
| Wang and Manning (2010)&lt;br /&gt;
| 0.595&lt;br /&gt;
| 0.695&lt;br /&gt;
|-&lt;br /&gt;
| Yao (2013)&lt;br /&gt;
| Yao et al. (2013)&lt;br /&gt;
| 0.631&lt;br /&gt;
| 0.748&lt;br /&gt;
|-&lt;br /&gt;
| S&amp;amp;M (2013)&lt;br /&gt;
| Severyn and Moschitti (2013)&lt;br /&gt;
| 0.678&lt;br /&gt;
| 0.736&lt;br /&gt;
|-&lt;br /&gt;
| Shnarch (2013) - Backward &lt;br /&gt;
| Shnarch (2013)&lt;br /&gt;
| 0.686&lt;br /&gt;
| 0.754&lt;br /&gt;
|-&lt;br /&gt;
| Yih (2013) - LCLR&lt;br /&gt;
| Yih et al. (2013)&lt;br /&gt;
| 0.709&lt;br /&gt;
| 0.770&lt;br /&gt;
|-&lt;br /&gt;
| Yu (2014) - TRAIN-ALL bigram+count&lt;br /&gt;
| Yu et al. (2014)&lt;br /&gt;
| 0.711&lt;br /&gt;
| 0.785&lt;br /&gt;
|-&lt;br /&gt;
| W&amp;amp;N (2015) - Three-Layer BLSTM+BM25&lt;br /&gt;
| Wang and Nyberg (2015)&lt;br /&gt;
| 0.713&lt;br /&gt;
| 0.791&lt;br /&gt;
|-&lt;br /&gt;
| Feng (2015) - Architecture-II&lt;br /&gt;
| Tan et al. (2015)&lt;br /&gt;
| 0.711&lt;br /&gt;
| 0.800&lt;br /&gt;
|-&lt;br /&gt;
| S&amp;amp;M (2015)&lt;br /&gt;
| Severyn and Moschitti (2015)&lt;br /&gt;
| 0.746&lt;br /&gt;
| 0.808&lt;br /&gt;
|-&lt;br /&gt;
| Yang (2016) - Attention-Based Neural Matching Model&lt;br /&gt;
| Yang et al. (2016)&lt;br /&gt;
| 0.750&lt;br /&gt;
| 0.811&lt;br /&gt;
|-&lt;br /&gt;
| H&amp;amp;L (2016) - Pairwise Word Interaction Modelling&lt;br /&gt;
| He and Lin (2016)&lt;br /&gt;
| 0.758&lt;br /&gt;
| 0.822&lt;br /&gt;
|-&lt;br /&gt;
| H&amp;amp;L (2015) - Multi-Perspective CNN&lt;br /&gt;
| He and Lin (2015)&lt;br /&gt;
| 0.762&lt;br /&gt;
| 0.830&lt;br /&gt;
|-&lt;br /&gt;
| Rao (2016) - PairwiseRank + Multi-Perspective CNN&lt;br /&gt;
| Rao et al. (2016)&lt;br /&gt;
| 0.780&lt;br /&gt;
| 0.834&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Algorithm - Clean Version of TREC QA&lt;br /&gt;
! Reference&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_average_precision MAP]&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_reciprocal_rank MRR]&lt;br /&gt;
|-&lt;br /&gt;
| W&amp;amp;I (2015)&lt;br /&gt;
| Wang and Ittycheriah (2015)&lt;br /&gt;
| 0.746&lt;br /&gt;
| 0.820&lt;br /&gt;
|-&lt;br /&gt;
| Tan (2015) - QA-LSTM/CNN+attention &lt;br /&gt;
| Tan et al. (2015)&lt;br /&gt;
| 0.728&lt;br /&gt;
| 0.832&lt;br /&gt;
|-&lt;br /&gt;
| dos Santos (2016) - Attentive Pooling CNN &lt;br /&gt;
| dos Santos et al. (2016)&lt;br /&gt;
| 0.753&lt;br /&gt;
| 0.851&lt;br /&gt;
|-&lt;br /&gt;
| Wang et al. (2016) - L.D.C Model&lt;br /&gt;
| Wang et al. (2016)&lt;br /&gt;
| 0.771&lt;br /&gt;
| 0.845&lt;br /&gt;
|-&lt;br /&gt;
| H&amp;amp;L (2015) - Multi-Perspective CNN&lt;br /&gt;
| He and Lin (2015)&lt;br /&gt;
| 0.777&lt;br /&gt;
| 0.836&lt;br /&gt;
|-&lt;br /&gt;
| Rao et al. (2016) - PairwiseRank + Multi-Perspective CNN&lt;br /&gt;
| Rao et al. (2016)&lt;br /&gt;
| 0.801&lt;br /&gt;
| 0.877&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Vasin Punyakanok, Dan Roth, and Wen-Tau Yih. 2004. [http://cogcomp.cs.illinois.edu/papers/PunyakanokRoYi04a.pdf Mapping dependencies trees: An application to question answering]. In Proceedings of the 8th International Symposium on Artificial Intelligence and Mathematics, Fort Lauderdale, FL, USA.&lt;br /&gt;
* Hang Cui, Renxu Sun, Keya Li, Min-Yen Kan, and Tat-Seng Chua. 2005. [http://ws.csie.ncku.edu.tw/login/upload/2005/paper/Question%20answering%20Question%20answering%20passage%20retrieval%20using%20dependency%20relations.pdf Question answering passage retrieval using dependency relations]. In Proceedings of the 28th ACM-SIGIR International Conference on Research and Development in Information Retrieval, Salvador, Brazil.&lt;br /&gt;
* Mengqiu Wang, Noah A. Smith, and Teruko Mitamura. 2007. [http://www.aclweb.org/anthology/D/D07/D07-1003.pdf What is the Jeopardy Model? A Quasi-Synchronous Grammar for QA]. In EMNLP-CoNLL 2007.&lt;br /&gt;
* Michael Heilman and Noah A. Smith. 2010. [http://www.aclweb.org/anthology/N10-1145 Tree Edit Models for Recognizing Textual Entailments, Paraphrases, and Answers to Questions]. In NAACL-HLT 2010.&lt;br /&gt;
* Mengqiu Wang and Christopher Manning. 2010. [http://aclweb.org/anthology//C/C10/C10-1131.pdf Probabilistic Tree-Edit Models with Structured Latent Variables for Textual Entailment and Question Answering]. In COLING 2010.&lt;br /&gt;
* E. Shnarch. 2013. Probabilistic Models for Lexical Inference. Ph.D. thesis, Bar Ilan University.&lt;br /&gt;
* Xuchen Yao, Benjamin Van Durme, Chris Callison-Burch, and Peter Clark. 2013. [http://www.aclweb.org/anthology/N13-1106.pdf Answer Extraction as Sequence Tagging with Tree Edit Distance]. In NAACL-HLT 2013.&lt;br /&gt;
* Wen-tau Yih, Ming-Wei Chang, Christopher Meek, and Andrzej Pastusiak. 2013. [http://research.microsoft.com/pubs/192357/QA-SentSel-Updated-PostACL.pdf Question Answering Using Enhanced Lexical Semantic Models]. In ACL 2013.&lt;br /&gt;
* Aliaksei Severyn and Alessandro Moschitti. 2013. [http://www.aclweb.org/anthology/D13-1044.pdf Automatic Feature Engineering for Answer Selection and Extraction]. In EMNLP 2013.&lt;br /&gt;
* Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. [http://arxiv.org/pdf/1412.1632v1.pdf Deep Learning for Answer Sentence Selection]. In NIPS deep learning workshop.&lt;br /&gt;
* Di Wang and Eric Nyberg. 2015. [http://www.aclweb.org/anthology/P15-2116 A Long Short-Term Memory Model for Answer Sentence Selection in Question Answering]. In ACL 2015.&lt;br /&gt;
* Minwei Feng, Bing Xiang, Michael R. Glass, Lidan Wang, and Bowen Zhou. 2015. [http://arxiv.org/abs/1508.01585 Applying deep learning to answer selection: A study and an open task]. In ASRU 2015.&lt;br /&gt;
* Aliaksei Severyn and Alessandro Moschitti. 2015. [http://disi.unitn.it/~severyn/papers/sigir-2015-long.pdf Learning to Rank Short Text Pairs with Convolutional Deep Neural Networks]. In SIGIR 2015.&lt;br /&gt;
* Zhiguo Wang and Abraham Ittycheriah. 2015. [http://arxiv.org/abs/1507.02628 FAQ-based Question Answering via Word Alignment]. In eprint arXiv:1507.02628.&lt;br /&gt;
* Ming Tan, Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2015. [http://arxiv.org/abs/1511.04108 LSTM-Based Deep Learning Models for Nonfactoid Answer Selection]. In eprint arXiv:1511.04108.&lt;br /&gt;
* Cicero dos Santos, Ming Tan, Bing Xiang, and Bowen Zhou. 2016. [http://arxiv.org/abs/1602.03609 Attentive Pooling Networks]. In eprint arXiv:1602.03609.&lt;br /&gt;
* Zhiguo Wang, Haitao Mi, and Abraham Ittycheriah. 2016. [http://arxiv.org/pdf/1602.07019v1.pdf Sentence Similarity Learning by Lexical Decomposition and Composition]. In COLING 2016.&lt;br /&gt;
* Hua He, Kevin Gimpel, and Jimmy Lin. 2015. [http://aclweb.org/anthology/D/D15/D15-1181.pdf Multi-Perspective Sentence Similarity Modeling with Convolutional Neural Networks]. In EMNLP 2015.&lt;br /&gt;
* Hua He and Jimmy Lin. 2016. [https://cs.uwaterloo.ca/~jimmylin/publications/He_etal_NAACL-HTL2016.pdf Pairwise Word Interaction Modeling with Deep Neural Networks for Semantic Similarity Measurement]. In NAACL 2016.&lt;br /&gt;
* Liu Yang, Qingyao Ai, Jiafeng Guo, and W. Bruce Croft. 2016. [http://maroo.cs.umass.edu/pub/web/getpdf.php?id=1240 aNMM: Ranking Short Answer Texts with Attention-Based Neural Matching Model]. In CIKM 2016.&lt;br /&gt;
* Jinfeng Rao, Hua He, and Jimmy Lin. 2016. [http://www.cs.umd.edu/~jinfeng/publications/PairwiseNeuralNetwork_CIKM2016.pdf Noise-Contrastive Estimation for Answer Selection with Deep Neural Networks]. In CIKM 2016.&lt;br /&gt;
[[Category:State of the art]]&lt;/div&gt;</summary>
		<author><name>Lyangumass</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/aclwiki/index.php?title=Question_Answering_(State_of_the_art)&amp;diff=11699</id>
		<title>Question Answering (State of the art)</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/index.php?title=Question_Answering_(State_of_the_art)&amp;diff=11699"/>
		<updated>2016-11-27T23:30:55Z</updated>

		<summary type="html">&lt;p&gt;Lyangumass: Add TREC QA results with aNMM model from Yang et al. in CIKM 2016&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Answer Sentence Selection ==&lt;br /&gt;
&lt;br /&gt;
The task of answer sentence selection is defined for the open-domain question answering setting: given a question and a set of candidate sentences, the goal is to choose the sentence that contains the exact answer and can sufficiently support the answer choice. &lt;br /&gt;
&lt;br /&gt;
* [http://cs.stanford.edu/people/mengqiu/data/qg-emnlp07-data.tgz QA Answer Sentence Selection Dataset]: labeled sentences using TREC QA track data, provided by [http://cs.stanford.edu/people/mengqiu/ Mengqiu Wang] and first used in [http://www.aclweb.org/anthology/D/D07/D07-1003.pdf Wang et al. (2007)]. &lt;br /&gt;
* Over time, the original dataset diverged into two versions due to different pre-processing in recent publications: both share the same training set, but their development and test sets differ. The Raw version has 82 questions in the development set and 100 in the test set; the Clean version (Wang and Ittycheriah 2015, Tan et al. 2015, dos Santos et al. 2016, Wang et al. 2016) removed questions with no answers or with only positive or only negative answers, and thus has only 65 questions in the development set and 68 in the test set. &lt;br /&gt;
* Note: MAP/MRR scores on the two versions of TREC QA data (Clean vs Raw) are not comparable according to [http://www.cs.umd.edu/~jinfeng/publications/PairwiseNeuralNetwork_CIKM2016.pdf Rao et al. (2016)]. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Algorithm - Raw Version of TREC QA&lt;br /&gt;
! Reference&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_average_precision MAP]&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_reciprocal_rank MRR]&lt;br /&gt;
|-&lt;br /&gt;
| Punyakanok (2004)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.419&lt;br /&gt;
| 0.494&lt;br /&gt;
|-&lt;br /&gt;
| Cui (2005)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.427&lt;br /&gt;
| 0.526&lt;br /&gt;
|-&lt;br /&gt;
| Wang (2007)&lt;br /&gt;
| Wang et al. (2007)&lt;br /&gt;
| 0.603&lt;br /&gt;
| 0.685&lt;br /&gt;
|-&lt;br /&gt;
| H&amp;amp;S (2010)&lt;br /&gt;
| Heilman and Smith (2010)&lt;br /&gt;
| 0.609&lt;br /&gt;
| 0.692&lt;br /&gt;
|-&lt;br /&gt;
| W&amp;amp;M (2010)&lt;br /&gt;
| Wang and Manning (2010)&lt;br /&gt;
| 0.595&lt;br /&gt;
| 0.695&lt;br /&gt;
|-&lt;br /&gt;
| Yao (2013)&lt;br /&gt;
| Yao et al. (2013)&lt;br /&gt;
| 0.631&lt;br /&gt;
| 0.748&lt;br /&gt;
|-&lt;br /&gt;
| S&amp;amp;M (2013)&lt;br /&gt;
| Severyn and Moschitti (2013)&lt;br /&gt;
| 0.678&lt;br /&gt;
| 0.736&lt;br /&gt;
|-&lt;br /&gt;
| Shnarch (2013) - Backward &lt;br /&gt;
| Shnarch (2013)&lt;br /&gt;
| 0.686&lt;br /&gt;
| 0.754&lt;br /&gt;
|-&lt;br /&gt;
| Yih (2013) - LCLR&lt;br /&gt;
| Yih et al. (2013)&lt;br /&gt;
| 0.709&lt;br /&gt;
| 0.770&lt;br /&gt;
|-&lt;br /&gt;
| Yu (2014) - TRAIN-ALL bigram+count&lt;br /&gt;
| Yu et al. (2014)&lt;br /&gt;
| 0.711&lt;br /&gt;
| 0.785&lt;br /&gt;
|-&lt;br /&gt;
| W&amp;amp;N (2015) - Three-Layer BLSTM+BM25&lt;br /&gt;
| Wang and Nyberg (2015)&lt;br /&gt;
| 0.713&lt;br /&gt;
| 0.791&lt;br /&gt;
|-&lt;br /&gt;
| Feng (2015) - Architecture-II&lt;br /&gt;
| Tan et al. (2015)&lt;br /&gt;
| 0.711&lt;br /&gt;
| 0.800&lt;br /&gt;
|-&lt;br /&gt;
| S&amp;amp;M (2015)&lt;br /&gt;
| Severyn and Moschitti (2015)&lt;br /&gt;
| 0.746&lt;br /&gt;
| 0.808&lt;br /&gt;
|-&lt;br /&gt;
| Yang (2016) - Attention-Based Neural Matching Model&lt;br /&gt;
| Yang et al. (2016)&lt;br /&gt;
| 0.750&lt;br /&gt;
| 0.811&lt;br /&gt;
|-&lt;br /&gt;
| H&amp;amp;L (2016) - Pairwise Word Interaction Modelling&lt;br /&gt;
| He and Lin (2016)&lt;br /&gt;
| 0.758&lt;br /&gt;
| 0.822&lt;br /&gt;
|-&lt;br /&gt;
| H&amp;amp;L (2015) - Multi-Perspective CNN&lt;br /&gt;
| He and Lin (2015)&lt;br /&gt;
| 0.762&lt;br /&gt;
| 0.830&lt;br /&gt;
|-&lt;br /&gt;
| Rao (2016) - PairwiseRank + Multi-Perspective CNN&lt;br /&gt;
| Rao et al. (2016)&lt;br /&gt;
| 0.780&lt;br /&gt;
| 0.834&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Algorithm - Clean Version of TREC QA&lt;br /&gt;
! Reference&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_average_precision MAP]&lt;br /&gt;
! [http://en.wikipedia.org/wiki/Mean_reciprocal_rank MRR]&lt;br /&gt;
|-&lt;br /&gt;
| W&amp;amp;I (2015)&lt;br /&gt;
| Wang and Ittycheriah (2015)&lt;br /&gt;
| 0.746&lt;br /&gt;
| 0.820&lt;br /&gt;
|-&lt;br /&gt;
| Tan (2015) - QA-LSTM/CNN+attention &lt;br /&gt;
| Tan et al. (2015)&lt;br /&gt;
| 0.728&lt;br /&gt;
| 0.832&lt;br /&gt;
|-&lt;br /&gt;
| dos Santos (2016) - Attentive Pooling CNN &lt;br /&gt;
| dos Santos et al. (2016)&lt;br /&gt;
| 0.753&lt;br /&gt;
| 0.851&lt;br /&gt;
|-&lt;br /&gt;
| Wang et al. (2016) - L.D.C Model&lt;br /&gt;
| Wang et al. (2016)&lt;br /&gt;
| 0.771&lt;br /&gt;
| 0.845&lt;br /&gt;
|-&lt;br /&gt;
| H&amp;amp;L (2015) - Multi-Perspective CNN&lt;br /&gt;
| He and Lin (2015)&lt;br /&gt;
| 0.777&lt;br /&gt;
| 0.836&lt;br /&gt;
|-&lt;br /&gt;
| Rao et al. (2016) - PairwiseRank + Multi-Perspective CNN&lt;br /&gt;
| Rao et al. (2016)&lt;br /&gt;
| 0.801&lt;br /&gt;
| 0.877&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Vasin Punyakanok, Dan Roth, and Wen-Tau Yih. 2004. [http://cogcomp.cs.illinois.edu/papers/PunyakanokRoYi04a.pdf Mapping dependencies trees: An application to question answering]. In Proceedings of the 8th International Symposium on Artificial Intelligence and Mathematics, Fort Lauderdale, FL, USA.&lt;br /&gt;
* Hang Cui, Renxu Sun, Keya Li, Min-Yen Kan, and Tat-Seng Chua. 2005. [http://ws.csie.ncku.edu.tw/login/upload/2005/paper/Question%20answering%20Question%20answering%20passage%20retrieval%20using%20dependency%20relations.pdf Question answering passage retrieval using dependency relations]. In Proceedings of the 28th ACM-SIGIR International Conference on Research and Development in Information Retrieval, Salvador, Brazil.&lt;br /&gt;
* Mengqiu Wang, Noah A. Smith, and Teruko Mitamura. 2007. [http://www.aclweb.org/anthology/D/D07/D07-1003.pdf What is the Jeopardy Model? A Quasi-Synchronous Grammar for QA]. In EMNLP-CoNLL 2007.&lt;br /&gt;
* Michael Heilman and Noah A. Smith. 2010. [http://www.aclweb.org/anthology/N10-1145 Tree Edit Models for Recognizing Textual Entailments, Paraphrases, and Answers to Questions]. In NAACL-HLT 2010.&lt;br /&gt;
* Mengqiu Wang and Christopher Manning. 2010. [http://aclweb.org/anthology//C/C10/C10-1131.pdf Probabilistic Tree-Edit Models with Structured Latent Variables for Textual Entailment and Question Answering]. In COLING 2010.&lt;br /&gt;
* E. Shnarch. 2013. Probabilistic Models for Lexical Inference. Ph.D. thesis, Bar Ilan University.&lt;br /&gt;
* Xuchen Yao, Benjamin Van Durme, Chris Callison-Burch, and Peter Clark. 2013. [http://www.aclweb.org/anthology/N13-1106.pdf Answer Extraction as Sequence Tagging with Tree Edit Distance]. In NAACL-HLT 2013.&lt;br /&gt;
* Wen-tau Yih, Ming-Wei Chang, Christopher Meek, and Andrzej Pastusiak. 2013. [http://research.microsoft.com/pubs/192357/QA-SentSel-Updated-PostACL.pdf Question Answering Using Enhanced Lexical Semantic Models]. In ACL 2013.&lt;br /&gt;
* Aliaksei Severyn and Alessandro Moschitti. 2013. [http://www.aclweb.org/anthology/D13-1044.pdf Automatic Feature Engineering for Answer Selection and Extraction]. In EMNLP 2013.&lt;br /&gt;
* Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. [http://arxiv.org/pdf/1412.1632v1.pdf Deep Learning for Answer Sentence Selection]. In NIPS deep learning workshop.&lt;br /&gt;
* Di Wang and Eric Nyberg. 2015. [http://www.aclweb.org/anthology/P15-2116 A Long Short-Term Memory Model for Answer Sentence Selection in Question Answering]. In ACL 2015.&lt;br /&gt;
* Minwei Feng, Bing Xiang, Michael R. Glass, Lidan Wang, and Bowen Zhou. 2015. [http://arxiv.org/abs/1508.01585 Applying deep learning to answer selection: A study and an open task]. In ASRU 2015.&lt;br /&gt;
* Aliaksei Severyn and Alessandro Moschitti. 2015. [http://disi.unitn.it/~severyn/papers/sigir-2015-long.pdf Learning to Rank Short Text Pairs with Convolutional Deep Neural Networks]. In SIGIR 2015.&lt;br /&gt;
* Zhiguo Wang and Abraham Ittycheriah. 2015. [http://arxiv.org/abs/1507.02628 FAQ-based Question Answering via Word Alignment]. In eprint arXiv:1507.02628.&lt;br /&gt;
* Ming Tan, Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2015. [http://arxiv.org/abs/1511.04108 LSTM-Based Deep Learning Models for Nonfactoid Answer Selection]. In eprint arXiv:1511.04108.&lt;br /&gt;
* Cicero dos Santos, Ming Tan, Bing Xiang, and Bowen Zhou. 2016. [http://arxiv.org/abs/1602.03609 Attentive Pooling Networks]. In eprint arXiv:1602.03609.&lt;br /&gt;
* Zhiguo Wang, Haitao Mi, and Abraham Ittycheriah. 2016. [http://arxiv.org/pdf/1602.07019v1.pdf Sentence Similarity Learning by Lexical Decomposition and Composition]. In COLING 2016.&lt;br /&gt;
* Hua He, Kevin Gimpel, and Jimmy Lin. 2015. [http://aclweb.org/anthology/D/D15/D15-1181.pdf Multi-Perspective Sentence Similarity Modeling with Convolutional Neural Networks]. In EMNLP 2015.&lt;br /&gt;
* Hua He and Jimmy Lin. 2016. [https://cs.uwaterloo.ca/~jimmylin/publications/He_etal_NAACL-HTL2016.pdf Pairwise Word Interaction Modeling with Deep Neural Networks for Semantic Similarity Measurement]. In NAACL 2016.&lt;br /&gt;
* Liu Yang, Qingyao Ai, Jiafeng Guo, and W. Bruce Croft. 2016. [http://maroo.cs.umass.edu/pub/web/getpdf.php?id=1240 aNMM: Ranking Short Answer Texts with Attention-Based Neural Matching Model]. In CIKM 2016.&lt;br /&gt;
* Jinfeng Rao, Hua He, and Jimmy Lin. 2016. [http://www.cs.umd.edu/~jinfeng/publications/PairwiseNeuralNetwork_CIKM2016.pdf Noise-Contrastive Estimation for Answer Selection with Deep Neural Networks]. In CIKM 2016.&lt;br /&gt;
[[Category:State of the art]]&lt;/div&gt;</summary>
		<author><name>Lyangumass</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/aclwiki/index.php?title=User:Lyangumass&amp;diff=11698</id>
		<title>User:Lyangumass</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/index.php?title=User:Lyangumass&amp;diff=11698"/>
		<updated>2016-11-27T23:24:20Z</updated>

		<summary type="html">&lt;p&gt;Lyangumass: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;I am a PhD student in the Center for Intelligent Information Retrieval (CIIR), College of Information and Computer Sciences, University of Massachusetts Amherst, under the supervision of Prof. W. Bruce Croft.&lt;br /&gt;
&lt;br /&gt;
I was a Research Assistant in the Text Mining Group at the School of Information Systems, Singapore Management University, under the supervision of Prof. Jing Jiang, working on text mining and machine learning. I also worked closely with Prof. Feida Zhu. I received my Master&#039;s degree from Peking University. For industry experience, I worked as a research intern at Microsoft Research Redmond and Microsoft Bing, and as a software engineer intern in the Search R&amp;amp;D Department of Baidu Inc. My research areas include information retrieval, natural language processing, text mining, and machine learning. I have published papers in conferences such as SIGIR, CIKM, ICDM, NAACL, COLING, and ECIR, including a co-authored paper that was a runner-up for the best paper award at SocInfo&#039;13. My current research focuses on deep learning, statistical language models, probabilistic graphical models, ranking and relevance, question answering, learning to rank, and online user modeling/profiling.&lt;br /&gt;
&lt;br /&gt;
My homepage: https://sites.google.com/site/lyangwww/&lt;/div&gt;</summary>
		<author><name>Lyangumass</name></author>
	</entry>
</feed>