<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.aclweb.org/aclwiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Goldan55</id>
	<title>ACL Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.aclweb.org/aclwiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Goldan55"/>
	<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/Special:Contributions/Goldan55"/>
	<updated>2026-04-10T18:04:32Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.6</generator>
	<entry>
		<id>https://www.aclweb.org/aclwiki/index.php?title=Constrained_Conditional_Model&amp;diff=8107</id>
		<title>Constrained Conditional Model</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/index.php?title=Constrained_Conditional_Model&amp;diff=8107"/>
		<updated>2010-08-29T03:17:32Z</updated>

		<summary type="html">&lt;p&gt;Goldan55: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A &#039;&#039;&#039;Constrained Conditional Model&#039;&#039;&#039; (CCM) is a [[machine learning]] and inference framework for augmenting the learning of conditional (probabilistic or discriminative) models with declarative constraints (written, for example, using a first-order representation) as a way to support decisions in an expressive output space while maintaining modularity and tractability of training and inference. &lt;br /&gt;
&lt;br /&gt;
Models of this kind have recently attracted much attention within the NLP community.&lt;br /&gt;
Formulating problems as constrained optimization problems over the output of learned models has several advantages. It allows one to focus on modeling the problem, providing the opportunity to incorporate domain-specific knowledge as global constraints using a first-order language. This declarative framework frees the developer from low-level feature engineering while capturing the problem&#039;s domain-specific properties and guaranteeing exact inference. From a machine learning perspective, it allows decoupling the model generation (learning) stage from the constrained inference stage, thus helping to simplify the learning stage while improving the quality of the solutions.&lt;br /&gt;
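As a purely illustrative sketch, the CCM decision rule can be viewed as maximizing the learned model&#039;s score minus penalties for violated declarative constraints. The toy label set, per-token scores, and the single constraint below are invented for this example, and the exhaustive search stands in for the ILP solver a real application would use:

```python
from itertools import product

# Hypothetical per-token scores from an independently learned model
# (one dict per token, with a score for each label in LABELS).
LABELS = ["B", "I", "O"]
scores = [
    {"B": 0.3, "I": 0.0, "O": 0.4},
    {"B": 0.1, "I": 0.6, "O": 0.3},
]

def violations(y):
    """Declarative constraint: an 'I' label may only follow 'B' or 'I'."""
    count, prev = 0, "O"
    for label in y:
        if label == "I" and prev == "O":
            count += 1
        prev = label
    return count

def ccm_decode(token_scores, rho=1.0):
    """Exhaustive constrained inference; rho is the constraint penalty."""
    best, best_val = None, float("-inf")
    for y in product(LABELS, repeat=len(token_scores)):
        val = sum(s[lab] for s, lab in zip(token_scores, y)) - rho * violations(y)
        if val > best_val:
            best, best_val = list(y), val
    return best

print(ccm_decode(scores))            # constrained decision
print(ccm_decode(scores, rho=0.0))   # decision with the constraint disabled
```

Note how the penalty changes the decision: without it, each token is labeled independently by its best score, which here yields an "I" right after an "O".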
&lt;br /&gt;
&lt;br /&gt;
==Motivation==&lt;br /&gt;
Making decisions in many learning domains (such as natural language processing and computer vision problems) often involves assigning values to sets of interdependent variables, where the expressive dependency structure can influence, or even dictate, which assignments are possible. These settings apply to Structured Learning problems such as semantic role labeling, but also to cases that require making use of multiple pre-learned components, such as summarization, textual entailment and question answering. In all these cases, it is natural to formulate the decision problem as a constrained optimization problem, with an objective function that is composed of learned models, subject to domain- or problem-specific constraints. &lt;br /&gt;
&lt;br /&gt;
Constrained Conditional Models is a learning and inference framework that augments the learning of conditional (probabilistic or discriminative) models with declarative constraints (written, for example, using a first-order representation) as a way to support decisions in an expressive output space while maintaining modularity and tractability of training and inference. In most applications of this framework in NLP, following &amp;lt;ref&amp;gt;Dan Roth and Wen-tau Yih, [http://l2r.cs.uiuc.edu/~danr/Papers/RothYi04.pdf  &amp;quot;A Linear Programming Formulation for Global Inference in Natural Language Tasks.&amp;quot;]  &#039;&#039;CoNLL&#039;&#039;, (2004).&amp;lt;/ref&amp;gt;, Integer Linear Programming (ILP) was used as the inference framework, although other algorithms can be used for that purpose.&lt;br /&gt;
  &lt;br /&gt;
&lt;br /&gt;
==Training Paradigms==&lt;br /&gt;
=== Learning Local vs. Global Models ===&lt;br /&gt;
The objective function used by CCMs can be decomposed and learned in several ways, ranging from complete joint training of the model along with the constraints to completely decoupling the learning and the inference stages. In the latter case, several local models are learned independently and the dependency between these models is considered only at decision time via a global decision process. The advantages of each approach are discussed in &amp;lt;ref&amp;gt;Vasin Punyakanok and Dan Roth and Wen-Tau Yih and Dav Zimak, [http://l2r.cs.uiuc.edu/~danr/Papers/PRYZ05.pdf  &amp;quot;Learning and Inference over Constrained Output.&amp;quot;]  &#039;&#039;IJCAI&#039;&#039;, (2005).&amp;lt;/ref&amp;gt;, which studies the two training paradigms: (1) local models, L+I (learning plus inference), and (2) the global model, IBT (inference-based training), and shows both theoretically and experimentally that while IBT (joint training) is best in the limit, under some conditions (basically, &amp;quot;good&amp;quot; components) L+I can generalize better.&lt;br /&gt;
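The decoupled L+I paradigm can be sketched as follows: each local model is trained on its own output variable with no knowledge of the constraint, and the constraint enters only through a global decision over the joint output space. The toy perceptron learners, the invented data layout (a feature vector with two interdependent binary labels), and the constraint y1 >= y2 are all assumptions made for illustration; IBT would instead update the weights from the constrained joint prediction:

```python
def train_local(data, idx, epochs=20):
    """L+I: learn one perceptron per output variable, ignoring the constraint.
    data is a list of (features, (y1, y2)) pairs; idx picks the variable."""
    w = [0.0] * len(data[0][0])
    for _ in range(epochs):
        for x, ys in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            if pred != ys[idx]:
                sign = 1 if ys[idx] == 1 else -1
                w = [wi + sign * xi for wi, xi in zip(w, x)]
    return w

def constrained_decode(w1, w2, x):
    """Global inference: pick the best joint assignment with y1 >= y2."""
    def score(w, y):
        s = sum(wi * xi for wi, xi in zip(w, x))
        return s if y == 1 else 0.0
    best, best_val = None, float("-inf")
    for y1, y2 in [(0, 0), (1, 0), (1, 1)]:   # (0, 1) violates the constraint
        val = score(w1, y1) + score(w2, y2)
        if val > best_val:
            best, best_val = (y1, y2), val
    return best
```

Because the two learners never see each other, they can be trained (or swapped out) independently; the interdependency is handled entirely by the decision-time enumeration.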
&lt;br /&gt;
=== Minimally Supervised CCM ===&lt;br /&gt;
CCM can help reduce supervision by using domain knowledge (expressed as constraints) to drive learning. These settings were studied in &lt;br /&gt;
&amp;lt;ref&amp;gt;Ming-Wei Chang and Lev Ratinov and Dan Roth, [http://l2r.cs.uiuc.edu/~danr/Papers/ChangRaRo07.pdf  &amp;quot;Guiding Semi-Supervision with Constraint-Driven Learning.&amp;quot;]  &#039;&#039;ACL&#039;&#039;, (2007).&amp;lt;/ref&amp;gt; and &amp;lt;ref&amp;gt;Ming-Wei Chang and Lev Ratinov and Dan Roth, [http://l2r.cs.uiuc.edu/~danr/Papers/ChangRaRo08.pdf  &amp;quot;Constraints as Prior Knowledge.&amp;quot;]  &#039;&#039;ICML Workshop on Prior Knowledge for Text and Language Processing&#039;&#039;, (2008).&amp;lt;/ref&amp;gt;. These works introduce semi-supervised Constraint-Driven Learning&lt;br /&gt;
(CODL) and show that by incorporating domain knowledge the performance of the learned model improves significantly. &lt;br /&gt;
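A minimal sketch of this kind of constraint-driven loop, with invented data and a deliberately trivial model: constrained inference produces labels for unlabeled examples, and those corrected labels supervise the next training round. The scalar-threshold model and the one-positive-per-group constraint are assumptions made purely for illustration:

```python
def train(examples):
    """Toy 'learning': a threshold midway between the class means of a 1-d feature."""
    pos = [x for x, y in examples if y == 1]
    neg = [x for x, y in examples if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def constrained_decode(threshold, group):
    """Constraint: exactly one positive per group. Since each item's score
    (feature minus threshold) is monotone in the feature, the argmax does
    not depend on the threshold itself."""
    best = max(range(len(group)), key=lambda i: group[i])
    return [1 if i == best else 0 for i in range(len(group))]

def codl(labeled, unlabeled_groups, rounds=3):
    """Constraint-driven semi-supervised loop over unlabeled groups."""
    model = train(labeled)
    for _ in range(rounds):
        self_labeled = []
        for group in unlabeled_groups:
            labels = constrained_decode(model, group)   # constraints fix labels
            self_labeled.extend(zip(group, labels))
        model = train(labeled + self_labeled)           # retrain on them
    return model
```

The point of the sketch is only the shape of the loop: domain knowledge, not gold labels, supplies the supervision signal on the unlabeled portion.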
 &lt;br /&gt;
=== Learning over Latent Representations ===&lt;br /&gt;
CCMs were also applied to latent learning frameworks, where the learning problem is defined over a latent representation layer. Since the notion of a &#039;&#039;correct representation&#039;&#039; is inherently ill-defined, no gold-labeled data regarding the representation decision is available to the learner. Identifying the correct (or optimal) learning representation is viewed as a structured prediction process and therefore modeled as a CCM. &lt;br /&gt;
This problem was studied in several papers, in both supervised &amp;lt;ref&amp;gt;Ming-Wei Chang and Dan Goldwasser and Dan Roth and Vivek Srikumar, [http://l2r.cs.uiuc.edu/~danr/Papers/CGRS10.pdf  &amp;quot;Discriminative Learning over Constrained Latent Representations.&amp;quot;]  NAACL, (2010).&amp;lt;/ref&amp;gt; and unsupervised &amp;lt;ref&amp;gt;Ming-Wei Chang, Dan Goldwasser, Dan Roth and Yuancheng Tu, [http://l2r.cs.uiuc.edu/~danr/Papers/CGRT10.pdf  &amp;quot;Unsupervised Constraint Driven Learning For Transliteration Discovery.&amp;quot;]  NAACL, (2009).&amp;lt;/ref&amp;gt; settings, and in all cases showed that explicitly modeling the interdependencies between representation decisions via constraints results in improved performance.&lt;br /&gt;
&lt;br /&gt;
== CCM for Natural Language Processing Applications ==&lt;br /&gt;
The advantages of the CCM declarative formulation and the availability of off-the-shelf solvers have led to a large variety of natural language processing tasks being formulated within this framework, including semantic role labeling &amp;lt;ref&amp;gt;Vasin Punyakanok, Dan Roth, Wen-tau Yih and Dav Zimak, [http://l2r.cs.uiuc.edu/~danr/Papers/PRYZ04.pdf &amp;quot;Semantic Role Labeling via Integer Linear Programming Inference.&amp;quot;]  COLING, (2004).&amp;lt;/ref&amp;gt;, syntactic parsing &amp;lt;ref&amp;gt;Sagae, K. and Miyao, Y. and Tsujii, J., [http://www.aclweb.org/anthology/P07-1079 &amp;quot;HPSG Parsing with Shallow Dependency Constraints.&amp;quot;]  ACL, (2007).&amp;lt;/ref&amp;gt;, coreference resolution &amp;lt;ref&amp;gt;P. Denis and J. Baldridge, [http://l2r.cs.uiuc.edu/~danr/Papers/PRYZ04.pdf &amp;quot;Joint Determination of Anaphoricity and Coreference Resolution using Integer Programming.&amp;quot;]  NAACL-HLT, (2007).&amp;lt;/ref&amp;gt;, summarization &amp;lt;ref&amp;gt;J. Clarke and M. Lapata, [http://www.jair.org/media/2433/live-2433-3730-jair.ps &amp;quot;Global Inference for Sentence Compression: An Integer Linear Programming Approach.&amp;quot;]  Journal of Artificial Intelligence Research (JAIR), (2008).&amp;lt;/ref&amp;gt;, transliteration &amp;lt;ref&amp;gt;D. Goldwasser and D. Roth, [http://l2r.cs.uiuc.edu/~danr/Papers/GoldwasserRo08a.pdf &amp;quot;Transliteration as Constrained Optimization.&amp;quot;]  EMNLP, (2008).&amp;lt;/ref&amp;gt; and joint information extraction &amp;lt;ref&amp;gt;D. Roth and W. Yih, [http://l2r.cs.uiuc.edu/~danr/Papers/RothYi07.pdf &amp;quot;Global Inference for Entity and Relation Identification via a Linear&lt;br /&gt;
	Programming Formulation.&amp;quot;]  Introduction to Statistical Relational Learning, MIT Press, (2007).&amp;lt;/ref&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Most of these works use an Integer Linear Programming solver to solve the decision problem. Although solving an Integer Linear Program is, in theory, exponential in the size of the decision problem, in practice, using state-of-the-art solvers and sophisticated formulations &amp;lt;ref&amp;gt;André F. T. Martins, Noah A. Smith, and Eric P. Xing&lt;br /&gt;
, [http://www.cs.cmu.edu/~nasmith/papers/martins+smith+xing.acl09.pdf &amp;quot;Concise Integer Linear Programming Formulations for Dependency Parsing.&amp;quot;]  ACL, (2009).&amp;lt;/ref&amp;gt;, large-scale problems can be solved efficiently.&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
* &#039;&#039;&#039;CCM Tutorial&#039;&#039;&#039; [http://l2r.cs.uiuc.edu/~danr/Talks/ILP-CCM-Tutorial-NAACL10.pdf Integer Linear Programming in NLP – Constrained Conditional Models, NAACL-2010] &lt;br /&gt;
* &#039;&#039;&#039;CCM Software&#039;&#039;&#039; [http://cogcomp.cs.illinois.edu/page/software_view/11 Learning Based Java]&lt;br /&gt;
&lt;br /&gt;
== External links==&lt;br /&gt;
* [http://l2r.cs.uiuc.edu/~cogcomp/wpt.php?pr_key=CCM University of Illinois Cognitive Computation Group]&lt;br /&gt;
* [http://www-tsujii.is.s.u-tokyo.ac.jp/ilpnlp/ Workshop on Integer Linear Programming for Natural Language Processing, NAACL-2009]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
 &amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Goldan55</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/aclwiki/index.php?title=Research&amp;diff=8022</id>
		<title>Research</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/index.php?title=Research&amp;diff=8022"/>
		<updated>2010-06-10T10:25:05Z</updated>

		<summary type="html">&lt;p&gt;Goldan55: /* ACL Wiki articles and tutorials */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is a list of links to information on research in Computational Linguistics.&lt;br /&gt;
&lt;br /&gt;
* [http://www.aclweb.org/anthology ACL Anthology] - more than 10,000 CL papers&lt;br /&gt;
* [[Bibliographies]]&lt;br /&gt;
* [[Books]]&lt;br /&gt;
* [[Formalisms]]&lt;br /&gt;
* [[Papers]]&lt;br /&gt;
* [[Resources]]&lt;br /&gt;
* [[Wikipedia articles]] - on topics related to Computational Linguistics&lt;br /&gt;
&lt;br /&gt;
== ACL Wiki articles and tutorials ==&lt;br /&gt;
Write your own article or tutorial!&lt;br /&gt;
&amp;lt;!-- Please keep this list in alphabetical order --&amp;gt;&lt;br /&gt;
* [[Active Learning for NLP]] (stub)&lt;br /&gt;
* [[Computational Lexicology]]&lt;br /&gt;
* [[Computational Morphology]] (stub)&lt;br /&gt;
* [[Computational Phonology]]&lt;br /&gt;
* [[Computational Semantics]]&lt;br /&gt;
* [[Computational Syntax]]&lt;br /&gt;
* [[Constrained Conditional Model]]&lt;br /&gt;
* [[Dialectometrics]]&lt;br /&gt;
* [[Dialogue Systems]] (stub)&lt;br /&gt;
* [[Distributional Hypothesis]]&lt;br /&gt;
* [[Graph Based Methods]] (stub)&lt;br /&gt;
* [[Information Extraction]] (stub)&lt;br /&gt;
* [[Lexical Acquisition]] (stub)&lt;br /&gt;
* [[Machine Translation]] (stub)&lt;br /&gt;
* [[Multiword Expressions]] (stub)&lt;br /&gt;
* [[Natural Language Generation Portal]]&lt;br /&gt;
* [[Natural Language Understanding]] (redirect)&lt;br /&gt;
* [[Parsing]] (stub)&lt;br /&gt;
* [[Part-of-speech tagging]]&lt;br /&gt;
* [[Question Answering]]&lt;br /&gt;
* [[Semantics]] (stub)&lt;br /&gt;
* [[Speech Processing]]&lt;br /&gt;
* [[Statistical Semantics]]&lt;br /&gt;
* [[Text Categorization]]&lt;br /&gt;
* [[Text Summarization]] (stub)&lt;br /&gt;
* [[Word Sense Disambiguation]]&lt;br /&gt;
&amp;lt;!-- Please keep this list in alphabetical order --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Research|*]]&lt;/div&gt;</summary>
		<author><name>Goldan55</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/aclwiki/index.php?title=Constrained_Conditional_Model&amp;diff=8021</id>
		<title>Constrained Conditional Model</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/index.php?title=Constrained_Conditional_Model&amp;diff=8021"/>
		<updated>2010-06-10T10:24:11Z</updated>

		<summary type="html">&lt;p&gt;Goldan55: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Making complex decisions in real world problems often involves assigning values to sets of interdependent variables where the expressive dependency structure can influence, or even dictate, what assignments are possible. Structured learning problems provide one such example, but the setting we study is broader. We are interested in cases where decisions depend on multiple models that cannot be learned simultaneously as well as cases where constraints among models&#039; outcomes are available only at decision time.&lt;br /&gt;
&lt;br /&gt;
We have developed a general framework -- &#039;&#039;&#039;Constrained Conditional Models&#039;&#039;&#039; -- that augments the learning of conditional (probabilistic or discriminative) models with declarative constraints (written, for example, using a first-order representation) as a way to support decisions in an expressive output space while maintaining modularity and tractability of training and inference. While incorporating nonlocal dependencies in a probabilistic model can lead to intractable training and inference, our framework allows one to learn a rather simple model (or multiple simple models) and make decisions with more expressive models that also take into account global declarative (hard or soft) constraints. We have used this framework successfully in the context of multiple NLP and IE problems, starting with our work on named entities and relations (CoNLL&#039;04) and our SRL work.&lt;br /&gt;
Our framework, which suggests learning conditional models and using them as an objective function for a global constrained optimization problem, has been followed by a large body of work in NLP. Following (Roth and Yih, 2004), which formalized global decision problems in the context of IE as constrained optimization problems and solved these optimization problems using Integer Linear Programming (ILP), we have seen (Punyakanok et al., 2005; Barzilay and Lapata, 2006; Clarke and Lapata; Marciniak and Strube, 2005) and others.&lt;br /&gt;
&lt;br /&gt;
We have also studied training paradigms for CCMs theoretically and have developed an understanding of the advantages of different training regimes. Recently we studied unsupervised learning in this framework and have shown that declarative constraints can be used to take advantage of unlabeled data when training conditional models.&lt;br /&gt;
&lt;br /&gt;
==Tutorials==&lt;br /&gt;
* [http://l2r.cs.uiuc.edu/~danr/Talks/CRR-CCM-Tutorial-EACL09.ppt EACL-09 Tutorial on Constrained Conditional Models]&lt;/div&gt;</summary>
		<author><name>Goldan55</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/aclwiki/index.php?title=Constrained_Conditional_Model&amp;diff=8020</id>
		<title>Constrained Conditional Model</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/index.php?title=Constrained_Conditional_Model&amp;diff=8020"/>
		<updated>2010-06-10T10:21:23Z</updated>

		<summary type="html">&lt;p&gt;Goldan55: New page: Making complex decisions in real world problems often involves assigning values to sets of interdependent variables where the expressive dependency structure can influence, or even dictate...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Making complex decisions in real world problems often involves assigning values to sets of interdependent variables where the expressive dependency structure can influence, or even dictate, what assignments are possible. Structured learning problems provide one such example, but the setting we study is broader. We are interested in cases where decisions depend on multiple models that cannot be learned simultaneously as well as cases where constraints among models&#039; outcomes are available only at decision time.&lt;br /&gt;
We have developed a general framework -- &#039;&#039;&#039;Constrained Conditional Models&#039;&#039;&#039; -- that augments the learning of conditional (probabilistic or discriminative) models with declarative constraints (written, for example, using a first-order representation) as a way to support decisions in an expressive output space while maintaining modularity and tractability of training and inference. While incorporating nonlocal dependencies in a probabilistic model can lead to intractable training and inference, our framework allows one to learn a rather simple model (or multiple simple models) and make decisions with more expressive models that also take into account global declarative (hard or soft) constraints. We have used this framework successfully in the context of multiple NLP and IE problems, starting with our work on named entities and relations (CoNLL&#039;04) and our SRL work.&lt;br /&gt;
Our framework, which suggests learning conditional models and using them as an objective function for a global constrained optimization problem, has been followed by a large body of work in NLP. Following (Roth and Yih, 2004), which formalized global decision problems in the context of IE as constrained optimization problems and solved these optimization problems using Integer Linear Programming (ILP), we have seen (Punyakanok et al., 2005; Barzilay and Lapata, 2006; Clarke and Lapata; Marciniak and Strube, 2005) and others.&lt;br /&gt;
We have also studied training paradigms for CCMs theoretically and have developed an understanding of the advantages of different training regimes. Recently we studied unsupervised learning in this framework and have shown that declarative constraints can be used to take advantage of unlabeled data when training conditional models.&lt;br /&gt;
&lt;br /&gt;
==Tutorials==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==External links==&lt;br /&gt;
&amp;lt;!-- Please keep this list in alphabetical order --&amp;gt;&lt;br /&gt;
* [http://tangra.si.umich.edu/~radev/webgraph/webgraph.pdf Bibliography of Webgraph Papers], also available in [http://tangra.si.umich.edu/~radev/webgraph/webgraph.bib bib format]&lt;br /&gt;
* [http://tangra.si.umich.edu/~radev/tut06/tut.pdf Graph-based Algorithms for Information Retrieval and Natural Language Processing], a tutorial at HLT-NAACL 2006&lt;br /&gt;
* [http://www.textgraphs.org/ws06 TextGraphs: Graph-based Algorithms for Natural Language Processing], a workshop at HLT-NAACL 2006&lt;br /&gt;
* [http://www.textgraphs.org/ws07 TextGraphs-2: Graph-based Algorithms for Natural Language Processing], a workshop at HLT-NAACL 2007&lt;/div&gt;</summary>
		<author><name>Goldan55</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/aclwiki/index.php?title=Acronyms&amp;diff=8019</id>
		<title>Acronyms</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/index.php?title=Acronyms&amp;diff=8019"/>
		<updated>2010-06-10T10:03:30Z</updated>

		<summary type="html">&lt;p&gt;Goldan55: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== A ==&lt;br /&gt;
* [[ACL]] = Association for Computational Linguistics&lt;br /&gt;
* AFNLP = Asian Federation of Natural Language Processing&lt;br /&gt;
* AI = Artificial Intelligence&lt;br /&gt;
* ALPAC = Automated Language Processing Advisory Committee&lt;br /&gt;
* ASR = Automatic Speech Recognition&lt;br /&gt;
&lt;br /&gt;
== C ==&lt;br /&gt;
* CAT = Computer Assisted/Aided Translation&lt;br /&gt;
* [[CBC]] = Clustering by Committee&lt;br /&gt;
* CCG = Combinatory Categorial Grammar&lt;br /&gt;
* [[CICLing]] = International &#039;&#039;&#039;C&#039;&#039;&#039;onference on &#039;&#039;&#039;I&#039;&#039;&#039;ntelligent text processing and &#039;&#039;&#039;C&#039;&#039;&#039;omputational &#039;&#039;&#039;Ling&#039;&#039;&#039;uistics&lt;br /&gt;
* CCM = Constrained Conditional Model&lt;br /&gt;
* CL = Computational Linguistics&lt;br /&gt;
* COBUILD = Collins Birmingham University International Language Database&lt;br /&gt;
* [[COLING]] = International Conference on Computational Linguistics&lt;br /&gt;
* CRF = Conditional Random Fields&lt;br /&gt;
&lt;br /&gt;
== D ==&lt;br /&gt;
* DRS = Discourse Representation Structure&lt;br /&gt;
* DRT = Discourse Representation Theory&lt;br /&gt;
&lt;br /&gt;
== E ==&lt;br /&gt;
* [[EACL]] = European chapter of the Association for Computational Linguistics &lt;br /&gt;
* [[EBMT]] = Example-based machine translation&lt;br /&gt;
* [[EM]] = Expectation Maximization&lt;br /&gt;
&lt;br /&gt;
== F ==&lt;br /&gt;
* FAHQMT = Fully Automated High-Quality Machine Translation&lt;br /&gt;
* FOL = First Order Logic&lt;br /&gt;
&lt;br /&gt;
== H ==&lt;br /&gt;
* HAMT = Human Assisted/Aided Machine Translation&lt;br /&gt;
* HLT = Human Language Technologies&lt;br /&gt;
* HMM = Hidden Markov Model&lt;br /&gt;
* HPSG = Head-Driven Phrase Structure Grammar&lt;br /&gt;
&lt;br /&gt;
== I ==&lt;br /&gt;
* IE = Information Extraction&lt;br /&gt;
* IR = Information Retrieval&lt;br /&gt;
* IST = Information Society Technologies&lt;br /&gt;
&lt;br /&gt;
== K ==&lt;br /&gt;
&lt;br /&gt;
* [[KR]] = Knowledge Representation&lt;br /&gt;
&lt;br /&gt;
== L ==&lt;br /&gt;
* LFG = Lexical Functional Grammar&lt;br /&gt;
* LSA = Latent Semantic Analysis; Linguistics Society of America&lt;br /&gt;
* LSI = Latent Semantic Indexing&lt;br /&gt;
&lt;br /&gt;
== M ==&lt;br /&gt;
* MAHT = Machine Assisted/Aided Human Translation&lt;br /&gt;
* ME = Maximum Entropy&lt;br /&gt;
* MI = Mutual Information&lt;br /&gt;
* ML = Machine Learning&lt;br /&gt;
* MRD = Machine-Readable Dictionary&lt;br /&gt;
* MT = Mechanical Translation/Machine Translation&lt;br /&gt;
&lt;br /&gt;
== N ==&lt;br /&gt;
* NAACL = North American chapter of the Association for Computational Linguistics&lt;br /&gt;
* NE = Named Entity&lt;br /&gt;
* NEALT = Northern European Association for Language Technology&lt;br /&gt;
* NER = Named Entity Recognition&lt;br /&gt;
* NLG = Natural Language Generation&lt;br /&gt;
* NLP = Natural Language Processing&lt;br /&gt;
* NLU = Natural Language Understanding&lt;br /&gt;
* [http://www.languagemuseum.org/ NML] = National Museum of Language&lt;br /&gt;
&lt;br /&gt;
== P ==&lt;br /&gt;
* PLSA = Probabilistic Latent Semantic Analysis&lt;br /&gt;
* PMI = Pointwise Mutual Information&lt;br /&gt;
* POS = Part of Speech&lt;br /&gt;
&lt;br /&gt;
== R ==&lt;br /&gt;
* [[RTE]] = Recognising Textual Entailment&lt;br /&gt;
&lt;br /&gt;
== S ==&lt;br /&gt;
* [[SLT]] = Spoken Language Translation&lt;br /&gt;
* [[SVM]] = Support Vector Machine&lt;br /&gt;
&lt;br /&gt;
== T ==&lt;br /&gt;
* TAG = Tree-Adjoining Grammar&lt;br /&gt;
* TINLAP = Theoretical Issues in Natural Language Processing&lt;br /&gt;
* TLA = Three-letter acronym&lt;br /&gt;
* TMI = Theoretical and Methodological Issues (in Machine Translation)&lt;br /&gt;
* TREC = The Text REtrieval Conference&lt;br /&gt;
&lt;br /&gt;
== V ==&lt;br /&gt;
* VSM = Vector Space Model&lt;br /&gt;
&lt;br /&gt;
== W ==&lt;br /&gt;
* [[WSD]] = Word Sense Disambiguation&lt;/div&gt;</summary>
		<author><name>Goldan55</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/aclwiki/index.php?title=Textual_Entailment_Portal&amp;diff=8018</id>
		<title>Textual Entailment Portal</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/index.php?title=Textual_Entailment_Portal&amp;diff=8018"/>
		<updated>2010-06-10T09:58:52Z</updated>

		<summary type="html">&lt;p&gt;Goldan55: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page serves as a community portal for everything related to Textual Entailment. &lt;br /&gt;
&lt;br /&gt;
== Textual Entailment Resource Pool ==&lt;br /&gt;
[[Textual Entailment Resource Pool]]&lt;br /&gt;
&lt;br /&gt;
== PASCAL Challenges ==&lt;br /&gt;
&lt;br /&gt;
[[Recognizing Textual Entailment|Recognizing Textual Entailment (RTE)]] has been proposed recently as a generic task that captures major semantic inference needs across many natural language processing applications.&lt;br /&gt;
&lt;br /&gt;
== References on Textual Entailment ==&lt;br /&gt;
&#039;&#039;You are welcome to update this list with new papers on textual entailment (please keep the new references in the same format, and  maintain the alphabetical order).&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Workshops and Tutorials ===&lt;br /&gt;
&lt;br /&gt;
[http://l2r.cs.uiuc.edu/~cogcomp/presentations/RTE_NAACL_2010.zip NAACL 2010 Tutorial on Recognizing Textual Entailment, 2010]&lt;br /&gt;
&lt;br /&gt;
[http://acl.ldc.upenn.edu/W/W05/#W05-1200 ACL 2005 Workshop on Empirical Modeling of Semantic Equivalence and Entailment, 2005]&lt;br /&gt;
&lt;br /&gt;
[http://www.pascal-network.org/Challenges/RTE/ First PASCAL Recognising Textual Entailment Challenge (RTE-1), 2005]&lt;br /&gt;
&lt;br /&gt;
[http://www.pascal-network.org/Challenges/RTE2/ Second PASCAL Recognising Textual Entailment Challenge (RTE-2), 2006]&lt;br /&gt;
&lt;br /&gt;
[http://nlp.uned.es/QA/ave Answer Validation Exercise at CLEF 2006 (AVE 2006)]&lt;br /&gt;
&lt;br /&gt;
[http://www.pascal-network.org/Challenges/RTE3/ Third PASCAL Recognising Textual Entailment Challenge (RTE-3), 2007]&lt;br /&gt;
&lt;br /&gt;
=== Papers in recent conferences and other workshops ===&lt;br /&gt;
&lt;br /&gt;
L. Bentivogli, I. Dagan, H. Dang, D. Giampiccolo, M. Lo Leggio, and B. Magnini . 2009. Considering Discourse References in Textual Entailment Annotation. 5th International Conference on Generative Approaches to the Lexicon (GL 2009). [http://hlt.fbk.eu/sites/hlt.fbk.eu/files/GL2009_Bentivogli-et-al.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
J. Bos &amp;amp; K. Markert. 2005. Recognising Textual Entailment with Logical Inference. Proceedings of EMNLP 2005.&lt;br /&gt;
&lt;br /&gt;
R. Braz, R. Girju, V. Punyakanok, D. Roth, and M. Sammons. 2005. An Inference Model for Semantic Entailment in Natural Language. Twentieth National Conference on Artificial Intelligence (AAAI-05) &lt;br /&gt;
&lt;br /&gt;
R. Braz, R. Girju, V. Punyakanok, D. Roth, and M. Sammons. 2005. Knowledge Representation for Semantic Entailment and Question-Answering. IJCAI-05 Workshop on Knowledge and Reasoning for Answering Questions. &lt;br /&gt;
&lt;br /&gt;
C. Corley, A. Csomai and R. Mihalcea. 2005. Text Semantic Similarity, with Applications. RANLP-05.&lt;br /&gt;
&lt;br /&gt;
I. Dagan and O. Glickman. 2004. Probabilistic textual entailment: Generic applied modeling of language variability. In PASCAL Workshop on Learning Methods for Text Understanding and Mining, Grenoble.&lt;br /&gt;
&lt;br /&gt;
I. Dagan, O. Glickman, A. Gliozzo, E. Marmorshtein and C. Strapparava. 2006. Direct Word Sense Matching for Lexical Substitution. COLING-ACL 2006&lt;br /&gt;
&lt;br /&gt;
R. Delmonte, 2005. VENSES - a Linguistically-Based System for Semantic Evaluation, PLN, Procesamiento del Lenguaje Natural, Revista n° 35, ISSN:1135-5948, pp. 449-450.&lt;br /&gt;
&lt;br /&gt;
R. Delmonte, 2005. Simulare la comprensione del linguaggio con VENSES. Presented at the workshop &amp;quot;Scienze Cognitive Applicate&amp;quot;, Facoltà di Psicologia dell&#039;Università Roma &amp;quot;La Sapienza&amp;quot;, 12/13-12-2005.&lt;br /&gt;
&lt;br /&gt;
M. Geffet and I. Dagan. 2004. Feature Vector Quality and Distributional Similarity. Proceedings of The 20th International Conference on Computational Linguistics (COLING).&lt;br /&gt;
&lt;br /&gt;
M. Geffet and I. Dagan. 2005. &amp;quot;The Distributional Inclusion Hypotheses and Lexical Entailment&amp;quot;, ACL 2005, Michigan, USA. &lt;br /&gt;
&lt;br /&gt;
O. Glickman, I. Dagan and M. Koppel. 2005. A Probabilistic Classification Approach for Lexical Textual Entailment, Twentieth National Conference on Artificial Intelligence (AAAI-05) &lt;br /&gt;
&lt;br /&gt;
O. Glickman, E. Shnarch and I. Dagan. 2006. Lexical Reference: a Semantic Matching Subtask. EMNLP 2006 (poster).&lt;br /&gt;
&lt;br /&gt;
A. Haghighi, A. Y. Ng, and C. D. Manning. 2005. Robust Textual Inference via Graph Matching. HLT-EMNLP 2005.&lt;br /&gt;
&lt;br /&gt;
S. Harabagiu and A. Hickl. 2006. Methods for Using Textual Entailment in Open-Domain Question Answering. COLING-ACL 2006&lt;br /&gt;
&lt;br /&gt;
J. Herrera, A. Peñas, F. Verdejo, 2006. Textual Entailment Recognition Based on Dependency Analysis and WordNet. MLCW 2005. LNAI 3944. 231-239.&lt;br /&gt;
&lt;br /&gt;
V. Jijkoun and M. de Rijke. 2006. Recognizing Textual Entailment: Is Lexical Similarity Enough?,  In: I. Dagan, F. Dalche, J. Quinonero Candela, B. Magnini, editors, Evaluating Predictive Uncertainty, Textual Entailment and Object Recognition Systems, LNAI 3944, pages 449-460, Springer Verlag.&lt;br /&gt;
&lt;br /&gt;
M. Kouylekov and B. Magnini. 2005. Tree Edit Distance for Textual Entailment. RANLP 2005.&lt;br /&gt;
&lt;br /&gt;
B. MacCartney, T. Grenager, M. de Marneffe, D. Cer and C. D. Manning. 2006. Learning to Recognize Features of Valid Textual Entailments. HLT-NAACL 2006.&lt;br /&gt;
&lt;br /&gt;
M. Makatchev, P. W. Jordan, K. Vanlehn. 2004. Abductive Theorem Proving for Analyzing Student Explanations to Guide Feedback in Intelligent Tutoring Systems. Journal of Automated Reasoning, 32(3).   &lt;br /&gt;
&lt;br /&gt;
S. Mirkin, I. Dagan, M. Geffet. 2006. Integrating Pattern-based and Distributional Similarity Methods for Lexical Entailment Acquisition. COLING-ACL 2006 (poster) &lt;br /&gt;
&lt;br /&gt;
C. Monz and M. de Rijke. 2001. Light-Weight Entailment Checking for Computational Semantics,  In: P. Blackburn and M. Kohlhase, editors, International workshop on Inference in Computational Semantics (ICoS-3).&lt;br /&gt;
&lt;br /&gt;
R. Nairn, C. Condoravdi, and L. Karttunen. 2006. Computing relative polarity for textual inference. International workshop on Inference in Computational Semantics (ICoS-5).&lt;br /&gt;
&lt;br /&gt;
M. T. Pazienza, M. Pennacchiotti and F. M. Zanzotto . 2006. Discovering asymmetric entailment relations between verbs using selectional preferences. COLING-ACL 2006&lt;br /&gt;
&lt;br /&gt;
V. Pekar. 2006. Acquisition of Verb Entailment from Text. HLT-NAACL 2006&lt;br /&gt;
&lt;br /&gt;
A. Peñas, A. Rodrigo, F. Verdejo. 2006. SPARTE, a Test Suite for Recognising Textual Entailment in Spanish. Computational Linguistics and Intelligent Text Processing, CICLing 2006. LNCS 3878. 275-286&lt;br /&gt;
&lt;br /&gt;
R. Raina, A. Y. Ng, and C. Manning. 2005. Robust textual inference via learning and abductive reasoning. Twentieth National Conference on Artificial Intelligence (AAAI-05) &lt;br /&gt;
&lt;br /&gt;
L. Romano, M. Kouylekov, I. Szpektor, I. Dagan and A. Lavelli. 2006. Investigating a Generic Paraphrase-based Approach for Relation Extraction. EACL 2006. &lt;br /&gt;
&lt;br /&gt;
V. Rus, A. Graesser and K. Desai. 2005. Lexico-Syntactic Subsumption for Textual Entailment. RANLP 2005.&lt;br /&gt;
&lt;br /&gt;
R. Snow, L. Vanderwende and A. Menezes. 2006. Effectively Using Syntax for Recognizing False Entailment. HLT-NAACL 2006.&lt;br /&gt;
&lt;br /&gt;
M. Tatu and D. Moldovan. 2005. A Semantic Approach to Recognizing Textual Entailment. HLT-EMNLP 2005.&lt;br /&gt;
&lt;br /&gt;
M. Tatu and D. Moldovan. 2006. A Logic-based Semantic Approach to Recognizing Textual Entailment. COLING-ACL 2006 (poster). &lt;br /&gt;
&lt;br /&gt;
F. M. Zanzotto and A. Moschitti. 2006. Automatic learning of textual entailments with cross-pair similarities. COLING-ACL 2006&lt;br /&gt;
&lt;br /&gt;
Y. Mehdad, B. Magnini. 2009. A Word Overlap Baseline for the Recognizing Textual Entailment Task. Available at http://hlt.fbk.eu/sites/hlt.fbk.eu/files/baseline.pdf&lt;br /&gt;
&lt;br /&gt;
Rui Wang and Günter Neumann. 2007. Recognizing Textual Entailment Using a Subsequence Kernel Method. AAAI-07.&lt;br /&gt;
&lt;br /&gt;
Rui Wang and Yajing Zhang. 2008. Recognizing Textual Entailment with Temporal Expressions in Natural Language Texts. In Proceedings of the IEEE International Workshop on Semantic Computing and Applications (IWSCA-2008).&lt;br /&gt;
&lt;br /&gt;
Rui Wang and Günter Neumann. 2009. An Accuracy-Oriented Divide-and-Conquer Strategy for Recognizing Textual Entailment. TAC 2008 Workshop - RTE-4.&lt;br /&gt;
&lt;br /&gt;
Georgiana Dinu and Rui Wang. 2009. Inference Rules and their Application to Recognizing Textual Entailment. EACL-09.&lt;br /&gt;
&lt;br /&gt;
Rui Wang and Yi Zhang. 2009. Recognizing Textual Relatedness with Predicate-Argument Structures. EMNLP 2009.&lt;br /&gt;
&lt;br /&gt;
Shachar Mirkin, Ido Dagan, Eyal Shnarch. 2009. Evaluating the Inferential Utility of Lexical-Semantic Resources. EACL-09. [http://www.cs.biu.ac.il/~mirkins/publications/Inferential-Utility_Mirkin-DS_EACL09.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
Shachar Mirkin, Lucia Specia, Nicola Cancedda, Ido Dagan, Marc Dymetman and Idan Szpektor. 2009. Source-Language Entailment Modeling for Translating Unknown Terms. ACL-09. [http://www.cs.biu.ac.il/~mirkins/publications/TE4MT_ACL09_Mirkin-Specia-etal.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
Mark Sammons, Vinod Vydiswaran, and Dan Roth. 2010. Ask not what Textual Entailment can do for you.... ACL-10  [http://l2r.cs.uiuc.edu/~danr/Papers/SammonsVyRo10.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Journal papers ===&lt;br /&gt;
&lt;br /&gt;
I. Androutsopoulos and  P. Malakasiotis. 2010. A Survey of Paraphrasing and Textual Entailment Methods. Journal of Artificial Intelligence Research, vol. 38, pp. 135-187. [http://www.jair.org/papers/paper2985.html]&lt;br /&gt;
&lt;br /&gt;
[[Category:Textual Entailment Portal]]&lt;/div&gt;</summary>
		<author><name>Goldan55</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/aclwiki/index.php?title=Textual_Entailment_Resource_Pool&amp;diff=8017</id>
		<title>Textual Entailment Resource Pool</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/index.php?title=Textual_Entailment_Resource_Pool&amp;diff=8017"/>
		<updated>2010-06-10T09:56:36Z</updated>

		<summary type="html">&lt;p&gt;Goldan55: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Textual Entailment|Textual entailment]] systems rely on many different types of [[Natural Language Processing|NLP]] resources, including term banks, paraphrase lists, parsers, named-entity recognizers, etc. With so many resources being continuously released and improved, it can be difficult to know which particular resource to use when developing a system.&lt;br /&gt;
&lt;br /&gt;
In response, the [[Recognizing Textual Entailment|Recognizing Textual Entailment (RTE)]] shared task community initiated a new activity for building this &#039;&#039;Textual Entailment Resource Pool&#039;&#039;. RTE participants and any other member of the NLP community are encouraged to contribute to the pool.&lt;br /&gt;
&lt;br /&gt;
In an effort to determine the relative impact of the resources, RTE participants are strongly encouraged to report, whenever possible, the contribution to the overall performance of each utilized resource. Formal qualitative and quantitative results should be included in a separate section of the system report as well as posted on the talk pages of this Textual Entailment Resource Pool.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Adding&#039;&#039;&#039; a new resource is very easy. See how to &#039;&#039;&#039;use existing templates&#039;&#039;&#039; to do this in [[Help:Using Templates]].&lt;br /&gt;
&lt;br /&gt;
== Complete RTE Systems ==&lt;br /&gt;
&lt;br /&gt;
* [http://project.cgm.unive.it/html/venses.html VENSES] (from Ca&#039; Foscari University of Venice, Italy)&lt;br /&gt;
* [http://svn.ask.it.usyd.edu.au/trac/candc/wiki/nutcracker Nutcracker] (available for download)&lt;br /&gt;
* [http://l2r.cs.uiuc.edu/~cogcomp/kindleDemo.php Entailment Demo] (from the University of Illinois at Urbana-Champaign)&lt;br /&gt;
* [http://edits.fbk.eu/ EDITS - Edit Distance Textual Entailment Suite] (open source software developed by [http://hlt.fbk.eu/ Human Language Technology (HLT) group at FBK-Irst])&lt;br /&gt;
&lt;br /&gt;
== RTE data sets ==&lt;br /&gt;
* [http://www.coli.uni-saarland.de/projects/salsa/fate FrameNet manually annotated RTE 2006 Test Set.] Provided by  [http://www.coli.uni-saarland.de/projects/salsa/ SALSA project, Saarland University.]&lt;br /&gt;
* [http://www.cs.biu.ac.il/~nlp/files/RTE_2006_Aligned.zip Manually Word Aligned RTE 2006 Data Sets.] Provided by  [http://research.microsoft.com/nlp/ the Natural Language Processing Group, Microsoft Research.]&lt;br /&gt;
* [http://www-nlp.stanford.edu/projects/contradiction/ RTE data sets annotated for a 3-way decision: entails, contradicts, unknown.] Provided by Stanford NLP Group.&lt;br /&gt;
* [http://www.cs.utexas.edu/~pclark/bpi-test-suite/ BPI RTE data set] - 250 pairs, focusing on world knowledge. Provided jointly by [http://www.boeing.com/phantom/math_ct/index.html Boeing], [http://wordnet.cs.princeton.edu/ Princeton], and [http://www.isi.edu ISI].&lt;br /&gt;
* [http://hlt.fbk.eu/en/Technology/TE_Specialized_Data Textual Entailment Specialized Data Sets] - 90 RTE-5 Test Set pairs annotated with linguistic phenomena + 203 monothematic pairs (i.e. pairs where only one linguistic phenomenon is relevant to the entailment relation) created from the 90 annotated pairs. Provided jointly by [http://hlt.fbk.eu/en/home FBK-Irst], and [http://www.celct.it/ CELCT].&lt;br /&gt;
* [http://www.nist.gov/tac/data/ RTE-5 Search Pilot Data Set annotated with anaphora and coreference information] - RTE-5 Search Data Set annotated with anaphora/coreference information + Augmented RTE-5 Search Data Set, where all the referring expressions which need to be resolved in the entailing sentences are substituted by explicit expressions on the basis of the anaphora/coreference annotation. Provided by [http://www.celct.it/ CELCT] and distributed by [http://www.nist.gov/index.html NIST] at the [http://www.nist.gov/tac/data/ Past TAC Data] web page (2009 Search Pilot, annotated test/dev data).&lt;br /&gt;
&lt;br /&gt;
== Knowledge Resources ==&lt;br /&gt;
[[RTE Knowledge Resources]]&lt;br /&gt;
&lt;br /&gt;
== Tools ==&lt;br /&gt;
&lt;br /&gt;
=== Parsers ===&lt;br /&gt;
* [http://svn.ask.it.usyd.edu.au/trac/candc C&amp;amp;C parser for Combinatory Categorial Grammar]&lt;br /&gt;
* [[Minipar]]&lt;br /&gt;
* [http://l2r.cs.uiuc.edu/~cogcomp/asoftware.php?skey=SP Shallow Parser] - from the University of Illinois at Urbana-Champaign, see a [http://l2r.cs.uiuc.edu/~cogcomp/shallow_parse_demo.php web demo] of this tool&lt;br /&gt;
&lt;br /&gt;
=== Role Labelling ===&lt;br /&gt;
* [http://cemantix.org/assert ASSERT]&lt;br /&gt;
* [http://www.coli.uni-saarland.de/projects/salsa/shal/ Shalmaneser]&lt;br /&gt;
* [http://l2r.cs.uiuc.edu/~cogcomp/asoftware.php?skey=SRL Semantic Role Labeler] - from the University of Illinois at Urbana-Champaign, see a [http://l2r.cs.uiuc.edu/~cogcomp/srl-demo.php web demo] of this tool&lt;br /&gt;
&lt;br /&gt;
=== Entity Recognition Tools ===&lt;br /&gt;
* [http://l2r.cs.uiuc.edu/~cogcomp/asoftware.php?skey=NE Illinois Named Entity Tagger] - see a [http://l2r.cs.uiuc.edu/~cogcomp/ne_demo.php web demo] of this tool&lt;br /&gt;
* [http://l2r.cs.uiuc.edu/~cogcomp/asoftware.php?skey=CORANKER Illinois Multi-lingual Named Entity Discovery Tool] - see a [http://l2r.cs.uiuc.edu/~cogcomp/ne_matcher_demo.php web demo] of this tool&lt;br /&gt;
&lt;br /&gt;
=== Corpus Readers ===&lt;br /&gt;
* [http://nltk.org NLTK] provides a corpus reader for the data from RTE Challenges 1, 2, and 3 - see the [http://nltk.org/doc/guides/corpus.html#rte Corpus Readers] Guide for more information.&lt;br /&gt;
&lt;br /&gt;
=== Related Libraries ===&lt;br /&gt;
&lt;br /&gt;
* [http://www.semantilog.org/pypes.html PyPES] general purpose library containing evaluation environment for RTE and McPIET text inference engine based on the ERG (English Resource Grammar)&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
* [http://homepages.inf.ed.ac.uk/jbos/rte/ Textual Entailment site by Johan Bos]&lt;br /&gt;
* [http://ai-nlp.info.uniroma2.it/te/ Textual Entailment at the University of Rome &amp;quot;Tor Vergata&amp;quot;]&lt;br /&gt;
* [http://l2r.cs.uiuc.edu/~cogcomp/entailment-module-demos.php Illinois Textual Entailment System Component demos]&lt;br /&gt;
[[Category:Textual Entailment Portal]]&lt;/div&gt;</summary>
		<author><name>Goldan55</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/aclwiki/index.php?title=Textual_Entailment_Portal&amp;diff=8016</id>
		<title>Textual Entailment Portal</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/index.php?title=Textual_Entailment_Portal&amp;diff=8016"/>
		<updated>2010-06-10T09:54:29Z</updated>

		<summary type="html">&lt;p&gt;Goldan55: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page serves as a community portal for everything related to Textual Entailment. &lt;br /&gt;
&lt;br /&gt;
== Textual Entailment Resource Pool ==&lt;br /&gt;
[[Textual Entailment Resource Pool]]&lt;br /&gt;
&lt;br /&gt;
== PASCAL Challenges ==&lt;br /&gt;
&lt;br /&gt;
[[Recognizing Textual Entailment|Recognizing Textual Entailment (RTE)]] has been proposed recently as a generic task that captures major semantic inference needs across many natural language processing applications.&lt;br /&gt;
&lt;br /&gt;
== References on Textual Entailment ==&lt;br /&gt;
&#039;&#039;You are welcome to update this list with new papers on textual entailment (please keep the new references in the same format, and  maintain the alphabetical order).&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Workshops and Tutorials ===&lt;br /&gt;
&lt;br /&gt;
[http://l2r.cs.uiuc.edu/~cogcomp/presentations/RTE_NAACL_2010.zip NAACL 2010 Tutorial on Recognizing Textual Entailment, 2010]&lt;br /&gt;
&lt;br /&gt;
[http://acl.ldc.upenn.edu/W/W05/#W05-1200 ACL 2005 Workshop on Empirical Modeling of Semantic Equivalence and Entailment, 2005]&lt;br /&gt;
&lt;br /&gt;
[http://www.pascal-network.org/Challenges/RTE/ First PASCAL Recognising Textual Entailment Challenge (RTE-1), 2005]&lt;br /&gt;
&lt;br /&gt;
[http://www.pascal-network.org/Challenges/RTE2/ Second PASCAL Recognising Textual Entailment Challenge (RTE-2), 2006]&lt;br /&gt;
&lt;br /&gt;
[http://nlp.uned.es/QA/ave Answer Validation Exercise at CLEF 2006 (AVE 2006)]&lt;br /&gt;
&lt;br /&gt;
[http://www.pascal-network.org/Challenges/RTE3/ Third PASCAL Recognising Textual Entailment Challenge (RTE-3), 2007]&lt;br /&gt;
&lt;br /&gt;
=== Papers in recent conferences and other workshops ===&lt;br /&gt;
&lt;br /&gt;
L. Bentivogli, I. Dagan, H. Dang, D. Giampiccolo, M. Lo Leggio, and B. Magnini. 2009. Considering Discourse References in Textual Entailment Annotation. 5th International Conference on Generative Approaches to the Lexicon (GL 2009). [http://hlt.fbk.eu/sites/hlt.fbk.eu/files/GL2009_Bentivogli-et-al.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
J. Bos &amp;amp; K. Markert. 2005. Recognising Textual Entailment with Logical Inference. Proceedings of EMNLP 2005.&lt;br /&gt;
&lt;br /&gt;
R. Braz, R. Girju, V. Punyakanok, D. Roth, and M. Sammons. 2005. An Inference Model for Semantic Entailment in Natural Language. Twentieth National Conference on Artificial Intelligence (AAAI-05) &lt;br /&gt;
&lt;br /&gt;
R. Braz, R. Girju, V. Punyakanok, D. Roth, and M. Sammons. 2005. Knowledge Representation for Semantic Entailment and Question-Answering. IJCAI-05 Workshop on Knowledge and Reasoning for Answering Questions. &lt;br /&gt;
&lt;br /&gt;
C. Corley, A. Csomai and R. Mihalcea. 2005. Text Semantic Similarity, with Applications. RANLP-05.&lt;br /&gt;
&lt;br /&gt;
I. Dagan and O. Glickman. 2004. Probabilistic textual entailment: Generic applied modeling of language variability. In PASCAL Workshop on Learning Methods for Text Understanding and Mining, Grenoble.&lt;br /&gt;
&lt;br /&gt;
I. Dagan, O. Glickman, A. Gliozzo, E. Marmorshtein and C. Strapparava. 2006. Direct Word Sense Matching for Lexical Substitution. COLING-ACL 2006&lt;br /&gt;
&lt;br /&gt;
R. Delmonte, 2005. VENSES - a Linguistically-Based System for Semantic Evaluation, PLN, Procesamiento del Lenguaje Natural, Revista n° 35, ISSN:1135-5948, pp. 449-450.&lt;br /&gt;
&lt;br /&gt;
R. Delmonte, 2005. Simulare la comprensione del linguaggio con VENSES. Presented at the Workshop &amp;quot;Scienze Cognitive Applicate&amp;quot;, Facoltà di Psicologia dell&#039;Università Roma &amp;quot;La Sapienza&amp;quot;, 12/13-12-2005.&lt;br /&gt;
&lt;br /&gt;
M. Geffet and I. Dagan. 2004. Feature Vector Quality and Distributional Similarity. Proceedings of The 20th International Conference on Computational Linguistics (COLING).&lt;br /&gt;
&lt;br /&gt;
M. Geffet and I. Dagan. 2005. &amp;quot;The Distributional Inclusion Hypotheses and Lexical Entailment&amp;quot;, ACL 2005, Michigan, USA. &lt;br /&gt;
&lt;br /&gt;
O. Glickman, I. Dagan and M. Koppel. 2005. A Probabilistic Classification Approach for Lexical Textual Entailment, Twentieth National Conference on Artificial Intelligence (AAAI-05) &lt;br /&gt;
&lt;br /&gt;
O. Glickman, E. Shnarch and I. Dagan. 2006. Lexical Reference: a Semantic Matching Subtask. EMNLP 2006 (poster).&lt;br /&gt;
&lt;br /&gt;
A. Haghighi, A. Y. Ng, and C. D. Manning. 2005. Robust Textual Inference via Graph Matching. HLT-EMNLP 2005.&lt;br /&gt;
&lt;br /&gt;
S. Harabagiu and A. Hickl. 2006. Methods for Using Textual Entailment in Open-Domain Question Answering. COLING-ACL 2006&lt;br /&gt;
&lt;br /&gt;
J. Herrera, A. Peñas, F. Verdejo, 2006. Textual Entailment Recognition Based on Dependency Analysis and WordNet. MLCW 2005. LNAI 3944. 231-239.&lt;br /&gt;
&lt;br /&gt;
V. Jijkoun and M. de Rijke. 2006. Recognizing Textual Entailment: Is Lexical Similarity Enough?,  In: I. Dagan, F. Dalche, J. Quinonero Candela, B. Magnini, editors, Evaluating Predictive Uncertainty, Textual Entailment and Object Recognition Systems, LNAI 3944, pages 449-460, Springer Verlag.&lt;br /&gt;
&lt;br /&gt;
M. Kouylekov and B. Magnini. 2005. Tree Edit Distance for Textual Entailment. RANLP 2005.&lt;br /&gt;
&lt;br /&gt;
B. MacCartney, T. Grenager, M. de Marneffe, D. Cer and C. D. Manning. 2006. Learning to Recognize Features of Valid Textual Entailments. HLT-NAACL 2006.&lt;br /&gt;
&lt;br /&gt;
M. Makatchev, P. W. Jordan, K. Vanlehn. 2004. Abductive Theorem Proving for Analyzing Student Explanations to Guide Feedback in Intelligent Tutoring Systems. Journal of Automated Reasoning, 32(3).   &lt;br /&gt;
&lt;br /&gt;
S. Mirkin, I. Dagan, M. Geffet. 2006. Integrating Pattern-based and Distributional Similarity Methods for Lexical Entailment Acquisition. COLING-ACL 2006 (poster) &lt;br /&gt;
&lt;br /&gt;
C. Monz and M. de Rijke. 2001. Light-Weight Entailment Checking for Computational Semantics,  In: P. Blackburn and M. Kohlhase, editors, International workshop on Inference in Computational Semantics (ICoS-3).&lt;br /&gt;
&lt;br /&gt;
R. Nairn, C. Condoravdi, and L. Karttunen. 2006. Computing relative polarity for textual inference. International workshop on Inference in Computational Semantics (ICoS-5).&lt;br /&gt;
&lt;br /&gt;
M. T. Pazienza, M. Pennacchiotti and F. M. Zanzotto. 2006. Discovering asymmetric entailment relations between verbs using selectional preferences. COLING-ACL 2006.&lt;br /&gt;
&lt;br /&gt;
V. Pekar. 2006. Acquisition of Verb Entailment from Text. HLT-NAACL 2006&lt;br /&gt;
&lt;br /&gt;
A. Peñas, A. Rodrigo, F. Verdejo. 2006. SPARTE, a Test Suite for Recognising Textual Entailment in Spanish. Computational Linguistics and Intelligent Text Processing, CICLing 2006. LNCS 3878. 275-286&lt;br /&gt;
&lt;br /&gt;
R. Raina, A. Y. Ng, and C. Manning. 2005. Robust textual inference via learning and abductive reasoning. Twentieth National Conference on Artificial Intelligence (AAAI-05) &lt;br /&gt;
&lt;br /&gt;
L. Romano, M. Kouylekov, I. Szpektor, I. Dagan and A. Lavelli. 2006. Investigating a Generic Paraphrase-based Approach for Relation Extraction. EACL 2006. &lt;br /&gt;
&lt;br /&gt;
V. Rus, A. Graesser and K. Desai. 2005. Lexico-Syntactic Subsumption for Textual Entailment. RANLP 2005.&lt;br /&gt;
&lt;br /&gt;
R. Snow, L. Vanderwende and A. Menezes. 2006. Effectively Using Syntax for Recognizing False Entailment. HLT-NAACL 2006.&lt;br /&gt;
&lt;br /&gt;
M. Tatu and D. Moldovan. 2005. A Semantic Approach to Recognizing Textual Entailment. HLT-EMNLP 2005.&lt;br /&gt;
&lt;br /&gt;
M. Tatu and D. Moldovan. 2006. A Logic-based Semantic Approach to Recognizing Textual Entailment. COLING-ACL 2006 (poster). &lt;br /&gt;
&lt;br /&gt;
F. M. Zanzotto and A. Moschitti. 2006. Automatic learning of textual entailments with cross-pair similarities. COLING-ACL 2006&lt;br /&gt;
&lt;br /&gt;
Y. Mehdad, B. Magnini. 2009. A Word Overlap Baseline for the Recognizing Textual Entailment Task. Available at http://hlt.fbk.eu/sites/hlt.fbk.eu/files/baseline.pdf&lt;br /&gt;
&lt;br /&gt;
Rui Wang and Günter Neumann. 2007. Recognizing Textual Entailment Using a Subsequence Kernel Method. AAAI-07.&lt;br /&gt;
&lt;br /&gt;
Rui Wang and Yajing Zhang. 2008. Recognizing Textual Entailment with Temporal Expressions in Natural Language Texts. In Proceedings of the IEEE International Workshop on Semantic Computing and Applications (IWSCA-2008).&lt;br /&gt;
&lt;br /&gt;
Rui Wang and Günter Neumann. 2009. An Accuracy-Oriented Divide-and-Conquer Strategy for Recognizing Textual Entailment. TAC 2008 Workshop - RTE-4.&lt;br /&gt;
&lt;br /&gt;
Georgiana Dinu and Rui Wang. 2009. Inference Rules and their Application to Recognizing Textual Entailment. EACL-09.&lt;br /&gt;
&lt;br /&gt;
Rui Wang and Yi Zhang. 2009. Recognizing Textual Relatedness with Predicate-Argument Structures. EMNLP 2009.&lt;br /&gt;
&lt;br /&gt;
Shachar Mirkin, Ido Dagan, Eyal Shnarch. 2009. Evaluating the Inferential Utility of Lexical-Semantic Resources. EACL-09. [http://www.cs.biu.ac.il/~mirkins/publications/Inferential-Utility_Mirkin-DS_EACL09.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
Shachar Mirkin, Lucia Specia, Nicola Cancedda, Ido Dagan, Marc Dymetman and Idan Szpektor. 2009. Source-Language Entailment Modeling for Translating Unknown Terms. ACL-09. [http://www.cs.biu.ac.il/~mirkins/publications/TE4MT_ACL09_Mirkin-Specia-etal.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
=== Journal papers ===&lt;br /&gt;
&lt;br /&gt;
I. Androutsopoulos and  P. Malakasiotis. 2010. A Survey of Paraphrasing and Textual Entailment Methods. Journal of Artificial Intelligence Research, vol. 38, pp. 135-187. [http://www.jair.org/papers/paper2985.html]&lt;br /&gt;
&lt;br /&gt;
[[Category:Textual Entailment Portal]]&lt;/div&gt;</summary>
		<author><name>Goldan55</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/aclwiki/index.php?title=Textual_Entailment_Portal&amp;diff=8015</id>
		<title>Textual Entailment Portal</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/index.php?title=Textual_Entailment_Portal&amp;diff=8015"/>
		<updated>2010-06-10T09:53:19Z</updated>

		<summary type="html">&lt;p&gt;Goldan55: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page serves as a community portal for everything related to Textual Entailment. &lt;br /&gt;
&lt;br /&gt;
== Textual Entailment Resource Pool ==&lt;br /&gt;
[[Textual Entailment Resource Pool]]&lt;br /&gt;
&lt;br /&gt;
== PASCAL Challenges ==&lt;br /&gt;
&lt;br /&gt;
[[Recognizing Textual Entailment|Recognizing Textual Entailment (RTE)]] has been proposed recently as a generic task that captures major semantic inference needs across many natural language processing applications.&lt;br /&gt;
&lt;br /&gt;
== References on Textual Entailment ==&lt;br /&gt;
&#039;&#039;You are welcome to update this list with new papers on textual entailment (please keep the new references in the same format, and  maintain the alphabetical order).&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Workshops and Tutorials ===&lt;br /&gt;
&lt;br /&gt;
[http://l2r.cs.uiuc.edu/~cogcomp/presentations/RTE_NAACL_2010.zip NAACL 2010 Tutorial on Recognizing Textual Entailment, 2010]&lt;br /&gt;
&lt;br /&gt;
[http://acl.ldc.upenn.edu/W/W05/#W05-1200 ACL 2005 Workshop on Empirical Modeling of Semantic Equivalence and Entailment, 2005]&lt;br /&gt;
&lt;br /&gt;
[http://www.pascal-network.org/Challenges/RTE/ First PASCAL Recognising Textual Entailment Challenge (RTE-1), 2005]&lt;br /&gt;
&lt;br /&gt;
[http://www.pascal-network.org/Challenges/RTE2/ Second PASCAL Recognising Textual Entailment Challenge (RTE-2), 2006]&lt;br /&gt;
&lt;br /&gt;
[http://nlp.uned.es/QA/ave Answer Validation Exercise at CLEF 2006 (AVE 2006)]&lt;br /&gt;
&lt;br /&gt;
[http://www.pascal-network.org/Challenges/RTE3/ Third PASCAL Recognising Textual Entailment Challenge (RTE-3), 2007]&lt;br /&gt;
&lt;br /&gt;
=== Papers in recent conferences and other workshops ===&lt;br /&gt;
&lt;br /&gt;
L. Bentivogli, I. Dagan, H. Dang, D. Giampiccolo, M. Lo Leggio, and B. Magnini. 2009. Considering Discourse References in Textual Entailment Annotation. 5th International Conference on Generative Approaches to the Lexicon (GL 2009). [http://hlt.fbk.eu/sites/hlt.fbk.eu/files/GL2009_Bentivogli-et-al.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
J. Bos &amp;amp; K. Markert. 2005. Recognising Textual Entailment with Logical Inference. Proceedings of EMNLP 2005.&lt;br /&gt;
&lt;br /&gt;
R. Braz, R. Girju, V. Punyakanok, D. Roth, and M. Sammons. 2005. An Inference Model for Semantic Entailment in Natural Language. Twentieth National Conference on Artificial Intelligence (AAAI-05) &lt;br /&gt;
&lt;br /&gt;
R. Braz, R. Girju, V. Punyakanok, D. Roth, and M. Sammons. 2005. Knowledge Representation for Semantic Entailment and Question-Answering. IJCAI-05 Workshop on Knowledge and Reasoning for Answering Questions. &lt;br /&gt;
&lt;br /&gt;
C. Corley, A. Csomai and R. Mihalcea. 2005. Text Semantic Similarity, with Applications. RANLP-05.&lt;br /&gt;
&lt;br /&gt;
I. Dagan and O. Glickman. 2004. Probabilistic textual entailment: Generic applied modeling of language variability. In PASCAL Workshop on Learning Methods for Text Understanding and Mining, Grenoble.&lt;br /&gt;
&lt;br /&gt;
I. Dagan, O. Glickman, A. Gliozzo, E. Marmorshtein and C. Strapparava. 2006. Direct Word Sense Matching for Lexical Substitution. COLING-ACL 2006&lt;br /&gt;
&lt;br /&gt;
R. Delmonte, 2005. VENSES - a Linguistically-Based System for Semantic Evaluation, PLN, Procesamiento del Lenguaje Natural, Revista n° 35, ISSN:1135-5948, pp. 449-450.&lt;br /&gt;
&lt;br /&gt;
R. Delmonte, 2005. Simulare la comprensione del linguaggio con VENSES. Presented at the Workshop &amp;quot;Scienze Cognitive Applicate&amp;quot;, Facoltà di Psicologia dell&#039;Università Roma &amp;quot;La Sapienza&amp;quot;, 12/13-12-2005.&lt;br /&gt;
&lt;br /&gt;
M. Geffet and I. Dagan. 2004. Feature Vector Quality and Distributional Similarity. Proceedings of The 20th International Conference on Computational Linguistics (COLING).&lt;br /&gt;
&lt;br /&gt;
M. Geffet and I. Dagan. 2005. &amp;quot;The Distributional Inclusion Hypotheses and Lexical Entailment&amp;quot;, ACL 2005, Michigan, USA. &lt;br /&gt;
&lt;br /&gt;
O. Glickman, I. Dagan and M. Koppel. 2005. A Probabilistic Classification Approach for Lexical Textual Entailment, Twentieth National Conference on Artificial Intelligence (AAAI-05) &lt;br /&gt;
&lt;br /&gt;
O. Glickman, E. Shnarch and I. Dagan. 2006. Lexical Reference: a Semantic Matching Subtask. EMNLP 2006 (poster).&lt;br /&gt;
&lt;br /&gt;
A. Haghighi, A. Y. Ng, and C. D. Manning. 2005. Robust Textual Inference via Graph Matching. HLT-EMNLP 2005.&lt;br /&gt;
&lt;br /&gt;
S. Harabagiu and A. Hickl. 2006. Methods for Using Textual Entailment in Open-Domain Question Answering. COLING-ACL 2006&lt;br /&gt;
&lt;br /&gt;
J. Herrera, A. Peñas, F. Verdejo, 2006. Textual Entailment Recognition Based on Dependency Analysis and WordNet. MLCW 2005. LNAI 3944. 231-239.&lt;br /&gt;
&lt;br /&gt;
V. Jijkoun and M. de Rijke. 2006. Recognizing Textual Entailment: Is Lexical Similarity Enough?,  In: I. Dagan, F. Dalche, J. Quinonero Candela, B. Magnini, editors, Evaluating Predictive Uncertainty, Textual Entailment and Object Recognition Systems, LNAI 3944, pages 449-460, Springer Verlag.&lt;br /&gt;
&lt;br /&gt;
M. Kouylekov and B. Magnini. 2005. Tree Edit Distance for Textual Entailment. RANLP 2005.&lt;br /&gt;
&lt;br /&gt;
B. MacCartney, T. Grenager, M. de Marneffe, D. Cer and C. D. Manning. 2006. Learning to Recognize Features of Valid Textual Entailments. HLT-NAACL 2006.&lt;br /&gt;
&lt;br /&gt;
M. Makatchev, P. W. Jordan, K. Vanlehn. 2004. Abductive Theorem Proving for Analyzing Student Explanations to Guide Feedback in Intelligent Tutoring Systems. Journal of Automated Reasoning, 32(3).   &lt;br /&gt;
&lt;br /&gt;
S. Mirkin, I. Dagan, M. Geffet. 2006. Integrating Pattern-based and Distributional Similarity Methods for Lexical Entailment Acquisition. COLING-ACL 2006 (poster) &lt;br /&gt;
&lt;br /&gt;
C. Monz and M. de Rijke. 2001. Light-Weight Entailment Checking for Computational Semantics,  In: P. Blackburn and M. Kohlhase, editors, International workshop on Inference in Computational Semantics (ICoS-3).&lt;br /&gt;
&lt;br /&gt;
R. Nairn, C. Condoravdi, and L. Karttunen. 2006. Computing relative polarity for textual inference. International workshop on Inference in Computational Semantics (ICoS-5).&lt;br /&gt;
&lt;br /&gt;
M. T. Pazienza, M. Pennacchiotti and F. M. Zanzotto. 2006. Discovering asymmetric entailment relations between verbs using selectional preferences. COLING-ACL 2006.&lt;br /&gt;
&lt;br /&gt;
V. Pekar. 2006. Acquisition of Verb Entailment from Text. HLT-NAACL 2006&lt;br /&gt;
&lt;br /&gt;
A. Peñas, A. Rodrigo, F. Verdejo. 2006. SPARTE, a Test Suite for Recognising Textual Entailment in Spanish. Computational Linguistics and Intelligent Text Processing, CICLing 2006. LNCS 3878. 275-286&lt;br /&gt;
&lt;br /&gt;
R. Raina, A. Y. Ng, and C. Manning. 2005. Robust textual inference via learning and abductive reasoning. Twentieth National Conference on Artificial Intelligence (AAAI-05) &lt;br /&gt;
&lt;br /&gt;
L. Romano, M. Kouylekov, I. Szpektor, I. Dagan and A. Lavelli. 2006. Investigating a Generic Paraphrase-based Approach for Relation Extraction. EACL 2006. &lt;br /&gt;
&lt;br /&gt;
V. Rus, A. Graesser and K. Desai. 2005. Lexico-Syntactic Subsumption for Textual Entailment. RANLP 2005.&lt;br /&gt;
&lt;br /&gt;
R. Snow, L. Vanderwende and A. Menezes. 2006. Effectively Using Syntax for Recognizing False Entailment. HLT-NAACL 2006.&lt;br /&gt;
&lt;br /&gt;
M. Tatu and D. Moldovan. 2005. A Semantic Approach to Recognizing Textual Entailment. HLT-EMNLP 2005.&lt;br /&gt;
&lt;br /&gt;
M. Tatu and D. Moldovan. 2006. A Logic-based Semantic Approach to Recognizing Textual Entailment. COLING-ACL 2006 (poster). &lt;br /&gt;
&lt;br /&gt;
F. M. Zanzotto and A. Moschitti. 2006. Automatic learning of textual entailments with cross-pair similarities. COLING-ACL 2006&lt;br /&gt;
&lt;br /&gt;
Y. Mehdad, B. Magnini. 2009. A Word Overlap Baseline for the Recognizing Textual Entailment Task. Available at http://hlt.fbk.eu/sites/hlt.fbk.eu/files/baseline.pdf&lt;br /&gt;
&lt;br /&gt;
Rui Wang and Günter Neumann. 2007. Recognizing Textual Entailment Using a Subsequence Kernel Method. AAAI-07.&lt;br /&gt;
&lt;br /&gt;
Rui Wang and Yajing Zhang. 2008. Recognizing Textual Entailment with Temporal Expressions in Natural Language Texts. In Proceedings of the IEEE International Workshop on Semantic Computing and Applications (IWSCA-2008).&lt;br /&gt;
&lt;br /&gt;
Rui Wang and Günter Neumann. 2009. An Accuracy-Oriented Divide-and-Conquer Strategy for Recognizing Textual Entailment. TAC 2008 Workshop - RTE-4.&lt;br /&gt;
&lt;br /&gt;
Georgiana Dinu and Rui Wang. 2009. Inference Rules and their Application to Recognizing Textual Entailment. EACL-09.&lt;br /&gt;
&lt;br /&gt;
Rui Wang and Yi Zhang. 2009. Recognizing Textual Relatedness with Predicate-Argument Structures. EMNLP 2009.&lt;br /&gt;
&lt;br /&gt;
Shachar Mirkin, Ido Dagan, Eyal Shnarch. 2009. Evaluating the Inferential Utility of Lexical-Semantic Resources. EACL-09. [http://www.cs.biu.ac.il/~mirkins/publications/Inferential-Utility_Mirkin-DS_EACL09.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
Shachar Mirkin, Lucia Specia, Nicola Cancedda, Ido Dagan, Marc Dymetman and Idan Szpektor. 2009. Source-Language Entailment Modeling for Translating Unknown Terms. ACL-09. [http://www.cs.biu.ac.il/~mirkins/publications/TE4MT_ACL09_Mirkin-Specia-etal.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
=== Journal papers ===&lt;br /&gt;
&lt;br /&gt;
I. Androutsopoulos and  P. Malakasiotis. 2010. A Survey of Paraphrasing and Textual Entailment Methods. Journal of Artificial Intelligence Research, vol. 38, pp. 135-187. [http://www.jair.org/papers/paper2985.html]&lt;br /&gt;
&lt;br /&gt;
[[Category:Textual Entailment Portal]]&lt;/div&gt;</summary>
		<author><name>Goldan55</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/aclwiki/index.php?title=Named_entity_recognizers&amp;diff=7867</id>
		<title>Named entity recognizers</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/index.php?title=Named_entity_recognizers&amp;diff=7867"/>
		<updated>2010-04-04T01:34:59Z</updated>

		<summary type="html">&lt;p&gt;Goldan55: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;[[Software]] - Named entity recognizers&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;!-- Please keep this list in alphabetical order --&amp;gt;&lt;br /&gt;
*[http://balie.sourceforge.net/ Balie] Baseline implementation of named entity recognition.&lt;br /&gt;
*[http://gate.ac.uk/ GATE] includes the ANNIE gazetteer-based NER subsystem. &lt;br /&gt;
*[http://www-tsujii.is.s.u-tokyo.ac.jp/GENIA/tagger/ GENiA] - part-of-speech tagging, shallow parsing, and named entity recognition for biomedical text. C++, BSD license.&lt;br /&gt;
* [http://www.aueb.gr/users/ion/software/GREEK_NERC_v2.tar.gz Greek named entity recognizer (version 2)] It currently identifies temporal expressions, person names, and organization names; see [http://www.aueb.gr/users/ion/publications.html here] for publications describing the recognizer.&lt;br /&gt;
*[http://www.alias-i.com/lingpipe/ LingPipe]&lt;br /&gt;
*[http://nlp.stanford.edu/software/CRF-NER.shtml Stanford NER] Conditional Random Fields based NER. Also incorporates distributional similarity based features extracted from the English Gigaword corpus.&lt;br /&gt;
*[http://l2r.cs.uiuc.edu/~cogcomp/asoftware.php?skey=FLBJNE Illinois NER] Java-based Illinois NER tagger. Uses gazetteers extracted from Wikipedia and a word-class model built from unlabeled text, and makes extensive use of non-local features. Achieves a 90.8 F1 score on the CoNLL-2003 shared task data and is robust on other datasets. Try the [http://l2r.cs.uiuc.edu/~cogcomp/LbjNer.php Illinois-NER-Demo]&lt;br /&gt;
* [http://l2r.cs.uiuc.edu/~cogcomp/asoftware.php?skey=NE Older version of Illinois  NER] - identifies/classifies entities as Person, Location, Organization and Misc (this last category relates to languages and nationalities); fast and robust; try the [http://l2r.cs.uiuc.edu/~cogcomp/ne_demo.php demo]&lt;br /&gt;
[[Category:Software]]&lt;/div&gt;</summary>
		<author><name>Goldan55</name></author>
	</entry>
	<entry>
		<id>https://www.aclweb.org/aclwiki/index.php?title=Named_entity_recognizers&amp;diff=7866</id>
		<title>Named entity recognizers</title>
		<link rel="alternate" type="text/html" href="https://www.aclweb.org/aclwiki/index.php?title=Named_entity_recognizers&amp;diff=7866"/>
		<updated>2010-04-04T01:31:38Z</updated>

		<summary type="html">&lt;p&gt;Goldan55: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;[[Software]] - Named entity recognizers&#039;&#039;&#039;&lt;br /&gt;
&amp;lt;!-- Please keep this list in alphabetical order --&amp;gt;&lt;br /&gt;
*[http://balie.sourceforge.net/ Balie] Baseline implementation of named entity recognition.&lt;br /&gt;
*[http://gate.ac.uk/ GATE] includes the ANNIE gazetteer-based NER subsystem.&lt;br /&gt;
*[http://www-tsujii.is.s.u-tokyo.ac.jp/GENIA/tagger/ GENiA] - part-of-speech tagging, shallow parsing, and named entity recognition for biomedical text. C++, BSD license.&lt;br /&gt;
* [http://www.aueb.gr/users/ion/software/GREEK_NERC_v2.tar.gz Greek named entity recognizer (version 2)] It currently identifies temporal expressions, person names, and organization names; see [http://www.aueb.gr/users/ion/publications.html here] for publications describing the recognizer.&lt;br /&gt;
*[http://www.alias-i.com/lingpipe/ LingPipe]&lt;br /&gt;
*[http://nlp.stanford.edu/software/CRF-NER.shtml Stanford NER] Conditional Random Field (CRF) based NER. Also incorporates distributional-similarity features extracted from the English Gigaword corpus.&lt;br /&gt;
*[http://l2r.cs.uiuc.edu/~cogcomp/asoftware.php?skey=FLBJNE Illinois NER] Java-based Illinois NER tagger. Uses gazetteers extracted from Wikipedia, a word-class model built from unlabeled text, and extensive non-local features. Achieves a 90.8 F1 score on the CoNLL03 shared task data and is robust on other datasets. Try the [http://l2r.cs.uiuc.edu/~cogcomp/LbjNer.php LBJ-NER-Demo]&lt;br /&gt;
* [http://l2r.cs.uiuc.edu/~cogcomp/asoftware.php?skey=NE Older version of Illinois NER] - identifies/classifies entities as Person, Location, Organization, and Misc (this last category relates to languages and nationalities); fast and robust; try the [http://l2r.cs.uiuc.edu/~cogcomp/ne_demo.php demo]&lt;br /&gt;
[[Category:Software]]&lt;/div&gt;</summary>
		<author><name>Goldan55</name></author>
	</entry>
</feed>