SemEval-2017 Task 2: Monolingual and Cross-lingual Word Similarity
Semantic Textual Similarity (STS) measures the degree of equivalence in the underlying semantics of paired snippets of text. While making such an assessment is trivial for humans, constructing algorithms and computational models that mimic human-level performance represents a difficult and deep natural language understanding problem. The 2017 STS shared task involves multilingual and cross-lingual evaluation of Arabic, Spanish, and English data, as well as a surprise language track to explore methods for cross-lingual transfer.
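To make the task concrete, a naive lexical-overlap baseline for scoring sentence pairs is sketched below. This is only an illustrative starting point (cosine similarity over bag-of-words count vectors), not a method endorsed by the shared task; competitive systems model semantics far beyond surface word overlap.

```python
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words count vectors of two sentences.

    A crude STS baseline: scores 1.0 for identical token multisets and 0.0
    for sentences sharing no tokens, ignoring word order and meaning.
    """
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

# Paraphrase pairs score higher than unrelated pairs only when they
# happen to share vocabulary -- the core limitation of lexical baselines.
print(cosine_similarity("a cat sat on the mat", "a cat sat on the mat"))  # 1.0
print(cosine_similarity("a cat sat on the mat", "stocks fell sharply"))   # 0.0
```

Such a baseline fails precisely on the cross-lingual tracks described above, where paired snippets share no surface vocabulary at all.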
Call for Shared Task Participation
SemEval 2016 Task 1: Semantic Textual Similarity (STS)
Semantic Textual Similarity (STS) measures the degree of equivalence in the underlying semantics of paired snippets of text. While making such an assessment is trivial for humans, constructing algorithms and computational models that mimic human-level performance represents a difficult and deep natural language understanding (NLU) problem.
BioASQ challenge on large-scale biomedical semantic indexing and question answering
(part of the CLEF 2014 QA track, to take place in Sheffield, UK, 15-18 September 2014)
Web site: http://bioasq.org/
twitter: https://twitter.com/bioasq
CLEF-QA site: http://nlp.uned.es/clef-qa/
The BioASQ challenge consists of two different tasks (Task 2a and Task 2b).
If you are interested in any of the following areas:
* Large-scale and hierarchical classification
* Machine learning
* Semantic Indexing, semantic similarity