Anatole Gershman


2024

PropBank goes Public: Incorporation into Wikidata
Elizabeth Spaulding | Kathryn Conger | Anatole Gershman | Mahir Morshed | Susan Windisch Brown | James Pustejovsky | Rosario Uceda-Sosa | Sijia Ge | Martha Palmer
Proceedings of The 18th Linguistic Annotation Workshop (LAW-XVIII)

This paper presents the first integration of PropBank role information into Wikidata, providing a novel resource for information extraction that combines Wikidata’s ontological metadata with PropBank’s rich argument structure encoding for event classes. We describe a technique for augmenting existing eventive Wikidata items with PropBank information, as well as the identification of gaps in Wikidata’s coverage based on manual examination of over 11,300 PropBank rolesets. We propose five new Wikidata properties to integrate PropBank structure into Wikidata so that the annotated mappings can be added en masse. We then outline the methodology and challenges of this integration, including annotation with the combined resources.
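
As a rough illustration of the kind of alignment the paper describes, the sketch below represents a single PropBank roleset mapped to an eventive Wikidata item, with its numbered arguments described in prose. The field names, the example roleset, the placeholder QID, and the role descriptions are assumptions for illustration only, not the paper's released mapping or its five proposed properties.

from dataclasses import dataclass, field

@dataclass
class RolesetMapping:
    """One PropBank roleset aligned to an eventive Wikidata item (illustrative)."""
    roleset_id: str                                  # PropBank roleset, e.g. "diagnose.01"
    wikidata_qid: str                                # placeholder QID for the eventive item
    arg_roles: dict = field(default_factory=dict)    # PropBank arg label -> role description

# Hypothetical entry; the QID and role descriptions are placeholders, not released data.
diagnose = RolesetMapping(
    roleset_id="diagnose.01",
    wikidata_qid="Q0000000",
    arg_roles={"ARG0": "medical professional", "ARG1": "patient or illness"},
)

print(f"{diagnose.roleset_id} -> {diagnose.wikidata_qid}: {diagnose.arg_roles}")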

2023

CHARD: Clinical Health-Aware Reasoning Across Dimensions for Text Generation Models
Steven Y. Feng | Vivek Khetan | Bogdan Sacaleanu | Anatole Gershman | Eduard Hovy
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics

We motivate and introduce CHARD: Clinical Health-Aware Reasoning across Dimensions, to investigate the capability of text generation models to act as implicit clinical knowledge bases and generate free-flow textual explanations about various health-related conditions across several dimensions. We collect and present an associated dataset, CHARDat, consisting of explanations about 52 health conditions across three clinical dimensions. We conduct extensive experiments using BART and T5 along with data augmentation, and perform automatic, human, and qualitative analyses. We show that while our models can perform decently, CHARD is very challenging with strong potential for further exploration.

The DARPA Wikidata Overlay: Wikidata as an ontology for natural language processing
Elizabeth Spaulding | Kathryn Conger | Anatole Gershman | Rosario Uceda-Sosa | Susan Windisch Brown | James Pustejovsky | Peter Anick | Martha Palmer
Proceedings of the 19th Joint ACL-ISO Workshop on Interoperable Semantics (ISA-19)

With 102,530,067 items currently in its crowd-sourced knowledge base, Wikidata provides NLP practitioners a unique and powerful resource for inference and reasoning over real-world entities. However, because Wikidata is very entity-focused, events and actions are often labeled with eventive nouns (e.g., the process of diagnosing a person’s illness is labeled “diagnosis”), and the typical participants in an event are not described or linked to that event concept (e.g., the medical professional or patient). Motivated by the need for an adaptable, comprehensive, domain-flexible ontology for information extraction, including identifying the roles entities play in an event, we present a curated subset of Wikidata in which events have been enriched with PropBank roles. To enable richer narrative understanding across events drawn from Wikidata concepts, we have also provided a comprehensive mapping from temporal Qnodes and Pnodes to the Allen Interval Temporal Logic relations.
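
To make the temporal mapping concrete, here is a minimal Python sketch of a lookup from Wikidata temporal property IDs (Pnodes) to Allen Interval relations. P580 (start time), P582 (end time), P155 (follows), and P156 (followed by) are real Wikidata properties, but the specific relation assigned to each below is an assumption for illustration, not the published mapping.

# The Pnode-to-relation pairings below are illustrative assumptions, not the released mapping.
ALLEN_RELATIONS = {
    "before", "after", "meets", "met_by", "overlaps", "overlapped_by",
    "starts", "started_by", "during", "contains", "finishes", "finished_by",
    "equals",
}

PNODE_TO_ALLEN = {
    "P580": "starts",    # start time
    "P582": "finishes",  # end time
    "P155": "after",     # follows
    "P156": "before",    # followed by
}

def allen_relation(pnode: str) -> str:
    """Return the Allen interval relation assumed for a temporal Wikidata property."""
    relation = PNODE_TO_ALLEN.get(pnode, "during")   # default for unmapped properties
    assert relation in ALLEN_RELATIONS
    return relation

print(allen_relation("P580"))   # -> starts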

Template Filling for Controllable Commonsense Reasoning
Dheeraj Rajagopal | Vivek Khetan | Bogdan Sacaleanu | Anatole Gershman | Andrew E. Fano | Eduard Hovy
Findings of the Association for Computational Linguistics: IJCNLP-AACL 2023 (Findings)

2015

Extending a Single-Document Summarizer to Multi-Document: a Hierarchical Approach
Luís Marujo | Ricardo Ribeiro | David Martins de Matos | João Neto | Anatole Gershman | Jaime Carbonell
Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics

Matrix Factorization with Knowledge Graph Propagation for Unsupervised Spoken Language Understanding
Yun-Nung Chen | William Yang Wang | Anatole Gershman | Alexander Rudnicky
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Automatic Keyword Extraction on Twitter
Luís Marujo | Wang Ling | Isabel Trancoso | Chris Dyer | Alan W. Black | Anatole Gershman | David Martins de Matos | João Neto | Jaime Carbonell
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

2014

Resources for the Detection of Conventionalized Metaphors in Four Languages
Lori Levin | Teruko Mitamura | Brian MacWhinney | Davida Fromm | Jaime Carbonell | Weston Feely | Robert Frederking | Anatole Gershman | Carlos Ramirez
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

This paper describes a suite of tools for extracting conventionalized metaphors in English, Spanish, Farsi, and Russian. The method depends on three significant resources for each language: a corpus of conventionalized metaphors, a table of conventionalized conceptual metaphors (CCM table), and a set of extraction rules. Conventionalized metaphors are expressions such as “escape from poverty” and “burden of taxation”. For each metaphor, the CCM table contains the metaphorical source domain word (such as “escape”), the target domain word (such as “poverty”), and the grammatical construction in which they can be found. The extraction rules operate on the output of a dependency parser and identify the grammatical configurations (such as a verb with a prepositional phrase complement) that are likely to contain conventional metaphors. We present results on detection rates for conventional metaphors and an analysis of the similarities and differences of source domains for conventional metaphors across the four languages.
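
The sketch below illustrates the general idea of a CCM table row and an extraction rule applied to dependency-parse triples. The field names, the toy construction labels, and the matching rule are assumptions for illustration, not the format of the released resources.

# Field names, construction labels, and the matching rule are illustrative assumptions.
CCM_TABLE = [
    {"source": "escape", "target": "poverty", "construction": "verb+pp_from"},
    {"source": "burden", "target": "taxation", "construction": "noun+pp_of"},
]

def match_conventional_metaphors(dep_triples):
    """Scan (head, relation, dependent) dependency triples for CCM table hits."""
    hits = []
    for head, rel, dep in dep_triples:
        for row in CCM_TABLE:
            # Toy configuration check: the source word governs the target word
            # through a prepositional-phrase-like relation.
            if head == row["source"] and dep == row["target"] and rel.startswith("prep"):
                hits.append((row["source"], row["target"], row["construction"]))
    return hits

# Dependency output for "escape from poverty", schematically.
print(match_conventional_metaphors([("escape", "prep_from", "poverty")]))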

Metaphor Detection with Cross-Lingual Model Transfer
Yulia Tsvetkov | Leonid Boytsov | Anatole Gershman | Eric Nyberg | Chris Dyer
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2013

Cross-Lingual Metaphor Detection Using Common Semantic Features
Yulia Tsvetkov | Elena Mukomel | Anatole Gershman
Proceedings of the First Workshop on Metaphor in NLP

2012

Recognition of Named-Event Passages in News Articles
Luis Marujo | Wang Ling | Anatole Gershman | Jaime Carbonell | João P. Neto | David Matos
Proceedings of COLING 2012: Demonstration Papers

Supervised Topical Key Phrase Extraction of News Stories using Crowdsourcing, Light Filtering and Co-reference Normalization
Luís Marujo | Anatole Gershman | Jaime Carbonell | Robert Frederking | João P. Neto
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

Fast and effective automated indexing is critical for search and personalized services. Key phrases that consist of one or more words and represent the main concepts of the document are often used for the purpose of indexing. In this paper, we investigate the use of additional semantic features and pre-processing steps to improve automatic key phrase extraction. These features include the use of signal words and Freebase categories. Some of these features lead to significant improvements in the accuracy of the results. We also experimented with two forms of document pre-processing that we call light filtering and co-reference normalization. Light filtering removes sentences that are judged peripheral to the document's main content. Co-reference normalization unifies several written forms of the same named entity into a unique form. We also needed a “Gold Standard”: a set of labeled documents for training and evaluation. While the subjective nature of key phrase selection precludes a true “Gold Standard”, we used Amazon's Mechanical Turk service to obtain a useful approximation. Our data indicate that the biggest improvements in performance were due to shallow semantic features, news categories, and rhetorical signals (nDCG 78.47% vs. 68.93%). The inclusion of deeper semantic features such as Freebase sub-categories was not beneficial by itself, but in combination with pre-processing did cause slight improvements in the nDCG scores.
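
Since the reported comparison is in nDCG, the following sketch shows a standard nDCG computation over a ranked list of relevance judgments. The paper's exact relevance scale and rank cutoff are not specified here, so the example scores are hypothetical.

import math

def dcg(relevances):
    """Discounted cumulative gain for a ranked list of relevance scores."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances):
    """DCG normalized by the ideal (descending-sorted) ranking."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Hypothetical relevance judgments for a ranked list of extracted key phrases.
print(round(ndcg([3, 2, 3, 0, 1, 2]), 4))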

2010

CONE: Metrics for Automatic Evaluation of Named Entity Co-Reference Resolution
Bo Lin | Rushin Shah | Robert Frederking | Anatole Gershman
Proceedings of the 2010 Named Entities Workshop