Dan Simonson

Also published as: Daniel Simonson


2021

Supervised Identification of Participant Slots in Contracts
Dan Simonson
Proceedings of the Natural Legal Language Processing Workshop 2021

This paper presents a technique for the identification of participant slots in English-language contracts. Taking inspiration from unsupervised slot extraction techniques, the system presented here uses a supervised approach to identify terms used to refer to a genre-specific slot in novel contracts. We evaluate the system in multiple feature configurations to demonstrate that the best-performing system in both genres of contracts omits the exact mention form from consideration (even though such mention forms are often the name of the slot under consideration) and is instead based solely on the dependency label and parent; in other words, a more reliable quantification of a party’s role in a contract is found in what they do rather than what they are named.
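
As an illustrative sketch of the kind of representation this abstract describes, the snippet below classifies contract mentions into participant slots using only the dependency label and parent lemma of each mention, deliberately omitting the mention form itself. The spaCy pipeline, scikit-learn classifier, slot labels, and toy training pairs are assumptions made here for illustration; the paper does not commit to these specifics.

import spacy
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

nlp = spacy.load("en_core_web_sm")

def mention_features(doc, start, end):
    # Dependency label and parent lemma of the mention's syntactic head;
    # the surface form of the mention itself is intentionally excluded.
    head = doc[start:end].root
    return {"dep": head.dep_, "parent_lemma": head.head.lemma_.lower()}

# Hypothetical training triples: (sentence, mention token span, slot label).
train = [
    ("The Seller shall deliver the goods to the Buyer.", (1, 2), "SELLER"),
    ("The Buyer shall pay the purchase price at closing.", (1, 2), "BUYER"),
]

X = [mention_features(nlp(text), s, e) for text, (s, e), _ in train]
y = [label for _, _, label in train]

model = make_pipeline(DictVectorizer(), LogisticRegression())
model.fit(X, y)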

2019

The Extent of Repetition in Contract Language
Dan Simonson | Daniel Broderick | Jonathan Herr
Proceedings of the Natural Legal Language Processing Workshop 2019

Contract language is repetitive (Anderson and Manns, 2017), but so is all language (Zipf, 1949). In this paper, we measure the extent to which contract language in English is repetitive compared with the language of other English-language corpora. Contracts have much smaller vocabulary sizes than similarly sized non-contract corpora across multiple contract types, contain one fifth as many hapax legomena, pattern differently on a log-log plot, use fewer pronouns, and contain sentences that are about 20% more similar to one another than those in other corpora. These findings suggest that the study of contracts in natural language processing controls for some linguistic phenomena and allows for more in-depth study of others.
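
Two of the measures mentioned above, vocabulary size and hapax legomenon count, can be computed directly from token counts. The short sketch below assumes a whitespace-tokenized, lowercased corpus; the paper's actual tokenization and corpus size-matching are not reproduced here.

from collections import Counter

def repetition_stats(tokens):
    # Vocabulary size and number of hapax legomena (types occurring exactly once).
    counts = Counter(t.lower() for t in tokens)
    return len(counts), sum(1 for c in counts.values() if c == 1)

contract_tokens = "the seller shall indemnify the buyer and the seller shall".split()
print(repetition_stats(contract_tokens))  # (6, 3)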

2018

Narrative Schema Stability in News Text
Dan Simonson | Anthony Davis
Proceedings of the 27th International Conference on Computational Linguistics

We investigate the stability of narrative schemas (Chambers and Jurafsky, 2009) automatically induced from a news corpus, which are intended to represent recurring narratives in that corpus. If such techniques produce meaningful results, we should expect that small changes to the corpus will result in only small changes to the induced schemas. We describe experiments involving successive ablation of a corpus and cross-validation at each stage of ablation, on schemas generated by three different techniques over a general news corpus and topically specific subcorpora. We also develop a method for evaluating the similarity between sets of narrative schemas, and thus the stability of the schema induction algorithms. This stability analysis affirms the heterogeneous/homogeneous document category hypothesis first presented in Simonson and Davis (2016), whose technique was problematically limited. Additionally, increased ablation leads to increased stability: the smaller the remaining corpus, the more stable schema generation appears to be. We surmise that as a corpus grows larger, novel and more varied narratives continue to appear and stability declines, though at some point this decline levels off as new additions to the corpus consist essentially of “more of the same.”
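
One simple way to picture a similarity measure over sets of schemas, assumed here purely for illustration and not taken from the paper, is to treat each schema as a set of (verb, slot) pairs and average the best-match Jaccard overlap between two induction runs:

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def schema_set_similarity(set_a, set_b):
    # Average best-match Jaccard overlap of schemas in set_a against set_b.
    if not set_a or not set_b:
        return 0.0
    return sum(max(jaccard(s, t) for t in set_b) for s in set_a) / len(set_a)

run_1 = [{("arrest", "subj"), ("charge", "subj"), ("convict", "subj")},
         {("elect", "obj"), ("inaugurate", "obj")}]
run_2 = [{("arrest", "subj"), ("charge", "subj"), ("sentence", "subj")},
         {("elect", "obj"), ("appoint", "obj")}]
print(schema_set_similarity(run_1, run_2))  # roughly 0.42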

2016

Different Flavors of GUM: Evaluating Genre and Sentence Type Effects on Multilayer Corpus Annotation Quality
Amir Zeldes | Dan Simonson
Proceedings of the 10th Linguistic Annotation Workshop held in conjunction with ACL 2016 (LAW-X 2016)

NASTEA: Investigating Narrative Schemas through Annotated Entities
Dan Simonson | Anthony Davis
Proceedings of the 2nd Workshop on Computing News Storylines (CNS 2016)

2015

Interactions between Narrative Schemas and Document Categories
Dan Simonson | Anthony Davis
Proceedings of the First Workshop on Computing News Storylines

2013

Toward Fine-grained Annotation of Modality in Text
Aynat Rubinstein | Hillary Harner | Elizabeth Krawczyk | Daniel Simonson | Graham Katz | Paul Portner
Proceedings of the IWCS 2013 Workshop on Annotation of Modal Meanings in Natural Language (WAMM)