Liang Zhou


2015

Using Topic Modeling and Similarity Thresholds to Detect Events
Nathan Keane | Connie Yee | Liang Zhou
Proceedings of the 3rd Workshop on EVENTS: Definition, Detection, Coreference, and Representation

Modeling and Characterizing Social Media Topics Using the Gamma Distribution
Connie Yee | Nathan Keane | Liang Zhou
Proceedings of the 3rd Workshop on EVENTS: Definition, Detection, Coreference, and Representation

2014

Improving Twitter Sentiment Analysis with Topic-Based Mixture Modeling and Semi-Supervised Training
Bing Xiang | Liang Zhou
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

2007

A Semi-Automatic Evaluation Scheme: Automated Nuggetization for Manual Annotation
Liang Zhou | Namhee Kwon | Eduard Hovy
Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers

Text Comparison Using Machine-Generated Nuggets
Liang Zhou
Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT)

2006

Re-evaluating Machine Translation Results with Paraphrase Support
Liang Zhou | Chin-Yew Lin | Eduard Hovy
Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing

Proceedings of the Analyzing Conversations in Text and Speech
Eduard Hovy | Klaus Zechner | Liang Zhou
Proceedings of the Analyzing Conversations in Text and Speech

Automated Summarization Evaluation with Basic Elements
Eduard Hovy | Chin-Yew Lin | Liang Zhou | Junichi Fukumoto
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

As part of evaluating a summary automatically, it is usual to determine how much of the contents of one or more human-produced “ideal” summaries it contains. Past automated methods such as ROUGE compare summaries using fixed word n-grams, which are not ideal for a variety of reasons. In this paper we describe a framework in which summary evaluation measures can be instantiated and compared, and we implement a specific evaluation method using very small units of content, called Basic Elements, that addresses some of the shortcomings of n-grams. This method is tested on DUC 2003, 2004, and 2005 systems and produces very good correlations with human judgments.
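
For context, the fixed n-gram comparison that the abstract contrasts with Basic Elements can be sketched as a simplified ROUGE-style n-gram recall; the function names below are illustrative and not taken from the paper or the official ROUGE toolkit:

    from collections import Counter

    def ngrams(tokens, n):
        """Multiset of word n-grams for a token sequence."""
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    def ngram_recall(candidate, reference, n=2):
        """Simplified ROUGE-n style recall: the fraction of the reference
        summary's n-grams that also occur in the candidate (clipped counts)."""
        cand = ngrams(candidate.lower().split(), n)
        ref = ngrams(reference.lower().split(), n)
        total = sum(ref.values())
        if total == 0:
            return 0.0
        overlap = sum(min(count, cand[gram]) for gram, count in ref.items())
        return overlap / total

    # Compare a system summary against one human "ideal" summary.
    print(ngram_recall("the cat sat on the mat", "the cat lay on the mat", n=2))  # 0.6

Basic Elements, as proposed in the paper, replace these fixed word n-grams with much smaller, syntactically derived content units, which is the shortcoming of n-gram matching that the abstract points to.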

Summarizing Answers for Complicated Questions
Liang Zhou | Chin-Yew Lin | Eduard Hovy
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

Recent work in several computational linguistics (CL) applications (especially question answering) has shown the value of semantics (in fact, many people argue that the current performance ceiling experienced by so many CL applications derives from their inability to perform any kind of semantic processing). But the absence of a large semantic information repository that provides representations for sentences prevents the training of statistical CL engines and thus hampers the development of such semantics-enabled applications. This talk refers to recent work in several projects that seek to annotate large volumes of text with shallower or deeper representations of some semantic phenomena. It describes one of the essential problems: creating, managing, and annotating (at large scale) the meanings of words, and outlines the Omega ontology, being built at ISI, which acts as a term repository. The talk illustrates how one can proceed from words via senses to concepts, and how the annotation process can help verify good concept decisions and expose bad ones. Much of this work is performed in the context of the OntoNotes project, joint with BBN, the Universities of Colorado and Pennsylvania, and ISI, which is working to build a corpus of about 1M words (English, Chinese, and Arabic), annotated for shallow semantics, over the next few years.
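
As a purely illustrative aside, the word-to-sense-to-concept progression mentioned above can be pictured as a pair of mappings; the names and structures below are hypothetical and are not the Omega or OntoNotes data model:

    # Hypothetical sketch of proceeding from words via senses to concepts;
    # this is not the Omega ontology's actual schema.
    word_to_senses = {
        "bank": ["bank.n.01", "bank.n.02"],            # financial institution, river edge
        "deposit": ["deposit.v.01", "deposit.n.01"],
    }
    sense_to_concept = {
        "bank.n.01": "FinancialInstitution",
        "bank.n.02": "NaturalElevation",
        "deposit.v.01": "PuttingEvent",
        "deposit.n.01": "MonetaryAsset",
    }

    def concepts_for(word):
        """Collect the ontology concepts reachable from a word via its senses."""
        return {sense_to_concept[s] for s in word_to_senses.get(word, [])
                if s in sense_to_concept}

    print(concepts_for("bank"))  # {'FinancialInstitution', 'NaturalElevation'}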

ParaEval: Using Paraphrases to Evaluate Summaries Automatically
Liang Zhou | Chin-Yew Lin | Dragos Stefan Munteanu | Eduard Hovy
Proceedings of the Human Language Technology Conference of the NAACL, Main Conference

2005

Digesting Virtual “Geek” Culture: The Summarization of Technical Internet Relay Chats
Liang Zhou | Eduard Hovy
Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05)

Classummary: Introducing Discussion Summarization to Online Classrooms
Liang Zhou | Erin Shaw | Chin-Yew Lin | Eduard Hovy
Proceedings of HLT/EMNLP 2005 Interactive Demonstrations

2004

Template-Filtered Headline Summarization
Liang Zhou | Eduard Hovy
Text Summarization Branches Out

Multi-Document Biography Summarization
Liang Zhou | Miruna Ticrea | Eduard Hovy
Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing

2003

A Web-Trained Extraction Summarization System
Liang Zhou | Eduard Hovy
Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics