Automated Summarization Evaluation with Basic Elements
- Eduard Hovy,
- Chin-Yew Lin,
- Liang Zhou,
- Junichi Fukumoto
As part of evaluating a summary automatically, it is usual to determine how much of the content of one or more human-produced ‘ideal’ summaries it contains. Past automated methods such as ROUGE perform this comparison using fixed word n-grams, which are not ideal for a variety of reasons. In this paper we describe a framework in which summary evaluation measures can be instantiated and compared, and we implement a specific evaluation method using very small units of content, called Basic Elements, that addresses some of the shortcomings of n-grams. This method is tested on DUC 2003, 2004, and 2005 systems and produces very good correlations with human judgments.
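To make the idea of scoring by overlap of small content units concrete, the following is a minimal, illustrative Python sketch. It assumes Basic Elements have already been extracted as (head, modifier, relation) triples (in the BE framework these come from a parser) and computes a simple exact-match recall of reference BEs; the function and variable names are hypothetical, and the actual method considers richer matching and weighting schemes than shown here.

```python
from collections import Counter

# Assumed BE representation: (head, modifier, relation) triples, where
# modifier and relation may be None for single-word BEs.
BE = tuple

def be_recall(peer_bes: list, reference_bes: list) -> float:
    """Score a peer summary by the fraction of reference BEs it covers
    (simple exact-match recall; illustrative only)."""
    if not reference_bes:
        return 0.0
    ref_counts = Counter(reference_bes)
    peer_counts = Counter(peer_bes)
    # Clipped matching: each reference BE is matched at most as many
    # times as it appears in the reference summary.
    matched = sum(min(count, peer_counts[be]) for be, count in ref_counts.items())
    return matched / sum(ref_counts.values())

# Toy usage with hand-written BE triples (purely illustrative data).
reference = [("killed", "libyans", "subj"), ("killed", "two", "obj"), ("bomb", None, None)]
peer = [("killed", "libyans", "subj"), ("explosion", None, None)]
print(f"BE recall: {be_recall(peer, reference):.2f}")  # -> 0.33
```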