Artificial Companions as dialogue agents

COMPANIONS is an EU project that aims to change the way we think about the relationships of people to computers and the Internet by developing a virtual conversational ‘Companion’. This is intended as an agent or ‘presence’ that stays with a user for long periods of time, developing a relationship and ‘knowing’ its owner's preferences and wishes. The Companion communicates with the user primarily through speech. This paper describes the functionality and system modules of the Senior Companion, one of two initial prototypes built in the first two years of the project. The Senior Companion (SC) provides a multimodal interface for eliciting and retrieving personal information from the elderly user through an ongoing conversation about their photographs. The Companion will, through conversation, elicit their life memories, often prompted by discussion of their photographs: who is in them, where they were taken, and so on. The aim is that the Companion should come to know a great deal about its user — their tastes, likes, dislikes, emotional reactions, etc. — through long periods of conversation. It is a further assumption that most life information will be stored on the internet (as in the Memories for Life project: http://www.memoriesforlife.org/), and the SC is linked directly to photo inventories in Facebook, to gain initial information about people and relationships, as well as to Wikipedia, to enable it to respond about places mentioned in conversations about images. The overall aim of the SC, not yet achieved, is to produce a coherent life narrative for its user from these materials, although its short-term goals are to assist, amuse, entertain and gain the trust of the user. The Senior Companion uses Information Extraction to get content from the speech input, rather than conventional parsing, and retains utterance content, extracted internet information and ontologies all in RDF formalism, over which it does primitive reasoning about people.
It has a dialogue manager virtual machine intended to capture mixed dialogue initiative between Companion and user, a platform which can serve as a basis for later replacement by learned components.
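The storage-and-reasoning scheme described above — utterance content and facts gathered from Facebook or Wikipedia kept as RDF-style triples, with "primitive reasoning about people" done over them — might be sketched roughly as follows. All names, predicates and the inference rule here are hypothetical illustrations, not the project's actual schema:

```python
# Minimal sketch of a triple store with one rule-based inference step.
# Facts extracted from speech and from web sources are held as
# (subject, predicate, object) triples; "primitive reasoning" is then
# simple pattern matching plus hand-written rules over those triples.

triples = {
    ("photo1", "depicts", "alice"),
    ("photo1", "takenIn", "Paris"),
    ("alice", "childOf", "user"),
    ("bob", "spouseOf", "alice"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching a pattern; None acts as a wildcard."""
    return {(ts, tp, to) for (ts, tp, to) in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)}

def infer_in_laws():
    """One primitive rule: the spouse of the user's child is an in-law."""
    children = {s for (s, _, _) in query(p="childOf", o="user")}
    return {s for (s, _, o) in query(p="spouseOf") if o in children}

print(query(s="photo1", p="depicts"))  # who appears in photo1
print(infer_in_laws())                 # {'bob'}
```

A real system would of course use a proper RDF store and ontology vocabulary rather than bare tuples; the point is only that pattern queries plus a handful of such rules already support the kind of people-centred inference the abstract describes.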

Speaker Details

Yorick Wilks is Professor of Artificial Intelligence at the University of Sheffield, and is also a Senior Research Fellow at the Oxford Internet Institute at Balliol College, Oxford. He studied math and philosophy at Cambridge, was a researcher at the Stanford AI Laboratory, and then Professor of Computer Science and Linguistics at the University of Essex, before moving back to the US for ten years to run a successful and self-funded AI laboratory in New Mexico. He has published numerous articles and nine books in artificial intelligence, among them “Electric Words: dictionaries, computers and meanings” (1996, with Brian Slator and Louise Guthrie) from MIT Press, “Machine Translation: its scope and limits” (2008) from Springer, and, most recently, “Artificial Companions: social, philosophical and technical perspectives”. He is a Fellow of the American and European Associations for Artificial Intelligence, a member of the UK Computing Research Council, and on the boards of some fifteen AI-related journals. He designed CONVERSE, the dialogue system that won the Loebner prize in New York in 1998, and was founding Coordinator of the EU 6th Framework integrated project COMPANIONS (4 years, 15 sites, €13m) on conversational assistants as personalised and permanent web interfaces. In 2008 he was awarded the Zampolli Prize at LREC-08 in Marrakech and the ACL Lifetime Achievement Award at ACL-08 in Columbus, OH. In 2009 he was awarded the Lovelace Medal by the British Computer Society. His work on computer language understanding has been embodied in a range of projects and products, including large Japanese machine translation systems.

Speakers:
Yorick Wilks
Affiliation:
University of Sheffield