Politeness Strategies in the Design of Voice Agents for Mental Health

CHI 2019 Workshop

There is growing development of conversational agents, or chatbots, to support (self-)management in mental health. Previous work has shown that perceptions of conversational agents as caring or polite can both contribute to a sense of empathy and aid disclosure of sensitive information, but can also risk inviting misperceptions of the agents' emotional capabilities. Recent research suggests that we need to better understand how the design of dialogue systems shapes people's perceptions of a conversational agent and, through this, their readiness to engage or to disclose openly about their mental health. In this paper, we propose Brown and Levinson's politeness strategies as a theoretical underpinning for the design of conversational dialogue structure, using them to create dialogue templates for a mental health 'mood log', an approach that has been shown to be a beneficial way for technology to support mental health self-management.
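To make the templating idea concrete, the sketch below shows how a single mood-log question might be phrased under each of Brown and Levinson's four superstrategies. This is a minimal illustration, not the paper's implementation: the strategy names follow Brown and Levinson's taxonomy, but the specific prompt wordings and the `render_prompt` helper are hypothetical.

```python
# Hypothetical dialogue templates for a mood-log question, keyed by
# Brown and Levinson politeness superstrategy. Prompt wordings are
# illustrative assumptions, not taken from the paper.
MOOD_PROMPTS = {
    # Bald on-record: direct, no redressive action
    "bald_on_record": "Rate your mood today from 1 to 10.",
    # Positive politeness: appeals to solidarity and approval
    "positive": "Thanks for checking in! How would you rate your mood today, 1 to 10?",
    # Negative politeness: deferential, minimises the imposition
    "negative": "If you don't mind, could you rate your mood today from 1 to 10?",
    # Off-record: indirect hint, leaving the choice with the user
    "off_record": "Some people find it helpful to note how they are feeling.",
}

def render_prompt(strategy: str) -> str:
    """Return the mood-log prompt for a given politeness strategy."""
    if strategy not in MOOD_PROMPTS:
        raise ValueError(f"Unknown strategy: {strategy}")
    return MOOD_PROMPTS[strategy]

if __name__ == "__main__":
    for name in MOOD_PROMPTS:
        print(f"{name}: {render_prompt(name)}")
```

A template table like this lets the same log content be delivered under different face-management styles, so the effect of strategy choice on engagement and disclosure can be compared directly.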