Conversations with Data

Automatic translation of natural language into structured commands for interacting with data and services has been a “holy grail” of human-computer interaction, information retrieval, and natural language understanding for decades. Early attempts at building such natural language interfaces to data, however, did not achieve the expected success, due in part to limitations in language understanding capability, extensibility, and explainability. The last five years have seen a major resurgence of natural language understanding (NLU) systems in the form of virtual assistants, dialogue systems, semantic parsers, question answering systems, and program synthesizers.

The horizon of these systems has also expanded significantly, from databases to knowledge bases, robots, the Internet of Things (via virtual assistants such as Siri and Alexa), Web service APIs, general programmatic contexts, and more. This expansion has been driven by two revolutions: (1) in the big data era, as digitization continues to grow, there is rapidly increasing demand for interfaces that allow a person to express what they want in natural language and that connect them to the ever-expanding data sources, services, and devices of the computing world; and (2) the deep learning revolution has moved the field from feature engineering to neural architectures and data engineering, bringing significantly improved language understanding, adaptability, and robustness. Despite this progress, many such systems are not yet ready for real use: their accuracy is not high enough to be reliable on complex tasks.

This project tackles this problem with a two-pronged approach, sketched below:

(a) increase accuracy by leveraging rich contextual data, and

(b) enable users to interact with the system, refining its output and repairing its mistakes in order to reach their goals.
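
As a rough illustration of how the two prongs could fit together, the following minimal Python sketch shows a parser that conditions on conversational context (prong a) and an interaction loop in which the user can inspect and repair the generated query (prong b). The parser, the `sales` table, and the column names are invented placeholders for illustration only; they are not the project's actual system or model.

```python
# Hypothetical sketch: context-aware parsing plus interactive repair.
# The "parser" is a toy placeholder standing in for a learned model.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Context:
    """Rich contextual data the parser can condition on (prong a)."""
    schema: List[str]                                   # columns visible to the user
    history: List[str] = field(default_factory=list)    # prior utterances in the dialogue


def parse_to_sql(utterance: str, context: Context) -> str:
    """Toy stand-in for a semantic parser mapping an utterance to SQL.

    A real system would score candidate programs with a learned model;
    this placeholder only shows that the parse depends on both the
    utterance and the context (schema and dialogue history)."""
    column = next((c for c in context.schema if c in utterance.lower()), "*")
    return f"SELECT {column} FROM sales;"               # 'sales' is a hypothetical table


def interactive_session(context: Context) -> None:
    """Prong (b): show each parse and let the user accept or repair it."""
    while True:
        utterance = input("user> ").strip()
        if not utterance:
            break
        context.history.append(utterance)
        query = parse_to_sql(utterance, context)
        print(f"parsed: {query}")
        repair = input("edit query (or press Enter to accept)> ").strip()
        if repair:
            query = repair                               # user repairs the system's mistake
        print(f"would execute: {query}")                 # execution itself is out of scope here


if __name__ == "__main__":
    interactive_session(Context(schema=["revenue", "region", "month"]))
```

The design point of the sketch is simply that context and interaction are complementary: context narrows the space of plausible parses before the user sees anything, and the repair step catches the residual errors that no parser avoids on complex tasks.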