AI, Cognition, and the Economy (AICE)

AICE Accelerator Pilot collaborations

AICE has funded a series of pilot collaborations to accelerate the discovery of early insights into the rapidly evolving influence that generative AI is having on how people think and work, and into what that influence might mean for jobs in the future.

Human-AI interaction and user experience

  • Northeastern University: Vedant Swain (PI)
    Microsoft: Javier Hernandez, Mary Czerwinski
    Area(s) of impact:

    *Accelerate Foundation Models Research collaboration

    To make AI agents more empathetic toward workers' goals, an agent needs to (i) understand broader wellbeing goals beyond saving time, (ii) maintain latitudinal and longitudinal awareness of workers' context outside their task, and (iii) offer workers suggestions for meeting those goals by anticipating opportunities in their work context. In this project, we propose to prototype and study Pro-Pilot, an enhancement of the existing Copilot that introduces a new human-AI interaction framework for building empathy.

  • University of Texas at Austin: Desmond Ong (PI), Jessy Li (PI)
    Microsoft: Jina Suh, Mary Czerwinski, Javier Hernandez
    Area(s) of impact:

    *Accelerate Foundation Models Research collaboration

    The Digital Empathy pilot aims to investigate emotional intelligence in systems driven by large foundation models (LFMs), and to develop and study a series of empathic AI agents designed to understand and augment human performance and wellbeing. Until now, there has been very little empirical evidence of how empathic LFM systems are, or of the psychological implications of these systems during human-AI interactions. The project will contribute a comprehensive survey of the research opportunities and priorities concerning empathy in AI systems, as well as a research platform for the systematic evaluation of empathic agents.

  • University College London, University of Waterloo: Clara Colombatto (PI)
    Microsoft: Lev Tankelevitch, Sean Rintel
    Area(s) of impact:

    *Accelerate Foundation Models Research collaboration

    The aim of this pilot is to investigate the metacognitive demands and opportunities involved in working with generative AI (GenAI). Early usability studies of GenAI systems suggest that they require high metacognitive monitoring and control: awareness, understanding, and control of one's own thought processes. For example, iterative prompting, output evaluation, and adjustment depend on users' calibrated self-confidence in their ability to complete these tasks. These demands are likely exacerbated by GenAI models' unique features, such as their flexibility and non-determinism. At the same time, GenAI offers opportunities to adaptively support users in their workflows via metacognitive scaffolding (e.g., in conversational interfaces). In this project, we propose to leverage insights from the psychology and neuroscience of metacognition and decision-making to study human-AI interactions and their potential for trustworthy collaboration. This body of work has highlighted that successful collaboration hinges on sharing not just our cognitive states (e.g., what we believe) but also metacognitive estimates (e.g., our confidence in ourselves and one another). Humans routinely signal their metacognitive states explicitly (e.g., via verbal estimates) or implicitly (e.g., via speech prosody). Recent studies have found that humans tend to misattribute metacognitive states to AI (compared to other humans), highlighting the importance of developing robust and accurate methods for signalling metacognitive states in human-GenAI interactions. Given that signatures of metacognition are helpful in human-human interactions, we will explore whether they might also facilitate collaboration and trust in human-GenAI interactions.

  • University College London: Jeremy Skipper (PI), Viktor Kewenig
    Microsoft: Lev Tankelevitch
    Area(s) of impact: Labor markets

    *Accelerate Foundation Models Research collaboration

    The proposed study aims to explore interventions that can bolster metacognitive skills, focusing on improving the effectiveness of the prompts people use when interacting with GenAI systems and thereby enhancing task performance and user experience. Through metacognitive interventions for prompting, the study intends to equip individuals with strategies that enhance their interaction with generative AI, specifically to improve problem-solving capabilities and to facilitate learning and decision-making.


AI in jobs, teamwork and collaboration

  • MIT Sloan School of Management: Danielle Li (PI)
    Microsoft: Sonia Jaffe, Eleanor Dillon
    Area(s) of impact: Labor markets; New capabilities and productivity gains

    Will generative AI reduce the barrier to entry into software development by making it easier for less skilled people to take on routine programming tasks, thereby expanding programming opportunities? Or will higher-skilled developers use AI to shortcut routine tasks, thereby reducing the demand for programmers? And how does AI affect team collaboration? For example, can people with complementary skills, such as managers and designers, do more programming work with AI tools?

  • Haas School of Business, UC Berkeley: David Holtz (PI)
    Microsoft: Siddharth Suri
    Area(s) of impact: New industries and innovation; New capabilities and productivity gains

    As LLMs rise in popularity, prompting will take on a more important role in everyday life, and studying how people learn to write better prompts is a key research question for anyone using these models. We will analyze how people learn to write prompts when using generative AI models to complete various tasks. We want to understand how the importance of prompt engineering relates to the complexity of the task and to the generative AI model used. We also want to understand whether insights and advice about prompt engineering can be transferred effectively between users to improve their prompting.

  • Aarhus University: Jens Emil Grønbæk (PI), Clemens Klokmose (PI)
    Microsoft: Sean Rintel, Payod Panda, Lev Tankelevitch
    Area(s) of impact:

    The overall goal of this collaborative project is to understand how LLMs might increase productivity by dynamically generating tailored meeting interfaces that match goals customized to team needs. We aim to explore how the combination of LLMs and a rule-based declarative programming model supports end-user adaptability of meeting software. From this we hope to establish the core principles for designing goal-adaptive meeting interfaces. Once those are established, we aim to study the cognitive and productivity impacts of goal-adaptive meeting interfaces.


Societal and economic impacts of AI

  • University of Pretoria: Vukosi Marivate (PI), Idris Abdulmumin
    Microsoft: Jacki O’Neill
    Area(s) of impact:

    *Accelerate Foundation Models Research collaboration

    The goal of this study is to extend the capabilities of foundation models [6] for ESL-speaking African communities by enhancing their ability to understand and generate content that accurately reflects the continent's socio-cultural specifics. This involves adapting models to users' specific needs and linguistic styles, making the models more accessible and equitable for underrepresented groups. Through localized tuning and feedback, the project seeks to reduce performance gaps and tailor foundation models for diverse uses in Africa, such as in law, finance, and agriculture. The overarching goal is to give users the agency to customize foundation (language) models to their unique situations.

  • Catholic University of Argentina: Alicia Caballero (PI), Ma. Lourdes Puente Olivera (PI)
    Microsoft: Scott Counts
    Area(s) of impact: Labor markets; Equity

    The Observatorio de la Deuda Social Argentina at the Catholic University of Argentina collects socioeconomic data and would like to extend its fieldwork and other data-collection infrastructure to gather AI-related data, i.e., questions about AI usage and expectations for AI's impact. The debt survey is the largest country-wide survey in Argentina not administered by the government, and it is akin in scope to the American Community Survey in the United States. The gift will support data processing and analysis of the survey data.

  • Penn State University: Nageeb Ali (PI)
    Microsoft: Nicole Immorlica, Brendan Lucier
    Area(s) of impact:

    Generative AI relies on large amounts of training data. Data has always been a key input for AI, and it has traditionally been sourced by scraping the web for data byproducts and/or hiring data workers. Generative AI introduces a new twist, however: the producers of the data must now compete with the downstream uses of their data. For example, a graphic designer might find that an AI trained on her own work is now replacing her. This creates a “hold-up problem”: if actions are easily replicable and become one’s future competition, there is little incentive to put in the effort to create high-quality content. Hence, for generative AI to be high quality, there must be a credible commitment to offering future returns on that investment; otherwise, the result will be low quality. The commitment problem is exacerbated by competition. Suppose that the original data or task could have been produced by Ann, Bob, or Carol. If they could all commit together, as in a union, then there is hope. But if the parties cannot commit, it becomes even harder to create incentives for effort, because the firm needs only one of them to provide their data and they will compete with one another. We are interested in studying this data supply chain, the implicit norms around data use and when they ought to be formalized, mechanisms that can resolve hold-up problems, and the social consequences of various designs.

  • MIT Sloan School of Management: Mert Demirer (PI), John Horton (PI)
    Microsoft: Brendan Lucier, Nicole Immorlica
    Area(s) of impact: Labor markets; New industries and innovation

    *Accelerate Foundation Models Research collaboration

    As the adoption of generative AI tools becomes more widespread, it is crucial to anticipate the macroeconomic effects on labor and production. This requires both a whole-market view and a detailed accounting of the differences between jobs. We will approach this challenge by treating jobs as interconnected sequences of tasks that vary in how easily they can be automated and overseen. This makes some jobs “more automatable” than others, even after accounting for the level of skill required to complete the job manually, and suggests jobs where human-AI collaboration might be especially useful. We will use these models to study the general equilibrium impact of advances in AI automation across different job domains.


