The Impact of Priming on User Trust and Interaction with Artificial Intelligence: A Study by MIT and Arizona State University

by Santiago Fernandez

The trust users place in Artificial Intelligence (AI) systems, and the way they engage with them, is shaped significantly by prior information about the AI's characteristics, according to research conducted by MIT and Arizona State University.

The research indicates that preliminary information can shape a user’s assumptions about an AI agent’s intentions, thereby affecting their subsequent interactions with the system.

A new study reveals that initial beliefs about an AI entity, such as a chatbot, have substantial repercussions on how individuals engage with it and how they assess its reliability, empathy, and effectiveness.

The Role of Preliminary Information in Shaping User Experience

The study by MIT and Arizona State University demonstrates that users' views of a chatbot designed for mental health support could be molded by telling them in advance that the AI was empathetic, neutral, or manipulative. This preconception then influenced how they communicated with the AI, even though every participant was conversing with the identical agent.
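The design described above, one underlying agent paired with different priming statements, can be illustrated with a short sketch. The condition names and wording below are assumptions made for illustration, not the study's actual materials:

```python
import random

# Hypothetical priming statements; the study's actual wording is not reproduced here.
PRIMES = {
    "empathetic": "The chatbot you are about to use is caring and has your best interests at heart.",
    "neutral": "The chatbot you are about to use is a standard conversational program.",
    "manipulative": "The chatbot you are about to use has manipulative intentions.",
}

def assign_condition(participant_id: int, seed: int = 0) -> str:
    """Deterministically assign a participant to one of the three priming conditions."""
    rng = random.Random(f"{seed}-{participant_id}")  # str seed keeps assignment reproducible
    return rng.choice(sorted(PRIMES))

def run_session(participant_id: int, chatbot):
    """Show the priming statement, then hand back the SAME chatbot for every condition."""
    condition = assign_condition(participant_id)
    print(PRIMES[condition])  # the only thing that differs between groups
    return chatbot            # identical agent regardless of condition
```

The key point the sketch captures is that only the preliminary statement varies between groups; the agent itself never changes.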

The majority of users who were told the AI was compassionate came to believe it was, and also gave it higher performance ratings than those led to believe it was manipulative. Meanwhile, fewer than half of those told the agent was manipulative actually considered the chatbot malicious, suggesting that people may extend the benefit of the doubt to AI, just as they do with fellow humans.

Reciprocal Influence in AI Conversations

The study discovered a self-reinforcing cycle between a user’s conceptual understanding of an AI entity and the responses from the AI system. Conversational sentiments became increasingly positive if the user assumed the AI to be empathetic, while the opposite occurred for those who believed the AI had malicious intentions.
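The self-reinforcing cycle can be pictured with a toy model. This is purely an illustration of the feedback dynamic the researchers describe, not a model from the paper; the mirroring coefficient and update rule are invented for the sketch:

```python
def feedback_loop(initial_belief: float, mirroring: float = 0.5, steps: int = 10) -> list:
    """Toy model: the user's sentiment colors their messages, the AI mirrors that
    tone, and the mirrored tone nudges the user's sentiment further the same way."""
    sentiment = initial_belief  # +1.0 = fully positive prior, -1.0 = fully negative
    history = [sentiment]
    for _ in range(steps):
        ai_tone = mirroring * sentiment                        # AI reflects the user's tone
        sentiment = max(-1.0, min(1.0, sentiment + 0.2 * ai_tone))  # belief drifts toward the tone
        history.append(sentiment)
    return history
```

Starting from a positive prior, the sentiment trajectory drifts upward; from a negative prior, it drifts downward, mimicking the loop the study observed.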

According to Pat Pataranutaporn, a graduate student at MIT’s Fluid Interfaces group and a co-author of the study, not only does describing an AI’s attributes affect a user’s mental model, but it also shifts their behavior. This alteration in behavior subsequently affects the AI’s responses, thereby completing the loop.

Significance of AI Presentation in Public Discourse

This study, recently published in Nature Machine Intelligence, accentuates the necessity for scrutinizing how AI is portrayed in the media and popular culture, as these have substantial influence over our conceptual frameworks. The authors caution that similar priming techniques could be employed to mislead the public about an AI’s capabilities or intentions.

Pattie Maes, professor at MIT, contends that AI is not just a technological challenge but also a matter of human factors. How AI is described can profoundly impact its efficacy when deployed in human interactions.

Subjective Perception vs. Technological Reality

The researchers explored to what extent the perception of empathy and effectiveness in AI is a result of the technology itself or stems from individual interpretation. They also investigated the possibility of influencing a user’s subjective perception through priming.

Long-Term Impact and Future Research

The findings suggest that preliminary information can significantly shape user perceptions of and interactions with AI, which may produce inflated trust and lead users to follow incorrect advice. Going forward, the researchers aim to examine how counteracting user bias could affect AI interactions, and to explore applying these findings in areas such as mental health treatment.

The research was partially sponsored by the Media Lab, the Harvard-MIT Program in Health Sciences and Technology, Accenture, and KBTG.

Reference: “Influencing human–AI interaction by priming beliefs about AI can increase perceived trustworthiness, empathy and effectiveness” by Pat Pataranutaporn, Ruby Liu, Ed Finn, and Pattie Maes, published on 2 October 2023 in Nature Machine Intelligence. DOI: 10.1038/s42256-023-00720-7

Frequently Asked Questions (FAQs) about Priming in AI User Interactions

What is the main focus of the MIT and Arizona State University study?

The main focus of the study is to understand how priming or preliminary information can affect user interactions and trust levels with Artificial Intelligence (AI), particularly chatbots designed for mental health support.

What is priming and how does it affect user interactions with AI?

Priming is the act of providing users with initial information that shapes their expectations and beliefs about a given AI agent. In the context of this study, priming significantly influenced how users interacted with the AI, as well as their perceptions of its trustworthiness, empathy, and effectiveness.

What was the role of the Fluid Interfaces group at MIT in this study?

The Fluid Interfaces group at MIT was directly involved in the research. Pat Pataranutaporn, a graduate student from this group, was a co-lead author of the study. The group aims to understand the interplay between human and machine interfaces, which is directly relevant to the study’s focus on user-AI interactions.

How did user beliefs affect the effectiveness of the AI?

Most users who were primed to believe that the AI was empathetic gave it higher performance ratings. Conversely, users who were led to believe the AI was manipulative tended to rate it lower, suggesting that initial beliefs have a substantial impact on perceived effectiveness.

What was the impact of negative priming statements?

Fewer than half of the users who were primed with negative information about the AI believed it to be malicious. This suggests that people may extend the benefit of the doubt to AI, similar to how they do with fellow humans.

What are the future directions for this research?

The researchers plan to examine how AI-user interactions would be influenced if AI agents were designed to counteract some user biases. They are also interested in applying their findings in practical areas like mental health treatments and exploring how a user’s mental model of an AI changes over time.

Who funded this research?

The research was partially sponsored by the Media Lab, the Harvard-MIT Program in Health Sciences and Technology, Accenture, and KBTG.

What caution does the study offer regarding AI priming?

The study raises concerns that priming techniques could be used to deceive users about an AI’s capabilities or intentions, potentially leading to misplaced trust and the following of incorrect advice.

