Designing Trustworthy and Explainable Embodied AI Agents for Collaborative VR Tasks

PhD Research Proposal

The integration of Artificial Intelligence (AI) agents into Virtual Reality (VR) environments holds significant potential for enhancing collaborative tasks. When these AI agents are represented with a virtual body (i.e., embodied), they can act as more intuitive assistants, teammates, or facilitators, augmenting human capabilities and creating more dynamic and engaging immersive experiences. However, for truly effective and natural human-AI collaboration in VR, it is crucial that users develop trust in these embodied agents and clearly understand their actions and reasoning, especially given the nuances of non-verbal communication and the potential for the “uncanny valley” effect. This research proposes to investigate the design principles and interaction techniques for creating trustworthy and explainable embodied AI agents that can seamlessly collaborate with humans in VR environments.

Scholarships available

https://euraxess.ec.europa.eu/jobs/359396

https://euraxess.ec.europa.eu/jobs/359736

Motivation and Problem Statement

While AI agents are becoming increasingly sophisticated, their integration into collaborative VR scenarios, particularly with embodiment, raises critical questions about trust, transparency, and the fidelity of human-like interaction. Users may be hesitant to rely on agents whose decision-making processes are opaque or whose behavior, including non-verbal cues, seems unnatural or unpredictable. The “uncanny valley” phenomenon, where highly human-like but imperfect virtual characters can elicit feelings of unease or revulsion, poses a specific challenge to building trust with embodied agents. A lack of trust or understanding can hinder effective collaboration and limit the potential benefits of AI in VR. Furthermore, understanding why an embodied AI agent performs a particular action, especially when it involves gestures, gaze, or spatial movements, is essential for users to learn from the agent, identify potential errors, and build a robust mental model of its capabilities and limitations.

The core problem this research addresses is how to design embodied AI agents in VR that are both trustworthy and explainable to human collaborators. This includes investigating the role of virtual body language, facial expressions, and spatial interaction in conveying intent and reasoning. It also involves developing interface designs that leverage embodiment to foster trust and transparency, enabling agents to adapt their behavior based on nuanced human feedback, and addressing the specific ethical and psychological considerations (like the uncanny valley) inherent in human-AI collaboration with embodied agents in immersive environments.

Research Questions

This research aims to answer the following key questions:

  • Role of Embodiment in Trust and Understanding: How does the virtual embodiment of an AI agent (e.g., degree of anthropomorphism, visual fidelity, expressiveness) influence user trust, perceived intelligence, and understanding of the agent’s actions and intentions in collaborative VR tasks? What are the thresholds for triggering the “uncanny valley” effect in embodied VR agents, and how does this effect impact trust and collaboration?
  • Non-Verbal Communication for Explainability: What non-verbal cues and interaction styles (e.g., gaze, gestures, posture, facial expressions) can embodied AI agents effectively utilize to communicate their goals, intentions, and the reasoning behind their actions to human collaborators in a clear and intuitive manner within VR?
  • Interface Design for Transparent Embodied AI: What interface designs and interaction techniques within VR can foster transparency in human-embodied AI collaborative tasks? How can the interface provide insights into the agent’s internal state, decision-making process, and confidence levels through or in conjunction with its embodied presence?
  • Adaptive Embodied Behavior and Feedback: How can embodied AI agents in VR adapt their verbal and non-verbal behavior and communication style based on real-time human feedback and the evolving dynamics of the human-AI collaboration? What mechanisms allow users to intuitively provide feedback and influence the agent’s future embodied actions?
  • Impact on Collaborative Performance and User Experience: How do different approaches to designing trustworthy and explainable embodied AI agents affect objective measures of collaborative performance (e.g., task accuracy, efficiency) and subjective measures of user experience (e.g., co-presence, comfort, engagement, perceived coordination) in VR?
  • Design Guidelines for Embodied Trustworthy XAI: Based on empirical investigation, what generalizable design guidelines and a conceptual framework can be established for creating trustworthy and explainable embodied AI agents for collaborative VR tasks, considering both technical capabilities and human psychological factors?

Research Methodology

This research will employ a human-centered design approach, combining the iterative development of embodied AI agent prototypes with rigorous qualitative and quantitative user evaluation. The methodology will involve the following phases:

  • Phase 1: Literature Review and Conceptual Framework Development: A comprehensive review of research in human-AI interaction, trust in automation, explainable AI (XAI), collaborative VR, social robotics, virtual agents/avatars, non-verbal communication, and the uncanny valley phenomenon. This will inform the development of a theoretical framework for designing trustworthy and explainable embodied AI agents for collaborative VR tasks.
  • Phase 2: Design and Implementation of Embodied AI Agents: Designing and implementing embodied AI agents with varying degrees of anthropomorphism, behavioral expressiveness, and explainability for specific collaborative VR tasks (e.g., joint object manipulation, guided training, complex problem-solving scenarios). This will involve exploring different AI architectures, XAI techniques, and animation/rendering pipelines for realistic and expressive embodiment.
  • Phase 3: Design of VR Interfaces for Embodied Trust and Explanation: Developing VR interface elements and interaction techniques that highlight and clarify the embodied agent’s actions, reasoning, and internal state. This will include designing visual representations of AI reasoning that integrate with the agent’s body, and exploring the effectiveness of verbal explanations alongside non-verbal cues. A minimal illustrative sketch of such a cue mapping appears after this list.
  • Phase 4: User Studies and Evaluation: Conducting a series of controlled user studies to assess user trust, understanding, collaborative performance, and subjective experience when collaborating with different embodied AI agents in VR. These studies will specifically investigate the impact of various embodiment characteristics, explanation techniques, and agent behaviors on human-agent interaction, including measures of comfort and perception related to the uncanny valley. Both quantitative (e.g., task performance metrics, trust scales, eye-tracking data, physiological responses) and qualitative data (e.g., user interviews, questionnaires, think-aloud protocols) will be collected.
  • Phase 5: Data Analysis and Guideline Generation: Analyzing the collected data to identify key factors influencing trust, understanding, and collaborative effectiveness with embodied AI agents in VR. Based on the findings, developing actionable design guidelines and recommendations for creating trustworthy and explainable embodied AI agents in immersive environments. The research outcomes will be disseminated through peer-reviewed publications and conference presentations.
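The following is a minimal, purely illustrative Python sketch of the kind of mapping Phases 2 and 3 would prototype: a hypothetical planner state (intent, target, confidence) translated into legible non-verbal cues and a short, hedged verbal explanation. All class and function names are assumptions introduced for illustration only and do not refer to any particular game engine, agent framework, or XAI library.

    # Hedged sketch: one hypothetical way an embodied agent's internal decision
    # state could be surfaced as non-verbal cues plus a short verbal explanation.
    # All names are illustrative placeholders, not an existing API.

    from dataclasses import dataclass


    @dataclass
    class AgentDecision:
        """Hypothetical internal state exposed by the agent's planner."""
        intent: str          # e.g. "hand over the red component"
        target_object: str   # object the intent refers to
        confidence: float    # planner confidence in [0, 1]


    @dataclass
    class EmbodiedCues:
        """Non-verbal channel settings to be consumed by the animation layer."""
        gaze_target: str          # where the avatar should look
        pointing_gesture: bool    # whether to point at the target
        gesture_amplitude: float  # 0 = subtle, 1 = emphatic
        verbal_explanation: str   # short utterance surfaced via TTS or captions


    def decision_to_cues(decision: AgentDecision) -> EmbodiedCues:
        """Map planner state to legible embodied behaviour.

        Design intuition explored in the proposal: low-confidence decisions are
        signalled with hedged language and subdued gestures, so the human
        collaborator can calibrate trust rather than over-rely on the agent.
        """
        confident = decision.confidence >= 0.7
        explanation = (
            f"I'm going to {decision.intent} because it matches our current goal."
            if confident
            else f"I think we should {decision.intent}, but I'm not certain. "
                 f"Please confirm before I continue."
        )
        return EmbodiedCues(
            gaze_target=decision.target_object,
            pointing_gesture=confident,
            gesture_amplitude=0.8 if confident else 0.3,
            verbal_explanation=explanation,
        )


    if __name__ == "__main__":
        cues = decision_to_cues(
            AgentDecision(intent="hand over the red component",
                          target_object="red_component", confidence=0.55)
        )
        print(cues)

In the actual prototypes, such a mapping would feed an animation and rendering pipeline rather than a print statement; the sketch only illustrates the intended separation between the agent's reasoning layer and its embodied explanation layer.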

Work Plan (Example – 4 Year PhD)

  • Year 1: In-depth literature review focusing on embodied AI, trust, XAI, and social VR. Formalization of the theoretical framework. Preliminary design concepts for embodied AI agents and their communication modalities in VR. Exploration of animation and character rigging techniques suitable for expressive AI agents.
  • Year 2: Implementation of core embodied AI agent behaviors for a chosen collaborative VR task. Development of prototype VR environments incorporating these agents with varying levels of explainability through their embodiment and associated interface elements. Design and pilot testing of initial user study protocols.
  • Year 3: Conducting the main user studies to evaluate the impact of different embodied agent designs and explanation techniques on user trust, understanding, and collaborative performance. Focus on quantitative and qualitative data collection regarding human perception and interaction with embodied agents.
  • Year 4: In-depth data analysis and refinement of design guidelines for trustworthy and explainable embodied AI agents in VR. Exploration of the broader implications for human-AI interaction in immersive settings, including ethical considerations. Thesis writing and submission; publication of research results in leading venues. An illustrative analysis sketch follows this plan.
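As a hedged illustration of the quantitative analysis anticipated in Years 3 and 4, the Python sketch below compares a hypothetical questionnaire-based trust composite across three placeholder embodiment conditions using a one-way ANOVA (scipy.stats.f_oneway) and reports an effect size. The condition names and every number are invented for illustration only and do not represent collected data.

    # Hedged sketch of the kind of quantitative comparison planned for the
    # evaluation phase. All values are invented placeholders, not results.

    import numpy as np
    from scipy import stats

    # Hypothetical per-participant trust composites (e.g., the mean of a
    # 7-point trust-in-automation scale) for three embodiment conditions.
    abstract_avatar    = np.array([4.1, 3.8, 4.5, 4.0, 3.9, 4.2])
    stylised_humanoid  = np.array([5.2, 4.9, 5.5, 5.0, 4.8, 5.3])
    realistic_humanoid = np.array([4.6, 4.2, 3.9, 4.4, 4.1, 4.0])  # invented values

    # One-way between-subjects ANOVA on the trust composite.
    f_stat, p_value = stats.f_oneway(abstract_avatar, stylised_humanoid,
                                     realistic_humanoid)
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")

    # Effect size (eta squared) from sums of squares, for reporting.
    groups = [abstract_avatar, stylised_humanoid, realistic_humanoid]
    grand_mean = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_total = ((np.concatenate(groups) - grand_mean) ** 2).sum()
    print(f"eta^2 = {ss_between / ss_total:.2f}")

In practice, the studies would use validated trust scales, appropriate within-subjects or mixed designs, and complementary qualitative coding; the sketch only indicates the shape of the reporting pipeline, not the final analysis plan.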

Expected Contributions

This research is expected to make significant contributions to the fields of Human-Computer Interaction, Artificial Intelligence, and Virtual Reality by:

  • Providing novel design principles and interaction techniques for creating trustworthy and explainable embodied AI agents for collaborative tasks in VR.
  • Developing effective methods for communicating AI intent and reasoning through a combination of verbal and non-verbal cues from an embodied presence.
  • Contributing to a deeper understanding of the factors that influence user trust and perception of agency in embodied AI agents within immersive environments, particularly concerning the “uncanny valley.”
  • Exploring the unique challenges and opportunities of integrating Explainable AI (XAI) with virtual embodiment in VR.
  • Generating practical design guidelines and recommendations for researchers and practitioners developing human-embodied AI collaborative VR experiences.
  • Advancing the state-of-the-art in designing psychologically effective and socially intelligent AI companions for immersive environments.