My work focuses on the study and development of computational models for multimodal, physically situated interaction. The long-term question that drives my research agenda is: how can we create systems that reason more deeply about their surroundings and seamlessly participate in interactions and collaborations with people in the physical world? Examples include human-robot interactive systems, embodied conversational agents, intelligent spaces, and AR/VR applications.
Physically situated interaction hinges critically on the ability to model and reason about the dynamics of human interactions, including processes such as conversational engagement, turn-taking, grounding, interaction planning, and action coordination. Creating robust solutions that function in the real world brings numerous AI challenges to the fore. Example questions include issues of representation (e.g., what are useful formalisms for creating actionable, robust models of multiparty dialog and interaction?), machine learning methods for multimodal inference from streaming sensory data, predictive modeling, and decision making and planning under uncertainty and temporal constraints. My work aims to address such challenges and to create systems that operate and collaborate with people in the physical world.