
Events Calendar


  • PhD Defense - Ross Mead

    Thu, Dec 03, 2015 @ 02:00 PM - 04:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    Title: Situated Proxemics and Multimodal Communication: Space, Speech, and Gesture in Human-Robot Interaction

    Committee: Maja Mataric, Gaurav Sukhatme, Gisele Ragusa (external member)

    Abstract:
    To facilitate face-to-face human-robot interaction (HRI), a sociable robot must employ multimodal communication mechanisms similar to those used by humans: speech production (via speakers), speech recognition (via microphones), gesture production (via physical embodiment), and gesture recognition (via cameras or motion trackers). Like any other signals, these social signals are attenuated by distance and by interference in the medium through which they travel. People often adjust the production of their social signals to compensate for this attenuation, for example, by speaking louder, using broader gestures, or moving closer. How can a sociable robot do the same?
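
    As a rough illustration of the attenuation described above, the following Python sketch models idealized free-field speech propagation, where sound pressure level (SPL) falls off by 20*log10(d/d_ref) dB, roughly 6 dB per doubling of distance. All levels and distances here are illustrative placeholders, not values from the dissertation.

        import math

        def received_spl(source_spl_db, distance_m, ref_m=1.0):
            """SPL (dB) arriving at a listener distance_m away, free-field falloff."""
            return source_spl_db - 20.0 * math.log10(distance_m / ref_m)

        def compensated_source_spl(target_spl_db, distance_m, ref_m=1.0):
            """How loudly to speak so the listener still receives target_spl_db."""
            return target_spl_db + 20.0 * math.log10(distance_m / ref_m)

        # Conversational speech is roughly 60 dB SPL at 1 m.
        print(received_spl(60.0, 4.0))            # ~48 dB heard at 4 m
        print(compensated_source_spl(60.0, 4.0))  # must produce ~72 dB to compensate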

    This dissertation investigates how social (speech and gesture) and environmental (loud noises and reduced visibility) factors influence positioning and communication between humans and sociable robots. Specifically, this research answers the following questions: 1) How should a robot dynamically adjust its position (proxemics) to maximize its automated recognition of human social signals? 2) How should a robot adjust its own communication behaviors to maximize human perceptions of its social signals? 3) How can a robot quickly adapt its models of proxemic and communication behavior to differences in human social signal perception?

    This research formalizes an extensible unifying framework for situated proxemics and multimodal communication in HRI. The framework considers how both humans and robots experience social signals in face-to-face interactions. Data collection studies were conducted to inform probabilistic graphical models, based on the framework, that predict how speech and gesture are produced (transmitted) and perceived (received) by both humans and robots at different distances and under environmental interference.
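
    As a toy stand-in for the kind of data-driven perception model described above, the sketch below treats the probability that the robot's speech recognizer succeeds as a logistic function of the signal-to-noise ratio (SNR) at its microphone. The functional form and the parameters k and snr_50 are hypothetical placeholders for what would actually be fit from the collected interaction data.

        import math

        def snr_at_robot(source_spl_db, distance_m, noise_spl_db):
            """Received speech level (free-field falloff from 1 m) minus ambient noise, in dB."""
            return source_spl_db - 20.0 * math.log10(distance_m) - noise_spl_db

        def p_recognized(snr_db, k=0.5, snr_50=5.0):
            """Logistic model: recognition is a coin flip at snr_50 dB SNR."""
            return 1.0 / (1.0 + math.exp(-k * (snr_db - snr_50)))

        # Recognition degrades smoothly as the speaker moves away in a 40 dB room.
        for d in (0.5, 1.0, 2.0, 4.0):
            print(d, p_recognized(snr_at_robot(60.0, d, 40.0)))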

    This work integrates the resulting data-driven models into an autonomous proxemic behavior and multimodal communication control system for sociable robots. The robot control system selects positioning parameters to maximize its ability to automatically recognize natural human speech and gestures. Furthermore, it can dynamically adjust its own speech and gestures to maximize human perceptions of its social signals. Experiments evaluating user acceptance of the autonomous robot proxemic control system demonstrated that human users are willing to adapt their behavior preferences in exchange for improved robot performance in social contexts.
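
    A minimal sketch of the position-selection step, reusing snr_at_robot and p_recognized from the toy model above: the robot scores candidate interaction distances by expected recognition probability, penalized for straying from the user's preferred social distance. The candidate grid, preferred distance, and trade-off weight are illustrative assumptions, not the dissertation's actual parameters.

        def choose_distance(source_spl_db, noise_spl_db, preferred_m=1.2, weight=0.1):
            candidates = [0.5 + 0.1 * i for i in range(31)]  # 0.5 m .. 3.5 m
            def score(d):
                p = p_recognized(snr_at_robot(source_spl_db, d, noise_spl_db))
                return p - weight * abs(d - preferred_m)  # recognition vs. social comfort
            return max(candidates, key=score)

        # In a quiet room the robot holds the user's preferred distance; under
        # loud ambient noise the best-scoring distance shifts closer to the speaker.
        print(choose_distance(60.0, 40.0))  # quiet: ~1.2 m
        print(choose_distance(60.0, 55.0))  # noisy: moves in close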

    This research establishes a foundational component of HRI, enabling the development of robust controllers for socially intelligent robots in complex environments.
    Furthermore, this work has implications for technology personalization in socially assistive contexts with people with special needs, such as older adults, children with autism spectrum disorders, and people with hearing or visual impairments or sensitivities.

    Location: Ronald Tutor Hall of Engineering (RTH) - 406

    Audiences: Everyone Is Invited

    Contact: Lizsl De Leon

