Events for December 03, 2015
-
MFD - Chemical Engineering and Materials Science Lyman L. Handy Series: Walter Illman
Thu, Dec 03, 2015 @ 12:45 PM - 02:00 PM
Mork Family Department of Chemical Engineering and Materials Science
Conferences, Lectures, & Seminars
Speaker: Walter Illman, Univ of Waterloo
Talk Title: Significant Improvement to Imaging Hydraulic Heterogeneity in Heterogeneous Geologic Media via Hydraulic Tomography
Series: MFD Lyman L. Handy
Host: Prof. Behnam Jafarpour
Location: James H. Zumberge Hall Of Science (ZHS) - 159
Audiences: Everyone Is Invited
Contact: Jason Ordonez
-
PhD Defense - Ross Mead
Thu, Dec 03, 2015 @ 02:00 PM - 04:00 PM
Thomas Lord Department of Computer Science
University Calendar
Title: Situated Proxemics and Multimodal Communication: Space, Speech, and Gesture in Human-Robot Interaction
Committee: Maja Mataric, Gaurav Sukhatme, Gisele Ragusa (external member)
Abstract:
To facilitate face-to-face human-robot interaction (HRI), a sociable robot must employ multimodal communication mechanisms similar to those used by humans: speech production (via speakers), speech recognition (via microphones), gesture production (via physical embodiment), and gesture recognition (via cameras or motion trackers). Like any other signals, these social signals are affected by distance and interference present in the medium through which they travel. People often compensate for this attenuation by adjusting how they produce their social signals, for example, by speaking louder, using broader gestures, or moving closer. How can a sociable robot do the same?
This dissertation investigates how social (speech and gesture) and environmental (loud noises and reduced visibility) factors influence positioning and communication between humans and sociable robots. Specifically, this research answers the following questions: 1) How should a robot dynamically adjust its position (proxemics) to maximize its automated recognition of human social signals? 2) How should a robot adjust its own communication behaviors to maximize human perceptions of its social signals? 3) How can a robot quickly adapt its models of proxemic and communication behavior to differences in human social signal perception?
This research formalizes an extensible unifying framework for situated proxemics and multimodal communication in HRI. The framework considers how both humans and robots experience social signals in face-to-face interactions. Data collections were conducted to inform probabilistic graphical models based on the framework that predict how speech and gesture are produced (transmitted) and perceived (received) by both humans and robots at different distances and under environmental interference.
This work integrates the resulting data-driven models into an autonomous proxemic behavior and multimodal communication control system for sociable robots. The robot control system selects positioning parameters to maximize its ability to automatically recognize natural human speech and gestures. Furthermore, it can dynamically adjust its own speech and gestures to maximize human perceptions of its social signals. Experiments evaluating user acceptance of the autonomous proxemic control system demonstrated that human users are willing to adapt their behavior preferences in exchange for improved robot performance in social contexts.
This research establishes a foundational component of HRI, enabling the development of robust controllers for socially intelligent robots in complex environments.
Furthermore, this work has implications for technology personalization in socially assistive contexts with people with special needs, such as older adults, children with autism spectrum disorders, and people with hearing or visual impairments or sensitivities.
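The positioning idea sketched in the abstract can be made concrete with a small toy example. The Python sketch below picks an interaction distance by trading off a predicted speech-recognition rate (which degrades with distance and ambient noise) against a simple social-comfort term; the model form, parameter values, and function names are invented here for illustration and are not the data-driven models from the dissertation.

# Illustrative sketch only: a toy version of the positioning idea in the abstract.
# The model form, parameter values, and function names below are assumptions made
# for illustration; they are not the learned models described in the defense.
import math

def p_recognition(distance_m, noise_db):
    """Toy logistic model: predicted speech-recognition success falls off with
    distance, and faster when ambient noise is higher."""
    logit = 4.0 - (1.2 + 0.02 * noise_db) * distance_m
    return 1.0 / (1.0 + math.exp(-logit))

def comfort(distance_m):
    """Toy social-comfort term that peaks near a 'personal distance' of ~1.2 m."""
    return math.exp(-((distance_m - 1.2) ** 2) / 0.5)

def choose_distance(noise_db, candidates=(0.5, 0.75, 1.0, 1.25, 1.5, 2.0, 2.5)):
    """Pick the candidate distance with the best combined recognition/comfort score."""
    return max(candidates, key=lambda d: p_recognition(d, noise_db) * comfort(d))

if __name__ == "__main__":
    for noise in (40, 60, 80):  # quiet room, conversation, loud background (dB)
        d = choose_distance(noise)
        print(f"{noise} dB ambient noise -> stand at {d} m "
              f"(predicted recognition {p_recognition(d, noise):.2f})")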
Location: Ronald Tutor Hall of Engineering (RTH) - 406
Audiences: Everyone Is Invited
Contact: Lizsl De Leon
-
CS Colloquium: Kyunghyun Cho (NYU) - Neural machine translation - Progress Report
Thu, Dec 03, 2015 @ 04:00 PM - 05:00 PM
Thomas Lord Department of Computer Science
Conferences, Lectures, & Seminars
Speaker: Kyunghyun Cho, NYU
Talk Title: Neural machine translation -- Progress Report
Series: CS Colloquium
Abstract: This lecture satisfies requirements for CSCI 591: Computer Science Research Colloquium
Neural machine translation is a recently proposed framework for machine translation that is based purely on neural networks. It departs radically from existing, widely used, often phrase-based statistical machine translation by viewing machine translation as a supervised, structured output prediction problem and solving it with recurrent neural networks. In this talk, I will describe in detail what neural machine translation is and discuss recent advances that have made neural machine translation systems competitive with the conventional statistical approach. I will conclude by presenting my view on the future of machine translation and the big question of "is natural language special?"
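As a rough illustration of the encoder-decoder formulation described above (and not the speaker's actual system), the sketch below encodes a source sentence with one recurrent network and decodes the target sentence with another, training against the gold target tokens; the vocabulary sizes, dimensions, and GRU choice are arbitrary assumptions.

# Minimal sketch of the encoder-decoder ("sequence-to-sequence") idea behind
# neural machine translation. Dimensions and vocabulary sizes are arbitrary
# assumptions; this is not the system described in the talk.
import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB, EMB, HID = 1000, 1000, 64, 128

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_emb = nn.Embedding(SRC_VOCAB, EMB)
        self.tgt_emb = nn.Embedding(TGT_VOCAB, EMB)
        self.encoder = nn.GRU(EMB, HID, batch_first=True)
        self.decoder = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, TGT_VOCAB)

    def forward(self, src_ids, tgt_ids):
        # Encode the whole source sentence; keep only the final hidden state.
        _, h = self.encoder(self.src_emb(src_ids))
        # Decode conditioned on that state (teacher forcing with the gold prefix).
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), h)
        return self.out(dec_out)  # per-step scores over the target vocabulary

model = Seq2Seq()
src = torch.randint(0, SRC_VOCAB, (2, 7))  # a batch of 2 source sentences, length 7
tgt = torch.randint(0, TGT_VOCAB, (2, 9))  # gold target sequences, length 9
logits = model(src, tgt)                   # shape: (2, 9, TGT_VOCAB)
loss = nn.functional.cross_entropy(logits.reshape(-1, TGT_VOCAB), tgt.reshape(-1))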
The lecture will be available to stream via the webcast link below.
Biography: Kyunghyun Cho is an assistant professor of Computer Science and Data Science at New York University (NYU). Previously, he was a postdoctoral researcher at the University of Montreal under the supervision of Prof. Yoshua Bengio, after obtaining his doctorate at Aalto University (Finland) in early 2014. Kyunghyun's main research interests include neural networks, generative models, and their applications, especially to language understanding.
Host: Yan Liu
Location: Henry Salvatori Computer Science Center (SAL) - 101
WebCast Link: https://bluejeans.com/506861099
Audiences: Everyone Is Invited
Contact: Assistant to CS chair