
Events Calendar




Events for May 03, 2017

  • Using Theory to Reveal Function in Large Brain Circuits

    Wed, May 03, 2017 @ 10:00 AM - 12:00 PM

    Ming Hsieh Department of Electrical and Computer Engineering

    Conferences, Lectures, & Seminars


    Speaker: Friedrich Sommer, UC Berkeley

    Talk Title: Using Theory to Reveal Function in Large Brain Circuits

    Abstract: Current technology provides a virtual deluge of information about brain structure and physiology. Our laboratory focuses on developing new theoretical frameworks and analytical methods that take advantage of this accelerated rate of data influx to address central problems in neuroscience. I will discuss three different projects.
    High-density multi-electrode recordings monitor the spike trains of individual neurons with unparalleled temporal accuracy and also provide spatially distributed information about local field potentials (LFPs), slow signals generated by groups of neurons. In hippocampus, the relative timing between the spikes of a certain class of neurons (place cells) and a 10 Hz signal present in the LFP (the theta wave) carries information about the animal's position in the environment. Using data obtained in the Buzsaki laboratory, we developed a novel approach to decode the animal's position precisely from the LFP alone. Further, we were able to extract LFP place components that, like place cells, neatly tile the spatial environment. The LFP is far simpler to record than spike trains and is feasible to obtain from human patients; thus, our results can be leveraged to build robust brain-computer interfaces.
    Integration of information across regions and modalities is a fundamental working principle of the brain. We developed a novel method to estimate integrated information; the method can be applied to recordings with large numbers (thousands) of channels. We recently provided the first estimate of integrated information in a whole animal, the behaving nematode C. elegans. Further, we found that the mesoscopic mouse connectome integrates significantly more information than other network architectures, suggesting that integrated information is a plausible driving force in evolution.
    Theoretical principles such as Hebbian plasticity, error-based learning, and reward-based learning give insight into how the brain forms sensory codes and object categories and develops strategies to obtain rewards. However, we lack principles for understanding how the brain guides the body to explore the environment efficiently, so that models of the world can be formed from small numbers of observations. We proposed a novel principle that selects the actions leading to the sensory observations that best improve the current model of the environment. This principle can be cast in a formal framework based on defining the information gain of the model. The resulting algorithm builds models of novel environments faster than previously achieved. On the one hand, the new principle generates testable predictions about how brains control action-perception loops; on the other, it has technical applications in robotics and artificial intelligence.
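
    A minimal, illustrative sketch of the information-gain principle described in the last paragraph (not Prof. Sommer's actual formulation): an agent keeps Beta posteriors over several unknown Bernoulli "cells" and always probes the cell whose next observation is expected to reduce posterior entropy the most. The toy environment, variable names, and the Beta-Bernoulli belief model are assumptions made only for this example.

        import numpy as np
        from scipy.stats import beta

        # Each of K environment "cells" has an unknown Bernoulli parameter.
        # The agent's belief about cell k is a Beta(a[k], b[k]) posterior.
        rng = np.random.default_rng(0)
        K = 5
        truth = rng.uniform(size=K)      # hidden parameters (unknown to the agent)
        a = np.ones(K)                   # Beta counts: observed ones + 1
        b = np.ones(K)                   # Beta counts: observed zeros + 1

        def expected_information_gain(a_k, b_k):
            """Expected drop in posterior entropy from one more probe of this cell."""
            h_now = beta.entropy(a_k, b_k)
            p1 = a_k / (a_k + b_k)       # predictive probability of observing a 1
            h_next = p1 * beta.entropy(a_k + 1, b_k) + (1 - p1) * beta.entropy(a_k, b_k + 1)
            return h_now - h_next

        for step in range(200):
            gains = [expected_information_gain(a[k], b[k]) for k in range(K)]
            k = int(np.argmax(gains))    # act where the model is expected to improve most
            obs = rng.random() < truth[k]
            a[k] += obs
            b[k] += 1 - obs

        print("posterior means:", np.round(a / (a + b), 2))
        print("true parameters:", np.round(truth, 2))

    Because the expected entropy drop equals the mutual information between the unknown parameter and the next observation, the loop concentrates probes on the cells the model is most uncertain about.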

    Biography: Friedrich T. Sommer holds a Ph.D. in Physics from the University of Düsseldorf and a habilitation in Computer Science from the University of Ulm. After completing postdoctoral work at MIT and the University of Tübingen, he joined the Department of Computer Science at the University of Ulm in 1998 as an Assistant Professor. He became a Principal Investigator at the Redwood Neuroscience Institute in Menlo Park in 2003 before joining the University of California, Berkeley in 2005, where he is an Adjunct Professor at the Redwood Center for Theoretical Neuroscience and the Helen Wills Neuroscience Institute.

    Host: Shrikanth Narayanan & Richard Leahy

    Location: Hughes Aircraft Electrical Engineering Center (EEB) - 132

    Audiences: Everyone Is Invited

    Contact: Tanya Acevedo-Lam/EE-Systems

  • Speech Technology Research and Applications at LPTV

    Wed, May 03, 2017 @ 02:00 PM - 04:00 PM

    Ming Hsieh Department of Electrical and Computer Engineering

    Conferences, Lectures, & Seminars


    Speaker: Nestor Becerra Yoma, Universidad de Chile in Santiago

    Talk Title: Speech Technology Research and Applications at LPTV

    Abstract: In this talk I will describe the research I have carried out over the last 17 years at the Speech Processing and Transmission Laboratory (LPTV, Laboratorio de Procesamiento y Transmisión de Voz), which I founded at Universidad de Chile in 2000. I will discuss our seminal work on uncertainty and how the first results were achieved; as far as we know, these were the first results on uncertainty modelling in HMMs. I will talk about our experience with speech technology for telephone applications and second-language learning, and discuss some relevant papers on Stochastic Weighted Viterbi, multi-classifier fusion, CAPT, and VoIP. I will describe the state-of-the-art robotic platform we have implemented to pursue our research on voice-based human-robot interaction; in this context, locally normalized features will be presented to address the time-varying channel problem. I will show demos and discuss ideas on voice-based HRI. Finally, I will summarize our results on multidisciplinary research in signal processing.
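
    The Stochastic Weighted Viterbi work mentioned above weights each frame's contribution by an estimate of its reliability. The sketch below is only a rough illustration of that general idea, not the LPTV formulation: it runs ordinary log-domain Viterbi decoding with a per-frame weight on the emission log-likelihoods, so unreliable (e.g., noisy or packet-lossy) frames influence the decoded path less. The function name, array shapes, and weighting scheme are assumptions for the example.

        import numpy as np

        def weighted_viterbi(log_pi, log_A, log_B, frame_weight):
            """Log-domain Viterbi decoding with per-frame emission weights.

            log_pi: (S,) initial state log-probabilities
            log_A:  (S, S) transition log-probabilities
            log_B:  (T, S) per-frame emission log-likelihoods
            frame_weight: (T,) reliability in [0, 1]; 0 ignores a frame's acoustics
            """
            T, S = log_B.shape
            delta = log_pi + frame_weight[0] * log_B[0]
            back = np.zeros((T, S), dtype=int)
            for t in range(1, T):
                scores = delta[:, None] + log_A            # previous state -> current state
                back[t] = np.argmax(scores, axis=0)
                delta = scores[back[t], np.arange(S)] + frame_weight[t] * log_B[t]
            path = np.zeros(T, dtype=int)
            path[-1] = int(np.argmax(delta))
            for t in range(T - 1, 0, -1):                  # trace the best path backwards
                path[t - 1] = back[t, path[t]]
            return path

    With frame_weight set to all ones, this reduces to standard Viterbi decoding.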

    Biography: Néstor Becerra Yoma received his Ph.D. from the University of Edinburgh, UK, and his M.Sc. and B.Sc. degrees from UNICAMP (Campinas State University), São Paulo, Brazil, all in Electrical Engineering, in 1998, 1993, and 1986, respectively. Since 2000 he has been a Professor in the Department of Electrical Engineering at Universidad de Chile, Santiago, where he currently lectures on telecommunications and speech processing; he was promoted to Full Professor in 2011. From 2016 to 2017 he was a visiting professor at CMU, USA. At Universidad de Chile he started the Speech Processing and Transmission Laboratory to carry out research on speech technology applications in human-robot interaction, language learning, the Internet, and telephone lines. His research interests also include multidisciplinary applications of signal processing in fields such as astronomy, mining, and volcanology. He is the author of about 40 journal articles, 40 conference papers, and three patents. Professor Becerra Yoma served as an associate editor of the IEEE Transactions on Speech and Audio Processing for four years.

    Host: Shrikanth Narayanan

    Location: Hughes Aircraft Electrical Engineering Center (EEB) - 132

    Audiences: Everyone Is Invited

    Contact: Tanya Acevedo-Lam/EE-Systems

  • PhD Defense - Elaine Short

    Wed, May 03, 2017 @ 02:00 PM - 04:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    PhD Candidate: Elaine Short

    Title: Managing Multi-Party Social Dynamics for Socially Assistive Robotics

    Date: 05/03/17
    Time: 2-4pm
    Location: RTH 406

    Committee:

    Maja Matarić (Chair)
    David Traum
    Gaurav Sukhatme
    Gisele Ragusa (External)

    Abstract:

    This dissertation presents a domain-independent computational model of moderation of multi-party human-machine interactions that enables a robot or virtual agent to act as a moderator in a group interaction.
    A moderator is defined in this work as an agent that regulates social and task outcomes in a goal-oriented social interaction. This model has multiple applications in human-machine interaction: groups of people often require some management or facilitation to ensure smooth and productive interaction, especially when the context is emotionally fraught or the participants do not know each other well. A particularly relevant application domain for moderation is in Socially Assistive Robotics (SAR), where systems are frequently deployed without complex speech understanding or dialogue management, but where group interactions can benefit from a moderator's participation. The evaluation of the model focuses on intergenerational interactions, but the model is applicable to various other SAR domains as well, including group therapy, informal teaching between peers, and social skills therapy.

    Moderation is formalized as a decision-making problem, where measures of task performance and positive social interaction in a group are maximized through the behavior of a social moderator. This framework provides a basis for the development of a series of control algorithms for robot moderators to assist groups of people in improving task performance and managing the social dynamics of interactions in diverse domains. Based on reliably-sensed features of the interaction such as task state and voice activity, the moderator takes social actions that can predictably alter task performance and the social dynamics of the interaction. Thus the moderator is able to support human-human interaction in unpredictable, open-ended, real-world contexts.

    The model of moderation provides a framework for developing algorithms that enable robots to moderate group interactions without the need for speech recognition; it complements work in dialogue systems and human-computer interaction by providing conversational agents with additional strategies for managing the dynamics of group interaction. Four algorithms are developed based on the model: a basic moderation algorithm, a task-goal-based moderation algorithm, a social-feature-based moderation algorithm, and a combined algorithm that takes into account both task goals and social features. These algorithms are validated in both peer-group interactions and intergenerational family interactions in which the moderator supports interactions including members of multiple generations of the same family. The work is intended for short- and long-term deployments of socially assistive robots and virtual agents, and can be applied across assistive domains to facilitate social interactions and improve task performance.
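
    As a loose illustration of the kind of policy the framework above describes (purely hypothetical, not one of the dissertation's four algorithms), the sketch below maps reliably-sensed features such as per-participant voice activity and task progress to a single social action. The feature names, thresholds, and action labels are assumptions made only for this example.

        from dataclasses import dataclass

        @dataclass
        class InteractionState:
            """Reliably-sensed interaction features (names are illustrative)."""
            speaking_fraction: dict          # share of recent voice activity per participant
            task_progress: float             # 0.0 .. 1.0
            seconds_since_last_turn: float

        def choose_moderator_action(state: InteractionState) -> str:
            """A toy moderation policy: pick one social action from sensed features."""
            quietest, share = min(state.speaking_fraction.items(), key=lambda kv: kv[1])
            if state.seconds_since_last_turn > 10.0:
                return "prompt_group"            # restart a stalled conversation
            if share < 0.15:
                return "invite:" + quietest      # draw in an under-participating member
            if state.task_progress < 0.3:
                return "redirect_to_task"        # bring focus back to the shared task
            return "acknowledge_progress"        # positive social feedback

        # Example: the parent has barely spoken, so the moderator invites them in.
        state = InteractionState({"grandparent": 0.55, "child": 0.40, "parent": 0.05}, 0.6, 2.0)
        print(choose_moderator_action(state))    # -> invite:parent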

    Location: Ronald Tutor Hall of Engineering (RTH) - 406

    Audiences: Everyone Is Invited

    Contact: Lizsl De Leon
