University of Southern California

Events Calendar




Events for the 1st week of May

  • Seminars in Biomedical Engineering

    Mon, May 01, 2017 @ 12:30 PM - 01:50 PM

    Alfred E. Mann Department of Biomedical Engineering

    Conferences, Lectures, & Seminars


    Speaker: Joe Zhong, USC

    Talk Title: Microfluidics for Cancer

    Host: Qifa Zhou

    Location: Olin Hall of Engineering (OHE) - 122

    Audiences: Everyone Is Invited

    Contact: Mischalgrace Diasanta

  • Center for Cyber-Physical Systems and Internet of Things and Ming Hsieh Institute for Electrical Engineering Joint Seminar Series on Cyber-Physical Systems

    Mon, May 01, 2017 @ 02:00 PM - 03:00 PM

    Ming Hsieh Department of Electrical and Computer Engineering

    Conferences, Lectures, & Seminars


    Speaker: Xiaoqing Jin, Senior Engineer, Toyota Motors North America R&D

    Talk Title: Logic Driven Data Science

    Abstract: Data science together with machine learning is prevalent in almost every sector of industry. Many popular techniques, such as deep learning with artificial neural networks, have shown their capabilities in achieving incredible performance and accuracy in helping make Cyber-Physical Systems (CPS) smarter. However, data scientists or engineers usually find it challenging to interpret the artifacts learned using such procedures. Also, due to the proliferation of sensors, control engineers have to combat the data deluge problem. They need to process, analyze, and identify structure or logical relations from intractably large amounts of time series data within a limited amount of time. Typical machine learning techniques rely on similarity measures defined on complex feature spaces of signals and may overlook the embedded logical structure. In this talk, we explore data analysis from the logical perspective and introduce supervised and unsupervised learning procedures that utilize Parametric Signal Temporal Logic (PSTL) templates to discover temporal and spatial relations in signal space. The resulting methods not only perform data analysis but also generate formal artifacts to give engineers an abstract understanding of the results. We will demonstrate our techniques in many domains, such as automotive testing, medical devices, and online education systems.

    Biography: Xiaoqing Jin is a Senior Engineer at Toyota Motors North America R&D. She received her Ph.D. from the University of California at Riverside on topics including symbolic model checking, stochastic model checking, and formal verification and validation for hybrid systems. She began her career doing advanced research at Toyota, where she was responsible for researching and developing techniques and tools to support the design and analysis of industrial cyber-physical systems, such as control systems for internal combustion engine vehicles and fuel cell electric vehicles. Her research interests span the broad areas of hybrid systems, temporal logics, machine learning, data analysis, control theory, dynamical systems, and automotive control systems.

    Host: Paul Bogdan

    Location: Hughes Aircraft Electrical Engineering Center (EEB) - 248

    Audiences: Everyone Is Invited

    Contact: Estela Lopez

  • First Mini-Workshop on Cyber-Physical Security and Privacy

    Tue, May 02, 2017 @ 09:00 AM - 12:00 PM

    Ming Hsieh Department of Electrical and Computer Engineering

    Workshops & Infosessions


    First Mini-Workshop on Cyber-Physical Security and Privacy
    USC Viterbi Center of Cyber-Physical Systems and the Internet of Things (CCI)
    Tuesday, May 2, 2017, 9am - 12pm, EEB 132

    Welcome to a mini-workshop consisting of exciting research talks by the following set of Viterbi faculty and researchers working on Cyber-Physical Security and Privacy from many perspectives, including cryptography, algorithms and protocols, data management, systems engineering, and CPS design.

    * Cliff Neuman, Director, Center for Computer Systems Security, USC/ISI
    * Alefiya Hussain, Computer Scientist, USC/ISI
    * Muhammad Naveed, Assistant Professor, Computer Science
    * Aleksandra Korolova, Assistant Professor, Computer Science
    * Shahram Ghandeharizadeh, Professor, Computer Science
    * Neno Medvidovic, Professor, Computer Science
    * Neil Siegel, Professor, Industrial Systems Engineering
    * Pierluigi Nuzzo, Assistant Professor, Electrical Engineering

    It is a great opportunity to hear from a stellar collection of our own faculty about their research in this area of growing importance.

    Don't miss it!

    Location: Hughes Aircraft Electrical Engineering Center (EEB) - 132

    Audiences: Everyone Is Invited

    Contact: Brienne Moore

  • USC Stem Cell Seminar: Ashley Seifert, University of Kentucky

    Tue, May 02, 2017 @ 11:00 AM - 12:00 PM

    Alfred E. Mann Department of Biomedical Engineering

    Conferences, Lectures, & Seminars


    Speaker: Ashley Seifert, University of Kentucky

    Talk Title: TBD

    Series: Eli and Edythe Broad Center for Regenerative Medicine and Stem Cell Research at USC Distinguished Speakers Series

    Host: USC Stem Cell

    More Info: http://stemcell.usc.edu/events

    Webcast: http://keckmedia.usc.edu/stem-cell-seminar

    Location: Eli & Edythe Broad CIRM Center for Regenerative Medicine & Stem Cell Resch. (BCC) - First Floor Conference Room

    WebCast Link: http://keckmedia.usc.edu/stem-cell-seminar

    Audiences: Everyone Is Invited

    Contact: Cristy Lytal/USC Stem Cell

    Event Link: http://stemcell.usc.edu/events

  • Using Theory to Reveal Function in Large Brain Circuits

    Wed, May 03, 2017 @ 10:00 AM - 12:00 PM

    Ming Hsieh Department of Electrical and Computer Engineering

    Conferences, Lectures, & Seminars


    Speaker: Friedrich Sommer, UC Berkeley

    Talk Title: Using Theory to Reveal Function in Large Brain Circuits

    Abstract: Current technology provides a virtual deluge of information about brain structure and physiology. Our laboratory focuses on developing new theoretical frameworks and analytical methods that take advantage of this accelerated rate of data influx to address central problems in neuroscience. I will discuss three different projects.
    High-density multi-electrode recordings monitor the spike trains of individual neurons with unparalleled temporal accuracy and also provide spatially distributed information about local field potentials (LFPs), slow signals generated by groups of neurons. In the hippocampus, the relative timing between the spikes of a certain class of neurons (place cells) and a 10 Hz signal present in the LFP (the theta wave) carries information about the animal's position in the environment. Using data obtained in the Buzsaki laboratory, we developed a novel approach to decode the animal's position precisely from the LFP alone. Further, we were able to extract LFP place components that, like place cells, neatly tile the spatial environment. The LFP is far simpler to record than spike trains, and it is feasible to obtain from human patients. Thus, our results can be leveraged to build robust brain-computer interfaces.
    Integration of information across regions and modalities is a fundamental working principle of the brain. We developed a novel method to estimate integrated information. The method can be applied to recordings with large numbers (thousands) of channels. We recently provided the first estimate of integrated information in a whole animal, the behaving nematode (C. elegans). Further, we found that the mesoscopic mouse connectome integrates significantly more information than other network architectures, suggesting that integrated information is a plausible force for driving evolution.
    Theoretical principles, such as Hebbian plasticity, error-based learning, and reward-based learning, give insight into how the brain forms sensory codes and object categories and develops strategies to obtain rewards. However, we lack principles to understand how the brain guides the body to explore the environment efficiently, such that it is possible to form models of the world from small numbers of observations. We proposed a novel principle that selects actions leading to the sensory observations that best improve the current model of the environment. This principle can be cast in a formal framework based on defining the information gain of the model. The resulting algorithm generates models of novel environments with greater speed than previously achieved. On one hand, the new principle generates testable predictions about how brains control action/perception loops; on the other, it has technical applications in robotics and artificial intelligence.

    Biography: Friedrich T. Sommer holds a Ph.D. in Physics from the University of Dusseldorf and a habilitation in Computer Science from the University of Ulm. After completing postdoctoral work at MIT and the University of Tuebingen, he joined the Department of Computer Science at the University of Ulm in 1998 as an Assistant Professor. He became a Principal Investigator at the Redwood Neuroscience Institute in Menlo Park in 2003 before joining the University of California, Berkeley in 2005, where he is an Adjunct Professor at the Redwood Center for Theoretical Neuroscience and the Helen Wills Neuroscience Institute.

    Host: Shrikanth Narayanan & Richard Leady

    Location: Hughes Aircraft Electrical Engineering Center (EEB) - 132

    Audiences: Everyone Is Invited

    Contact: Tanya Acevedo-Lam/EE-Systems

  • Speech Technology Research and Applications at LPTV

    Wed, May 03, 2017 @ 02:00 PM - 04:00 PM

    Ming Hsieh Department of Electrical and Computer Engineering

    Conferences, Lectures, & Seminars


    Speaker: Nestor Becerra Yoma, Universidad de Chile in Santiago

    Talk Title: Speech Technology Research and Applications at LPTV

    Abstract: In this talk I will describe the research I have carried out over the last 17 years in the Speech Processing and Transmission Laboratory (LPTV, Laboratorio de Procesamiento y Transmisión de Voz), which I founded at Universidad de Chile in 2000. I will discuss our seminal work on uncertainty and how the first results were achieved; as far as we know, this was the first uncertainty modeling in HMMs. I will talk about our experience with speech technology for telephone applications and second-language learning, and discuss some relevant papers on stochastic weighted Viterbi decoding, multi-classifier fusion, CAPT, and VoIP. I will describe the state-of-the-art robotic platform that we have implemented to pursue our research on voice-based human-robot interaction; in this context, I will present locally normalized features that address the time-varying channel problem. I will show demos and discuss ideas on voice-based HRI. Finally, I will summarize our results on multidisciplinary research on signal processing.

    Biography: Néstor Becerra Yoma received his Ph.D. degree from the University of Edinburgh, UK, and his M.Sc. and B.Sc. degrees from UNICAMP (Campinas State University), Sao Paulo, Brazil, all in Electrical Engineering, in 1998, 1993, and 1986, respectively. Since 2000 he has been a Professor in the Department of Electrical Engineering, Universidad de Chile, in Santiago, where he currently lectures on telecommunications and speech processing. In 2011 he was promoted to Full Professor. From 2016 to 2017 he was a visiting professor at CMU, USA. At Universidad de Chile he started the Speech Processing and Transmission Laboratory to carry out research on speech technology applications in human-robot interaction, language learning, the Internet, and telephone lines. His research interests also include multidisciplinary research on signal processing in fields such as astronomy, mining, and volcanology. He is the author of about 40 journal articles, 40 conference papers, and three patents. Professor Becerra Yoma was an associate editor of the IEEE Transactions on Speech and Audio Processing for four years.

    Host: Shrikanth Narayanan

    Location: Hughes Aircraft Electrical Engineering Center (EEB) - 132

    Audiences: Everyone Is Invited

    Contact: Tanya Acevedo-Lam/EE-Systems

  • PhD Defense - Elaine Short

    Wed, May 03, 2017 @ 02:00 PM - 04:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    PhD Candidate: Elaine Short

    Title: Managing Multi-Party Social Dynamics for Socially Assistive Robotics

    Date: 05/03/17
    Time: 2-4pm
    Location: RTH 406

    Committee:

    Maja Matarić (Chair)
    David Traum
    Gaurav Sukhatme
    Gisele Ragusa (External)

    Abstract:

    This dissertation presents a domain-independent computational model of moderation of multi-party human-machine interactions that enables a robot or virtual agent to act as a moderator in a group interaction.
    A moderator is defined in this work as an agent that regulates social and task outcomes in a goal-oriented social interaction. This model has multiple applications in human-machine interaction: groups of people often require some management or facilitation to ensure smooth and productive interaction, especially when the context is emotionally fraught or the participants do not know each other well. A particularly relevant application domain for moderation is in Socially Assistive Robotics (SAR), where systems are frequently deployed without complex speech understanding or dialogue management, but where group interactions can benefit from a moderator's participation. The evaluation of the model focuses on intergenerational interactions, but the model is applicable to various other SAR domains as well, including group therapy, informal teaching between peers, and social skills therapy.

    Moderation is formalized as a decision-making problem, where measures of task performance and positive social interaction in a group are maximized through the behavior of a social moderator. This framework provides a basis for the development of a series of control algorithms for robot moderators to assist groups of people in improving task performance and managing the social dynamics of interactions in diverse domains. Based on reliably-sensed features of the interaction such as task state and voice activity, the moderator takes social actions that can predictably alter task performance and the social dynamics of the interaction. Thus the moderator is able to support human-human interaction in unpredictable, open-ended, real-world contexts.

    The model of moderation provides a framework for developing algorithms that enable robots to moderate group interactions without the need for speech recognition; it complements dialogue systems and human-computer interaction, providing conversational agents with additional strategies for managing dynamics of group interaction. Four algorithms are developed based on the model: a basic moderation algorithm, a task-goal-based moderation algorithm, a social-feature-based moderation algorithm, and a combined algorithm that takes into account both task goals and social features. These algorithms are validated in both peer-group interactions and inter-generational family interactions where the moderator supports interactions including members of multiple generations within the same family. The work is intended for short- and long-term deployments of socially assistive robots and virtual agents, and can be applied across assistive domains to facilitate social interactions and improve task performance.

    Location: Ronald Tutor Hall of Engineering (RTH) - 406

    Audiences: Everyone Is Invited

    Contact: Lizsl De Leon

  • AI Seminar

    Thu, May 04, 2017 @ 11:00 AM - 12:00 PM

    Information Sciences Institute

    Conferences, Lectures, & Seminars


    Speaker: Yan Liu, Associate Professor, USC

    Talk Title: Deep Learning Models for Time Series Data Analysis with Applications to Healthcare

    Abstract: Many emerging applications of big data involve time series data. We'll discuss a collection of deep learning models to effectively analyze and model large-scale time series data. We'll show experiment results to demonstrate the effectiveness of our models in healthcare.

    Biography: Yan Liu has been an associate professor in the Computer Science Department at the University of Southern California since 2010. Before that, she was a Research Staff Member at IBM Research. She received her M.Sc. and Ph.D. degrees from Carnegie Mellon University in 2004 and 2007, respectively. Her research interests include developing scalable machine learning and data mining algorithms for time series data and structured data, with applications to social media analysis, computational biology, climate modeling, and health care. She has received several awards, including the NSF CAREER Award, the Okawa Foundation Research Award, an ACM Dissertation Award Honorable Mention, a Best Paper Award at the SIAM Data Mining Conference, and Yahoo, IBM, and Facebook faculty awards, and she has won several data mining competitions, such as the KDD Cup and the INFORMS data mining competition.

    Host: Mayank Kejriwal

    More Info: http://webcastermshd.isi.edu/Mediasite/Play/5447fbec7809488a9444c23f8b3619ce1d

    Location: Information Science Institute (ISI) - 11th floor large conference room

    Audiences: Everyone Is Invited

    Contact: Kary LAU

    Event Link: http://webcastermshd.isi.edu/Mediasite/Play/5447fbec7809488a9444c23f8b3619ce1d

  • The FuzzyLog Approach to Building Distributed Services

    Thu, May 04, 2017 @ 04:00 PM - 05:00 PM

    Ming Hsieh Department of Electrical and Computer Engineering

    Conferences, Lectures, & Seminars


    Speaker: Mahesh Balakrishnan, Yale University

    Talk Title: The FuzzyLog Approach to Building Distributed Services

    Abstract: Control plane applications such as coordination services, SDN controllers, filesystem namespaces, and big data schedulers have strong requirements for consistency as well as performance. Building such applications is currently a black art, requiring a slew of complex distributed protocols that are inefficient when layered and difficult to combine. The shared log approach (seen in the Corfu, Tango, and CorfuDB systems) achieves simplicity for distributed applications by replacing complex protocols with a single shared log; however, it does so by introducing a global ordering over all updates in the system, which can be expensive, unnecessary, and sometimes impossible. We propose the FuzzyLog abstraction, which provides applications the simplicity of a shared log without its drawbacks. The FuzzyLog allows applications to construct and access a durable, iterable partial order of updates in the system. FuzzyLog applications retain the simplicity of their shared log counterparts while extracting parallelism, providing a range of consistency guarantees and tolerating network partitions.

    Biography: Mahesh Balakrishnan has been an Associate Professor (pre-tenure) at Yale University since Fall 2015. He received a PhD in Computer Science from Cornell University in 2009. He worked at Microsoft Research Silicon Valley from 2008 to 2014, where he co-led the CORFU and Tango projects on shared log systems, and briefly at VMware Research in 2015. His research interests span distributed systems, storage, and networking. Currently, his research centers on new abstractions that simplify the construction of fast, reliable, and consistent systems while hiding the complexity of concurrency, failures, and hardware details from programmers. He has published 35+ peer-reviewed papers in systems conferences such as SOSP, NSDI, and FAST and in journals such as TOCS. His current research is funded by NSF, Facebook awards, and a VMware Early Career faculty grant.

    Host: Xuehai Qian, x04459, xuehai.qian@usc.edu

    Audiences: Everyone Is Invited

    Contact: Gerrielyn Ramos

  • Computer Science Doctoral Preview Day

    Fri, May 05, 2017 @ 10:00 AM - 02:00 PM

    Viterbi School of Engineering Graduate Admission

    Receptions & Special Events


    Join us in Los Angeles on May 5th for our Computer Science Department Doctoral Preview at the USC Viterbi School of Engineering.

    This event is a great opportunity for students to learn about the PhD program and the various research areas in CS at one of the top-ranked institutions in the nation. Travel grants may be available for highly qualified students traveling from outside the Southern California area.
    More info and registration

    Location: Ronald Tutor Hall of Engineering (RTH)

    Audiences: Everyone Is Invited

    Contact: USC Viterbi Graduate & Professional Programs

  • NL Seminar - REPRESENTATION LEARNING FOR HUMAN AFFECT RECOGNITION-PhD Proposal Practice Talk

    Fri, May 05, 2017 @ 03:00 PM - 04:00 PM

    Information Sciences Institute

    Conferences, Lectures, & Seminars


    Speaker: Sayan Ghosh, USC/ICT

    Talk Title: REPRESENTATION LEARNING FOR HUMAN AFFECT RECOGNITION-PhD Proposal Practice Talk

    Series: Natural Language Seminar

    Abstract: Recent advances in end-to-end representation learning have made impressive strides in achieving state-of-the-art results in perception problems on speech, image and natural language. However, the area of affect understanding has mostly relied on off-the-shelf features to solve problems in emotion recognition, multi-modal fusion and generative modeling of affective speech and language. The potential impact of representation learning approaches to this area remains ripe for exploration. My thesis proposal is an important step in this direction. Firstly, I present an overview of my work on AU (Action Unit) detection, speech emotion recognition and glottal inverse filtering through speech modeling. Secondly, I introduce Affect LM, a novel neural language model for affective text generation which exploits prior knowledge through a dictionary of emotionally colored words such as the LIWC tool. Finally, I state some upcoming problems in representation learning for affect from speech and multi-modal language modeling which I plan to work on for the remainder of my degree.



    Biography: Sayan is a fourth-year PhD student at the University of Southern California, working in the Behavior Analytics and Machine Learning Group at the USC Institute for Creative Technologies (ICT) with Prof. Stefan Scherer. His research aims to build learning systems for better sensing of human behavior and emotion, and to integrate deep learning techniques with human affect. His areas of interest include, but are not limited to, deep learning, machine perception, affective computing, speech/signal processing, and generative modeling.

    Host: Nima Pourdamghani

    More Info: http://nlg.isi.edu/nl-seminar/

    Location: Information Science Institute (ISI) - 11th Flr Conf Rm # 1135, Marina Del Rey

    Audiences: Everyone Is Invited

    Contact: Peter Zamar

    Event Link: http://nlg.isi.edu/nl-seminar/
