
Events Calendar







Events for December 07, 2012

  • Meet USC: Admission Presentation, Campus Tour, & Engineering Talk (repeating event)

    Fri, Dec 07, 2012

    Viterbi School of Engineering Undergraduate Admission

    Receptions & Special Events


    This half-day program is designed for prospective freshmen and their family members. Meet USC includes an information session on the University and the admission process, a student-led walking tour of campus, and a meeting with us in the Viterbi School. Meet USC is designed to answer all of your questions about USC, the application process, and financial aid. Reservations are required. The program occurs twice, once at 8:30 a.m. and again at 12:30 p.m. Please visit https://esdweb.esd.usc.edu/unresrsvp/MeetUSC.aspx to check availability and make an appointment. Be sure to list an Engineering major as your "intended major" on the web form!

    Location: Ronald Tutor Campus Center (TCC)

    Audiences: Prospective Freshmen Students and Families


    Contact: Viterbi Admission

  • Multi-view Learning of Speech Features Using Articulatory Measurements

    Fri, Dec 07, 2012 @ 10:30 AM - 12:00 PM

    Ming Hsieh Department of Electrical and Computer Engineering

    Conferences, Lectures, & Seminars


    Speaker: Karen Livescu, Toyota Technological Institute at Chicago

    Talk Title: Multi-view Learning of Speech Features Using Articulatory Measurements

    Abstract: Articulatory information has been used in automatic speech recognition in a number of ways. For example, phonetic recognition can be improved if articulatory measurements are available at test time. However, it is usually not feasible to measure articulation at test time, due to the expense and inconvenience of the machinery involved. In this work, we ask whether it is possible to use articulatory measurements that are available only at training time to help learn which aspects of the acoustic feature vector are useful. We apply ideas from multi-view learning, in which multiple “views” of the data are available for training but possibly not for prediction (testing). In our case, the views are acoustics on the one hand and articulatory measurements on the other. In particular, we use canonical correlation analysis (CCA) and kernel CCA (KCCA), which find projections of vectors in each view that are maximally correlated with projections of vectors in the other view.
    A typical approach to acoustic feature vector generation in speech recognition is to first construct a very high-dimensional feature vector by concatenating multiple consecutive frames of raw features (MFCCs, PLPs, etc.), and then to reduce dimensionality using either an unsupervised transformation such as principal components analysis, a linear supervised transformation such as linear discriminant analysis and its extensions, or a nonlinear supervised transformation (e.g. using neural networks). Our approach here is unsupervised transformation learning, but using the second view (the articulatory measurements) as a form of “soft supervision”. The approach we take, using CCA and KCCA, avoids some of the disadvantages of other unsupervised approaches, such as PCA, which are sensitive to noise and data scaling, and possibly of supervised approaches, which are more task-specific.
    This talk will cover the basic techniques, as well as several issues that come up in their application, such as large-scale data issues, speaker-independence, and combination of the learned features with standard ones. The talk will include our results to date, showing that the approach can be used to improve performance on tasks such as phonetic classification and recognition.
    Joint work with Raman Arora (TTIC), Sujeeth Bharadwaj (UIUC), and Mark Hasegawa-Johnson (UIUC).


    Biography: Karen Livescu is an Assistant Professor at the Toyota Technological Institute at Chicago (TTIC). She completed her PhD in 2005 at MIT and spent the next two years as a post-doctoral lecturer in the MIT EECS department. Karen's interests are in speech and language processing, with a slant toward combining machine learning with knowledge from linguistics and speech science. Her recent work has been on articulatory models, multi-view learning, nearest-neighbor approaches, and automatic sign language recognition.

    Host: Kartik Audhkhasi, Prof. Shrikanth Narayanan

    Location: Ronald Tutor Hall of Engineering (RTH) - 320

    Audiences: Everyone Is Invited

    Contact: Mary Francis

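The projection step the abstract describes can be sketched in a few lines. The example below is a minimal, hypothetical illustration (not from the talk): two synthetic "views" share one latent dimension, and linear CCA recovers it by whitening each view and taking the SVD of the cross-covariance matrix; the singular values are the canonical correlations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-view data: one shared latent signal drives both views,
# standing in for the acoustic and articulatory measurements of a frame.
n = 2000
z = rng.normal(size=n)                                   # shared latent variable
X = np.column_stack([z + 0.1 * rng.normal(size=n),
                     rng.normal(size=n)])                # "acoustic" view
Y = np.column_stack([rng.normal(size=n),
                     z + 0.1 * rng.normal(size=n)])      # "articulatory" view

def cca(X, Y, eps=1e-8):
    """Return canonical correlations and per-view projection matrices."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Cxx = Xc.T @ Xc / (len(X) - 1) + eps * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / (len(Y) - 1) + eps * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / (len(X) - 1)

    def inv_sqrt(C):
        # Symmetric inverse square root via eigendecomposition.
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    # Whiten each view, then SVD of the whitened cross-covariance.
    Kx, Ky = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Kx @ Cxy @ Ky)
    return s, Kx @ U, Ky @ Vt.T

corrs, Wx, Wy = cca(X, Y)
print(corrs)  # top correlation is high: the shared dimension is recovered
```

Projecting new acoustic frames with `Wx` then needs no articulatory data, which mirrors the train-time-only use of the second view described in the abstract.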
  • Recent Work at National Taiwan University on Spoken Content Retrieval and Computer-Assisted Language Learning

    Fri, Dec 07, 2012 @ 02:00 PM - 03:30 PM

    Ming Hsieh Department of Electrical and Computer Engineering

    Conferences, Lectures, & Seminars


    Speaker: Prof. Lin-shan Lee, National Taiwan University

    Talk Title: Recent Work at National Taiwan University on Spoken Content Retrieval and Computer-Assisted Language Learning

    Abstract: This talk summarizes some recent speech processing research at National Taiwan University (NTU). The first part covers retrieval of spoken content that is semantically related to the query but does not necessarily include the query terms. This problem has been widely studied in text information retrieval; we will show how text-based techniques can be translated into approaches for spoken content. The second part presents a dialogue game framework for computer-assisted language learning. This work extends our previous work on NTU Chinese, a Chinese pronunciation evaluation software tool, to develop a dialogue game. We use reinforcement learning to train the dialogue manager, with the goal of offering learning materials adapted to the learning progress of each individual learner.

    Biography: Lin-shan Lee has been a professor at National Taiwan University since 1982. He developed some of the world's earliest Chinese spoken language systems (text-to-speech synthesis, large-vocabulary continuous speech recognition, etc.) and was elected an IEEE Fellow in 1992. His recent work focuses on speech recognition fundamentals (features, models, robustness, etc.) and network environment problems (e.g., retrieval, semantic analysis, and educational applications of spoken content). He served as a Distinguished Lecturer of the IEEE Signal Processing Society (SPS) (2007-08), a member of the Overview Paper Editorial Board of the IEEE SPS (since 2009), and general chair of ICASSP 2009 in Taipei. He also served as a Board member of the International Speech Communication Association (ISCA) (2001-09), in which he initiated and coordinated (2006-09) the ISCA Distinguished Lecturers Program. He was elected an ISCA Fellow in 2010.

    Host: Dogan Can and Prof. Shrikanth Narayanan

    Location: Ronald Tutor Hall of Engineering (RTH) - 320

    Audiences: Everyone Is Invited

    Contact: Mary Francis

  • Integrated Systems Seminar Series

    Fri, Dec 07, 2012 @ 03:00 PM - 04:30 PM

    Ming Hsieh Department of Electrical and Computer Engineering

    Conferences, Lectures, & Seminars


    Speaker: Ali Afsahi, Broadcom

    Talk Title: Integrated CMOS Power Amplifier for WLAN Applications

    Abstract: Consumer demand for WiFi-enabled products has increased over the last several years, and analyst reports conclude that the sales momentum is expected to continue for the next several years. From laptops to cell phones to television sets, our desire to share broadband information is making wireless connectivity a must-have feature for a range of devices. In order to keep cost down and achieve a smaller form factor, especially for embedded applications, the transceiver has been integrated with the baseband PHY and MAC as an SoC solution in a pure CMOS process. One of the main contributors to the cost and size of the solution is the external power amplifier (PA). Recently, there has been a significant amount of effort to implement PAs in CMOS technology. However, the low supply voltage, lossy substrate, and lower breakdown voltage make the design of a linear, high-power, reliable PA quite challenging in CMOS technology. In addition, advanced modulation schemes such as OFDM possess a very high peak-to-average ratio (PAR) and require a very linear power amplifier to preserve the integrity of the signal. This talk will cover the design of a fully integrated, reliable, linear, and efficient CMOS power amplifier for Wireless LAN applications that meets the stringent requirements of WLAN standards. In addition, several linearization and power-combining techniques will be discussed to boost the output power efficiently for watt-level applications.

    Biography: Ali Afsahi received the B.S. degree in electrical engineering from the University of Tehran, Tehran, Iran, in 1998, and the M.S.E. degree in RF/analog IC design from Arizona State University, Tempe, AZ, in 2006. He is currently pursuing the Ph.D. degree part-time at the University of California, San Diego. He joined Broadcom Corporation in 2004 as an RFIC designer and has since been involved in the design of various blocks for wireless LAN transceivers. He is currently a manager of IC design engineering, leading the transmitter and CMOS power amplifier designs for all of Broadcom's WLAN products. He has more than 30 issued and pending patents and is in his fourth year serving as a member of the RFIC Technical Program Committee. Mr. Afsahi was the recipient of the 2009 Analog Devices Outstanding Designer Award for his work on CMOS power amplifiers.

    Host: Prof. Hossein Hashemi, Prof. Mahta Moghaddam, Prof. Mike Chen

    More Info: http://mhi.usc.edu/activities/integrated-systems/

    Location: Hughes Aircraft Electrical Engineering Center (EEB) - 248

    Audiences: Everyone Is Invited

    Contact: Hossein Hashemi

    Event Link: http://mhi.usc.edu/activities/integrated-systems/

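The high peak-to-average ratio of OFDM that the abstract cites can be measured directly on a toy baseband signal. This is a hedged sketch with made-up parameters (64 QPSK-modulated subcarriers, not taken from the talk or from any particular WLAN standard): summing many independently modulated subcarriers in the IFFT occasionally produces time-domain peaks far above the average power, and the PA must stay linear up to those peaks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy OFDM transmitter: QPSK symbols on each subcarrier (hypothetical sizes).
n_subcarriers = 64
n_symbols = 1000

bits = rng.integers(0, 4, size=(n_symbols, n_subcarriers))
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))   # unit-magnitude QPSK

# Time-domain OFDM symbols via IFFT across the subcarrier axis.
t = np.fft.ifft(qpsk, axis=1)

# Peak-to-average ratio per symbol, in dB.
power = np.abs(t) ** 2
par_db = 10 * np.log10(power.max(axis=1) / power.mean(axis=1))
print(par_db.max())   # worst-case PAR over the block, in dB
```

Even this small simulation typically shows worst-case peaks several dB above the average (the theoretical maximum for 64 unit-power subcarriers is 10·log10(64) ≈ 18 dB), which is why a CMOS PA with limited supply and breakdown voltage struggles to stay linear.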
  • Machine Learning for Speech and Language Processing

    Fri, Dec 07, 2012 @ 03:30 PM - 05:00 PM

    Ming Hsieh Department of Electrical and Computer Engineering

    Conferences, Lectures, & Seminars


    Speaker: Professor Jen-Tzung Chien, National Chiao Tung University, Hsinchu

    Talk Title: Machine Learning for Speech and Language Processing

    Abstract: In this lecture, I will present a series of machine learning approaches to applications in speech and language processing, including acoustic modelling, language modelling, blind source separation, and document summarization. In general, speech and language processing involves extensive knowledge of statistical models which are learnt from observation data. However, in the real world, observation data are inevitably acquired from heterogeneous environments in the presence of mislabeled, misaligned, mismatched, and ill-posed conditions. The estimated models suffer from large complexity, ambiguity, and uncertainty. Model regularization becomes a crucial issue when constructing speech and text models for different information systems. In statistical machine learning, uncertainty modelling and sparse coding algorithms provide attractive and effective solutions to model regularization. This lecture will address recent work on Bayesian and sparse learning. In particular, I will present Bayesian sensing hidden Markov models and Dirichlet class language models for speech recognition, online Gaussian processes for blind source separation, and Bayesian nonparametrics for document summarization. In these works, robust models are established against improper model assumptions, over-determined model complexity, ambient noise interference, and nonstationary environment variations. Finally, I will point out some potential topics in machine learning for speech and language processing.

    Biography: Jen-Tzung Chien received his Ph.D. degree in electrical engineering from National Tsing Hua University, Hsinchu, Taiwan, in 1997. From 1997 to 2012, he was with National Cheng Kung University, Tainan, Taiwan. Since 2012, he has been with the Department of Electrical and Computer Engineering, National Chiao Tung University, Hsinchu, where he is currently a Distinguished Professor. He has held visiting researcher positions at Panasonic Technologies Inc., Santa Barbara, CA; the Tokyo Institute of Technology, Tokyo, Japan; the Georgia Institute of Technology, Atlanta, GA; Microsoft Research Asia, Beijing, China; and the IBM T. J. Watson Research Center, Yorktown Heights, NY. His research interests include machine learning, speech recognition, and blind source separation.
    Dr. Chien served as an associate editor of the IEEE Signal Processing Letters (2008-2011) and as a tutorial speaker at ICASSP 2012. He is an APSIPA Distinguished Lecturer for 2012-2013. He was a co-recipient of the Best Paper Award at the 2011 IEEE Automatic Speech Recognition and Understanding Workshop, and he received Distinguished Research Awards from the National Science Council, Taiwan, in 2006 and 2010.

    Host: Kartik Audhkhasi, Dr. Maarten Van Segbroeck, Prof. Shrikanth Narayanan

    Location: Ronald Tutor Hall of Engineering (RTH) - 320

    Audiences: Everyone Is Invited

    Contact: Mary Francis
