
Events Calendar



Events for May 28, 2024

  • PhD Dissertation Defense - Myrl Marmarelis

    Tue, May 28, 2024 @ 02:00 PM - 04:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    Title: Robust Causal Inference with Machine Learning on Observational Data
     
    Date and Time: Tuesday, May 28th, 2:00pm - 4:00pm
     
    Committee: Aram Galstyan (Chair), Greg Ver Steeg, Fred Morstatter, Shang-Hua Teng, and Roger Ghanem (external)
     
    Abstract: 
    The rise of artificial intelligence and deep learning has led to unprecedented capabilities in prediction. As these black-box algorithms are deployed in different parts of society, it is becoming increasingly clear that predictions alone do not always translate into effective decisions, policies, or reliable forecasts in a changing world. What is often needed is a stronger understanding of a system than a predictive model of observations can offer. This deficit arises when attempting to predict the system’s behavior in novel situations. Causal inference refers to a set of theoretical frameworks and practical methods for identifying cause-and-effect structures from data. Knowledge of this structure can help anticipate what would happen in a novel situation, such as subjecting the system to an intervention. Much work in causal inference is concerned with finding the minimal assumptions required to answer specific causal questions, like estimating the effect of a certain treatment. The more reasonable and relaxed the assumptions of a causal-inference method, the more applicable it is to diverse datasets and machine learning. There are many methodological aspects to performing causal inference on observational data, that is, without the ability to perform experiments. Of fundamental significance is having workable representations of the system that can be learned from data. Closely related to the quality of the representations is the ability to make downstream causal estimates robust to confounding. Confounders in a system are common structures that might confuse apparent relations between cause and effect, or treatment and outcome.
     
    In this dissertation, I propose methods for addressing these problems in challenging machine-learning contexts. I introduce an improved representation of single-cell RNA sequencing data for inference tasks in medicine and biology. Looking for high-dimensional interactions in biological processes leads to better resolution of phenotypes. More broadly, I make numerous contributions towards increased robustness of machine learning to hidden or observed confounding. I address the sensitivity of dose-response curves to hidden confounding; prediction of interventional outcomes under hidden confounding; robust effect estimation for continuous-valued and multivariate interventions; and estimation for interventions that might only encourage treatment as a function of susceptibility.
     

    Location: Information Sciences Institute (ISI) - 553

    Audiences: Everyone Is Invited

    Contact: Myrl Marmarelis

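    As a minimal sketch of the identification problem the abstract describes (a standard textbook adjustment identity, not one of the dissertation's own estimators): when all confounders X of a treatment T and outcome Y are observed, the average treatment effect can be written as
    \[
    \mathrm{ATE} \;=\; \mathbb{E}[Y(1) - Y(0)] \;=\; \mathbb{E}_{X}\bigl[\,\mathbb{E}[Y \mid T=1, X] - \mathbb{E}[Y \mid T=0, X]\,\bigr],
    \]
    which relies on unconfoundedness, (Y(0), Y(1)) \perp T \mid X, and overlap, 0 < P(T=1 \mid X) < 1. Hidden confounding breaks exactly this identification step, which is the setting the robustness contributions above address.
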
  • MHI Seminar - Karen Livescu - Tuesday, May 28th at 3pm in EEB 248 & Zoom

    Tue, May 28, 2024 @ 03:00 PM - 05:00 PM

    Ming Hsieh Department of Electrical and Computer Engineering

    Conferences, Lectures, & Seminars


    Speaker: Karen Livescu, Professor, TTI-Chicago

    Talk Title: What Do Pre-Trained Speech Representation Models Know?

    Abstract: Pre-trained speech representation models have become ubiquitous in speech processing over the past few years.  They have both improved the state of the art and made it feasible to learn task-specific models with very little labeled data.  However, it is not well understood what linguistic information is encoded in pre-trained models, where in the models it is encoded, and how best to apply this information to downstream tasks. In this talk I will describe recent work that begins to build an understanding of pre-trained speech models, through both layer-wise analysis and benchmarking on tasks.  We consider a number of popular pre-trained models and investigate the extent to which they encode spectral, phonetic, and word-level information.  The results of these analyses also suggest some ways to improve or simplify the application of pre-trained models for downstream tasks.  Finally, I will describe our efforts to benchmark model performance on a variety of spoken language understanding tasks, in order to broaden our understanding of the semantic capabilities of speech models.

    Biography: Karen Livescu is a Professor at TTI-Chicago.  This year she is on sabbatical, splitting her time between the Stanford NLP group and the CMU Language Technologies Institute.  She completed her PhD at MIT in 2005. She is an ISCA Fellow and a recent IEEE Distinguished Lecturer.  She has served as a program chair/co-chair for ICLR, Interspeech, and ASRU, and is an Associate Editor for TACL and IEEE T-PAMI.  Her group's work spans a variety of topics in spoken, written, and signed language processing, with a particular interest in representation learning, cross-modality learning, and low-resource settings.

    Host: Shrikanth Narayanan

    More Info: https://usc.zoom.us/j/98343896109?pwd=VWxRVTJVc3NLMjZGcEVVNGw1a1J0dz09

    More Information: 2024 Karen Livescu Seminar.pdf

    Location: Hughes Aircraft Electrical Engineering Center (EEB) - 248

    Audiences: Everyone Is Invited

    Contact: Marilyn Poplawski

    Event Link: https://usc.zoom.us/j/98343896109?pwd=VWxRVTJVc3NLMjZGcEVVNGw1a1J0dz09

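    As a minimal sketch of the layer-wise probing analysis described in the abstract (the checkpoint name, pooling choice, and probing labels below are illustrative assumptions, not the speaker's actual setup), one can extract per-layer representations from a pre-trained speech model and fit a simple linear probe on each layer:

        # Layer-wise probing sketch: how much task-relevant information does each
        # layer of a pre-trained speech model expose to a linear classifier?
        import numpy as np
        import torch
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

        MODEL_NAME = "facebook/wav2vec2-base"  # assumed checkpoint, for illustration only
        extractor = Wav2Vec2FeatureExtractor.from_pretrained(MODEL_NAME)
        model = Wav2Vec2Model.from_pretrained(MODEL_NAME).eval()

        def layer_features(waveform, sample_rate=16000):
            """Return one mean-pooled feature vector per layer of the model."""
            inputs = extractor(waveform, sampling_rate=sample_rate, return_tensors="pt")
            with torch.no_grad():
                out = model(**inputs, output_hidden_states=True)
            # out.hidden_states is a tuple of (1, time, dim) tensors, one per layer
            return [h.mean(dim=1).squeeze(0).numpy() for h in out.hidden_states]

        # waveforms, labels = ...  # hypothetical labeled probing set (e.g., phone classes)
        # per_layer = list(zip(*(layer_features(w) for w in waveforms)))
        # for i, feats in enumerate(per_layer):
        #     acc = cross_val_score(LogisticRegression(max_iter=1000),
        #                           np.stack(feats), labels, cv=5).mean()
        #     print(f"layer {i:2d}: linear-probe accuracy {acc:.3f}")

    A layer whose probe scores highest for, say, phone labels is a natural choice to feed a downstream task that needs phonetic information, which is one way such analyses can simplify applying pre-trained models.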