University of Southern California

Events Calendar




University Calendar
Events for February

  • PhD Thesis Proposal - Matthew Ferland

    Thu, Feb 01, 2024 @ 12:30 PM - 02:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    Event: PhD Thesis Proposal (Matthew Ferland)
     
    Committee: Shanghua Teng, David Kempe, Jiapeng Zhang, Shaddin Dughmi, and Larry Goldstein
     
    Date: February 1, 2024, 12:30pm – 2:00pm
     
    Title: Exploring the complexity landscape of combinatorial games
     
    Abstract: People have been playing games since before written history, and many of the earliest games were combinatorial games, that is, games of perfect information and no chance. This type of game is still widely played today, and many popular games of this type, such as Chess and Go, are among the most studied games of all time. The proposed work revolves around a game-independent, systematic study of these games, examining the computational properties of evaluating mathematical analysis tools, such as Sprague-Grundy values and switches, as well as identifying what can be determined about these games under simple oracle models.
     
     
     

    Location: Henry Salvatori Computer Science Center (SAL) - 213

    Audiences: Everyone Is Invited

    Contact: CS Events

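As a rough illustration of the Sprague-Grundy values mentioned in the abstract above: each position of an impartial game receives a nimber computed as the mex (minimum excludant) of its options' values, and independent components combine by XOR. The sketch below uses a simple subtraction game as a stand-in example; it is not a game or result from the proposal.

```python
from functools import lru_cache

def mex(values):
    """Minimum excludant: smallest non-negative integer not in `values`."""
    values = set(values)
    n = 0
    while n in values:
        n += 1
    return n

@lru_cache(maxsize=None)
def grundy_subtraction(heap, moves=(1, 2, 3)):
    """Sprague-Grundy value of one heap in a subtraction game where a move
    removes 1, 2, or 3 tokens (illustrative game, not from the proposal)."""
    reachable = [grundy_subtraction(heap - m, moves) for m in moves if m <= heap]
    return mex(reachable)

# A position made of independent heaps is a first-player win
# iff the XOR (nim-sum) of the heaps' Grundy values is nonzero.
heaps = [5, 7, 9]
nim_sum = 0
for h in heaps:
    nim_sum ^= grundy_subtraction(h)
print("First player wins" if nim_sum else "Second player wins")
```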
  • PhD Defense - KR Zentner

    Thu, Feb 01, 2024 @ 02:00 PM - 04:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    Title: Leveraging Cross-Task Transfer in Sequential Decision Problems: Scalable Reinforcement Learning for Robotics          
     
    Defense Committee:  Gaurav Sukhatme (chair), Heather Culbertson, Stefanos Nikolaidis, Laurent Itti, Bhaskar Krishnamachari           
     
    Date: Feb 1, 2024, 2 p.m. - 4 p.m.  - RTH 217       
     
    Abstract: The past few years have seen an explosion of interest in using machine learning to make robots capable of learning a diverse set of tasks. Potentially, these robots could operate in close proximity to humans, assisting humans with a wide variety of needs and being instructed to perform new tasks as needed. However, these robots generally use Reinforcement Learning to learn detailed sub-second interactions, but consequently require large amounts of data for each task. In this thesis we explore how Reinforcement Learning can be combined with Transfer Learning to re-use data across tasks. We begin by reviewing the state of Multi-Task and Meta RL and describe the motivations for using Transfer Learning. Then, we describe a basic framework for using Transfer Learning to efficiently learn multiple tasks, and show how it requires predicting how effectively transfer can be performed across tasks. Next, we present a simple rule, based in information theory, for predicting the effectiveness of Cross-Task Transfer, which we call the "Transfer Cost Rule." We discuss the theoretical implications of that rule, and show various quantitative evaluations of it. Then, we show two directions of work making use of our insights to perform efficient Transfer Reinforcement Learning. The first of these directions uses Cross-Task Co-Learning and Plan Conditioned Behavioral Cloning to share skill representations produced by a Large Language Model, and is able to learn many tasks from a single demonstration each in a simulated environment. The second of these directions uses Two-Phase KL Penalization to enforce a (potentially off-policy) trust region. These advances in Transfer RL may enable robots to be used in a wider range of applications, such as in the home or office. The insight provided by the Transfer Cost Rule may also be relevant to a wide audience of Reinforcement Learning practitioners, since it provides a practical and theoretically grounded explanation for the performance of Deep Reinforcement Learning algorithms.      
     
    Zoom link: https://usc.zoom.us/j/96965616504?pwd=QngwQTJsTXJkbXJJNU9hRVV2Mk1DQT09   

    Location: Ronald Tutor Hall of Engineering (RTH) - 217

    Audiences: Everyone Is Invited

    Contact: CS Events

    Event Link: https://usc.zoom.us/j/96965616504?pwd=QngwQTJsTXJkbXJJNU9hRVV2Mk1DQT09

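The trust-region idea behind the KL penalization mentioned in the abstract can be illustrated with a generic KL-penalized policy-gradient loss. This is a simplified PyTorch sketch, not the Two-Phase KL Penalization developed in the thesis; all names and shapes are assumptions.

```python
import torch
from torch.distributions import Categorical, kl_divergence

def kl_penalized_policy_loss(logits_new, logits_old, actions, advantages, beta=1.0):
    """Generic KL-penalized policy-gradient loss (illustrative only).

    logits_new: action logits from the current policy, shape (batch, n_actions)
    logits_old: action logits from the behavior policy, same shape
    actions:    sampled action indices, shape (batch,)
    advantages: advantage estimates, shape (batch,)
    beta:       weight of the KL penalty that keeps updates in a trust region
    """
    pi_new = Categorical(logits=logits_new)
    pi_old = Categorical(logits=logits_old.detach())

    # Standard policy-gradient term weighted by advantages.
    pg_loss = -(pi_new.log_prob(actions) * advantages).mean()

    # KL penalty discourages the new policy from drifting far from the old one.
    kl_penalty = kl_divergence(pi_old, pi_new).mean()

    return pg_loss + beta * kl_penalty
```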
  • PhD Thesis Proposal - Hsien-Te Kao

    Fri, Feb 02, 2024 @ 01:00 PM - 02:30 PM

    Thomas Lord Department of Computer Science

    University Calendar


    Committee: Emilio Ferrara (Chair), Kristina Lerman, Phebe Vayanos, Souti Chattopadhyay, Ruishan Liu  
     
    Date and Time: Friday, February 2, 2024, 1:00 PM - 2:30 PM PST - RTH 115
     
    Title: Cold Start Prediction in Personalized mHealth  
     
    Abstract: Mobile health has brought fundamental changes to the healthcare industry, offering new hope in addressing growing healthcare expenditures, opportunity costs, and labor shortages. Machine learning is driving mobile health towards decentralized healthcare by automating health monitoring, diagnosis, and treatment. Personalized mobile health systems are a key component in advancing patient-centric healthcare, but these systems remain unfeasible outside of hospital settings because personal health data is largely inaccessible, uncollectible, and regulated. In this proposal, we introduce a personalized mobile health system to predict individual health status without user context through a set of mobile, wearable, and ubiquitous technologies. The model leverages collaborative filtering to replace missing user context with learned similar group characteristics, where user similarity is captured through multiple dimensions of cognitive appraisal based on a combination of psychology theories. The system eliminates user dependence through passive feedback that satisfies real-world constraints. Our preliminary results demonstrate a proof-of-concept system.

    Location: Ronald Tutor Hall of Engineering (RTH) - 115

    Audiences: Everyone Is Invited

    Contact: CS Events

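The collaborative-filtering step described in the abstract, replacing a cold-start user's missing context with information from similar users, can be sketched generically as below. This is an illustrative user-based nearest-neighbor imputation, not the proposed mHealth system; the array layout and names are assumptions.

```python
import numpy as np

def impute_missing_context(ratings, user_idx, k=3):
    """Generic user-based collaborative filtering (illustrative only):
    estimate a user's missing entries from the k most similar users,
    with similarity measured on co-observed entries.

    ratings:  (n_users, n_features) array with np.nan marking missing values
    user_idx: row index of the cold-start user
    """
    target = ratings[user_idx]
    sims = []
    for i, other in enumerate(ratings):
        if i == user_idx:
            continue
        mask = ~np.isnan(target) & ~np.isnan(other)
        if mask.sum() == 0:
            continue
        a, b = target[mask], other[mask]
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        sims.append((a @ b / denom if denom else 0.0, i))
    top = sorted(sims, reverse=True)[:k]

    imputed = target.copy()
    for j in np.where(np.isnan(target))[0]:
        vals = [(s, ratings[i, j]) for s, i in top if not np.isnan(ratings[i, j])]
        if vals:
            weights = np.abs(np.array([s for s, _ in vals])) + 1e-8
            estimates = np.array([v for _, v in vals])
            imputed[j] = np.average(estimates, weights=weights)
    return imputed
```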
  • PhD Thesis Proposal - Ayush Jain

    Mon, Feb 05, 2024 @ 04:00 PM - 06:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    Thesis Proposal: Ayush Jain
    Date: February 5, 2024 (Monday), 4 pm - 6 pm
    Location: TBD
     
    Committee: Erdem Biyik, Joseph J Lim, Gaurav Sukhatme, Stefanos Nikolaidis, Feifei Qian
     
    Title: Enabling Robust Reinforcement Learning in Challenging Action Spaces
     
    Abstract: The action space of an agent defines its interface to interact with the world. It can take two forms: discrete, as in recommender systems making decisions from millions of choices, or continuous, as in robots actuating control movements. While humans excel at a vast range of action spaces, from deciding between potentially unseen choices to making precise dexterous control like in surgery, conventional reinforcement learning (RL) is limited to simple action spaces beyond which agents fail entirely. Concretely, discrete RL typically assumes a "static" action space that never changes, while continuous RL assumes a "smooth" action space such that nearby actions have similar consequences. My goal is to alleviate these assumptions to broaden the applicability of RL agents to tasks with challenging action spaces. Thus, I build discrete RL algorithms that can adapt to any available action set and even choose from actions never seen before, such as recommending new items and choosing from unseen toolsets. In continuous action space tasks like robotics, I show how conventional agents get stuck on suboptimal actions due to a challenging action space. To address this, I propose a novel actor-critic algorithm enabling actors to search for more optimal actions.
     

    Location: TBD

    Audiences: Everyone Is Invited

    Contact: CS Events

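One generic way to picture "actors searching for more optimal actions" with the help of a critic is local gradient ascent on a learned Q-function, starting from the actor's proposed action. The sketch below is illustrative only and is not the actor-critic algorithm proposed in the thesis; the critic interface is an assumption.

```python
import torch

def refine_action_with_critic(critic, state, action_init, steps=10, lr=0.05):
    """Illustrative only: locally search a continuous action space by gradient
    ascent on a learned critic Q(s, a), starting from the actor's proposal.

    critic:      callable mapping (state, action) tensors to a scalar Q-value
    state:       tensor of shape (state_dim,)
    action_init: actor's proposed action, tensor of shape (action_dim,)
    """
    action = action_init.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([action], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Ascend Q by descending its negative.
        loss = -critic(state, action)
        loss.backward()
        optimizer.step()
    return action.detach()
```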
  • PhD Thesis Defense - Sepanta Zeighami

    Wed, Feb 07, 2024 @ 01:00 PM - 03:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    Committee members: Cyrus Shahabi (chair), Keith Chugg, Vatsal Sharan, Haipeng Luo
     
    Title: A Function Approximation View of Database Operations for Efficient, Accurate, Privacy-Preserving & Robust Query Answering with Theoretical Guarantees
     
    Abstract: Machine learning models have been recently used to replace various database components (e.g., index, cardinality estimator) and provide substantial performance enhancements over their non-learned alternatives. Such approaches take a function approximation view of the database operations. They consider the database operation as a function that can be approximated (e.g., an index is a function that maps items to their location in a sorted array) and learn a model to approximate the operation's output. In this thesis, we first develop the Neural Database (NeuroDB) framework which extends this function approximation view by considering the entire database system as a function that can be approximated. We show, utilizing this framework, that training neural networks that take queries as input and are trained to output query answer estimates provide substantial performance benefits in various important database problems including approximate query processing, privacy-preserving query answering, and query answering on incomplete datasets. Moreover, we present the first theoretical study of this function approximation view of database operations, providing the first-ever theoretical analysis of various learned database operations. Our analysis provides theoretical guarantees on the performance of the learned models, showing why and when they perform well. Furthermore, we theoretically study the model size requirements, showing how model size needs to change as the dataset changes to ensure a desired accuracy level. Our results enhance our understanding of learned database operations and provide the much-needed theoretical guarantees on their performance for robust practical deployment.
     
    Zoom Link: https://usc.zoom.us/j/91683810479?pwd=VXBmblhDdzZCZU1Oc05jRFV2dzI2dz09
    Meeting ID: 916 8381 0479
    Passcode: 250069

    Location: Charles Lee Powell Hall (PHE) - 106

    Audiences: Everyone Is Invited

    Contact: CS Events

    Event Link: https://usc.zoom.us/j/91683810479?pwd=VXBmblhDdzZCZU1Oc05jRFV2dzI2dz09

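The "function approximation view" in the abstract is easiest to see for an index: a model approximates the map from a key to its position in a sorted array, and a small search window covers the model's error. The sketch below is a minimal linear-model illustration, not NeuroDB or any system from the thesis.

```python
import bisect
import numpy as np

def build_learned_index(keys):
    """Minimal sketch of the index-as-a-function view: approximate the map
    key -> position in a sorted array with a linear model, and keep the
    model's maximum error as a search bound (real learned indexes use
    richer models and hierarchies)."""
    keys = np.sort(np.asarray(keys, dtype=float))
    positions = np.arange(len(keys))
    slope, intercept = np.polyfit(keys, positions, deg=1)
    preds = np.clip(slope * keys + intercept, 0, len(keys) - 1)
    max_err = int(np.ceil(np.max(np.abs(preds - positions)))) + 1

    def lookup(key):
        guess = int(np.clip(slope * key + intercept, 0, len(keys) - 1))
        lo, hi = max(0, guess - max_err), min(len(keys), guess + max_err + 1)
        # Binary search only inside the model's error window.
        i = lo + bisect.bisect_left(keys[lo:hi], key)
        return i if i < len(keys) and keys[i] == key else None

    return keys, lookup

keys, lookup = build_learned_index(np.random.default_rng(0).uniform(0, 1e6, 10_000))
print(lookup(keys[1234]))  # -> 1234
```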
  • PhD Thesis Proposal - Saghar Talebipour

    Tue, Feb 20, 2024 @ 01:30 PM - 03:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    PhD Thesis Proposal - Saghar Talebipour  
     
    Committee Members: Nenad Medvidovic (Chair), William G.J. Halfond, Chao Wang, Mukund Raghothaman, Sandeep Gupta  
     
    Date: Tuesday, February 20, 2024, 1:30 p.m. - 3:00 p.m. Location: EEB 349 
     
    Title: Automated Usage-based Mobile Application Testing via Artifact Reuse  
     
    Abstract: Writing and maintaining UI tests for mobile applications is both time-consuming and tedious. While decades of research have led to automated methods for UI test generation, these methods have largely focused on identifying crashes or maximizing code coverage. However, recent studies have emphasized the significance of usage-based tests targeting specific app functionalities and use cases. My research introduces novel automated testing techniques that make use of existing artifacts, such as tests from similar applications or video recordings of app operations. These approaches help us move closer to achieving the goal of automated usage-based testing of mobile applications.

    Location: Hughes Aircraft Electrical Engineering Center (EEB) - 349

    Audiences: Everyone Is Invited

    Contact: CS Events

  • PhD Thesis Proposal - Qinyi Luo

    Wed, Feb 21, 2024 @ 11:00 AM - 12:30 PM

    Thomas Lord Department of Computer Science

    University Calendar


    PhD Thesis Proposal - Qinyi Luo
    Title: High-Performance Heterogeneity-Aware Distributed Machine Learning Model Training
     
    Committee members: Xuehai Qian (co-chair), Viktor Prasanna (co-chair), Ramesh Govindan, Chao Wang, Salman Avestimehr    
     
    Abstract: The increasing size of machine learning models and the ever-growing amount of data mean that training a machine learning model can take days or even weeks. To accelerate training, distributed training with parallel stochastic gradient descent is widely adopted as the go-to training method. This thesis proposal targets four challenges in distributed training: (1) performance degradation caused by the large amount of data transferred among parallel workers, (2) heterogeneous computation and communication capacities across training devices, i.e., the straggler problem, (3) huge memory consumption during training caused by huge model sizes, and (4) automatic selection of parallelization strategies. The proposal first introduces our work on decentralized training, including system support and algorithmic innovations that strengthen tolerance of stragglers in data-parallel training. Then, an adaptive during-training model compression technique is proposed to reduce the memory consumption of training huge recommender models. Finally, for automatic parallelization of training workloads, a novel unified representation of parallelization strategies is proposed, along with a search algorithm that selects superior parallel settings from the vast search space, and preliminary findings are discussed.
     
    Date and time: Feb 21 11am-12:30pm
    Location: EEB 110  
     
    Zoom link: https://usc.zoom.us/j/97299158202?pwd=bVlnRVFhTjJlZjVCY1hVNy9yWWE1UT09          

    Location: Hughes Aircraft Electrical Engineering Center (EEB) - 110

    Audiences: Everyone Is Invited

    Contact: CS Events

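For context on the data-parallel SGD setting the abstract builds on: each worker computes gradients on its own data shard, and gradients are averaged across workers before every update, which is where the communication cost and straggler sensitivity come from. The sketch below is a generic PyTorch illustration (assuming an already initialized process group), not any of the proposed systems.

```python
import torch
import torch.distributed as dist

def data_parallel_sgd_step(model, loss_fn, batch, lr=0.01):
    """Illustrative data-parallel SGD step: each worker computes gradients on
    its own shard, gradients are averaged with an all-reduce, then each worker
    applies the same local update. Assumes torch.distributed has already been
    initialized (e.g., via dist.init_process_group)."""
    inputs, targets = batch
    loss = loss_fn(model(inputs), targets)
    model.zero_grad()
    loss.backward()

    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            # Sum gradients from all workers, then average.
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size

    with torch.no_grad():
        for param in model.parameters():
            if param.grad is not None:
                param -= lr * param.grad
```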