Events Calendar






Events for February 01, 2024

  • PhD Thesis Proposal - Matthew Ferland

    Thu, Feb 01, 2024 @ 12:30 PM - 02:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    Event: PhD Thesis Proposal (Matthew Ferland)
     
    Committee: Shanghua Teng, David Kempe, Jiapeng Zhang, Shaddin Dughmi, and Larry Goldstein
     
    Date: February 1, 2024, 12:30pm – 2:00pm
     
    Title: Exploring the complexity landscape of combinatorial games
     
    Abstract: People have been playing games since before written history, and many of the earliest games were combinatorial games, that is to say, games of perfect information and no chance. This type of game is still widely played today, and many popular games of this type, such as Chess and Go, are among the most studied games of all time. This proposed work revolves around a game-independent, systematic study of these games, evaluating the computational properties of mathematical analysis tools such as Sprague-Grundy values and switches, as well as identifying what can be determined about these games under simple oracle models.
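For readers unfamiliar with the Sprague-Grundy values mentioned in the abstract, here is a minimal illustrative sketch (not part of the talk itself): the Grundy value of a position is the minimum excludant (mex) of the Grundy values of the positions reachable from it, shown below for a single Nim heap.

```python
from functools import lru_cache

def mex(values):
    """Minimum excludant: smallest non-negative integer not in `values`."""
    g = 0
    while g in values:
        g += 1
    return g

@lru_cache(maxsize=None)
def grundy(n):
    """Grundy value of a single Nim heap of n stones, where a move
    removes any positive number of stones (reachable heaps: 0..n-1)."""
    return mex({grundy(k) for k in range(n)})
```

For plain Nim this recursion recovers the classical result that a heap's Grundy value equals its size; a position in a sum of games is a loss for the player to move exactly when the XOR of the components' Grundy values is zero.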
     
     
     

    Location: Henry Salvatori Computer Science Center (SAL) - 213

    Audiences: Everyone Is Invited

    Contact: CS Events

  • NL Seminar - Harnessing Black-Box Control to Boost Commonsense in LM's Generation

    Thu, Feb 01, 2024 @ 11:00 AM - 12:00 PM

    Information Sciences Institute

    Conferences, Lectures, & Seminars


    Speaker: Yufei Tian, UCLA

    Talk Title: Harnessing Black-Box Control to Boost Commonsense in LM's Generation

    Series: NL Seminar

    Abstract: REMINDER: This talk will be a live presentation only; it will not be recorded. Meeting hosts only admit guests that they know to the Zoom meeting, so you're highly encouraged to use your USC account to sign into Zoom. If you're an outside visitor, please provide your full name, title, and name of workplace to (nlg-seminar-host(at)isi.edu) beforehand so we'll be aware of your attendance. Also, let us know if you plan to attend in person or virtually. More info for NL Seminars can be found at: https://nlg.isi.edu/nl-seminar/

    Large language models like Alpaca and GPT-3 generate coherent texts but sometimes lack commonsense, yet improving their commonsense via fine-tuning is resource-expensive in terms of both data and computation. In this talk, I'll present BOOST, a resource-efficient framework that steers a frozen Pre-Trained Language Model (PTLM) towards more reasonable outputs. This involves creating an interpretable and reference-free evaluator that assigns a sentence a commonsensical score by grounding the sentence to a dynamic commonsense knowledge base. Using this evaluator as a guide, we extend the NADO controllable generation method to train an auxiliary head that improves the PTLM's output. Our framework was tested on various language models, including GPT-2, Flan-T5, and Alpaca-based models. On two constrained concept-to-sentence benchmarks, human evaluation results show that BOOST consistently generates the most commonsensical content. Finally, I will demonstrate how ChatGPT outputs differ from, and are sometimes less favored than, our outputs.

    Biography: Yufei Tian is a CS PhD student at UCLA advised by Prof. Nanyun (Violet) Peng. Her research is centered around creative and controllable text generation, machine reasoning and its interaction with cognitive science, as well as designing evaluation metrics for open-ended NLG tasks. She is supported by the UCLA-Amazon fellowship program.

    Host: Jon May and Justin Cho

    More Info: https://nlg.isi.edu/nl-seminar/

    Webcast: https://www.youtube.com/watch?v=WTIKszPDzDk

    Location: Information Science Institute (ISI) - Virtual and ISI-Conf Rm#689


    Audiences: Everyone Is Invited

    Contact: Pete Zamar

    Event Link: https://nlg.isi.edu/nl-seminar/

  • PhD Defense - KR Zentner

    Thu, Feb 01, 2024 @ 02:00 PM - 04:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    Title: Leveraging Cross-Task Transfer in Sequential Decision Problems: Scalable Reinforcement Learning for Robotics          
     
    Defense Committee: Gaurav Sukhatme (chair), Heather Culbertson, Stefanos Nikolaidis, Laurent Itti, Bhaskar Krishnamachari
     
    Date: Feb 1, 2024, 2 p.m. - 4 p.m.  - RTH 217       
     
    Abstract: The past few years have seen an explosion of interest in using machine learning to make robots capable of learning a diverse set of tasks. Potentially, these robots could operate in close proximity to humans, assisting them with a wide variety of needs and being instructed to perform new tasks as needed. However, these robots generally use Reinforcement Learning to learn detailed sub-second interactions, and consequently require large amounts of data for each task. In this thesis, we explore how Reinforcement Learning can be combined with Transfer Learning to re-use data across tasks. We begin by reviewing the state of Multi-Task and Meta RL and describe the motivations for using Transfer Learning. Then, we describe a basic framework for using Transfer Learning to efficiently learn multiple tasks, and show how it requires predicting how effectively transfer can be performed across tasks. Next, we present a simple rule, based in information theory, for predicting the effectiveness of Cross-Task Transfer, which we call the "Transfer Cost Rule." We discuss the theoretical implications of that rule, and show various quantitative evaluations of it. Then, we show two directions of work making use of our insights to perform efficient Transfer Reinforcement Learning. The first of these directions uses Cross-Task Co-Learning and Plan Conditioned Behavioral Cloning to share skill representations produced by a Large Language Model, and is able to learn many tasks from a single demonstration each in a simulated environment. The second of these directions uses Two-Phase KL Penalization to enforce a (potentially off-policy) trust region. These advances in Transfer RL may enable robots to be used in a wider range of applications, such as in the home or office. The insight provided by the Transfer Cost Rule may also be relevant to a wide audience of Reinforcement Learning practitioners, since it provides a practical and theoretically grounded explanation for the performance of Deep Reinforcement Learning algorithms.
     
    Zoom link: https://usc.zoom.us/j/96965616504?pwd=QngwQTJsTXJkbXJJNU9hRVV2Mk1DQT09   

    Location: Ronald Tutor Hall of Engineering (RTH) - 217

    Audiences: Everyone Is Invited

    Contact: CS Events

    Event Link: https://usc.zoom.us/j/96965616504?pwd=QngwQTJsTXJkbXJJNU9hRVV2Mk1DQT09

  • Preparing for the Career & Internship Expo

    Thu, Feb 01, 2024 @ 03:00 PM - 04:00 PM

    Viterbi School of Engineering Career Connections

    Workshops & Infosessions


    THIS EVENT WILL BE HOSTED HYBRID: IN-PERSON & ONLINE SIMULTANEOUSLY
    Increase your preparedness for the Career & Internship Expo by attending this professional development Q&A moderated by Viterbi Career Connections staff or Viterbi employer partners.
    Zoom link:  https://usc.zoom.us/meeting/register/tJElc-CrpzsuEtFJDe8q8IP-qlj4DLN5xdkp
    For more information about all workshops, please visit viterbicareers.usc.edu/workshops.

    Location: Ronald Tutor Hall of Engineering (RTH) - 211

    Audiences: All Viterbi

    Contact: RTH 218 Viterbi Career Connections

    Event Link: https://usc.zoom.us/meeting/register/tJElc-CrpzsuEtFJDe8q8IP-qlj4DLN5xdkp
