University of Southern California

Events Calendar




Events for June 13, 2022

  • NL Seminar: Sources of Variance in Pretraining and Finetuning LLMs

    Mon, Jun 13, 2022 @ 02:00 PM - 03:00 PM

    Information Sciences Institute

    Conferences, Lectures, & Seminars


    Speaker: Naomi Saphra, NYU

    Talk Title: Sources of Variance in Pretraining and Finetuning LLMs

    Series: NL Seminar

    Abstract: REMINDER
    Meeting hosts will only admit guests they know into the Zoom meeting. Hence, you are highly encouraged to use your USC account to sign in to Zoom.

    If you are an outside visitor, please inform us beforehand at nlg DASH seminar DASH host AT isi DOT edu so we will be aware of your attendance and can let you in.

    You have engaged in the very modern practice of transfer learning. You pretrained a model on a self-supervised objective, then you finetuned it on a downstream task, and you found excellent performance on the test set. "Aha," you say. "I found a good pretraining procedure." Did you? You try finetuning again. The results are terrible! "Aha," you say. "I found a bad finetuning procedure." Did you?

    The random seeds for both the pretraining and finetuning stages have a substantial influence on the outcome. However, pretraining new models is computationally expensive, so measuring the robustness of a procedure across different seeds can be prohibitive. This talk will address, first, the influence that a pretraining seed has on both in-domain and out-of-domain (OOD) performance. Then we will address the role of the finetuning seed. Much of the variation in OOD generalization can be ascribed to where the finetuning seeds direct SGD trajectories. In particular, we discuss how to predict generalization behavior in a finetuned model based on topographic properties of its region of the loss surface. By understanding the degree of influence that random seeds have on performance, we can fairly evaluate a robust training procedure, rather than a single set of parameters. By understanding the mechanism of that influence, we can go further by developing improved training methods.
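
    To make that evaluation protocol concrete, here is a minimal Python sketch (an illustration, not the speaker's method) of measuring finetuning-seed variance: finetune the same pretrained checkpoint under several seeds and compare the spread of in-domain versus OOD scores. The function finetune_and_evaluate and the checkpoint name are hypothetical stand-ins for a real training and evaluation pipeline; the simulated numbers exist only to keep the sketch self-contained and runnable.

        import random
        import statistics

        def finetune_and_evaluate(pretrained_ckpt: str, seed: int) -> dict:
            # Hypothetical stand-in: in a real pipeline, `seed` would control
            # data ordering and classifier-head initialization when finetuning
            # `pretrained_ckpt`. Here we simulate seed-dependent scores so the
            # sketch runs on its own.
            rng = random.Random(seed)
            return {
                "in_domain_acc": 0.90 + rng.gauss(0, 0.005),  # small in-domain spread
                "ood_acc": 0.70 + rng.gauss(0, 0.050),        # larger OOD spread
            }

        seeds = range(5)
        runs = [finetune_and_evaluate("my-pretrained-ckpt", s) for s in seeds]
        for metric in ("in_domain_acc", "ood_acc"):
            scores = [run[metric] for run in runs]
            print(f"{metric}: mean={statistics.mean(scores):.3f}, "
                  f"std={statistics.stdev(scores):.3f} over {len(scores)} seeds")

    Reporting the mean and standard deviation over seeds, rather than a single run, is what lets you attribute a good or bad result to the procedure instead of to one lucky or unlucky seed.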


    Biography: Naomi's research interests relate to NLP learning dynamics: how models learn to encode linguistic structure, and how we can encode useful inductive biases into the training process. Having earned a PhD from the University of Edinburgh, they are now a postdoc at NYU. In their spare time, they play roller derby under the name Gaussian Retribution, do standup comedy, and shepherd programmers who cannot type into the world of code dictation.

    Hosts: Jon May and Thamme Gowda

    More Info: https://nlg.isi.edu/nl-seminar/

    Webcast: https://www.youtube.com/watch?v=Lni4PIlbJjI

    Location: Information Sciences Institute (ISI) - Virtual

    Audiences: Everyone Is Invited

    Contact: Pete Zamar

