Thu, Jun 17, 2021 @ 11:00 AM - 12:00 PM
Information Sciences Institute
Conferences, Lectures, & Seminars
Speaker: Xiang Lisa Li, Stanford University
Talk Title: PREFIX-TUNING: OPTIMIZING CONTINUOUS PROMPTS FOR GENERATION
Series: NL Seminar
Abstract: Fine-tuning is the de facto way of leveraging large pre-trained language models for downstream tasks. However, fine-tuning modifies all the language model parameters and therefore necessitates storing a full copy for each task. In this paper, we propose prefix-tuning, a lightweight alternative to fine-tuning for natural language generation tasks, which keeps language model parameters frozen and instead optimizes a sequence of continuous task-specific vectors, which we call the prefix. Prefix-tuning draws inspiration from prompting for language models, allowing subsequent tokens to attend to this prefix as if it were "virtual tokens". We apply prefix-tuning to GPT-2 for table-to-text generation and to BART for summarization. We show that by modifying only 0.1% of the parameters, prefix-tuning obtains comparable performance in the full-data setting, outperforms fine-tuning in low-data settings, and extrapolates better to examples with topics unseen during training.
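To make the mechanism concrete, below is a minimal sketch of the core idea in PyTorch with the Hugging Face transformers library: all GPT-2 weights stay frozen, and only a small tensor of per-layer prefix key/value vectors is trained, supplied to the model through past_key_values so that every token attends to the prefix as if it were virtual tokens. The prefix length, initialization, and single training step shown here are illustrative assumptions, not the paper's exact configuration (the paper additionally reparameterizes the prefix through an MLP for training stability).

    # Hedged sketch of prefix-tuning: frozen LM, trainable prefix key/value
    # vectors passed via past_key_values. Shapes and hyperparameters are
    # illustrative assumptions.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    model = GPT2LMHeadModel.from_pretrained("gpt2")
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    for p in model.parameters():
        p.requires_grad = False          # keep all LM parameters frozen

    cfg = model.config
    prefix_len = 10                      # number of "virtual tokens" (assumed)
    head_dim = cfg.n_embd // cfg.n_head

    # One trainable key and value vector per layer, head, and prefix position.
    prefix = torch.nn.Parameter(
        torch.randn(cfg.n_layer, 2, cfg.n_head, prefix_len, head_dim) * 0.02
    )

    def forward_with_prefix(input_ids):
        batch = input_ids.size(0)
        # Expand the shared prefix across the batch and split it into the
        # per-layer (key, value) pairs that past_key_values expects.
        past = [
            (layer[0].unsqueeze(0).expand(batch, -1, -1, -1),
             layer[1].unsqueeze(0).expand(batch, -1, -1, -1))
            for layer in prefix
        ]
        # The attention mask must cover both the prefix and the real tokens.
        attn = torch.ones(batch, prefix_len + input_ids.size(1))
        return model(input_ids, past_key_values=past,
                     attention_mask=attn, labels=input_ids)

    # Only the prefix is optimized; the LM copy is shared across tasks.
    optimizer = torch.optim.AdamW([prefix], lr=5e-5)
    ids = tokenizer("name: Starbucks | type: coffee shop",
                    return_tensors="pt").input_ids
    loss = forward_with_prefix(ids).loss
    loss.backward()
    optimizer.step()

Because only the prefix tensor is stored per task, the per-task footprint is a tiny fraction of the full model, which is the source of the 0.1% figure in the abstract.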
Biography: Xiang Lisa Li is a first-year PhD student in computer science at Stanford University, advised by Percy Liang and Tatsunori Hashimoto. She works on controllable text generation/decoding and efficient adaptation of pre-trained language models. Lisa is supported by a Stanford Graduate Fellowship and is the recipient of an EMNLP Best Paper award.
Hosts: Jon May and Mozhdeh Gheini
More Info: https://nlg.isi.edu/nl-seminar/
WebCast Link: https://youtu.be/TwE2m6Z991s
Audiences: Everyone Is Invited
Contact: Pete Zamar