University of Southern California

Events Calendar




Conferences, Lectures, & Seminars
Events for November

  • NL Seminar - Modular and Composable Transfer Learning

    Thu, Nov 03, 2022 @ 11:00 AM - 12:00 PM

    Information Sciences Institute

    Conferences, Lectures, & Seminars


    Speaker: Jonas Pfeiffer, Google

    Talk Title: Modular and Composable Transfer Learning

    Series: NL Seminar

    Abstract: REMINDER
    Meeting hosts only admit guests that they know to the Zoom meeting. Hence, you are highly encouraged to use your USC account to sign into Zoom.

    If you are an outside visitor, please inform us at nlg DASH seminar DASH host AT isi DOT edu beforehand so we will be aware of your attendance and let you in.

    In-person attendance will be permitted for USC ISI faculty, staff, and students only. The event is open to the public virtually via the Zoom link.

    With pre-trained transformer-based models continuously increasing in size, there is a dire need for parameter-efficient and modular transfer-learning strategies. In this talk, we will touch on adapter-based fine-tuning, where instead of fine-tuning all weights of a model, small neural network components are introduced at every layer. The pre-trained parameters are frozen, and only the newly introduced adapter weights are fine-tuned, encapsulating the downstream-task information in designated parts of the model. We will demonstrate that adapters are modular components that can be composed for improvements on a target task, and show how they can be used for out-of-distribution generalization, using zero-shot cross-lingual transfer as an example. Finally, we will discuss how adding modularity during pre-training can mitigate catastrophic interference and consequently lift the curse of multilinguality.
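    The recipe the abstract describes (freeze the pre-trained weights, train only a small bottleneck module added at each layer) can be sketched in plain Python. The hidden size of 768 and bottleneck of 32 below are illustrative assumptions, not figures from the talk; the point is the parameter count, not the model.

```python
# Sketch of a bottleneck adapter's parameter budget (illustrative sizes):
# the pre-trained layer's weights stay frozen, and only a small
# down-project / up-project pair (the adapter) would be trained.
import random

random.seed(0)

HIDDEN, BOTTLENECK = 768, 32  # assumed sizes, for illustration only

def make_matrix(rows, cols):
    """Random weight matrix as a list of rows."""
    return [[random.gauss(0, 0.02) for _ in range(cols)] for _ in range(rows)]

# Frozen pre-trained projection (stands in for one transformer layer).
frozen_weight = make_matrix(HIDDEN, HIDDEN)

# Trainable adapter: down-projection into the bottleneck, then back up.
adapter_down = make_matrix(HIDDEN, BOTTLENECK)
adapter_up = make_matrix(BOTTLENECK, HIDDEN)

def count_params(*mats):
    return sum(len(m) * len(m[0]) for m in mats)

frozen_params = count_params(frozen_weight)          # 768 * 768 = 589,824
adapter_params = count_params(adapter_down, adapter_up)  # 2 * 768 * 32 = 49,152

# Only the adapter's parameters would receive gradient updates.
print(f"frozen: {frozen_params}, trainable adapter: {adapter_params}")
print(f"trainable fraction: {adapter_params / frozen_params:.1%}")
```

    With these assumed sizes, the adapter trains well under a tenth of the layer's parameters, which is the parameter-efficiency argument the abstract makes.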


    Biography: Jonas Pfeiffer is a Research Scientist at Google Research. He is interested in modular representation learning in multi-task, multilingual, and multi-modal contexts, and in low-resource scenarios. He completed his PhD at the Technical University of Darmstadt, was a visiting researcher at New York University, and was a Research Scientist Intern at Meta Research. Jonas received the IBM PhD Research Fellowship award for 2021/2022. He has given numerous invited talks in academia, industry, and at ML summer schools, and has co-organized multiple workshops on multilinguality and multimodality.

    Host: Jon May and Meryem M'hamdi

    More Info: https://nlg.isi.edu/nl-seminar/

    Webcast: https://www.youtube.com/watch?v=hrGOb4okvI0

    Location: Information Sciences Institute (ISI) - Virtual and ISI-Conf Rm#689


    Audiences: Everyone Is Invited

    Contact: Pete Zamar

    Event Link: https://nlg.isi.edu/nl-seminar/


    This event is open to all eligible individuals. USC Viterbi operates all of its activities consistent with the University's Notice of Non-Discrimination. Eligibility is not determined based on race, sex, ethnicity, sexual orientation, or any other prohibited factor.

  • NL Seminar - Effective, Explainable, and Equitable NLP with World Knowledge and Interactions

    Thu, Nov 10, 2022 @ 11:00 AM - 12:00 PM

    Information Sciences Institute

    Conferences, Lectures, & Seminars


    Speaker: Bodhi Prasad Majumder, UCSD

    Talk Title: Effective, Explainable, and Equitable NLP with World Knowledge and Interactions

    Series: NL Seminar

    Abstract: Artificial intelligence (AI) has shown remarkable effectiveness in knowledge-seeking applications, e.g., recommendations and explanations. However, the increasing expectation of trust, accessibility, and anthropomorphism in these AI systems requires the underlying components (dialog models, LLMs, classifiers) to be adaptive and adequately knowledge-grounded. In reality, the outputs of the constituent models often lack commonsense, explanations, and subjectivity, a long-standing goal of artificial general intelligence.

    In this talk, I aim to address this gap through the concept of interactive explainability, realized via three pillars: knowledge, explanations, and interactions. First, I will explore post-hoc methods to effectively inject relevant and diverse knowledge into an existing dialog model without additional training. Second, I will investigate the role of background knowledge in model reasoning, prediction, and faithfully constructing natural-language explanations. Third, I will propose an interactive approach to address fairness and subjectivity in bias mitigation via feature-level user interventions. Finally, I will hint at future possibilities and societal impacts of next-generation explainable interactive systems.

    Biography: Bodhi Prasad Majumder is a final-year PhD student at CSE, UC San Diego, advised by Prof. Julian McAuley. His research goal is to build interactive machines capable of producing knowledge-grounded explanations. He has previously spent time at the Allen Institute for AI, Google AI, Microsoft Research, and FAIR (Meta AI), and has collaborated with the University of Oxford, the University of British Columbia, and the Alan Turing Institute.

    His work has been recognized by the UCSD CSE Doctoral Award for Research, the Adobe Research Fellowship, the Qualcomm Innovation Fellowship, and a Highlights of ACM RecSys award, among other honors, and has received media coverage. In 2019, Bodhi led UCSD in the finals of the Amazon Alexa Prize. He also co-authored a best-selling NLP book with O'Reilly Media that has been adopted by universities internationally.

    Host: Jon May and Meryem M'hamdi

    More Info: https://nlg.isi.edu/nl-seminar/

    Webcast: https://www.youtube.com/watch?v=5Mva6sQgjuw

    Location: Information Sciences Institute (ISI) - Virtual and ISI-Conf Rm#689


    Audiences: Everyone Is Invited

    Contact: Pete Zamar

    Event Link: https://nlg.isi.edu/nl-seminar/



  • NL Seminar - Pragmatic Interpretability

    Thu, Nov 17, 2022 @ 11:00 AM - 12:00 PM

    Information Sciences Institute

    Conferences, Lectures, & Seminars


    Speaker: Shi Feng, University of Chicago

    Talk Title: Pragmatic Interpretability

    Series: NL Seminar

    Abstract: Machine learning models have been quite successful at emulating human intelligence, but their potential for intelligence augmentation is less explored. Part of the challenge is our lack of understanding of how these models work, and this is the problem interpretability is trying to tackle. But most existing interpretability work takes models trained under the emulation paradigm and adds humans into the mix post hoc; the human's role is largely an afterthought. In this talk, I advocate for a more pragmatic approach to interpretability and emphasize modeling the human's needs in their cooperation with AIs. In the first part, I discuss how the human-AI team can be evaluated and optimized as a unified decision-maker, and how the model can learn to explain selectively. In the second part, I discuss how human intuition, measured outside the context of working with an AI, can be incorporated into models and explanations. I'll conclude with a brief discussion of formulating the model's pragmatic inference about its human teammate.

    Biography: Shi Feng is a postdoc at the University of Chicago working with Chenhao Tan. He received his PhD from the University of Maryland under Jordan Boyd-Graber. He is interested in human-AI cooperation: how machine learning can help humans make better decisions, and how humans can provide supervision more effectively. His past work focuses on natural language processing and covers topics including interpretability, adversarial attacks, robustness, and human-in-the-loop evaluations.

    Host: Jon May and Meryem M'hamdi

    More Info: https://nlg.isi.edu/nl-seminar/

    Webcast: https://www.youtube.com/watch?v=C8jUO4w5xwU

    Location: Information Sciences Institute (ISI) - Virtual and ISI-Conf Rm#689


    Audiences: Everyone Is Invited

    Contact: Pete Zamar

    Event Link: https://nlg.isi.edu/nl-seminar/

