Conferences, Lectures, & Seminars
Events for May
AI Seminar - Understanding LLMs through their Generative Behavior, Successes and Shortcomings
Fri, May 03, 2024 @ 11:00 AM - 12:00 PM
Information Sciences Institute
Conferences, Lectures, & Seminars
Speaker: Swabha Swayamdipta, USC
Talk Title: Understanding LLMs through their Generative Behavior, Successes and Shortcomings
Series: AI Seminar
Abstract: Generative capabilities of large language models have grown beyond the wildest imagination of the broader AI research community, leading many to speculate whether these successes may be attributed to the training data or to model design. I will present work from my group that sheds light on LLMs by studying their generative behavior, successes, and shortcomings. First, I will show that standard inference algorithms work well because of the particular design behind LLMs. Next, I will discuss recently found successes and failures of LLMs on a combination of tasks requiring world and domain-specific knowledge, linguistic capabilities, and awareness of human and social utility. Overall, these findings paint a partial yet complex picture of our understanding of LLMs and provide a guide to the next steps forward.
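For readers unfamiliar with LLM inference algorithms, the sketch below implements nucleus (top-p) sampling, one standard decoding method. It is an illustrative example only and assumes nothing about the speaker's work; the function name and toy logits are ours.

```python
import numpy as np

def nucleus_sample(logits: np.ndarray, p: float = 0.9, rng=None) -> int:
    """Sample a token id from the smallest set of top tokens whose
    cumulative probability reaches p (nucleus / top-p sampling)."""
    rng = rng or np.random.default_rng()
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]        # token ids, most probable first
    cutoff = int(np.searchsorted(np.cumsum(probs[order]), p)) + 1  # nucleus size
    nucleus = order[:cutoff]
    return int(rng.choice(nucleus, p=probs[nucleus] / probs[nucleus].sum()))

# Toy 5-token vocabulary: at p=0.9 only the two most probable tokens fall
# inside the nucleus, so the low-probability tail is never sampled.
print(nucleus_sample(np.array([3.0, 2.5, 0.1, -1.0, -2.0]), p=0.9))
```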
This event will be recorded.
It will be posted on our USC/ISI YouTube page within 1-2 business days: https://www.youtube.com/user/USCISI.
Biography: Swabha Swayamdipta is an Assistant Professor of Computer Science and a Gabilan Assistant Professor at the University of Southern California. Her research interests are in natural language processing and machine learning, with a primary focus on estimating dataset quality, understanding and evaluating generative models of language, and using language technologies to understand social behavior. At USC, Swabha leads the Data, Interpretability, Language and Learning (DILL) Lab. She received her PhD from Carnegie Mellon University, followed by a postdoc at the Allen Institute for AI. Her work has received outstanding paper awards at ICML 2022 and NeurIPS 2021, and an honorable mention for best paper at ACL 2020. Her research is supported by awards from the Allen Institute for AI and Intel Labs.
Host: Jay Pujara and Karen Lake
More Info: https://www.isi.edu/events/4684/ai-seminar-understanding-llms-through-their-generative-behavior-successes-and-shortcomings/
Webcast: https://usc.zoom.us/j/95888595423?pwd=VHBLa041dUJWcWx0NEhuYmQrV29ZQT09
Location: Information Sciences Institute (ISI) - Conf Rm #1135-37
WebCast Link: https://usc.zoom.us/j/95888595423?pwd=VHBLa041dUJWcWx0NEhuYmQrV29ZQT09
Audiences: Everyone Is Invited
Contact: Pete Zamar
This event is open to all eligible individuals. USC Viterbi operates all of its activities consistent with the University's Notice of Non-Discrimination. Eligibility is not determined based on race, sex, ethnicity, sexual orientation, or any other prohibited factor.
NL Seminar - Event Extraction for Epidemic Prediction
Thu, May 09, 2024 @ 11:00 AM - 12:00 PM
Information Sciences Institute
Conferences, Lectures, & Seminars
Speaker: Tanmay Parekh, UCLA
Talk Title: Event Extraction for Epidemic Prediction
Series: NL Seminar
Abstract: *Meeting hosts only admit online guests that they know to the Zoom meeting, so you are highly encouraged to sign into Zoom with your USC account. If you are an outside visitor, please inform us at (nlg-seminar-host(at)isi.edu) so we can admit you: specify whether you will attend remotely or in person at least one business day before the event, provide your full name, job title, and professional affiliation, and arrive at least 10 minutes before the seminar begins. If you do not have access to the 6th floor for in-person attendance, please check in at the 10th-floor main reception desk to register as a visitor, and someone will escort you to the conference room.
Early warnings and effective control measures are among the most important tools policymakers have against the threat of any epidemic. Social media is an important information source here, as it is more timely than alternatives such as news and public health reports, and it is publicly accessible. Given the sheer volume of daily social media posts, an automated system is needed to monitor social media and provide early, effective epidemic prediction. To this end, I introduce two works that aid the creation of such a system using information extraction. In the first, we pioneer the use of Event Detection (ED) for better preparedness and early warning of upcoming epidemics by developing a framework to extract and analyze epidemic-related events from social media posts. We curate an epidemic event ontology comprising seven disease-agnostic event types and construct SPEED, a Twitter dataset focused on the COVID-19 pandemic. Our experiments show that ED models trained on COVID-based SPEED can effectively detect epidemic events for three unseen epidemics: Monkeypox, Zika, and Dengue. Furthermore, we show that reporting sharp increases in the events extracted by our framework can provide warnings 4-9 weeks earlier than the WHO epidemic declaration for Monkeypox. Since epidemics can originate anywhere in the world, social media posts discussing them can be in many languages, and training supervised models for every language is tedious and resource-expensive; the alternative is zero-shot cross-lingual models. In the second work, we introduce CLaP, a new label projection approach that generates synthetic training data in any language under the translate-train paradigm. CLaP translates text into the target language and then performs contextual translation of the labels, using the translated text as context to ensure more accurate translated labels. We use instruction-tuned language models with multilingual capabilities as our contextual translator, imposing via instructions the constraint that the translated labels appear in the translated text. We benchmark CLaP against other label projection techniques on zero-shot cross-lingual transfer across 39 languages on two representative structured prediction tasks, event argument extraction (EAE) and named entity recognition (NER), showing improvements of over 2.4 F1 for EAE and 1.4 F1 for NER.
Biography: Tanmay Parekh is a third-year PhD student in Computer Science at the University of California, Los Angeles (UCLA), advised by Prof. Nanyun Peng and Prof. Kai-Wei Chang. Previously, he completed his master's at the Language Technologies Institute at Carnegie Mellon University (CMU), where he worked with Prof. Alan Black and Prof. Graham Neubig, and his undergraduate studies at the Indian Institute of Technology Bombay (IITB). He has also worked in industry at Amazon and Microsoft. His research spans multilingual NLP, code-switching, controlled generation, and speech technologies; his current focus is on improving the utilization and generalizability of Large Language Models (LLMs) for Information Extraction (specifically Event Extraction) across languages and domains.
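To make the contextual label projection described in the abstract above concrete, here is a minimal sketch in the spirit of CLaP. The `translate` and `llm` callables are hypothetical stand-ins for a machine-translation system and an instruction-tuned LLM, not the actual implementation.

```python
from typing import Callable, Dict, List

def project_labels(text: str, label_spans: List[str], target_lang: str,
                   translate: Callable[[str, str], str],
                   llm: Callable[[str], str]) -> Dict[str, object]:
    """Translate-train with contextual label translation: translate the
    sentence first, then translate each labeled span *in the context of*
    the translated sentence, asking the model to keep the projected span
    as a verbatim substring of the translation."""
    translated_text = translate(text, target_lang)
    projected = {}
    for span in label_spans:
        prompt = (
            f"Sentence ({target_lang}): {translated_text}\n"
            f"Translate the phrase '{span}' into {target_lang} so that the "
            f"translation appears verbatim in the sentence above. "
            f"Answer with the phrase only."
        )
        candidate = llm(prompt).strip()
        # Keep the projection only if the substring constraint actually holds.
        projected[span] = candidate if candidate in translated_text else None
    return {"text": translated_text, "labels": projected}
```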
Host: Jon May and Justin Cho
More Info: https://www.isi.edu/research-groups-nlg/nlg-seminars/
Webcast: https://www.youtube.com/watch?v=8MPbW2abdKs
Location: Information Sciences Institute (ISI) - Conf Rm #689
WebCast Link: https://www.youtube.com/watch?v=8MPbW2abdKs
Audiences: Everyone Is Invited
Contact: Pete Zamar
Event Link: https://www.isi.edu/research-groups-nlg/nlg-seminars/
This event is open to all eligible individuals. USC Viterbi operates all of its activities consistent with the University's Notice of Non-Discrimination. Eligibility is not determined based on race, sex, ethnicity, sexual orientation, or any other prohibited factor.
AI Seminar - Causal Inference to Inform Curation Practices in Online Platforms
Fri, May 10, 2024 @ 11:00 AM - 12:00 PM
Information Sciences Institute
Conferences, Lectures, & Seminars
Speaker: Giuseppe Russo, EPFL (École Polytechnique Fédérale de Lausanne)
Talk Title: Causal Inference to Inform Curation Practices in Online Platforms
Series: AI Seminar
Abstract: Digital platforms like Facebook, Wikipedia, Amazon, and LinkedIn play a foundational role in our society. They engage in content curation through moderation, recommendations, and monetization efforts, impacting individuals positively or negatively. In this talk, I will highlight the critical need to improve the methodologies currently used in these curation practices. I will make a case for the essential role of academic research in shaping policy and establishing best practices, drawing on two significant projects from my doctoral research. First, I will delve into an observational study on Reddit that uncovered a mechanism potentially driving the proliferation of extremist communities online. Following that, I will detail the outcomes of a study assessing the impact of removing entire extremist groups from Reddit. To conclude, I will examine potential research paths aimed at improving digital platforms, with a special focus on both the promises and challenges introduced by the emergence of generative AI technologies. My research demonstrates that rigorously investigating the direct effects of content curation practices can significantly improve the quality of online platforms.
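The abstract does not name an estimator, but one common design for measuring the effect of a platform intervention such as a community ban is difference-in-differences over matched treated and control users. The toy sketch below uses simulated data, not Reddit data, purely to illustrate the idea.

```python
import numpy as np

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-differences: the change in the treated group's mean
    outcome minus the change in the control group's, which nets out
    time trends shared by both groups."""
    return ((np.mean(treated_post) - np.mean(treated_pre))
            - (np.mean(control_post) - np.mean(control_pre)))

# Simulated weekly toxic-comment counts before/after a hypothetical ban.
rng = np.random.default_rng(0)
treated_pre = rng.poisson(5.0, 500)    # members of the banned community
treated_post = rng.poisson(3.5, 500)
control_pre = rng.poisson(5.0, 500)    # matched users never in it
control_post = rng.poisson(4.8, 500)
effect = did_estimate(treated_pre, treated_post, control_pre, control_post)
print(f"Estimated effect of the ban: {effect:+.2f} comments/week")
```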
Biography: I am a Postdoctoral Researcher at EPFL, advised by Professor Robert West. My research spans causal inference, machine learning, and the broader impacts of AI on society and individuals. Currently, my focus is on understanding the effects of content moderation in online social networks. My research also extends to applying causal methods to decision-making processes related to health and sustainability. I earned both my PhD and MSc from ETH Zurich under the mentorship of Professor Frank Schweitzer, and completed my Bachelor's degree at the Politecnico di Milano. My work has been showcased at several academic conferences, including ACL, EMNLP, ICWSM, WWW, and IC2S2, and notably has been featured in the lightning talk series at the International Conference on Computational Social Science (IC2S2).
Host: Fred Morstatter and Pete Zamar
More Info: https://www.isi.edu/events/4871/ai-seminar-causal-inference-to-inform-curation-practices-in-online-platforms/
Webcast: https://www.youtube.com/watch?v=XPf4ymbGRak
Location: Information Sciences Institute (ISI) - Virtual Only
WebCast Link: https://www.youtube.com/watch?v=XPf4ymbGRak
Audiences: Everyone Is Invited
Contact: Pete Zamar
This event is open to all eligible individuals. USC Viterbi operates all of its activities consistent with the University's Notice of Non-Discrimination. Eligibility is not determined based on race, sex, ethnicity, sexual orientation, or any other prohibited factor.
AI Seminar - AI for Fostering Constructive Online Conversations
Fri, May 17, 2024 @ 11:00 AM - 12:00 PM
Information Sciences Institute
Conferences, Lectures, & Seminars
Speaker: Kristina Gligoric, Stanford University
Talk Title: AI for Fostering Constructive Online Conversations
Abstract: REMINDER: Meeting hosts only admit guests that they know to the Zoom meeting, so you are highly encouraged to sign into Zoom with your USC account. If you are an outside visitor, please inform us beforehand at (aiseminars-poc(at)isi.edu) so we will be aware of your attendance and can let you in. Zoom meeting ID: 704 285 0182, Passcode: 832239.
NLP systems promise to positively impact society in high-stakes social domains. However, current evaluation and development focus on tasks that are not grounded in specific societal implications, which can lead to societal harms. In this talk, I will present recent work addressing these issues in the domain of online content moderation. In the first part, I will discuss online content moderation that enables constructive conversations about race. Content moderation practices on social media risk silencing the voices of historically marginalized groups: both the most recent models and human moderators disproportionately flag posts in which users share personal experiences of racism. Not only does this censorship hinder the potential of social media to give voice to marginalized communities, but we also find that witnessing such censorship exacerbates feelings of isolation. A psychologically informed reframing intervention offers a path to reducing this censorship. In the second part, I will discuss how identified biases in models can be traced to the use-mention distinction: the difference between using words to convey a speaker's intent and mentioning words to quote what someone said or to point out properties of a word. Computationally modeling the use-mention distinction is crucial for enabling counterspeech to hate and misinformation, since counterspeech that refutes problematic content mentions harmful language without being harmful itself. Even recent language models fail to distinguish use from mention; this failure propagates to downstream tasks but can be reduced through the mitigations we introduce. Finally, I will discuss the big picture and other recent efforts to address these issues in domains beyond content moderation, including education, emotional support, sustainability, and public discourse about AI, and reflect on how, by doing so, we can minimize harms and develop and apply NLP systems for social good.
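To illustrate how the use-mention distinction might be modeled computationally, here is a minimal prompting sketch. The `llm` completion function is a hypothetical stand-in for any instruction-tuned model, and the prompt wording is ours, not the speaker's.

```python
from typing import Callable

def classify_use_mention(sentence: str, phrase: str,
                         llm: Callable[[str], str]) -> str:
    """Ask a model whether `phrase` is *used* (conveys the speaker's own
    intent) or merely *mentioned* (quoted, refuted, or discussed as a
    word) in `sentence`. Counterspeech typically mentions harmful
    language without using it."""
    prompt = (
        f'Sentence: "{sentence}"\n'
        f'Is the phrase "{phrase}" used to express the speaker\'s own view, '
        "or only mentioned (quoted, refuted, or discussed as language)? "
        "Answer with exactly one word: USE or MENTION."
    )
    answer = llm(prompt).strip().upper()
    return answer if answer in {"USE", "MENTION"} else "UNKNOWN"
```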
Biography: Kristina Gligoric is a Postdoctoral Scholar in the Stanford University Computer Science Department, advised by Dan Jurafsky in the NLP group. Previously, she obtained her Ph.D. in Computer Science at EPFL, where she was advised by Robert West. Her research focuses on developing computational approaches to address societal issues, drawing on methods from NLP and causal inference. Her work has been published in top computer science conferences focused on computational social science and social media (CSCW, ICWSM, TheWebConf), natural language processing (EACL, NAACL, EMNLP), and broad-audience journals (Nature Communications and Nature Medicine). She is a Swiss National Science Foundation Fellow and a University of Chicago Rising Star in Data Science, and has received awards for her work, including an EPFL Thesis Distinction and a CSCW Best Paper Honorable Mention.
This event will be recorded. It will be posted on our USC/ISI YouTube page within 1-2 business days: https://www.youtube.com/user/USCISI.
Host: Myrl Marmarelis and Maura Covaci
More Info: https://www.isi.edu/events/4952/ai-for-fostering-constructive-online-conversations/
Location: Information Sciences Institute (ISI) - Conf Rm #1014
Audiences: Everyone Is Invited
Contact: Pete Zamar
Event Link: https://www.isi.edu/events/4952/ai-for-fostering-constructive-online-conversations/
This event is open to all eligible individuals. USC Viterbi operates all of its activities consistent with the University's Notice of Non-Discrimination. Eligibility is not determined based on race, sex, ethnicity, sexual orientation, or any other prohibited factor. -
AI Seminar - Things Multimodal LLMs Cannot See: Toward Discovering and Mitigating Perceptual Biases in Neural Networks through Visual Interventions
Thu, May 23, 2024 @ 11:00 AM - 12:00 PM
Information Sciences Institute
Conferences, Lectures, & Seminars
Speaker: Mahyar Khayatkhoei, USC/ISI
Talk Title: Things Multimodal LLMs Cannot See: Toward Discovering and Mitigating Perceptual Biases in Neural Networks through Visual Interventions
Abstract: In this talk, I will discuss our recent research on the use of pixel-space interventions for discovering and mitigating biases in visual neural networks, including multimodal large language models (MLLMs). I will start by showcasing the perceptual limitations and biases we have discovered in MLLMs (including commercial ones such as GPT-4V and LLaVA). I will then discuss our simple yet effective intervention-based approach for mitigating such limitations, which requires no training. Finally, I will more broadly discuss the problem of removing attribute-specific bias from neural networks, present our latest information-theoretic bounds on this problem, and explain our adversarial input-intervention approach for removing strong attribute bias.
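As an illustration of the general idea of a pixel-space intervention (not the speaker's actual method), the sketch below asks an MLLM the same question about an image and a content-preserving perturbation of it, and flags answer flips. `ask_mllm` is a hypothetical wrapper around whatever MLLM API is available.

```python
from typing import Callable
from PIL import Image

def intervention_probe(image_path: str, question: str,
                       ask_mllm: Callable[[Image.Image, str], str],
                       scale: float = 0.5) -> bool:
    """Return True if downscaling the image (a pixel-space intervention
    that preserves content) changes the model's answer, suggesting a
    perceptual bias tied to size or resolution rather than content."""
    original = Image.open(image_path)
    w, h = original.size
    shrunk = original.resize((int(w * scale), int(h * scale)))
    return ask_mllm(original, question) != ask_mllm(shrunk, question)
```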
This event will be recorded but only shared with AI Division Leadership.
Biography: I am a Computer Scientist in the AI Division of the USC Information Sciences Institute. I received my Ph.D. and M.Sc. in computer science from Rutgers University, working with Dr. Ahmed Elgammal, and my B.Sc. in electrical engineering from the University of Tehran. My research explores the theory and application of deep generative models, and has identified and resolved major bottlenecks in neural networks' ability to learn from heterogeneous data (NeurIPS 2018) and to learn high-frequency features (AAAI 2022), as well as in their reliable evaluation (ICML 2023). My latest focus is on adapting large-scale generative neural networks to real-world, mission-critical tasks. I am particularly interested in developing reliable and efficient data-driven computational models of real-world phenomena that would enhance our current physics-based models. My personal website is at https://mahyarkoy.github.io
Host: Adam Russell; POC: Justina Gilleland and Alma Nava
More Info: https://www.isi.edu/events/4966/things-multimodal-llms-cannot-see-toward-discovering-and-mitigating-perceptual-biases-in-neural-networks-through-visual-interventions/
Webcast: https://usc.zoom.us/j/93179461297?pwd=d2RpNWlEblhxcHRFMU9RbnRxbWJBUT09
Location: Information Sciences Institute (ISI) - Conf Rm #1135
WebCast Link: https://usc.zoom.us/j/93179461297?pwd=d2RpNWlEblhxcHRFMU9RbnRxbWJBUT09
Audiences: Everyone Is Invited
Contact: Pete Zamar
This event is open to all eligible individuals. USC Viterbi operates all of its activities consistent with the University's Notice of Non-Discrimination. Eligibility is not determined based on race, sex, ethnicity, sexual orientation, or any other prohibited factor.
Human-AI Interaction: From Supporting Surgical Training to Inspecting Social Bias in LLMs
Fri, May 31, 2024 @ 11:00 AM - 12:00 PM
Information Sciences Institute
Conferences, Lectures, & Seminars
Speaker: Rafal Kocielnik, California Institute of Technology
Talk Title: Human-AI Interaction: From Supporting Surgical Training to Inspecting Social Bias in LLMs
Series: AI Seminar
Abstract: *Meeting hosts only admit online guests that they know to the Zoom meeting, so you are highly encouraged to sign into Zoom with your USC account. If you are an outside visitor, please inform us at (aiseminars-poc(at)isi.edu) so we can admit you: specify whether you will attend remotely or in person at least one business day before the event, provide your full name, job title, and professional affiliation, and arrive at least 10 minutes before the seminar begins. If you do not have access to the 6th floor for in-person attendance, please check in at the 10th-floor main reception desk to register as a visitor, and someone will escort you to the conference room.
In this talk, I will present my recent contributions to Human-AI interaction, focusing on two distinct projects that examine the opportunities and challenges of using modern AI. In the first part, I will present my work on leveraging AI in clinician education, specifically in the surgical context. I will detail my work using multimodal deep-learning techniques to analyze formative feedback from surgeons to trainees during real-world robot-assisted surgeries. This project marks a significant step toward harnessing contemporary AI for the specialized domain of surgical education and received the best paper award at the ML4H conference. In the second part, I will focus on Human-AI interaction in the context of empowering domain experts (e.g., social scientists and ethicists) to inspect modern generative AI for the presence of harmful stereotypes. I will describe our BiasTestGPT framework, which offers two important contributions: 1) a novel approach for generating high-quality synthetic data for social bias testing at scale, and 2) a user-friendly, open-sourced interface for engaging the general public and domain experts in the inspection of modern AI. Together, these projects demonstrate opportunities for leveraging Human-AI interaction to support specialized domains and to help inspect the challenges in AI itself.
This event will be recorded. It will be posted on our USC/ISI YouTube page within 1-2 business days: https://www.youtube.com/user/USCISI.
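To make the bias-testing idea concrete, here is a minimal sketch of scoring synthetic group/attribute sentences under a language model; large score gaps between groups for the same attribute signal a stereotype. The template scheme and the `sentence_logprob` scorer are hypothetical illustrations, not the BiasTestGPT implementation.

```python
from itertools import product
from typing import Callable, Dict, List, Tuple

def bias_gaps(groups: List[str], attributes: List[str], template: str,
              sentence_logprob: Callable[[str], float]) -> Dict[Tuple[str, str], float]:
    """Score every (group, attribute) sentence under the model; compare
    scores across groups for the same attribute to surface stereotypes."""
    scores = {}
    for group, attribute in product(groups, attributes):
        sentence = template.format(group=group, attribute=attribute)
        scores[(group, attribute)] = sentence_logprob(sentence)
    return scores

# Usage: with template "{group} are {attribute}.", compare
# scores[("women", "good at math")] vs. scores[("men", "good at math")].
```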
Biography: Rafał Kocielnik is a Postdoctoral Researcher in Caltech's Computing and Mathematical Sciences department, where he also collaborates with Cedars-Sinai Medical Center and the gaming company Activision Blizzard. He holds an M.Sc. in Computer Science from the Polish-Japanese Academy of Information Technology and a P.D.Eng. in Industrial Design from Eindhoven University of Technology, and completed his Ph.D. in Human-Centered Design & Engineering at the University of Washington, Seattle, in 2021, focusing on designing engaging conversational interactions for health and behavior change. Awarded a CRA Computing Innovation Fellowship in 2021, his research at Caltech explores the intersection of AI and HCI, with applications in surgical training, social bias testing in generative AI, and toxicity mitigation in gaming. He has received Best Paper awards at CSCW and ML4H, with an Honorable Mention at CUI, underscoring his interdisciplinary focus and commitment to advancing AI and HCI for human-centered applications.
Visit the links below to subscribe and for details on upcoming seminars: https://www.isi.edu/isi-seminar-series/ https://www.isi.edu/events/
Host: Myrl Marmarelis, Justina Gilleland, and Maura Covaci
More Info: https://www.isi.edu/events/4976/human-ai-interaction-from-supporting-surgical-training-to-inspecting-social-bias-in-llms/
Webcast: https://usc.zoom.us/j/99601436181?pwd=d0Y5eTZPbHRjM2t3NHc5cXRMNkE1dz09
Location: Information Sciences Institute (ISI) - Conf Rm #1135-1137
WebCast Link: https://usc.zoom.us/j/99601436181?pwd=d0Y5eTZPbHRjM2t3NHc5cXRMNkE1dz09
Audiences: Everyone Is Invited
Contact: Pete Zamar
This event is open to all eligible individuals. USC Viterbi operates all of its activities consistent with the University's Notice of Non-Discrimination. Eligibility is not determined based on race, sex, ethnicity, sexual orientation, or any other prohibited factor.