AI Seminar - AI for Fostering Constructive Online Conversations
Fri, May 17, 2024 @ 11:00 AM - 12:00 PM
Information Sciences Institute
Conferences, Lectures, & Seminars
Speaker: Kristina Gligoric, Stanford University
Talk Title: AI for Fostering Constructive Online Conversations
REMINDER: Meeting hosts only admit guests they know to the Zoom meeting, so you are highly encouraged to sign into Zoom with your USC account. If you are an outside visitor, please inform us at aiseminars DASH poc AT isi DOT edu beforehand so we will be aware of your attendance and can let you in. Zoom meeting ID: 704 285 0182, Passcode: 832239
Abstract: NLP systems promise to positively impact society in high-stakes social domains. However, current evaluation and development focus on tasks that are not grounded in specific societal implications, which can lead to societal harms. In this talk, I will present recent work addressing these issues in the domain of online content moderation. In the first part, I will discuss online content moderation to enable constructive conversations about race. Content moderation practices on social media risk silencing the voices of historically marginalized groups. Both the most recent models and humans disproportionately flag posts in which users share personal experiences of racism. Not only does this censorship hinder the potential of social media to give voice to marginalized communities, but we also find that witnessing such censorship exacerbates feelings of isolation. A psychologically informed reframing intervention offers a path to reduce this censorship. In the second part, I will discuss how identified biases in models can be traced to the use-mention distinction: the difference between using words to convey a speaker's intent and mentioning words to quote what someone said or to point out properties of a word. Computationally modeling the use-mention distinction is crucial for enabling counterspeech to hate and misinformation; counterspeech that refutes problematic content mentions harmful language but is not harmful itself. Even recent language models fail at distinguishing use from mention. This failure propagates to downstream tasks but can be reduced through the mitigations we introduce. Finally, I will discuss the big picture and other recent efforts to address these issues in domains beyond content moderation, including education, emotional support, sustainability, and public discourse about AI. I will reflect on how, by doing so, we can minimize harms and develop and apply NLP systems for social good.
Biography: Kristina Gligoric is a Postdoctoral Scholar in the Computer Science Department at Stanford University, advised by Dan Jurafsky in the NLP group. She previously obtained her Ph.D. in Computer Science at EPFL, where she was advised by Robert West. Her research focuses on developing computational approaches to address societal issues, drawing on methods from NLP and causal inference. Her work has been published in top computer science venues focused on computational social science and social media (CSCW, ICWSM, TheWebConf), natural language processing (EACL, NAACL, EMNLP), and broad-audience journals (Nature Communications and Nature Medicine). She is a Swiss National Science Foundation Fellow and a University of Chicago Rising Star in Data Science. Her awards include the EPFL Thesis Distinction and a CSCW Best Paper Honorable Mention Award. This event will be recorded and posted on our USC/ISI YouTube page within 1-2 business days: https://www.youtube.com/user/USCISI.
Hosts: Myrl Marmarelis and Maura Covaci
More Info: https://www.isi.edu/events/4952/ai-for-fostering-constructive-online-conversations/
Location: Information Sciences Institute (ISI) - Conf Rm #1014
Audiences: Everyone Is Invited
Contact: Pete Zamar
Event Link: https://www.isi.edu/events/4952/ai-for-fostering-constructive-online-conversations/