Conferences, Lectures, & Seminars
Events for October
-
AI Seminar - Why Are Human Laws So Difficult For AI to Follow?
Fri, Oct 18, 2024 @ 11:00 AM - 12:00 PM
Information Sciences Institute
Speaker: John Licato, University of South Florida
Talk Title: Why Are Human Laws So Difficult For AI to Follow?
Abstract:
Join Zoom Meeting: https://usc.zoom.us/j/96076927864?pwd=tOuC1grLlyiRgcwicpm9e7XziHgE0R.1
Meeting ID: 960 7692 7864
Passcode: 810249
Register in advance for this webinar: https://usc.zoom.us/webinar/register/WN_ANEShGxrSfeTwa5sFZsRag
Although it is now incredibly easy to create and deploy a chatbot for almost any application, powered by highly capable LLMs, even the best systems still tend to perform poorly when they must interpret and reason about rules: specifically, rules expressed in the kind of language found in laws, contracts, regulations, and the like. Why does this problem persist, and how can it be overcome? Dr. Licato argues that the problem is rooted in a feature (not a bug) of human languages called open-texturedness. Because open-texturedness is an inevitable feature of normative rule systems, it must be addressed by any agent-level AI system, especially if we want that system to follow our laws.
Biography: John Licato, PhD, is an Associate Professor of Computer Science and Engineering at USF, Director of the USF Advancing Machine and Human Reasoning (AMHR) Lab, and founder of the AI startup Actualization AI, LLC. He designed and teaches the natural language processing course (the field that produced ChatGPT) at USF, and his lab's mission is not only to make AI smarter, but also to use those advances to help people reason better. His research expertise spans AI, NLP, human reasoning, cognitive modeling, and legal/regulatory reasoning, with over 100 peer-reviewed publications. He has been featured in outlets such as NPR's Marketplace Tech, ABC Action News, and the Tampa Bay Business Journal. If the speaker approves recording of this AI Seminar talk, it will be posted on our USC/ISI YouTube page within 1-2 business days: https://www.youtube.com/user/USCISI.
Host: Abel Salinas and Pete Zamar
More Info: https://www.isi.edu/events/5149/why-are-human-laws-so-difficult-for-ai-to-follow/
Webcast: https://www.youtube.com/watch?v=CmNz7hAAtLs
Location: Information Sciences Institute (ISI) - Virtual Only
Audiences: Everyone Is Invited
Contact: Pete Zamar
Event Link: https://www.isi.edu/events/5149/why-are-human-laws-so-difficult-for-ai-to-follow/
This event is open to all eligible individuals. USC Viterbi operates all of its activities consistent with the University's Notice of Non-Discrimination. Eligibility is not determined based on race, sex, ethnicity, sexual orientation, or any other prohibited factor.
-
NL Seminar - Mission: Impossible Language Models
Thu, Oct 24, 2024 @ 11:00 AM - 12:00 PM
Information Sciences Institute
Speaker: Julie Kallini, Stanford University
Talk Title: Mission: Impossible Language Models
Abstract:
REMINDER: Meeting hosts only admit online guests they know to the Zoom meeting, so you are highly encouraged to sign into Zoom with your USC account. If you are an outside visitor, please inform us at nlg-seminar-host(at)isi.edu so we can admit you. Specify whether you will attend remotely or in person at least one business day prior to the event, provide your full name, job title, and professional affiliation, and arrive at least 10 minutes before the seminar begins. If you do not have access to the 6th floor for in-person attendance, please check in at the 10th-floor main reception desk to register as a visitor, and someone will escort you to the conference room.
ZOOM INFO: https://usc.zoom.us/j/97400245543?pwd=uo9TL9Ss4TA4Wa4TPtfDQnedE7Va8B.1
Meeting ID: 974 0024 5543
Passcode: 407395
Chomsky and others have claimed very directly that large language models (LLMs) are equally capable of learning languages that are possible and impossible for humans to learn. However, there is very little published experimental evidence to support such a claim. Here, we develop a set of synthetic impossible languages of differing complexity, each designed by systematically altering English data with unnatural word orders and grammar rules. These languages lie on an impossibility continuum: at one end are languages that are inherently impossible, such as random and irreversible shuffles of English words, and at the other, languages that may not be intuitively impossible but are often considered so in linguistics, particularly those with rules based on counting word positions. We report on a wide range of evaluations assessing the capacity of GPT-2 small models to learn these uncontroversially impossible languages, and crucially, we perform these assessments at various stages throughout training to compare the learning process for each language.
Our core finding is that GPT-2 struggles to learn impossible languages compared to English as a control, challenging the core claim. More importantly, we hope our approach opens a productive line of inquiry in which different LLM architectures are tested on a variety of impossible languages, in an effort to learn more about how LLMs can be used as tools for these cognitive and typological investigations.
Biography: Julie Kallini is a second-year Computer Science Ph.D. student at Stanford University advised by Christopher Potts and Dan Jurafsky. Her research spans several topics in natural language processing, including computational linguistics, cognitive science, interpretability, and model architecture. Julie's work is generously supported by the NSF Graduate Research Fellowship, the Stanford School of Engineering Graduate Fellowship, and the Stanford EDGE Fellowship. Before starting her Ph.D., Julie was a software engineer at Meta, where she worked on machine learning for advertisements. Julie graduated summa cum laude from Princeton University with a B.S.E. in Computer Science and a minor in Linguistics.
Host: Jonathan May and Katy Felkner
More Info: https://www.isi.edu/research-groups-nlg/nlg-seminars/
Webcast: https://www.youtube.com/watch?v=sDMUu8rrgV8
Location: Information Sciences Institute (ISI) - Conf Rm #689
Audiences: Everyone Is Invited
Contact: Pete Zamar
Event Link: https://www.isi.edu/research-groups-nlg/nlg-seminars/
-
NL Seminar - InterIntent: Investigating Social Intelligence of LLMs via Intention Understanding in a Game Context
Thu, Oct 31, 2024 @ 11:00 AM - 12:00 PM
Information Sciences Institute
Speaker: Ziyi Liu, USC
Talk Title: InterIntent: Investigating Social Intelligence of LLMs via Intention Understanding in a Game Context
Abstract:
REMINDER: Meeting hosts only admit online guests they know to the Zoom meeting, so you are highly encouraged to sign into Zoom with your USC account. If you are an outside visitor, please inform us at nlg-seminar-host(at)isi.edu so we can admit you. Specify whether you will attend remotely or in person at least one business day prior to the event, provide your full name, job title, and professional affiliation, and arrive at least 10 minutes before the seminar begins. If you do not have access to the 6th floor for in-person attendance, please check in at the 10th-floor main reception desk to register as a visitor, and someone will escort you to the conference room.
Zoom Info: https://usc.zoom.us/j/95325436571?pwd=NMJIFIQNQ01esvL9UffxxIp4dnSCmF.1
Meeting ID: 953 2543 6571
Passcode: 985321
Large language models (LLMs) have demonstrated the potential to mimic human social intelligence. However, most studies focus on simplistic and static self-report or performance-based tests, which limits the depth and validity of the analysis. In this paper, we develop a novel framework, InterIntent, to assess LLMs' social intelligence by mapping their ability to understand and manage intentions in a game setting. We focus on four dimensions of social intelligence: situational awareness, self-regulation, self-awareness, and theory of mind. Each dimension is linked to a specific game task: intention selection, intention following, intention summarization, and intention guessing. Our findings indicate that while LLMs exhibit high proficiency in selecting intentions, achieving an accuracy of 88%, their ability to infer the intentions of others is significantly weaker, trailing human performance by 20%. Additionally, game performance correlates with intention understanding, highlighting the importance of the four components to success in this game.
These findings underline the crucial role of intention understanding in evaluating LLMs' social intelligence and highlight the potential of using social deduction games as a complex testbed to enhance LLM evaluation. InterIntent contributes a structured approach to bridging the evaluation gap in social intelligence within multiplayer games.
Biography: Ziyi Liu is a second-year PhD student at the University of Southern California, advised by Professor Jieyu Zhao in LIME Lab. Previously, she earned her master’s degree at USC and was a Research Assistant in USC ISI’s Ink Lab for two years under the guidance of Professor Xiang Ren. Her research focuses on social intelligence and hallucination detection in human-LLM interactions, particularly in evaluating LLM behaviors and aligning LLM values with those of humans. Her work is driven by two key questions: (1) How can we make interactions between models and humans more seamless? (2) How can we ensure the faithfulness of LLMs and avoid hallucinations during interactions?
Host: Jonathan May and Katy Felkner
More Info: https://www.isi.edu/research-groups-nlg/nlg-seminars/
Webcast: https://www.youtube.com/watch?v=yHfeHKahMoI
Location: Information Sciences Institute (ISI) - Conf Rm #689
Audiences: Everyone Is Invited
Contact: Pete Zamar
Event Link: https://www.isi.edu/research-groups-nlg/nlg-seminars/