BEGIN:VCALENDAR
METHOD:PUBLISH
PRODID:-//Apple Computer\, Inc//iCal 1.0//EN
X-WR-CALNAME;VALUE=TEXT:USC
VERSION:2.0
BEGIN:VEVENT
DESCRIPTION:Speaker: Lindsay Sanneman, MIT\n\nTalk Title: Transparent Value Alignment: Foundations for Human-Centered Explainable AI in Alignment\n\nSeries: CS Colloquium\n\nAbstract: Alignment of robot objectives with those of humans can greatly enhance robots' ability to act flexibly, safely, and reliably to meet humans' goals across diverse contexts, from space exploration to robotic manufacturing. However, it is often difficult or impossible for humans, both expert and non-expert, to enumerate their objectives comprehensively, accurately, and in forms that are readily usable for robot planning. Value alignment is an open challenge in artificial intelligence that aims to address this problem by enabling robots and autonomous agents to infer human goals and values through interaction. Providing humans with direct and explicit feedback about this value-learning process through approaches for explainable AI (XAI) can enable humans to teach robots about their goals more efficiently and effectively. In this talk, I will introduce the Transparent Value Alignment (TVA) paradigm, which captures this two-way communication and inference process, and will discuss foundations for the design and evaluation of XAI within this paradigm. First, I will present a novel suite of metrics for assessing alignment, which have been validated through human-subject experiments applying approaches from cognitive psychology. Next, I will discuss the Situation Awareness Framework for Explainable AI (SAFE-AI), a human-factors-based framework for the design and evaluation of XAI across diverse contexts, including alignment. Finally, I will propose design guidance for XAI within the TVA context, grounded in results from a set of human studies comparing a broad range of explanation techniques across multiple domains. I will additionally highlight how this research relates to real-world robotic manufacturing and space exploration settings that I have studied. I will conclude the talk by discussing the future vision of this work.\n\nThis lecture satisfies requirements for CSCI 591: Research Colloquium.\n\nBiography: Lindsay Sanneman is a final-year PhD candidate in the Department of Aeronautics and Astronautics at MIT and a member of the Interactive Robotics Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL). Her research focuses on the development of models, metrics, and algorithms for explainable AI (XAI) and AI alignment in complex human-autonomy interaction settings. Since 2018, she has been a member of MIT's Work of the Future task force and has visited over 50 factories worldwide alongside an interdisciplinary team of social scientists and engineers to study the adoption of robotics in manufacturing. She is currently a Siegel Research Fellow and has presented her work in diverse venues, including the Industry Studies Association and the UN Department of Economic and Social Affairs.\n\nHost: Heather Culbertson
SEQUENCE:5
DTSTART:20230322T110000
LOCATION:RTH 109
DTSTAMP:20230322T110000
SUMMARY:CS Colloquium: Lindsay Sanneman (MIT) - Transparent Value Alignment: Foundations for Human-Centered Explainable AI in Alignment
UID:EC9439B1-FF65-11D6-9973-003065F99D04
DTEND:20230322T120000
END:VEVENT
END:VCALENDAR