Fri, Mar 11, 2022 @ 03:00 PM - 05:00 PM
Thomas Lord Department of Computer Science
PhD Candidate: Aaron Chan
Title: Generating and Utilizing Machine Explanations for Trustworthy NLP
Time: Friday, March 11, 2022, 3:00PM-5:00PM PST
Committee: Xiang Ren, Robin Jia, Jesse Thomason, Bistra Dilkina, Morteza Dehghani
Neural language models (NLMs) have achieved remarkable success on a wide range of natural language processing (NLP) tasks. However, NLMs sometimes exhibit undesirable behavior, which can be difficult to resolve due to NLMs' opaque reasoning processes. Such a lack of transparency poses serious concerns about NLMs' trustworthiness in high-stakes decision-making.
This motivates the use of machine explanations to automatically interpret how NLMs make decisions. In my thesis proposal, I argue that building human trust in NLP systems requires being able to: (A) generate machine explanations for NLM behavior faithfully and plausibly, and (B) utilize machine explanations to improve language model decision-making.
First, I introduce a framework for optimizing machine explanations with respect to both faithfulness and plausibility, without compromising the NLM's task performance. Second, I present an algorithm for regularizing NLMs via machine explanations, in order to improve NLM task performance. Third, I discuss using limited human-in-the-loop feedback on machine explanations to further improve NLMs' generalization ability.
Zoom Link: https://usc.zoom.us/j/99570395469?pwd=OE9IMnhLOU5oSmRCYzFiUWdMZ1BuZz09
Audiences: Everyone Is Invited
Contact: Lizsl De Leon