BEGIN:VCALENDAR
METHOD:PUBLISH
PRODID:-//Apple Computer\, Inc//iCal 1.0//EN
X-WR-CALNAME;VALUE=TEXT:USC
VERSION:2.0
BEGIN:VEVENT
DESCRIPTION:PhD Candidate: Aaron Chan\n
 \n
 Title: Generating and Utilizing Machine Explanations for Trustworthy NLP\n
 \n
 Time: Tuesday\, November 15\, 3:00PM-5:00PM PST\n
 \n
 Committee: Xiang Ren (chair)\, Robin Jia\, Jesse Thomason\, Bistra Dilkina\, Morteza Dehghani\n
 \n
 Abstract:\n
 Neural language models (LMs) have yielded remarkable success on a wide range of natural language processing (NLP) tasks. However\, LMs sometimes exhibit undesirable behavior\, which can be difficult to resolve due to LMs' opaque reasoning processes. This lack of transparency poses serious concerns about LMs' trustworthiness in high-stakes decision-making\, thus motivating the use of machine explanations to automatically interpret how LMs make their predictions. In my thesis\, I argue that building human trust in NLP systems requires being able to: (A) generate machine explanations for LM behavior faithfully and plausibly\, and (B) utilize machine explanations to improve LM generalization and decision-making. First\, to address (A)\, I propose UNIREX\, a unified learning framework for jointly optimizing machine explanations with respect to both faithfulness and plausibility\, without compromising the LM's task performance. Second\, for (B)\, I introduce ER-Test\, a framework for evaluating the out-of-distribution generalization ability of LMs that are regularized via strongly supervised machine explanations. Third\, to further support (B)\, I present SalKG\, an algorithm for improving LM generalization by regularizing LMs via weakly supervised machine explanations. Finally\, I discuss several future directions for achieving (A) and (B).\n
 \n
 Zoom Link: https://usc.zoom.us/j/95606515253?pwd=QzBvaVVpcWtYSFhVYzVoUi9tdHBRdz09
SEQUENCE:5
DTSTART:20221115T150000
LOCATION:
DTSTAMP:20221115T150000
SUMMARY:PhD Defense - Aaron Chan
UID:EC9439B1-FF65-11D6-9973-003065F99D04
DTEND:20221115T170000
END:VEVENT
END:VCALENDAR