PhD Thesis Proposal - Emily Sheng
Tue, Mar 03, 2020 @ 10:00 AM - 11:30 AM
Thomas Lord Department of Computer Science
Title: Towards Fairness in Natural Language Processing
Date/Time: Tuesday, March 3rd, 10-11:30am
Location: SAL 213
Candidate: Emily Sheng
Committee: Prof. Prem Natarajan (advisor), Prof. Nanyun Peng, Prof. Aram Galstyan, Prof. Shri Narayanan, Prof. Yan Liu
Abstract: With the advent of more effective, large-scale natural language processing (NLP) techniques, issues of fairness and bias in these techniques have become increasingly important. Biased models have the potential to perpetuate and amplify societal biases, which has implications for ethics, model robustness, and model interpretability. First, we describe our work to define biases in a language generation setting. We subsequently describe how different definitions of bias can be used to analyze biases in existing NLP systems, e.g., language generation and named entity recognition. Finally, we propose techniques that allow us to move towards the mitigation and control of biases. This talk will examine the importance of defining tasks and metrics for biases in NLP, how our bias analyses can inform our approach to bias mitigation, and related directions in how we can move towards fairness in NLP.
Location: Henry Salvatori Computer Science Center (SAL) - 213
Audiences: Everyone Is Invited
Contact: Lizsl De Leon