Fri, Apr 29, 2022 @ 03:00 PM - 04:30 PM
PhD Candidate: Aida Mostafazadeh Davani
Friday, April 29th, 3pm, SGM 911.
Committee: Morteza Dehghani, Bistra Dilkina, Xiang Ren, and Stephen Read
Integrating Annotator Biases into Modeling Subjective Language Classification Tasks
Subjective annotation tasks are inherently nuanced because annotators differ in how they understand language. Training Natural Language Processing (NLP) models to make predictions on subjective tasks from human-annotated datasets is likewise challenging; model decisions rarely generalize to the judgements of unseen annotators. Modeling an acceptable interpretation of a subjective task therefore requires integrating the psychological dimensions that capture individual differences in perceiving language for that specific task. This thesis provides an alternative approach to modeling subjective NLP tasks by tailoring representations to annotators' varying perceptions of language. First, NLP datasets for subjective tasks are examined to demonstrate how aggregating annotations into a single ground-truth label affects the representation of different perspectives in language resources. Next, the impact of annotators' social biases is explored to identify sources of human-like biases in annotated datasets and language classifiers. Lastly, alternative approaches for incorporating annotators' individual differences into modeling their annotation behavior are presented.
Location: Seeley G. Mudd Building (SGM) - 911
Audiences: Everyone Is Invited
Contact: Lizsl De Leon