Mon, May 10, 2021 @ 11:00 AM - 01:00 PM
Title: Fair Machine Learning for Human Behavior Understanding
Time: 11:00 AM-1:00 PM PST, May 10 (Monday)
Committee: Emilio Ferrara, Cyrus Shahabi, Shri Narayanan, Kristina Lerman, and Fred Morstatter.
Zoom link: https://usc.zoom.us/j/96050343860
Artificial intelligence (AI) and machine learning models have recently been applied extensively to understand and predict human behavior, often in applications with major societal implications, such as making recruitment decisions, estimating daily well-being, or assessing clinical treatments. Despite the growing body of research on modeling human behavior and on fair machine learning, most studies focus on homogeneous and objective measurements, and little attention has been paid to how to mitigate the impact of heterogeneity on utility and fairness simultaneously. The increasing amount of collected data also raises concerns about data privacy. Recent regulations such as the European General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) govern the use of personal data. However, most prior fairness work requires access to sensitive attributes (e.g., race, gender) to debias the system.
This dissertation proposal articulates the challenges that complex, multimodal human behavior data pose to both model utility and fairness. The proposed work is decomposed into three tasks: tackling machine learning fairness issues originating from heterogeneous human behaviors (Task 1) and from biased behavior annotations (Task 2), and designing fair machine learning methods that do not require sensitive attributes (Task 3), for both centralized and federated learning. This work will provide possible solutions to mitigate bias in human behavior understanding systems, reducing barriers to access and alleviating systemic racism, discrimination, and unfair processes.
Audiences: Everyone Is Invited
Contact: Lizsl De Leon