PhD Candidate: Ayush Jaiswal
Date: Friday, February 28, 2020
Time: 1:30 PM - 3:30 PM
Location: SAL 213
Committee: Premkumar Natarajan (Chair), Ram Nevatia, Cauligi S. Raghavendra
Title: Invariant Representation Learning for Robust and Fair Predictions
Learning representations that are invariant to nuisance factors of data improves the robustness of machine learning models and promotes fairness with respect to factors that encode biasing information. This view of invariance has recently been adopted for deep neural networks (DNNs), which learn latent representations of data by design. Numerous methods for invariant representation learning in DNNs have emerged in the recent literature, but the research problem remains challenging: existing methods achieve only partial invariance or fall short of optimal performance on the prediction tasks for which the DNNs are trained.
This thesis presents novel approaches for inducing invariant representations in DNNs by effectively separating predictive factors of data from undesired nuisances and biases. The presented methods improve the predictive performance and the fairness of DNNs through increased invariance to undesired factors. Empirical evaluation on a diverse collection of benchmark datasets shows that the presented methods achieve state-of-the-art performance.
Application of the invariance methods to real-world problems is also presented, demonstrating their practical utility. Specifically, the presented methods improve nuisance-robustness in presentation attack detection and automatic speech recognition, fairness in face-based analytics, and generalization in low-data and semi-supervised learning settings.
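The core idea of separating predictive factors from nuisances can be illustrated with a deliberately simple linear sketch (this is an illustrative assumption, not the thesis's actual method, which learns such invariance in DNNs): if a feature vector contains a known nuisance direction, projecting onto the orthogonal complement of that direction yields a representation unchanged by the nuisance factor.

```python
import numpy as np

# Hypothetical linear illustration of nuisance invariance (a sketch only;
# the thesis methods learn invariant representations in DNNs, they do not
# assume a known nuisance direction).
rng = np.random.default_rng(0)

nuisance_dir = np.array([0.0, 0.0, 1.0])               # unit vector carrying the nuisance
P = np.eye(3) - np.outer(nuisance_dir, nuisance_dir)   # projector that removes it

signal = rng.normal(size=3)
x_a = signal + 0.5 * nuisance_dir   # same content, weak nuisance
x_b = signal + 5.0 * nuisance_dir   # same content, strong nuisance

z_a, z_b = P @ x_a, P @ x_b
print(np.allclose(z_a, z_b))  # → True: the representation is nuisance-invariant
```

In a DNN setting, the projection is replaced by a learned encoder trained so that the representation retains predictive information while discarding the nuisance or bias factor.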
Audiences: Everyone Is Invited
Contact: Lizsl De Leon