PhD Defense - Anh Tran
Fri, Sep 29, 2017 @ 02:30 PM - 04:00 PM
* PhD Candidate: Anh Tran
* Committee: Gerard Medioni (chair), Sandeep Gupta (outside)
* Title: Face Recognition and 3D Face Modeling from Images in the Wild
(I assume that I can change my thesis title from the one registered for the hooding ceremony)
* Time: Sep 29 (Fri) 2:30-4:00pm
* Room: PHE 223
Face recognition and 3D face modeling are key problems in computer vision, with many applications in biometrics, human-computer interaction, surveillance, entertainment, and beyond. While we have witnessed improvements over the last few years, open problems remain when images and videos in the wild are considered. In this dissertation, we discuss how to address these problems effectively, as well as the connection between them. First, face recognition must handle appearance changes due to 3D factors such as head pose, face shape, and expression. Second, 3D face modeling must recover a stable and recognizable 3D shape.
The first part of this thesis focuses on face recognition in the wild. We show that by coupling 3D face augmentation with a state-of-the-art 2D face recognition engine, we can greatly boost recognition accuracy. Our 3D face augmentation synthesizes facial images with different 3D head poses, 3D shapes, and expressions, thereby making our system robust to facial variations introduced by these factors. Our end-to-end system achieves state-of-the-art performance on the latest challenging face recognition benchmarks. We also present additional novel techniques that enhance the proposed system, from speeding up rendering and matching to a complete landmark-free pipeline; these make our system scalable and robust on very large training sets and further break in-the-wild recognition records.
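The core of the pose-augmentation idea can be illustrated with a minimal sketch: given a 3D face shape, render it under several yaw angles to synthesize new training views. The code below is a toy illustration, not the dissertation's pipeline; it uses a weak-perspective camera (rotate, then drop depth) on a handful of hypothetical 3D landmarks, whereas the real system renders full textured meshes.

```python
import numpy as np

def rotation_y(yaw):
    # 3x3 rotation about the vertical axis (yaw in radians)
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def augment_poses(vertices, yaws):
    # vertices: (N, 3) 3D face shape; returns one (N, 2) projection per
    # yaw angle, simulating synthesized views at new head poses
    views = []
    for yaw in yaws:
        rotated = vertices @ rotation_y(yaw).T
        views.append(rotated[:, :2])  # weak-perspective: drop depth
    return views

# toy "face": three hypothetical 3D landmarks
face = np.array([[0.0, 0.0, 1.0],    # nose tip (protrudes toward camera)
                 [-1.0, 1.0, 0.0],   # left eye
                 [1.0, 1.0, 0.0]])   # right eye
views = augment_poses(face, [0.0, np.pi / 4, np.pi / 2])
```

At zero yaw the projection is just the frontal (x, y) coordinates; at a 90° yaw the nose tip, which protrudes along the depth axis, swings to the side of the projected view, mimicking a profile rendering.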
Inferring the accurate 3D geometry of a face from one or more images is a challenging problem. In the second part of this thesis, we present robust methods to build 3D morphable face models (3DMM) and validate their quality with face recognition tests. First, we establish the state of the art among traditional analysis-by-synthesis 3DMM methods. In particular, we investigate the impact of multiple input images on the 3D modeling results, in terms of both accuracy and distinctiveness. Building on this observation, we generate a large set of 3D "ground-truth" faces and train a convolutional neural network (CNN) to regress 3D shape and texture directly from any single input photo. The 3D estimates produced by our CNN surpass the state-of-the-art 3D reconstruction accuracy. Our CNN also shows the first competitive results on face recognition benchmarks using 3D face shapes as representations, rather than the somewhat opaque deep features used by other systems. Finally, we introduce additional techniques that push 3D face reconstruction to the next level, estimating expression in 3D as well as fine-grained facial details, aiming towards laser-scan quality in the wild.
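The quantity the CNN regresses can be made concrete with a small sketch of the underlying 3DMM: a face shape is the model's mean shape plus a linear combination of basis vectors, so estimating a face reduces to estimating the coefficient vector. The sketch below uses toy random data, and least squares stands in for the CNN; the sizes, names, and the recovery step are illustrative assumptions, not the dissertation's actual model.

```python
import numpy as np

# A 3DMM represents a face as a mean shape plus a linear combination of
# shape basis vectors (e.g. from PCA over scans); the CNN regresses the
# coefficient vector alpha directly from a photo. Toy dimensions below.
rng = np.random.default_rng(0)
n_vertices, n_basis = 50, 5
mean_shape = rng.normal(size=3 * n_vertices)        # flattened (x, y, z) mean
basis = rng.normal(size=(3 * n_vertices, n_basis))  # shape basis vectors

def decode(alpha):
    # Reconstruct a flattened 3D face from 3DMM coefficients
    return mean_shape + basis @ alpha

# A "ground-truth" face generated from known coefficients, analogous to
# the synthesized training targets used to supervise the regressor
alpha_true = rng.normal(size=n_basis)
target = decode(alpha_true)

# The regressor's job, played here by least squares instead of a CNN:
# recover coefficients whose decoded shape matches the target face
alpha_hat, *_ = np.linalg.lstsq(basis, target - mean_shape, rcond=None)
```

Because the decoded shape is a compact, interpretable coefficient vector rather than an opaque deep feature, two such vectors can be compared directly, which is what makes 3DMM coefficients usable as a recognition representation.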
Location: Charles Lee Powell Hall (PHE) - 223
Audience: Everyone Is Invited
Contact: Lizsl De Leon