Fri, Sep 29, 2017 @ 01:00 PM - 02:30 PM
Gerard Medioni (CS, chair)
Aiichiro Nakano (CS)
Antonio Ortega (EE)
PhD Candidate: Matthias Hernandez
3D inference and registration with application to retinal image analysis and face analysis
Image registration is a fundamental topic in image analysis, with applications in tracking, biometrics, medical imaging, and 3D reconstruction. It consists of aligning two or more images of the same scene taken under different conditions, such as from different viewpoints, with different sensors, or at different times. Similarly, 2D/3D registration aims at aligning captured 2D images with a 3D model.
In this talk, we study registration problems in challenging cases where traditional methods do not provide satisfactory results. We show that even weak prior knowledge of the 3D structure provides reliable information that can be used for accurate registration. Specifically, we focus on two cases: 2D/3D multimodal retinal imaging and 3D face reconstruction from low-resolution videos.
For retinal image registration, we propose an integrated framework for registering an arbitrary number of images of different modalities, including a 3D volume. We propose a generic method, based on dense tensor voting, to extract salient line structures in many image modalities, and a robust registration framework for multiple images. Our approach can handle large variations across modalities and is evaluated on real-world retinal images with five modalities per eye.
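To illustrate the line-saliency idea behind this extraction step, here is a minimal sketch. It uses a local structure-tensor accumulation as a simplified stand-in for full dense tensor voting (which propagates stick-tensor votes between pixels); the function name and parameters are hypothetical, not from the talk.

```python
import numpy as np

def line_saliency(image, radius=2):
    """Per-pixel line saliency from a structure-tensor approximation.

    NOTE: a sketch only. Full dense tensor voting casts oriented stick
    votes between pixels; here we merely accumulate local gradient
    outer products, which yields the same eigenvalue-based saliency
    measure (lambda1 - lambda2) that is high along 1D structures.
    """
    gy, gx = np.gradient(image.astype(float))

    def smooth(a, r=radius):
        # Box-filter accumulation as a stand-in for Gaussian voting.
        out = np.zeros_like(a)
        pad = np.pad(a, r, mode="edge")
        for dy in range(2 * r + 1):
            for dx in range(2 * r + 1):
                out += pad[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out / (2 * r + 1) ** 2

    jxx, jxy, jyy = smooth(gx * gx), smooth(gx * gy), smooth(gy * gy)
    # Closed-form eigenvalues of the 2x2 tensor at each pixel.
    tr = jxx + jyy
    disc = np.sqrt(np.maximum((jxx - jyy) ** 2 + 4 * jxy ** 2, 0.0))
    lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
    # Stick saliency: large when one eigenvalue dominates (a line).
    return lam1 - lam2

# Toy example: a single vertical line should light up in the map.
img = np.zeros((32, 32))
img[:, 16] = 1.0
sal = line_saliency(img)
```

Pixels adjacent to the line receive a much higher saliency than pixels in the flat background, which is the cue used to extract vessel-like curvilinear structures across modalities.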
For 3D face modeling, we propose to constrain traditional Structure from Motion (SfM) with a face shape prior that guides the correspondence-finding process. We initialize a 3D face model from coarse facial landmarks. We then perform 3D reconstruction by maximizing photometric consistency across the video over the 3D shape, camera poses, and facial expressions. We compare our method to several state-of-the-art methods and show that it generates more accurate reconstructions.
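The photometric-consistency objective can be sketched as follows: project each 3D point into every frame and penalize disagreement among the sampled intensities. This is a simplified illustration, assuming a pinhole camera and nearest-neighbor sampling; the function name and parameterization are hypothetical, and the actual method also optimizes over facial-expression parameters.

```python
import numpy as np

def photometric_cost(points3d, frames, poses, K):
    """Sum of squared deviations of each 3D point's sampled intensity
    from its mean across frames (a sketch of photometric consistency;
    lower is better, zero means perfect agreement)."""
    intensities = []
    for img, (R, t) in zip(frames, poses):
        cam = points3d @ R.T + t              # world -> camera
        uv = cam @ K.T                        # pinhole projection
        uv = uv[:, :2] / uv[:, 2:3]
        u = np.clip(np.round(uv[:, 0]).astype(int), 0, img.shape[1] - 1)
        v = np.clip(np.round(uv[:, 1]).astype(int), 0, img.shape[0] - 1)
        intensities.append(img[v, u])         # nearest-neighbor sample
    I = np.stack(intensities)                 # (n_frames, n_points)
    return float(((I - I.mean(axis=0)) ** 2).sum())

# Toy check: identical frames and poses give zero photometric error.
img = np.arange(1024, dtype=float).reshape(32, 32)
K = np.array([[10.0, 0.0, 16.0], [0.0, 10.0, 16.0], [0.0, 0.0, 1.0]])
pose = (np.eye(3), np.zeros(3))
pts = np.array([[0.0, 0.0, 1.0], [0.3, 0.2, 2.0]])
cost = photometric_cost(pts, [img, img], [pose, pose], K)
```

In the actual pipeline, a cost of this form would be minimized jointly over the shape, pose, and expression parameters rather than evaluated once.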
To assess the discriminability of the reconstructed models, we develop an end-to-end 3D-3D face recognition algorithm. We leverage existing deep learning networks trained on 2D images and fine-tune them on images generated by orthogonal projection of 3D data. We show that, despite using only small amounts of 3D data, our method provides excellent recognition results while being significantly more scalable than state-of-the-art methods.
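The projection step can be sketched as rendering the 3D face into a 2D depth image that a pretrained 2D network can consume. This is a minimal, hypothetical version (simple point splatting onto the xy-plane, no hole filling or texture channel), not the exact rendering used in the work.

```python
import numpy as np

def orthographic_depth_map(points3d, res=64):
    """Render a 3D point cloud to a depth image by orthogonal
    projection onto the xy-plane. A sketch of producing CNN-ready
    2D inputs from 3D face data; the real pipeline fine-tunes a
    pretrained 2D network on such projections."""
    pts = points3d - points3d.min(axis=0)        # shift to origin
    scale = (res - 1) / max(pts[:, :2].max(), 1e-9)
    u = np.round(pts[:, 0] * scale).astype(int)
    v = np.round(pts[:, 1] * scale).astype(int)
    depth = np.zeros((res, res))
    # Keep the frontmost (largest shifted-z) sample per pixel.
    for x, y, z in zip(u, v, pts[:, 2]):
        depth[y, x] = max(depth[y, x], z)
    return depth

# Toy usage: project a random point cloud to a 64x64 depth image.
depth = orthographic_depth_map(np.random.rand(100, 3))
```

The resulting image has the same shape as an ordinary grayscale input, which is what makes reuse of networks pretrained on 2D face images possible.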
Finally, while excellent recognition results can be achieved with laser-scanned 3D data, we observe that reconstructed facial 3D models cannot be relied on for recognition purposes. We analyze the level of accuracy required to enable reliable 3D face recognition, and the factors that impact recognition from reconstructed data.
Location: Charles Lee Powell Hall (PHE) - 223
Audiences: Everyone Is Invited
Contact: Lizsl De Leon