University Calendar
Events for September
-
PhD Defense - Sung-Han Lin
Mon, Sep 11, 2017 @ 01:00 PM - 03:00 PM
Thomas Lord Department of Computer Science
University Calendar
PhD Candidate: Sung-Han Lin
Committee: Leana Golubchik (Chair), Fei Sha and Konstantinos Psounis
Title: Distributed Resource Management for QoS-Aware Service Provision
Time: September 11 (Monday) 1:00-3:00pm
Location: SAL 322
Abstract:
Provision of quality of service (QoS) is of significant importance to service providers, where QoS is a function of resource availability. When resources are insufficient at a particular service provider, two approaches to mitigating this problem are for that service provider to (a) limit the amount of resources allocated to its users, and (b) cooperate with other resource holders and find a reasonable way to share those resources. For instance, a private cloud can reject its customers' requests or forward some requests to public clouds (e.g., Amazon) to achieve satisfactory QoS. To this end, in addition to designing resource allocation approaches, service providers should also consider how to maximize their utilities when cooperating with other resource holders.
Motivated by cooperation among resource holders and related resource allocation problems, in this document we focus on several services and study how to allocate resources efficiently while maximizing all participants' benefit. For P2P video streaming, where the resource is the download rate for video playback, we eliminate playback pauses by adopting reduced advertisement viewing duration as a positive incentive for peers to contribute their unused download rates. For provision of on-demand compute capacity in cloud services, where the resources are virtual machines (VMs), we study the incentives that motivate small-scale clouds to share their resources in a cooperative manner, in order to achieve profitable service while maintaining customer SLAs. For co-locating machine learning training jobs, where the resources are CPU cores or GPUs, we investigate the throughput improvement of a distributed training job by studying the trade-off of using more resources, and we integrate the throughput estimation technique into scheduling mechanisms to better share the limited computing resources.
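The scheduling idea in the last case can be made concrete with a small, hypothetical sketch (not the dissertation's actual mechanism): assuming each job's throughput as a function of allocated cores is known, here a made-up saturating curve, a greedy allocator hands out cores one at a time to whichever job gains the most, illustrating the diminishing returns of giving a single job more resources.

```python
# Hypothetical sketch: greedy core allocation driven by estimated throughput
# curves with diminishing returns. Curve shape and job names are illustrative.

def estimated_throughput(cores: int, scale: float, saturation: float) -> float:
    """Toy throughput model: near-linear at first, then saturating."""
    return scale * cores / (1.0 + cores / saturation)

def greedy_allocate(jobs: dict, total_cores: int) -> dict:
    """Assign cores one at a time to the job with the largest marginal gain."""
    alloc = {name: 0 for name in jobs}
    for _ in range(total_cores):
        def gain(name):
            scale, sat = jobs[name]
            return (estimated_throughput(alloc[name] + 1, scale, sat)
                    - estimated_throughput(alloc[name], scale, sat))
        best = max(jobs, key=gain)
        alloc[best] += 1
    return alloc

if __name__ == "__main__":
    # (scale, saturation): job_b saturates earlier, so it receives fewer cores.
    jobs = {"job_a": (10.0, 16.0), "job_b": (10.0, 4.0)}
    print(greedy_allocate(jobs, total_cores=12))
```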
Location: Henry Salvatori Computer Science Center (SAL) - 322
Audiences: Everyone Is Invited
Contact: Lizsl De Leon
-
PhD Defense - Matthias Hernandez
Fri, Sep 29, 2017 @ 01:00 PM - 02:30 PM
Thomas Lord Department of Computer Science
University Calendar
Committee:
Gerard Medioni (CS, chair)
Aiichiro Nakano (CS)
Antonio Ortega (EE)
PhD Candidate: Matthias Hernandez
Venue:
PHE 223 - Friday, September 29th 1PM-2:30PM
Title:
3D inference and registration with application to retinal image analysis and face analysis
Abstract:
Image registration is a fundamental topic in image analysis, with applications in tracking, biometrics, medical imaging, and 3D reconstruction. It consists of aligning two or more images of the same scene taken under different conditions, such as from different viewpoints, with different sensors, or at different times. Similarly, 2D/3D registration aims at aligning captured 2D images with a 3D model.
In this talk, we study registration problems in challenging cases in which traditional methods do not provide satisfactory results. We show that even weak prior knowledge of the 3D structure provides reliable information that can be used for accurate registration. Specifically, we focus on two cases: 2D/3D multimodal retinal imaging and 3D face reconstruction from low-resolution videos.
For retinal image registration, we propose an integrated framework for registering an arbitrary number of images of different modalities, including a 3D volume. We propose a generic method to extract salient line structures in many image modalities, based on dense tensor voting, and a robust registration framework for multiple images. Our approach can handle large variations across modalities and is evaluated on real-world retinal images with five modalities per eye.
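As a rough illustration of the robust-alignment step only (not the dense tensor voting or the multi-image framework described above), the sketch below estimates a 2D affine transform between two images from noisy point correspondences with RANSAC; all data here is synthetic.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src -> dst (N x 2 arrays)."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])              # N x 3
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)   # 3 x 2
    return params

def ransac_affine(src, dst, n_iters=500, inlier_thresh=3.0, seed=0):
    """Robustly estimate an affine transform from noisy correspondences."""
    rng = np.random.default_rng(seed)
    best_params, best_inliers = None, 0
    ones = np.ones((src.shape[0], 1))
    for _ in range(n_iters):
        idx = rng.choice(src.shape[0], size=3, replace=False)
        params = fit_affine(src[idx], dst[idx])
        pred = np.hstack([src, ones]) @ params
        inliers = np.sum(np.linalg.norm(pred - dst, axis=1) < inlier_thresh)
        if inliers > best_inliers:
            best_params, best_inliers = params, inliers
    return best_params, best_inliers

# Toy usage: correspondences related by a known affine map plus noise/outliers.
rng = np.random.default_rng(1)
src = rng.uniform(0, 500, size=(200, 2))
true = np.array([[0.9, -0.1, 20.0], [0.1, 0.95, -15.0]]).T   # 3 x 2
dst = np.hstack([src, np.ones((200, 1))]) @ true + rng.normal(0, 1.0, (200, 2))
params, inliers = ransac_affine(src, dst)
print(inliers, "inliers of", len(src))
```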
For 3D face modeling, we propose to constrain traditional Structure from Motion (SfM) with a face shape prior to guide the correspondence finding process. We initialize a 3D face model on coarse facial landmarks. We perform 3D reconstruction by maximizing photometric consistency across the video over the 3D shape, camera poses, and facial expressions. We compare our method to several state-of-the-art methods and show that it generates more accurate reconstructions.
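A minimal sketch of the photometric-consistency idea under heavy simplifying assumptions (orthographic projection, per-frame 2D shifts instead of full camera poses and expressions, and random tensors in place of real video frames); it only shows the structure of the loss being optimized, not the author's pipeline.

```python
import torch
import torch.nn.functional as F

def sample_intensity(image, uv):
    """Bilinearly sample a 1 x 1 x H x W image at pixel coordinates uv (N x 2)."""
    h, w = image.shape[-2:]
    gx = 2.0 * uv[:, 0] / (w - 1) - 1.0   # x -> [-1, 1]
    gy = 2.0 * uv[:, 1] / (h - 1) - 1.0   # y -> [-1, 1]
    grid = torch.stack([gx, gy], dim=-1).view(1, 1, -1, 2)
    return F.grid_sample(image, grid, align_corners=True).view(-1)

# Synthetic stand-ins for video frames and a coarse landmark-based face shape.
torch.manual_seed(0)
frames = [torch.rand(1, 1, 64, 64) for _ in range(3)]
verts = torch.rand(68, 3) * 40 + 12                           # coarse 3D vertices
offset = torch.zeros_like(verts, requires_grad=True)          # refined shape
poses = [torch.zeros(2, requires_grad=True) for _ in frames]  # per-frame 2D shift

opt = torch.optim.Adam([offset] + poses, lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    # Orthographic projection: drop the z coordinate, apply a per-frame shift.
    projections = [(verts + offset)[:, :2] + pose for pose in poses]
    samples = [sample_intensity(img, uv) for img, uv in zip(frames, projections)]
    # Photometric consistency: the same vertex should look alike in all frames.
    ref = samples[0]
    loss = sum(((s - ref) ** 2).mean() for s in samples[1:])
    loss.backward()
    opt.step()
```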
To assess the discriminability of the reconstructed models, we develop an end-to-end 3D-3D facial recognition algorithm. We leverage existing deep learning networks trained on 2D images and fine-tune them on images generated by orthogonal projection of 3D data. We show that, despite using only limited amounts of 3D data, our method provides excellent recognition results while being significantly more scalable than state-of-the-art methods.
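A hedged sketch of that fine-tuning strategy: a 2D-image-pretrained backbone (torchvision's ResNet-18, assuming torchvision >= 0.13 for the weights API) is adapted to depth images produced by orthogonal projection of 3D point clouds. The identity count, the rendering, and the data are placeholders, not the dissertation's setup.

```python
import torch
import torch.nn as nn
import torchvision.models as models

def orthographic_depth_map(points, size=224):
    """Render a 3D point cloud (N x 3) to a depth image by orthogonal projection."""
    img = torch.zeros(size, size)
    xy = points[:, :2]
    xy = (xy - xy.min(0).values) / (xy.max(0).values - xy.min(0).values + 1e-8)
    cols = (xy[:, 0] * (size - 1)).long()
    rows = (xy[:, 1] * (size - 1)).long()
    img[rows, cols] = points[:, 2]            # duplicate pixels resolved arbitrarily
    return img.unsqueeze(0).repeat(3, 1, 1)   # 3 channels for an ImageNet model

# Start from a 2D-image-pretrained backbone and replace the classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 100)   # 100 hypothetical identities

# Fine-tune only the new head plus the last residual block on projected 3D data.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith(("fc", "layer4"))
opt = torch.optim.SGD((p for p in model.parameters() if p.requires_grad),
                      lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

# One toy training step on synthetic point clouds (stand-ins for 3D face scans).
batch = torch.stack([orthographic_depth_map(torch.rand(2048, 3)) for _ in range(4)])
labels = torch.randint(0, 100, (4,))
opt.zero_grad()
loss_fn(model(batch), labels).backward()
opt.step()
```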
Finally, while excellent recognition results can be achieved with laser-scanned 3D data, we have observed that reconstructed facial 3D models cannot be relied on for recognition purposes. We analyze what level of accuracy is required to enable reliable 3D face recognition, and which factors impact recognition from reconstructed data.
Location: Charles Lee Powell Hall (PHE) - 223
Audiences: Everyone Is Invited
Contact: Lizsl De Leon
-
PhD Defense - Anh Tran
Fri, Sep 29, 2017 @ 02:30 PM - 04:00 PM
Thomas Lord Department of Computer Science
University Calendar
* PhD Candidate: Anh Tran
* Committee:
Gerard Medioni (chair)
Ram Nevatia
Sandeep Gupta (outside)
* Title: Face Recognition and 3D Face Modeling from Images in the Wild.
* Time: Sep 29 (Fri) 2:30-4:00pm
* Room: PHE 223
* Abstract:
Face recognition and 3D face modeling are key problems in computer vision, with many applications in biometrics, human-computer interaction, surveillance, entertainment, and more. While we have witnessed improvements over the last few years, open problems remain when images and videos in the wild are considered. In this dissertation, we discuss how to address these problems effectively, as well as the connection between them. First, face recognition must address appearance changes due to 3D factors such as head pose, face shape, and expression. Second, 3D face modeling must recover a stable and recognizable 3D shape.
The first part of this thesis focuses on face recognition in the wild. We show that by coupling 3D face augmentation with a state-of-the-art 2D face recognition engine, we can greatly boost recognition accuracy. Our 3D face augmentation synthesizes facial images with different 3D head poses, 3D shapes, and expressions, thereby making our system robust to facial variations introduced by these factors. Our end-to-end system shows state-of-the-art performance on the latest challenging face recognition benchmarks. We also present additional novel techniques to enhance the proposed system, from speeding up rendering and matching to a complete landmark-free pipeline, which make our system scalable and robust to very large training data and further break in-the-wild recognition records.
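A small sketch of the pose-synthesis part of such 3D augmentation, under the simplifying assumption of an orthographic camera and a bare point set standing in for a textured, fitted 3D face model; the yaw angles are illustrative only.

```python
import numpy as np

def yaw_rotation(degrees: float) -> np.ndarray:
    """Rotation matrix about the vertical (yaw) axis."""
    t = np.radians(degrees)
    return np.array([[np.cos(t), 0.0, np.sin(t)],
                     [0.0,       1.0, 0.0],
                     [-np.sin(t), 0.0, np.cos(t)]])

def synthesize_poses(vertices: np.ndarray, yaw_angles=(-45, -22, 0, 22, 45)):
    """Project a 3D face point set at several yaw angles (orthographic camera)."""
    views = {}
    for yaw in yaw_angles:
        rotated = vertices @ yaw_rotation(yaw).T
        views[yaw] = rotated[:, :2]     # drop depth as a stand-in for rendering
    return views

# Toy "face": random vertices stand in for a fitted 3D face model.
views = synthesize_poses(np.random.rand(5000, 3) - 0.5)
print({yaw: v.shape for yaw, v in views.items()})
```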
Inferring the accurate 3D geometry of a face from one or more images is a challenging problem. In the second part of this thesis, we present robust methods to build 3D morphable face models (3DMM) and validate their quality with face recognition tests. First, we establish the state of the art for traditional analysis-by-synthesis 3DMM methods. In particular, we investigate the impact of multiple input images on the 3D modeling results, in terms of both accuracy and distinctiveness. Building on this, we generate a large set of 3D "ground-truth" faces and train a convolutional neural network (CNN) to regress 3D shape and texture directly from any single input photo. The 3D estimates produced by our CNN surpass state-of-the-art 3D reconstruction accuracy. Our CNN also shows the first competitive face recognition results on face recognition benchmarks using 3D face shapes as representations, rather than the somewhat opaque deep features used by other systems. Finally, we introduce additional techniques to push 3D face reconstruction to the next level, estimating expression in 3D as well as fine-grained facial details, aiming toward laser-scan quality in the wild.
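A minimal sketch of the regression setup, assuming the 3DMM coefficient targets have already been computed (here random tensors stand in for the fitted multi-image targets): a tiny CNN maps a single face image to concatenated shape and texture coefficients and is trained with an L2 loss. The dimensions and architecture are illustrative, not the network used in the dissertation.

```python
import torch
import torch.nn as nn

class CoefficientRegressor(nn.Module):
    """Tiny CNN that maps one face image to 3DMM shape + texture coefficients."""
    def __init__(self, n_shape=99, n_texture=99):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(64, n_shape + n_texture)

    def forward(self, x):
        return self.head(self.features(x))

# One training step against precomputed "ground-truth" coefficients
# (random tensors here stand in for the fitted 3DMM targets and face crops).
model = CoefficientRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
images = torch.rand(8, 3, 224, 224)
targets = torch.randn(8, 198)
opt.zero_grad()
loss = nn.functional.mse_loss(model(images), targets)
loss.backward()
opt.step()
```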
Location: Charles Lee Powell Hall (PHE) - 223
Audiences: Everyone Is Invited
Contact: Lizsl De Leon