University of Southern California

Events Calendar



University Calendar
Events for April

  • Big Data and Data Science: Some Hype but Real Opportunities

    Thu, Apr 02, 2015 @ 05:00 PM - 06:00 PM

    Thomas Lord Department of Computer Science

    Big Data and Data Science: Some Hype but Real Opportunities

    IMSC Seminar – Host: Cyrus Shahabi
    April 2 - 5:00-6:00pm
    SAL-101

    Speaker: Michael Franklin, UC Berkeley Computer Science

    Abstract
    Data is all the rage across industry and across campuses. While it may be tempting to dismiss the buzz as just another spin of the hype cycle, there are substantial shifts and realignments underway that are fundamentally changing how Computer Science, Statistics, and virtually all subject areas will be taught, researched, and perceived as disciplines. In this talk I will give my personal perspectives on this new landscape, based on my experiences organizing a large, industry-engaged academic Computer Science research project (the AMPLab), helping to establish a campus-wide Data Science research initiative (the Berkeley Institute for Data Science), and serving on a campus task force charged with mapping out Data Science Education for all undergraduates at Berkeley. I will make the case that there are real opportunities across campus in both education and research, and that Data Science should be viewed as an emerging discipline in its own right.

    Bio
    Michael Franklin is the Thomas M. Siebel Professor of Computer Science and Chair of the Computer Science Division at the University of California, Berkeley. Prof. Franklin is also the Director of the Algorithms, Machines, and People Laboratory (AMPLab) at UC Berkeley. The AMPLab currently works with 27 industrial sponsors, including founding sponsors Amazon Web Services, Google, and SAP. AMPLab is well known for creating a number of popular systems in the open-source Big Data ecosystem, including Spark, Mesos, GraphX, and MLlib, all parts of the Berkeley Data Analytics Stack (BDAS). Prof. Franklin is a co-PI and Executive Committee member for the Berkeley Institute for Data Science, part of a multi-campus initiative to advance Data Science Environments. He is an ACM Fellow, a two-time winner of the ACM SIGMOD "Test of Time" award, has won several "Best Paper" awards and two CACM Research Highlights selections, and is a recipient of the Outstanding Advisor Award from the Computer Science Graduate Student Association at Berkeley.

    Location: Henry Salvatori Computer Science Center (SAL) - 101

    WebCast Link: https://bluejeans.com/952662854

    Audiences: Everyone Is Invited

    Contact: Lizsl De Leon

  • PhD Defense - Zhuoliang Kang

    Fri, Apr 03, 2015 @ 10:00 AM - 12:00 PM

    Thomas Lord Department of Computer Science

    Ph.D. candidate: Zhuoliang Kang

    Title: Accurate 3D Model Acquisition from Imagery Data

    Date: Friday, April 3, 10:00 AM

    Location: EEB 131A

    Committee:
    Prof. Gerard Medioni (chair)
    Prof. Hao Li
    Prof. Alexander Sawchuk (outside member)

    Abstract:

    Acquisition of 3D models from 2D imagery has been essential for various applications. In particular, this dissertation investigates two important application scenarios: city-scale 3D reconstruction from aerial imagery and general 3D model acquisition with a commodity camera.

    The first part of this dissertation explores an online solution to the problem. We propose an approach to camera pose estimation and dense reconstruction from Wide Area Aerial Surveillance (WAAS) videos captured by an airborne platform. Our approach works in an online fashion: it incrementally updates a sparse 3D map and estimates the camera pose as each new frame arrives; depth maps of selected key frames are computed using a variational method and integrated to produce a full 3D model via volumetric reconstruction. In practice, WAAS videos are usually captured using a multi-camera system. We parallelize our approach on multiple GPUs to efficiently handle the multi-camera imagery. The approach is also extended to progressive 3D scanning with a hand-held camera.

    In many scenarios, an online approach is not a necessity and accuracy takes priority over efficiency. In the second part, we present two offline solutions. The first generates a dense 3D model based on depth-map fusion, combining variational multi-scale depth-map estimation with volumetric reconstruction. We also present MeshRecon, a mesh-based offline system composed of three modules: a dense point cloud is generated using a multi-resolution plane-sweep method; an initial mesh model is extracted from the point cloud via global optimization considering the visibility information of all images; the mesh model is then iteratively refined to capture structural details by optimizing photometric consistency and spatial regularization. The major processes are also parallelized on the GPU for efficiency. We validate its performance on real-world objects of different types at different scales in both indoor and outdoor environments. For the aerial imagery case, we evaluate the approach on several real-world aerial imagery datasets, each covering an urban scenario of several square kilometers. Quantitative results show that the reconstructed model is highly accurate, with mean error smaller than 1 meter over the entire city. Based on city 3D models generated at different times, we present a system for city-scale geometric change detection by performing comparisons at the 3D geometry level. Our system is able to detect geometric changes at different scales, ranging from a building cluster to vegetation changes, with high accuracy. (An illustrative sketch of the volumetric depth-map fusion step follows this announcement.)

    Location: Hughes Aircraft Electrical Engineering Center (EEB) - 131A

    Audiences: Everyone Is Invited

    Contact: Lizsl De Leon

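    The abstract above describes integrating depth maps of selected key frames into a full 3D model via volumetric reconstruction. As a rough illustration of that fusion step (not the dissertation's actual implementation), the Python sketch below averages truncated signed distances (TSDF) in a voxel grid; the grid resolution, truncation distance, pinhole camera model, and array layout are assumptions made for illustration.

    import numpy as np

    def integrate_depth_map(tsdf, weights, depth, K, T_wc,
                            voxel_size=0.5, origin=(0.0, 0.0, 0.0), trunc=1.5):
        """Fuse one depth map into a TSDF volume (illustrative sketch only).

        tsdf, weights : (X, Y, Z) float arrays holding the running signed-distance
                        average and the per-voxel integration weights.
        depth         : (H, W) depth map in meters, 0 where no measurement exists.
        K             : 3x3 camera intrinsics; T_wc : 4x4 world-to-camera pose.
        """
        X, Y, Z = tsdf.shape
        ii, jj, kk = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z), indexing="ij")
        # World coordinates of every voxel center.
        pts_w = np.stack([ii, jj, kk], axis=-1).reshape(-1, 3) * voxel_size + np.asarray(origin)

        # Transform voxels into the camera frame and project with a pinhole model.
        pts_c = (T_wc[:3, :3] @ pts_w.T + T_wc[:3, 3:4]).T
        z = pts_c[:, 2]
        z_safe = np.where(z > 1e-6, z, 1e-6)
        u = np.round(pts_c[:, 0] * K[0, 0] / z_safe + K[0, 2]).astype(int)
        v = np.round(pts_c[:, 1] * K[1, 1] / z_safe + K[1, 2]).astype(int)

        H, W = depth.shape
        valid = (z > 1e-6) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
        d_meas = np.zeros_like(z)
        d_meas[valid] = depth[v[valid], u[valid]]
        valid &= d_meas > 0

        # Truncated signed distance along the viewing ray, folded into a running average.
        sdf = np.clip(d_meas - z, -trunc, trunc)
        flat_t, flat_w = tsdf.reshape(-1), weights.reshape(-1)
        flat_t[valid] = (flat_t[valid] * flat_w[valid] + sdf[valid]) / (flat_w[valid] + 1.0)
        flat_w[valid] += 1.0
        return tsdf, weights

    In the dissertation's pipeline such a volume would be populated from the variationally estimated key-frame depth maps and then turned into a surface; the sketch leaves out view-dependent weighting, GPU parallelization, and surface extraction.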
  • PhD Defense - Tung Sing Leung

    Thu, Apr 09, 2015 @ 09:30 AM - 11:30 AM

    Thomas Lord Department of Computer Science

    Date: Thursday, April 9th, 9:30 AM
    Location: SAL 213
    Title: Outdoor Visual Navigation Aid For The Blind In Dynamic Environments
    PhD Candidate: Tung-Sing Leung

    Committee:
    Prof. Gerard G Medioni (Chair)
    Prof. Laurent Itti
    Prof. James D. Weiland (outside member)

    Abstract:
    This thesis proposes a visual navigation aid for the blind. Our goal is to develop a wearable system to help the visually impaired navigate in highly dynamic outdoor environments. The proposed solution uses both visual sensing and existing maps available online. Our work focuses on two parts: visual odometry (VO) and localization. We propose different methods to compute visual odometry even in cluttered environments, using either a wearable stereo camera or a smartphone. For the stereo camera, instead of computing egomotion from 3D point correspondences in consecutive frames, we propose to find the ground plane, then decompose the 6-DoF egomotion into a motion of the ground plane and a planar motion on the ground plane. The ground plane is estimated at each frame either by analysis of the disparity array or approximated from inertial measurement unit (IMU) readings. We have extended our visual odometry to a monocular system so that the proposed framework is applicable to smartphones, which are more accessible than stereo cameras. To further improve the accuracy of the visual odometry and correct the drift caused by dead reckoning during long navigation, we combine visual odometry with the semantic information available in the map to estimate the global coordinates of the walking user. The motion estimates are fed into a Monte Carlo Localization framework, which localizes the user by matching the local motion trajectory against the shape of the street network found in the map. We validated our system on real scenarios involving hours of walking in both open terrain and urban environments. Experimental results show that our method not only corrects the cumulative drift error but also manages to recover from temporary loss. (An illustrative sketch of the Monte Carlo Localization step follows this announcement.)

    Location: Henry Salvatori Computer Science Center (SAL) - 213

    Audiences: Everyone Is Invited

    Contact: Lizsl De Leon

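    The abstract above feeds visual-odometry estimates into a Monte Carlo Localization framework that matches the walked trajectory against the street network found in the map. The particle-filter sketch below illustrates that idea only; the map query distance_to_nearest_street, the noise levels, and the resampling rule are assumptions made for illustration and are not taken from the thesis.

    import numpy as np

    def mcl_step(particles, weights, vo_step, distance_to_nearest_street,
                 motion_noise=(0.3, 0.05), sigma_street=5.0):
        """One Monte Carlo Localization update driven by a visual-odometry step.

        particles : (N, 3) array of pose hypotheses [x, y, heading] in map coordinates.
        vo_step   : (forward_distance, heading_change) reported by visual odometry.
        distance_to_nearest_street(x, y) -> meters; assumed map-query helper.
        """
        n = len(particles)
        dist, dtheta = vo_step

        # Motion update: apply the odometry step to every particle with additive noise.
        noisy_dist = dist + np.random.normal(0.0, motion_noise[0], n)
        noisy_dtheta = dtheta + np.random.normal(0.0, motion_noise[1], n)
        particles[:, 2] += noisy_dtheta
        particles[:, 0] += noisy_dist * np.cos(particles[:, 2])
        particles[:, 1] += noisy_dist * np.sin(particles[:, 2])

        # Measurement update: particles far from any street are down-weighted,
        # which is how the trajectory shape gets matched to the street network.
        d = np.array([distance_to_nearest_street(x, y) for x, y, _ in particles])
        weights *= np.exp(-0.5 * (d / sigma_street) ** 2)
        weights /= weights.sum() + 1e-12

        # Resample when the effective sample size collapses.
        if 1.0 / np.sum(weights ** 2) < n / 2:
            idx = np.random.choice(n, size=n, p=weights)
            particles, weights = particles[idx].copy(), np.full(n, 1.0 / n)
        return particles, weights

    Because every particle is pushed by the same odometry step but scored against the street map, hypotheses whose accumulated trajectories no longer fit the street geometry die out, which is the drift-correction behavior the abstract describes.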
  • PhD Defense - Randolph Voorhies

    Wed, Apr 22, 2015 @ 02:00 PM - 04:00 PM

    Thomas Lord Department of Computer Science

    PhD Candidate: Randolph Voorhies

    Date: 4/22
    Time: 2-4pm
    Location: HNB 100

    Committee:
    Laurent Itti (Chair)
    Gaurav Sukhatme
    James Weiland


    Title:
    Efficient SLAM For Scanning LiDAR Sensors Using Combined Plane and Point Features

    Abstract:
    This work presents a novel SLAM algorithm for scanning (Velodyne-style) LiDAR sensors. A Hough transform algorithm is first derived which exploits the unique geometry of these sensors to detect planar features, and it is then shown how these features can be matched over a sequence of scans to reconstruct the path of the sensor. A mathematical framework is then developed to track how well constrained these sequential alignment problems are, and to detect when they become under-constrained. A method is then presented which determines a minimal set of non-planar features to be extracted from scenes to achieve full constraint within this framework. Finally, all of these tools are combined into an online SLAM system which is able to close the loop on a variety of indoor and outdoor datasets without the use of odometry, GPS, or an IMU. Both the frame-to-frame alignment and the full SLAM algorithm are compared to state-of-the-art methods, and superior performance is shown for both. (An illustrative sketch of Hough-based plane voting follows this announcement.)

    Location: Hedco Neurosciences Building (HNB) - 100

    Audiences: Everyone Is Invited

    Contact: Lizsl De Leon

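    The abstract above starts from a Hough transform that detects planar features in scanning-LiDAR data. The sketch below shows a generic Hough voting scheme over plane parameters (a discretized unit normal plus an offset); it ignores the Velodyne-specific scan geometry the thesis exploits, and the parameterization, bin sizes, and vote threshold are assumptions made for illustration.

    import numpy as np

    def hough_planes(points, n_theta=36, n_phi=18, rho_res=0.2, rho_max=50.0, min_votes=200):
        """Detect dominant planes in an (N, 3) point cloud by Hough voting (illustrative).

        A plane is parameterized as dot(n(theta, phi), p) = rho, with the unit normal
        discretized on a (theta, phi) grid and rho binned at rho_res meters. Returns a
        list of (theta, phi, rho, votes) for accumulator cells with at least min_votes.
        """
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        phis = np.linspace(-np.pi / 2, np.pi / 2, n_phi)
        # Unit normals for every (theta, phi) cell, flattened to (n_theta * n_phi, 3).
        normals = np.stack([
            np.cos(phis)[None, :] * np.cos(thetas)[:, None],
            np.cos(phis)[None, :] * np.sin(thetas)[:, None],
            np.broadcast_to(np.sin(phis)[None, :], (n_theta, n_phi)),
        ], axis=-1).reshape(-1, 3)

        n_rho = int(2 * rho_max / rho_res)
        acc = np.zeros(n_theta * n_phi * n_rho, dtype=np.int32)

        # Every point casts one vote per candidate normal, at the offset rho it implies.
        rho = points @ normals.T                                   # (N, n_theta * n_phi)
        rho_bin = np.clip(((rho + rho_max) / rho_res).astype(int), 0, n_rho - 1)
        cell = np.arange(n_theta * n_phi)
        np.add.at(acc, (cell[None, :] * n_rho + rho_bin).ravel(), 1)

        acc = acc.reshape(n_theta, n_phi, n_rho)
        peaks = np.argwhere(acc >= min_votes)
        return [(thetas[i], phis[j], -rho_max + (k + 0.5) * rho_res, int(acc[i, j, k]))
                for i, j, k in peaks]

    A real detector would follow the voting with non-maximum suppression in the accumulator and a least-squares refinement of each plane on its inlier points; the thesis additionally derives the transform in the sensor's own scan geometry, which this generic version does not attempt.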
  • PhD Defense - Weijun Wang

    Thu, Apr 23, 2015 @ 02:00 PM - 03:30 PM

    Thomas Lord Department of Computer Science

    University Calendar


    PhD defense: Weijun Wang

    Title: Tracking Multiple Articulating Humans from a Single Camera

    Time: 2:00 PM - 3:30 PM
    Location: Powell Hall of Engineering (PHE) 631
    Dissertation Committee:
    Chair: Professor Ram Nevatia
    Suya You
    C.-C. Jay Kuo

    Abstract:

    Monocular multi-target tracking aims at locating multiple targets, maintaining their identities across frames, and estimating their motion trajectories from a single camera view; it is an important problem with many applications such as automatic surveillance and video retrieval. In particular, humans are often the targets of greatest interest, as daily activities and events in real scenes usually involve human participants. Even though some fairly significant advances have been made on pedestrian tracking in recent years, the problem of tracking multiple humans towards higher-level reasoning is still far from solved. For example, humans may move in groups in real scenes, and important social context features cannot be exploited under the usual simplification that targets' trajectories are independent. Most importantly, unlike well-studied pedestrian detection, articulated human detection remains a challenging task, which makes existing pedestrian tracking approaches less effective on videos with multiple articulating humans. In this work, we focus on exploring important online-learned appearance and social context cues to improve tracking performance on pedestrians as well as articulated humans.

    As pedestrian tracking is the foundation of the proposed approach, we first propose to improve its performance by considering social context. We propose a general quadratic formulation to incorporate social dependency into a global optimization problem and thereby improve multi-target tracking accuracy. To ensure tracking efficiency, we show how to convert the new binary quadratic programming formulation into a semidefinite programming problem under convex relaxation, which can be efficiently solved by off-the-shelf methods. With the new formulation, we propose to consider a few simple, common trajectory dependency factors, which can be efficiently inferred online to improve tracking performance, especially in semi-crowded scenarios. In scenarios where no trajectory dependency can be exploited, our solution reduces to, and is as efficient as, the classic linear optimization formulations. Experimental results on standard datasets show the advantages of our approach over the state of the art. Moreover, this new formulation provides a general framework for incorporating various useful higher-order information to improve multi-target tracking. (An illustrative sketch of this convex relaxation follows this announcement.)

    To address the problem of tracking multiple articulating humans from a single camera, we propose a hybrid framework. Our method combines an offline-learned, category-level detector with an online-learned, instance-specific detector in a hybrid system. To deal with humans in large pose articulation, which cannot be reliably detected by offline-trained detectors, we propose an online-learned, instance-specific patch-based detector consisting of layered patch classifiers. With tracklets extrapolated by the online-learned detectors, we use discriminative color filters learned online to compute the appearance affinity score for further global association.

    Experimental evaluation on both standard pedestrian datasets and articulated human datasets shows significant improvement compared to state-of-the-art multi-human tracking methods.

    Location: Charles Lee Powell Hall (PHE) - 631

    Audiences: Everyone Is Invited

    Contact: Lizsl De Leon

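    The abstract above converts a binary quadratic programming formulation of the tracking objective into a semidefinite program under convex relaxation, which off-the-shelf solvers can handle. The sketch below shows the standard lifting used for such relaxations, written with cvxpy purely as a stand-in for "off-the-shelf methods"; the objective structure, constraint set, and rounding step are assumptions made for illustration rather than the dissertation's exact formulation.

    import numpy as np
    import cvxpy as cp

    def relaxed_association(Q, c):
        """Semidefinite relaxation of a binary quadratic program (illustrative sketch).

        The intended problem is: maximize x^T Q x + c^T x over x in {0, 1}^n, where Q
        could hold pairwise trajectory-dependency scores and c unary association scores.
        Lifting X ~ x x^T into one PSD matrix Y = [[1, x^T], [x, X]] yields a convex
        relaxation that standard SDP solvers accept.
        """
        n = len(c)
        Y = cp.Variable((n + 1, n + 1), PSD=True)    # lifted variable, constrained Y >> 0
        x = Y[0, 1:]                                 # relaxed assignment vector
        X = Y[1:, 1:]                                # relaxed stand-in for x x^T
        constraints = [Y[0, 0] == 1,
                       cp.diag(X) == x,              # relaxation of x_i^2 == x_i
                       x >= 0, x <= 1]
        objective = cp.Maximize(cp.trace(Q @ X) + c @ x)
        problem = cp.Problem(objective, constraints)
        problem.solve()
        return (x.value > 0.5).astype(int), problem.value   # naive rounding back to binary

    A full tracker would add the assignment constraints of the global data-association problem (for example, that each detection is used at most once) before rounding; those are omitted here to keep the sketch focused on the quadratic-to-semidefinite conversion the abstract mentions.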