
Events Calendar









Events for September 29, 2017

  • Meet USC: Admission Presentation, Campus Tour, and Engineering Talk (Repeating Event)

    Fri, Sep 29, 2017

    Viterbi School of Engineering Undergraduate Admission

    Receptions & Special Events


    This half-day program is designed for prospective freshmen and their family members. Meet USC includes an information session on the university and the admission process, a student-led walking tour of campus, and a meeting with us in the Viterbi School. During the engineering session we will discuss the curriculum, research opportunities, hands-on projects, entrepreneurial support programs, and other aspects of the engineering school. Meet USC is designed to answer all of your questions about USC, the application process, and financial aid.

    Reservations are required for Meet USC. This program occurs twice, once at 8:30 a.m. and again at 12:30 p.m.

    Please make sure to check availability and register online for the session you wish to attend. Also, remember to list an Engineering major as your "intended major" on the webform!


    Location: Ronald Tutor Campus Center (TCC) - USC Admission Office

    Audiences: Prospective Freshmen & Family Members


    Contact: Viterbi Admission

  • Ming Hsieh Institute Seminar Series on Integrated Systems

    Fri, Sep 29, 2017 @ 10:00 AM - 11:30 AM

    Ming Hsieh Department of Electrical and Computer Engineering

    Conferences, Lectures, & Seminars


    Speaker: Dr. Bodhisatwa Sadhu, Research Staff Member, IBM T.J. Watson Research Center

    Talk Title: mmWave Radio Design for 5G Base-stations and Mobile Handsets

    Host: Profs. Hossein Hashemi, Mike Chen, Mahta Moghaddam, and Dina El-Damak

    More Information: MHI Seminar Series IS -Bodhisatwa Sadhu.pdf

    Location: Hughes Aircraft Electrical Engineering Center (EEB) - 132

    Audiences: Everyone Is Invited

    Contact: Jenny Lin

  • AI Seminar

    Fri, Sep 29, 2017 @ 11:00 AM - 12:00 PM

    Information Sciences Institute

    Conferences, Lectures, & Seminars


    Speaker: Stefano Ermon, Stanford University

    Talk Title: Learning with limited supervision

    Abstract: Many of the recent successes of machine learning have been characterized by the availability of large quantities of labeled data. Nonetheless, we observe that humans are often able to learn with very few labeled examples or with only high level instructions for how a task should be performed. In this talk, I will present some new approaches for learning useful models in contexts where labeled training data is scarce or not available at all. I will first discuss and formally prove some limitations of existing training criteria used for learning hierarchical generative models. I will then introduce novel architectures and methods to overcome these limitations, allowing us to learn a hierarchy of interpretable features from unlabeled data. Finally, I will discuss ways to use prior knowledge (such as physics laws or simulators) to provide weak forms of supervision, showing how we can learn to solve useful tasks, including object tracking, without any labeled data.

    Biography: Stefano Ermon is currently an Assistant Professor in the Department of Computer Science at Stanford University, where he is affiliated with the Artificial Intelligence Laboratory. He completed his PhD in computer science at Cornell in 2015. His research interests include techniques for scalable and accurate inference in graphical models, large-scale combinatorial optimization, and robust decision making under uncertainty, and are motivated by a range of applications, particularly in the emerging field of computational sustainability. Stefano's research has won several awards, including three Best Paper Awards and a World Bank Big Data Innovation Challenge, and was selected by Scientific American as one of the 10 World Changing Ideas of 2016. He is a recipient of the Sony Faculty Innovation Award and the NSF CAREER Award.

    Host: Aram Galstyan

    Location: Information Sciences Institute (ISI) - 11th floor large conference room

    Audiences: Everyone Is Invited

    Contact: Kary Lau

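    A minimal sketch of the physics-as-supervision idea from the abstract above (illustrative only, not code from the talk; the function name and free-fall setup are assumptions): a network's per-frame height predictions for a falling object are penalized for deviating from gravitational acceleration, yielding a training signal that uses no labels at all.

    ```python
    import numpy as np

    def physics_loss(predicted_heights, dt=1.0, g=-9.8):
        """Weak-supervision loss: rather than labeled positions, penalize
        trajectories whose finite-difference acceleration deviates from
        gravity. predicted_heights: length-T array of per-frame outputs."""
        accel = np.diff(predicted_heights, n=2) / dt**2  # discrete 2nd derivative
        return float(np.mean((accel - g) ** 2))          # zero for perfect free fall
    ```

    Minimizing such a loss over a tracker's outputs supplies supervision from the physics prior alone, in the spirit of the label-free object-tracking result mentioned above.
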
  • W.V.T. Rusch Engineering Honors Colloquium

    Fri, Sep 29, 2017 @ 01:00 PM - 02:00 PM

    USC Viterbi School of Engineering

    Conferences, Lectures, & Seminars


    Speaker: Prof. Albert Dato, Department of Engineering, Harvey Mudd College

    Talk Title: Fascinating Applications of Graphene

    Location: Henry Salvatori Computer Science Center (SAL) - 101

    Audiences: Everyone Is Invited

    Contact: Su Stevens

  • PhD Defense - Matthias Hernandez

    Fri, Sep 29, 2017 @ 01:00 PM - 02:30 PM

    Thomas Lord Department of Computer Science

    University Calendar


    Committee:
    Gerard Medioni (CS- chair)
    Aiichiro Nakano (CS)
    Antonio Ortega (EE)

    PhD Candidate: Matthias Hernandez

    Venue:
    PHE 223 - Friday, September 29th, 1:00-2:30 PM


    Title:
    3D inference and registration with application to retinal image analysis and face analysis

    Abstract:
    Image registration is a fundamental topic in image analysis, with applications in tracking, biometrics, medical imaging, and 3D reconstruction. It consists of aligning two or more images of the same scene taken under different conditions, such as from different viewpoints, from different sensors, or at different times. Similarly, 2D/3D registration aims at aligning captured 2D images with a 3D model.
    In this talk, we study registration problems in challenging cases in which traditional methods do not provide satisfactory results. We show that even weak prior knowledge on the 3D structure provides reliable information that can be used for accurate registration. Specifically, we focus on two specific cases: 2D/3D multimodal retinal imaging and 3D face reconstruction from low-resolution videos.
    For retinal image registration, we propose an integrated framework for registering an arbitrary number of images of different modalities, including a 3D volume. We propose a generic method to extract salient line structures in many image modalities, based on dense tensor voting, and a robust registration framework for multiple images. Our approach can handle large variations across modalities and is evaluated on real-world retinal images with 5 modalities per eye.
    For 3D face modeling, we propose to constrain traditional Structure from Motion (SfM) with a face shape prior to guide the correspondence-finding process. We initialize a 3D face model on coarse facial landmarks. We perform 3D reconstruction by maximizing photometric consistency across the video over 3D shape, camera poses, and facial expressions. We compare our method to several state-of-the-art methods and show that it generates more accurate reconstructions. (A small sketch of this photometric-consistency objective follows this listing.)
    To assess the discriminability of the reconstructed models, we develop an end-to-end 3D-3D facial recognition algorithm. We leverage existing deep learning networks trained on 2D images and fine-tune them on images generated by orthogonal projection of 3D data. We show that, despite using limited amounts of 3D data, our method provides excellent recognition results while being significantly more scalable than state-of-the-art methods.
    Finally, while excellent recognition results can be achieved with laser-scanned 3D data, we have observed that reconstructed facial 3D models cannot be relied on for recognition purposes. We analyze what level of accuracy is required to enable reliable 3D face recognition, and which factors impact recognition from reconstructed data.

    Location: Charles Lee Powell Hall (PHE) - 223

    Audiences: Everyone Is Invited

    Contact: Lizsl De Leon

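    To make the photometric-consistency objective above concrete, here is a minimal sketch (an illustration under assumed inputs, not the candidate's code): given face-model vertices, per-frame camera poses, and video frames, it scores how consistently each 3D point's projections agree in intensity across the video.

    ```python
    import numpy as np

    def photometric_consistency(vertices, poses, K, frames):
        """Mean per-vertex intensity variance across frames (lower means more
        photo-consistent). vertices: (N, 3) 3D points; poses: list of (R, t)
        world-to-camera transforms; K: (3, 3) intrinsics; frames: grayscale
        images as 2D arrays."""
        samples = []
        for (R, t), img in zip(poses, frames):
            cam = vertices @ R.T + t                 # world -> camera coordinates
            uv = cam @ K.T
            uv = uv[:, :2] / uv[:, 2:3]              # perspective divide
            u = np.clip(np.round(uv[:, 0]).astype(int), 0, img.shape[1] - 1)
            v = np.clip(np.round(uv[:, 1]).astype(int), 0, img.shape[0] - 1)
            samples.append(img[v, u])                # nearest-pixel intensities
        return float(np.stack(samples).var(axis=0).mean())
    ```

    Optimizing shape, poses, and expressions to drive this score down is the essence of the photometric-consistency maximization described in the abstract.
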
  • Munushian Seminar - Ming C. Wu, Friday, September 29th at 2:00pm in EEB 132

    Fri, Sep 29, 2017 @ 02:00 PM - 03:30 PM

    Ming Hsieh Department of Electrical and Computer Engineering

    Conferences, Lectures, & Seminars


    Speaker: Ming C. Wu, University of California, Berkeley

    Talk Title: Silicon Photonic MEMS


    Biography: Ming C. Wu is Nortel Distinguished Professor of Electrical Engineering and Computer Sciences at the University of California, Berkeley. He is also Co-Director of the Berkeley Sensor and Actuator Center (BSAC) and Faculty Director of the UC Berkeley Marvell Nanolab. Dr. Wu received his M.S. and Ph.D. in Electrical Engineering and Computer Sciences from the University of California, Berkeley in 1988. He was with AT&T Bell Laboratories, Murray Hill (1988-1992) and UCLA (1993-2004) before joining the faculty at Berkeley. His research interests include optoelectronics, nanophotonics, MEMS, and optofluidics. He has published 8 book chapters, over 500 papers in journals and conferences, and 25 issued U.S. patents.
    Prof. Wu is an IEEE Fellow and a Packard Foundation Fellow (1992-1997). He received the 2007 Paul F. Forman Engineering Excellence Award, the 2017 C.E.K. Mees Medal from the Optical Society of America, and the 2016 William Streifer Award from the IEEE Photonics Society.

    Host: EE-Electrophysics

    More Info: minghsiehee.usc.edu/about/lectures

    Location: Hughes Aircraft Electrical Engineering Center (EEB) - 132

    Audiences: Everyone Is Invited

    Contact: Marilyn Poplawski

    Event Link: minghsiehee.usc.edu/about/lectures

  • Powering the Future of Imaging and Signal Processing with Data-Driven Systems

    Fri, Sep 29, 2017 @ 02:00 PM - 03:00 PM

    Ming Hsieh Department of Electrical and Computer Engineering

    Conferences, Lectures, & Seminars


    Speaker: Saiprasad Ravishankar, Electrical Engineering & Computer Science Department, University of Michigan

    Talk Title: Powering the Future of Imaging and Signal Processing with Data-Driven Systems

    Series: Medical Imaging Seminar Series

    Abstract: The data-driven learning of signal models including dictionaries, sparsifying transforms, low-rank models, and tensor and manifold models is of great interest in many applications. In this talk, I will present my research on efficient, scalable, and effective data-driven models and methodologies for signal processing and imaging. I will mainly discuss my work on transform learning. Various structures for sparsifying transforms, such as well-conditioning, double sparsity, union-of-transforms, incoherence, and rotation invariance, can be considered, enabling their efficient and effective learning and usage. Transform learning-driven approaches achieve promising results in applications such as image and video denoising, and in X-ray computed tomography or magnetic resonance image (MRI) reconstruction from limited or corrupted data. The convergence properties of the algorithms will be discussed. I will also present recent work on efficient dictionary learning in combination with low-rank models, and demonstrate the usefulness of the resulting LASSI method for dynamic MRI. The efficiency and effectiveness of the methods proposed in my research may benefit a wide range of additional applications in imaging, computer vision, neuroscience, and other areas requiring data-driven parsimonious models. Finally, I will provide a brief overview of recent work on physics-driven deep training of image reconstruction algorithms, light field reconstruction from focal stacks, and online data-driven estimation of dynamic data from streaming, limited measurements. (A small illustrative sketch of transform learning follows this listing.)

    Biography: Saiprasad Ravishankar received the B.Tech. degree in Electrical Engineering from the Indian Institute of Technology Madras in 2008. He received the M.S. and Ph.D. degrees in Electrical and Computer Engineering, in 2010 and 2014 respectively, from the University of Illinois at Urbana-Champaign, where he was an Adjunct Lecturer in the Department of Electrical and Computer Engineering during Spring 2015 and a Postdoctoral Research Associate at the Coordinated Science Laboratory until August 2015. Since then, he has been a Research Fellow in the Electrical Engineering and Computer Science Department at the University of Michigan. His research interests include signal, image, and video processing, signal modeling, data science, dictionary learning, biomedical and computational imaging, data-driven methods, inverse problems, compressed sensing, machine learning, and large-scale data processing. He has received multiple awards, including the Sri Ramasarma V Kolluri Memorial Prize from IIT Madras and the IEEE Signal Processing Society Young Author Best Paper Award for his paper "Learning Sparsifying Transforms," published in IEEE Transactions on Signal Processing.


    Host: Professor Richard Leahy

    Location: Ronald Tutor Hall of Engineering (RTH) - 105

    Audiences: Everyone Is Invited

    Contact: Talyia White

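    For readers unfamiliar with transform learning, the following is a minimal sketch of the alternating scheme referred to in the abstract, using the closed-form square-transform update from the sparsifying-transform learning literature; the function name, initialization, and parameter defaults are illustrative assumptions.

    ```python
    import numpy as np

    def learn_transform(X, s, lam=1e-2, iters=30, seed=0):
        """Learn a square transform W so that W @ X is roughly s-sparse per
        column. X: (n, N) matrix of vectorized patches. Regularizer:
        lam * (||W||_F^2 - log|det W|), which admits a closed-form W update."""
        n = X.shape[0]
        rng = np.random.default_rng(seed)
        W = np.linalg.qr(rng.standard_normal((n, n)))[0]      # orthonormal init
        L = np.linalg.cholesky(X @ X.T + lam * np.eye(n))
        Linv = np.linalg.inv(L)
        for _ in range(iters):
            Z = W @ X                                         # sparse coding step:
            thresh = -np.sort(-np.abs(Z), axis=0)[s - 1]      # s-th largest magnitude
            Z[np.abs(Z) < thresh] = 0.0                       # per-column hard threshold
            U, S, Vt = np.linalg.svd(Linv @ X @ Z.T)          # transform update step
            W = 0.5 * Vt.T @ np.diag(S + np.sqrt(S**2 + 2 * lam)) @ U.T @ Linv
        return W
    ```
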
  • Center for Cyber-Physical Systems and Internet of Things and Ming Hsieh Institute for Electrical Engineering Joint Seminar Series on Cyber-Physical Systems

    Fri, Sep 29, 2017 @ 02:00 PM - 03:00 PM

    Ming Hsieh Department of Electrical and Computer Engineering

    Conferences, Lectures, & Seminars


    Speaker: Yanzhi Wang, Syracuse University

    Talk Title: Towards the limits of energy efficiency and performance of deep learning systems

    Abstract: Deep learning systems have achieved unprecedented progress in a number of fields such as computer vision, robotics, game playing, unmanned driving and aerial systems, and other AI-related fields. However, rapidly expanding model sizes place significant demands on computation and weight storage, for both inference and training, and on both high-performance computing systems and low-power embedded and IoT applications. In order to overcome these limitations, we propose a holistic framework for incorporating structured matrices into deep learning systems, which achieves (i) simultaneous reduction of weight storage and computational complexity, (ii) simultaneous speedup of training and inference, and (iii) generality and fundamentality, so that it can be adopted in both software and hardware implementations, on different platforms, and for different neural network types, sizes, and scales.

    Besides these algorithm-level achievements, our framework has (i) a solid theoretical foundation proving that our approach converges to the same "effectiveness" as deep learning without compression, and demonstrating that it approaches the theoretical limits of computation and storage in deep learning systems; and (ii) platform-specific implementations and optimizations on smartphones, FPGAs, and ASIC circuits. We demonstrate that our smartphone-based implementation achieves speeds similar to GPU and existing ASIC implementations on the same applications. Our FPGA-based implementations of deep learning systems and LSTM networks achieve an 11X+ energy efficiency improvement over the best state of the art, and an even higher energy efficiency gain compared with the IBM TrueNorth neurosynaptic processor. Our proposed framework achieves 3.5 TOPS of computation performance on FPGAs and is the first to enable nanosecond-level recognition speed for image recognition tasks. (A small sketch of the FFT trick behind circulant structured matrices follows this listing.)


    Biography: Yanzhi Wang has been an assistant professor in the Department of Electrical Engineering and Computer Science at Syracuse University since August 2015. He received his Ph.D. degree in Computer Engineering from the University of Southern California (USC) in 2014, under the supervision of Prof. Massoud Pedram, and his B.S. degree in Electronic Engineering from Tsinghua University in 2009.

    Dr. Wang's current research interests are the energy-efficient and high-performance implementation of deep learning and artificial intelligence systems, neuromorphic computing and new computing paradigms, and emerging deep learning algorithms/systems such as Bayesian neural networks, generative adversarial networks (GANs), and deep reinforcement learning. In addition, he works on the application of deep learning and machine intelligence in various mobile and IoT systems, medical systems, and UAVs, as well as the integration of security protection into deep learning systems. He also works on near-threshold computing for IoT devices and energy-efficient cyber-physical systems. His group works on both algorithms and actual implementations (FPGAs, circuit tapeouts, mobile and embedded systems, and UAVs).

    His work has been published in top conferences and journals (e.g., ASPLOS, MICRO, ICML, DAC, ICCAD, DATE, ASP-DAC, ISLPED, INFOCOM, ICDCS, TComputer, TCAD), and has been cited around 3,000 times according to Google Scholar. He has received four Best Paper or Top Paper Awards from major conferences including IEEE ICASSP (top 3 among all 2,000+ submissions), ISLPED, IEEE CLOUD, and ISVLSI, along with six further Best Paper Nominations and two Popular Papers in IEEE TCAD. His group is sponsored by the NSF, DARPA, IARPA, AFRL/AFOSR, the Syracuse CASE Center, and industry sources.


    Host: Paul Bogdan

    Location: Corwin D. Denney Research Center (DRB) - 146

    Audiences: Everyone Is Invited

    Contact: Estela Lopez

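    The savings from structured matrices described above come from replacing a dense matrix-vector product with FFT-based circular convolution: a circulant block is fully determined by one vector, cutting weight storage from O(n^2) to O(n) and compute to O(n log n). A minimal sketch (illustrative, not the speaker's code):

    ```python
    import numpy as np

    def circulant_matvec(c, x):
        """Multiply by the n x n circulant matrix whose first column is c,
        via the convolution theorem, without materializing the matrix."""
        return np.fft.irfft(np.fft.rfft(c) * np.fft.rfft(x), n=len(x))

    # Quick check against the explicit dense circulant matrix.
    n = 8
    rng = np.random.default_rng(0)
    c, x = rng.standard_normal(n), rng.standard_normal(n)
    C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
    assert np.allclose(C @ x, circulant_matvec(c, x))
    ```
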
  • PhD Defense - Anh Tran

    Fri, Sep 29, 2017 @ 02:30 PM - 04:00 PM

    Thomas Lord Department of Computer Science

    University Calendar


    * PhD Candidate: Anh Tran
    * Committee:
    Gerard Medioni (chair)
    Ram Nevatia
    Sandeep Gupta (outside)
    * Title: Face Recognition and 3D Face Modeling from Images in the Wild
    * Time: Sep 29 (Fri) 2:30-4:00pm
    * Room: PHE 223
    * Abstract:
    Face recognition and 3D face modeling are key problems in computer vision with many applications in biometrics, human-computer interaction, surveillance, entertainment, and more. While we have witnessed improvements over the last few years, open problems remain when images and videos in the wild are considered. In this dissertation, we discuss how to address these problems effectively, as well as the connection between them. First, face recognition must address appearance changes due to 3D factors such as head pose, face shape, and expression. Second, 3D face modeling must recover a stable and recognizable 3D shape.

    The first part of this thesis focuses on face recognition in the wild. We show that by coupling 3D face augmentation with a state-of-the-art 2D face recognition engine, we can greatly boost recognition accuracy. Our 3D face augmentation synthesizes facial images with different 3D head poses, 3D shapes, and expressions, thereby making our system robust to facial variations introduced by these factors. Our end-to-end system shows state-of-the-art performance on the latest challenging face recognition benchmarks. We also present additional novel techniques to enhance the proposed system, from speeding up rendering and matching to a complete landmark-free pipeline, which make our system scalable and robust to very large training data and further break in-the-wild recognition records.

    Inferring the accurate 3D geometry of a face from one or more images is a challenging problem. In the second part of this thesis, we present robust methods to build 3D morphable face models (3DMM) and validate their quality with face recognition tests. First, we establish the state of the art for traditional analysis-by-synthesis 3DMM methods. In particular, we investigate the impact of multiple inputs on the 3D modeling results in terms of both accuracy and distinctiveness. From this observation, we then generate a large amount of 3D "ground-truth" faces and train a convolutional neural network (CNN) to regress 3D shape and texture directly from any single input photo. The 3D estimates produced by our CNN surpass the state-of-the-art 3D reconstruction accuracy. Our CNN also shows the first competitive face recognition results on the face recognition benchmarks using 3D face shapes as representations, rather than the somewhat opaque deep features used by other systems. Finally, we introduce additional techniques to push 3D face reconstruction to the next level, estimating expression in 3D as well as fine-grained details of the face, aiming towards laser-scan quality in the wild. (A small sketch of 3DMM decoding follows this listing.)

    Location: Charles Lee Powell Hall (PHE) - 223

    Audiences: Everyone Is Invited

    Contact: Lizsl De Leon

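    As background for the 3DMM regression described above: a 3D morphable model decodes a short coefficient vector into a full mesh as a linear combination of learned bases, which is what makes direct CNN regression of shape feasible. A minimal sketch, with illustrative names and shapes (not the candidate's code):

    ```python
    import numpy as np

    def decode_3dmm(alpha, beta, mean_shape, id_basis, exp_basis):
        """Decode identity (alpha) and expression (beta) coefficients, e.g.
        regressed by a CNN from a single photo, into an (N, 3) vertex array.
        mean_shape: (3N,); id_basis: (3N, Ki); exp_basis: (3N, Ke)."""
        shape = mean_shape + id_basis @ alpha + exp_basis @ beta
        return shape.reshape(-1, 3)
    ```
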
  • Astani Civil and Environmental Engineering Ph.D. Seminar

    Fri, Sep 29, 2017 @ 03:00 PM - 04:00 PM

    Sonny Astani Department of Civil and Environmental Engineering

    Conferences, Lectures, & Seminars


    Speaker: Meida Chen and Sasan Tavakkol, Astani CEE Graduate Students

    Talk Title: Point Cloud and Mesh Segmentation and Information Extraction of Outdoor Scenes for the Creation of Virtual Environments and Simulation; and Interactive and Immersive Coastal Hydrodynamic Simulation

    Abstract: By Meida Chen

    Being able to segment, classify, and recognize different types of objects, and to identify and extract their associated features, in photogrammetrically generated meshes is essential for creating realistic virtual simulations. Rendering different objects in a virtual environment differently and assigning actual physical properties to each object will not only enhance visual quality but also allow various user interactions with a terrain model. For instance, consider the case of training soldiers in a virtual environment with 3D meshes representing the scene. The task is to recognize the shortest path from location A to location B with minimum exposure to enemy fire. With an artificial intelligence (AI) search algorithm such as A*, the shortest path can be computed, and penalties can be assigned to a route based on the number of obstructions blocking the enemy's line of sight (a small A* sketch follows this listing). Realistically, however, a line of sight blocked by a building should be penalized differently from one blocked by a tree, since some materials are easily destroyed or damaged. Though this example is an oversimplification, it emphasizes the point that without segmented semantic data, realistic virtual simulations cannot be achieved. Thus, in this study the authors established a mesh segmentation and information extraction framework that combines supervised and unsupervised machine learning algorithms to analyze mesh point clouds generated with photogrammetric techniques. The segmentation process is first performed on the generated 3D point clouds; the generated meshes are then segmented accordingly. Object information such as individual tree locations, tree dimensions, and building footprints is then extracted separately. The proposed information extraction processes are designed to overcome quality issues in photogrammetrically generated point cloud data, which tend to be noisy; in some cases, parts of a wall or the trunk of a tree cannot be captured due to dense canopy.

    By Sasan Tavakkol

    Recent catastrophic events, such as the 2011 tsunami in Japan and the storm surge and winds of Hurricane Harvey in the US in 2017, have raised global awareness of the urgent need to understand the response of developed coastal regions to tsunamis and wind waves. We discuss our efforts in developing the first interactive coastal wave simulation and visualization software, called Celeris. This software can significantly help scientists better understand nearshore wave dynamics, as it allows them to observe wave interactions in real time, modify the boundary conditions and model parameters while the model is running, and see the effect of changes immediately. Celeris is released under a GNU license and is currently in use by hundreds of coastal researchers and engineering firms around the world. The software uses a hybrid finite-volume, finite-difference method to solve the extended Boussinesq equations on the GPU. We also explore opportunities in immersive visualization of coastal waves through Virtual Reality and Augmented Reality, to help engineers work in an interactive, immersive, and collaborative environment.


    Location: Waite Phillips Hall Of Education (WPH) - B27

    Audiences: Everyone Is Invited

    Contact: Evangeline Reyes

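    The first abstract above motivates route planning with penalties for exposure along the enemy's line of sight. Below is a minimal, illustrative A* over a penalty grid (not the students' code); per-cell penalties could encode exposure weighted by the material class of each obstruction, which is exactly where the semantic segmentation pays off.

    ```python
    import heapq

    def a_star(penalty, start, goal):
        """Cheapest-path cost on a grid where penalty[r][c] >= 0 is added to
        the unit step cost (use float('inf') for impassable cells). Manhattan
        distance is an admissible heuristic because every step costs >= 1."""
        rows, cols = len(penalty), len(penalty[0])
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        frontier = [(h(start), 0.0, start)]
        best = {start: 0.0}
        while frontier:
            _, g, cur = heapq.heappop(frontier)
            if cur == goal:
                return g
            if g > best.get(cur, float('inf')):
                continue                              # stale heap entry
            r, c = cur
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols:
                    ng = g + 1 + penalty[nxt[0]][nxt[1]]
                    if ng < best.get(nxt, float('inf')):
                        best[nxt] = ng
                        heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
        return float('inf')
    ```
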
  • Social: Tree of Knowledge Hike

    Fri, Sep 29, 2017 @ 08:00 PM - 12:00 AM

    Viterbi School of Engineering Student Organizations

    Student Activity


    Quickly becoming an iconic LA hiking destination, the Tree of Knowledge features stunning views of the downtown LA nightscape as well as the only surviving tree from the 2007 Hollywood Hills fires. Join ASBME on Friday, September 29th as we make the night trek up to the tree and contribute to the famed geocaching box filled with inspirational stories, messages, and quotes left by previous hikers. We will leave from campus at 8 p.m. Don't forget a flashlight, water, snacks, good shoes, and a jacket.

    Audiences: Everyone Is Invited

    Contact: Associated Students of Biomedical Engineering
