University of Southern California

Events Calendar




Events for March 07, 2017

  • CS Colloquium: Philip Thomas (CMU) - Safe Machine Learning

    Tue, Mar 07, 2017 @ 11:00 AM - 12:20 PM

    Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Philip Thomas, Carnegie Mellon University

    Talk Title: Safe Machine Learning

    Series: CS Colloquium

    Abstract: This lecture satisfies requirements for CSCI 591: Computer Science Research Colloquium.

    Machine learning algorithms are everywhere, ranging from simple data analysis and pattern recognition tools used across the sciences to complex systems that achieve super-human performance on various tasks. Ensuring that they are safe (that they do not, for example, cause harm to humans or act in a racist or sexist way) is therefore not a hypothetical problem to be dealt with in the future, but a pressing one that we can and should address now.

    In this talk I will discuss some of my recent efforts to develop safe machine learning algorithms, and particularly safe reinforcement learning algorithms, which can be responsibly applied to high-risk applications. I will focus on a specific research problem that is central to the design of safe reinforcement learning algorithms: accurately predicting how well a policy would perform if it were to be used, given data collected from the deployment of a different policy. Solutions to this problem provide a way to determine that a newly proposed policy would be dangerous to use without requiring the dangerous policy to ever actually be used.
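    The core problem the abstract describes, predicting a new policy's performance from data logged under a different policy, is commonly approached with importance sampling. The sketch below illustrates that idea on a toy two-armed bandit; the policies, probabilities, and rewards are invented for illustration and are not taken from the talk.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical two-armed bandit. The "behavior" policy generated the
    # logged data; we want to estimate the value of a proposed "evaluation"
    # policy without ever deploying it.
    behavior_probs = np.array([0.8, 0.2])  # P(action) under the logged policy
    eval_probs     = np.array([0.3, 0.7])  # P(action) under the proposed policy
    true_rewards   = np.array([0.5, 1.0])  # expected reward of each action

    # Logged data: actions drawn from the behavior policy, noisy rewards.
    n = 100_000
    actions = rng.choice(2, size=n, p=behavior_probs)
    rewards = true_rewards[actions] + rng.normal(0.0, 0.1, size=n)

    # Importance-sampling estimate of the evaluation policy's value:
    # reweight each logged reward by pi_e(a) / pi_b(a).
    weights = eval_probs[actions] / behavior_probs[actions]
    is_estimate = np.mean(weights * rewards)

    print(is_estimate)  # close to the true value 0.3*0.5 + 0.7*1.0 = 0.85
    ```

    The estimate is unbiased but can have high variance when the two policies differ sharply; controlling that variance tightly enough to give safety guarantees is one of the challenges the talk addresses.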

    Biography: Philip Thomas is a postdoctoral research fellow in the Computer Science Department at Carnegie Mellon University, advised by Emma Brunskill. He received his Ph.D. from the College of Information and Computer Sciences at the University of Massachusetts Amherst in 2015, where he was advised by Andrew Barto. Prior to that, Philip received his B.S. and M.S. in computer science from Case Western Reserve University in 2008 and 2009, respectively, where Michael Branicky was his adviser. Philip's research interests are in machine learning with emphases on reinforcement learning, safety, and designing algorithms that have practical theoretical guarantees.

    Host: CS Department

    Location: Ronald Tutor Hall of Engineering (RTH) - 217

    Audiences: Everyone Is Invited

    Posted By: Assistant to CS chair

  • INFORMATION DROPOUT: LEARNING OPTIMAL REPRESENTATIONS THROUGH NOISY COMPUTATION

    Tue, Mar 07, 2017 @ 11:00 AM - 12:00 PM

    Information Sciences Institute

    Conferences, Lectures, & Seminars


    Speaker: Alessandro Achille, UCLA

    Talk Title: INFORMATION DROPOUT: LEARNING OPTIMAL REPRESENTATIONS THROUGH NOISY COMPUTATION

    Series: Natural Language Seminar

    Abstract: The cross-entropy loss commonly used in deep learning is closely related to the information theoretic properties defining an optimal representation of the data, but does not enforce some of the key properties. We show that this can be solved by adding a regularization term, which is in turn related to injecting multiplicative noise in the activations of a Deep Neural Network, a special case of which is the common practice of dropout. Our regularized loss function can be efficiently minimized using Information Dropout, a generalization of dropout rooted in information theoretic principles that automatically adapts to the data and can better exploit architectures of limited capacity.
    When the task is the reconstruction of the input, we show that our loss function yields a Variational Autoencoder as a special case, thus providing a link between representation learning, information theory and variational inference. Finally, we prove that we can promote the creation of disentangled representations of the input simply by enforcing a factorized prior, a fact that has been also observed empirically in recent work.
    Our experiments validate the theoretical intuitions behind our method, and we find that Information Dropout achieves a comparable or better generalization performance than binary dropout, especially on smaller models, since it can automatically adapt the noise structure to the architecture of the network, as well as to the test sample.
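    The contrast the abstract draws, binary dropout versus multiplicative noise on activations, can be sketched in a few lines. This is an illustrative toy, not the paper's method: in Information Dropout the noise scale is learned per unit from the data, whereas here it is a fixed constant.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def binary_dropout(x, p=0.5):
        """Standard (inverted) dropout: zero each activation with
        probability p, rescale the survivors by 1/(1-p)."""
        mask = rng.random(x.shape) >= p
        return x * mask / (1.0 - p)

    def multiplicative_noise(x, alpha=0.3):
        """Information-Dropout-style noise: multiply activations by
        log-normal noise. alpha (the noise scale) is fixed here for
        illustration; in the paper it is learned and data-dependent."""
        noise = rng.lognormal(mean=0.0, sigma=alpha, size=x.shape)
        return x * noise

    x = np.ones((4, 8))  # toy activations
    print(binary_dropout(x))       # entries are 0 or 2
    print(multiplicative_noise(x)) # entries are smoothly perturbed, all > 0
    ```

    Binary dropout is recovered as the special case where the multiplicative noise is Bernoulli-valued, which is how the abstract frames dropout as an instance of the more general scheme.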




    Biography: Alessandro Achille is a PhD student in Computer Science at UCLA, working with Prof. Stefano Soatto. He focuses on variational inference, representation learning, and their applications to deep learning and computer vision. Before coming to UCLA, he obtained a Master's degree in Pure Math at the Scuola Normale Superiore in Pisa, where he studied model theory and algebraic topology with Prof. Alessandro Berarducci.


    Host: Greg Ver Steeg

    More Info: https://arxiv.org/abs/1611.01353

    Location: Information Science Institute (ISI) - 6th Flr -CR#689 (ISI/Marina Del Rey)

    Audiences: Everyone Is Invited

    Posted By: Peter Zamar

  • Epstein Seminar, ISE 651

    Tue, Mar 07, 2017 @ 03:30 PM - 04:50 PM

    Daniel J. Epstein Department of Industrial and Systems Engineering

    Conferences, Lectures, & Seminars


    Speaker: Dr. Daniel Robinson, Assistant Professor, Johns Hopkins University

    Talk Title: Scalable Optimization Algorithms For Large-Scale Subspace Clustering

    Host: Jong-Shi Pang

    More Information: March 7, 2017_Robinson.pdf

    Location: Ethel Percy Andrus Gerontology Center (GER) - 206

    Audiences: Everyone Is Invited

    Posted By: Grace Owh

  • Introduction to Viterbi Gateway Workshop

    Tue, Mar 07, 2017 @ 04:00 PM - 05:00 PM

    Viterbi School of Engineering Career Connections

    Workshops & Infosessions


    Come to this presentation to learn how to navigate the Viterbi Career Gateway, a powerful job and internship search tool available ONLY to Viterbi students.

    Location: Ronald Tutor Hall of Engineering (RTH) - 211

    Audiences: All Viterbi

    Posted By: RTH 218 Viterbi Career Connections
