University of Southern California

Events Calendar

Events for the 5th week of February

  • CS Colloquium: Michael Everett (MIT) - Fully Autonomous Robot Navigation in Human Environments

    Mon, Feb 24, 2020 @ 11:00 AM - 12:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Michael Everett, MIT

    Talk Title: Fully Autonomous Robot Navigation in Human Environments

    Series: CS Colloquium

    Abstract: Today's robots are still quite limited in their ability to process information about multiple other objects in order to plan safe and efficient motions through previously unseen environments. Major technical challenges are currently sidestepped by restrictive engineering solutions (e.g., preventing humans from working alongside factory robots, collecting detailed prior maps in every intended operating environment). This talk will present frameworks that enable long-term autonomy for robots embedded among pedestrians and context-guided exploration in new environments. Furthermore, it will discuss future research directions toward safely training and deploying robots in our society.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Michael Everett is a final-year PhD candidate at MIT working with Prof. Jonathan How. He received the SM degree (2017) and the SB degree (2015) in Mechanical Engineering from MIT. His research addresses fundamental gaps in the connection between machine learning and real mobile robotics, with recent emphasis on developing the theory of safety/robustness of learned modules. His work won the Best Paper Award on Cognitive Robotics at IROS 2019 and the Best Student Paper Award at IROS 2017, and was a finalist for the Best Paper Award on Cognitive Robotics at IROS 2017 and the Best Multi-Robot Systems Paper Award at ICRA 2017. He has been interviewed live on air by BBC Radio, and his robots have been featured by the Today Show, Reuters, and the Boston Globe.

    Host: Nora Ayanian

    Location: Ronald Tutor Hall of Engineering (RTH) - 109

    Audiences: Everyone Is Invited

    Contact: Assistant to CS chair

  • CS Colloquium: Robin Jia (Stanford University) - Building Robust Natural Language Processing Systems

    Tue, Feb 25, 2020 @ 11:00 AM - 12:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Robin Jia, Stanford University

    Talk Title: Building Robust Natural Language Processing Systems

    Series: CS Colloquium

    Abstract: While modern NLP systems have achieved outstanding performance on static benchmarks, they often fail catastrophically when presented with inputs from different sources or inputs that have been adversarially perturbed. This lack of robustness exposes troubling gaps in current models' understanding capabilities and poses challenges for the deployment of NLP systems in high-stakes situations. In this talk, I will demonstrate that building robust NLP systems requires reexamining all aspects of the current model-building paradigm. First, I will show that adversarially constructed test data reveals vulnerabilities that are left unexposed by standard evaluation methods. Second, I will demonstrate that active learning, in which data is adaptively collected based on a model's current predictions, can significantly improve the ability of models to generalize robustly compared to the use of static training datasets (a toy active-learning loop is sketched at the end of this entry). Finally, I will show how to train NLP models to produce certificates of robustness: guarantees that, for a given example and a combinatorially large class of possible perturbations, no perturbation can cause a misclassification.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Robin Jia is a sixth-year Ph.D. student at Stanford University advised by Percy Liang. His research interests lie broadly in building natural language processing systems that can generalize to unexpected test-time inputs. Robin's work has received an Outstanding Paper Award at EMNLP 2017 and a Best Short Paper Award at ACL 2018. He has been supported by an NSF Graduate Research Fellowship.

    Host: Xiang Ren

    Location: Olin Hall of Engineering (OHE) - 132

    Audiences: Everyone Is Invited

    Contact: Assistant to CS chair

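    The abstract above describes active learning as adaptively collecting data based on a model's current predictions. The following is a minimal, hypothetical sketch of pool-based uncertainty sampling to make that loop concrete; it is not code from the talk, and the dataset, model, and query budget are illustrative assumptions.

    # Hypothetical sketch of pool-based active learning with uncertainty sampling.
    # Not from the talk; the dataset, model, and query budget are illustrative assumptions.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    labeled = list(range(20))        # small seed set of labeled indices
    pool = list(range(20, len(X)))   # unlabeled pool

    model = LogisticRegression(max_iter=1000)
    for _ in range(10):                              # 10 rounds of 20 queries each
        model.fit(X[labeled], y[labeled])
        probs = model.predict_proba(X[pool])
        uncertainty = 1.0 - probs.max(axis=1)        # least-confident sampling
        query_idx = np.argsort(uncertainty)[-20:]    # most uncertain pool items
        chosen = {pool[i] for i in query_idx}
        labeled.extend(chosen)                       # "annotate" the queried items
        pool = [i for i in pool if i not in chosen]

    print("accuracy on the remaining pool:", model.score(X[pool], y[pool]))

    Each round retrains the model and sends the least-confident pool examples for labeling, which is the adaptive data collection the abstract refers to.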
  • Computer Science General Faculty Meeting

    Wed, Feb 26, 2020 @ 12:00 PM - 02:00 PM

    Thomas Lord Department of Computer Science

    Receptions & Special Events


    Bi-weekly faculty meeting for invited full-time Computer Science faculty only. Event details are emailed directly to attendees.

    Location: Ronald Tutor Hall of Engineering (RTH) - 526

    Audiences: Invited Faculty Only

    Contact: Assistant to CS chair

  • CS Colloquium: Minjoon Seo (University of Washington) - Web-Scale Neural Memory towards Universal Knowledge Interface

    Thu, Feb 27, 2020 @ 11:00 AM - 12:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Minjoon Seo, University of Washington

    Talk Title: Web-scale Neural Memory towards Universal Knowledge Interface

    Series: CS Colloquium

    Abstract: Modern natural language tasks increasingly depend on external world knowledge. My PhD research has focused on three challenges in this literature: handling unstructured knowledge, achieving scalability, and reasoning over knowledge data. I will mainly discuss my recent and ongoing work on a web-scale neural memory that tackles all three challenges, and show how it serves as an effective interface for interacting with world knowledge (a toy dense-memory lookup is sketched at the end of this entry). I will conclude by arguing that designing a seamless and universal knowledge interface is a crucial research goal that can better address the knowledge-dependency problem in machine learning tasks.

    This lecture satisfies requirements for CSCI 591: Research Colloquium

    Biography: Minjoon Seo is a final-year Ph.D. student in the Allen School of Computer Science & Engineering at the University of Washington, advised by Hannaneh Hajishirzi and Ali Farhadi. His research focuses mostly on learning models for the extraction of knowledge (IE), access to knowledge (QA), and reasoning over knowledge (Reasoning) in various forms of language data. He is supported by a Facebook Fellowship and an AI2 Key Scientific Challenges Award. He co-organizes the Workshop on Machine Reading for Question Answering (MRQA) and the Workshop on Representation Learning for NLP (RepL4NLP).

    Host: Xiang Ren

    Location: Olin Hall of Engineering (OHE) - 132

    Audiences: Everyone Is Invited

    Contact: Assistant to CS chair

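    The abstract above frames a web-scale neural memory as an interface for interacting with world knowledge. As a rough illustration only (not the speaker's system), the toy sketch below queries a large table of dense key vectors by maximum inner product; the random vectors stand in for learned phrase and question encoders, and the sizes are arbitrary assumptions.

    # Hypothetical sketch: querying a dense "neural memory" by maximum inner product.
    # Not the speaker's system; random vectors stand in for learned phrase/question encoders.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 128                                                              # embedding dimension (assumption)
    memory_keys = rng.standard_normal((100_000, d)).astype(np.float32)   # encoded knowledge entries
    memory_ids = np.arange(len(memory_keys))                             # e.g., ids of the source passages

    def query_memory(keys, ids, q, k=5):
        """Return the k entries whose keys score highest against query vector q."""
        scores = keys @ q                            # inner-product relevance scores
        top = np.argpartition(scores, -k)[-k:]       # k best entries, unordered
        top = top[np.argsort(scores[top])[::-1]]     # order them by score, descending
        return ids[top], scores[top]

    q = rng.standard_normal(d).astype(np.float32)    # stand-in for an encoded question
    ids, scores = query_memory(memory_keys, memory_ids, q)
    print(ids, scores)

    At true web scale the exhaustive dot product would be replaced by an approximate nearest-neighbor index, but the interface, a vector in and the best-matching knowledge entries out, stays the same.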
  • PhD Defense - Ayush Jaiswal

    Fri, Feb 28, 2020 @ 01:30 PM - 03:30 PM

    Thomas Lord Department of Computer Science

    University Calendar


    PhD Candidate: Ayush Jaiswal
    Date: Friday, February 28, 2020
    Time: 1:30 PM - 3:30 PM
    Location: SAL 213
    Committee: Premkumar Natarajan (Chair), Ram Nevatia, Cauligi S. Raghavendra

    Title: Invariant Representation Learning for Robust and Fair Predictions

    Abstract:

    Learning representations that are invariant to nuisance factors of data improves the robustness of machine learning models and promotes fairness with respect to factors that encode biasing information. This view of invariance has recently been adopted for deep neural networks (DNNs), since they learn latent representations of data by design. Numerous methods for invariant representation learning in DNNs have emerged in recent literature, but the research problem remains challenging: existing methods achieve only partial invariance or fall short of optimal performance on the prediction tasks for which the DNNs are trained.

    This thesis presents novel approaches for inducing invariant representations in DNNs by effectively separating the predictive factors of data from undesired nuisances and biases (one common way to induce such invariance is sketched at the end of this entry). The presented methods improve the predictive performance and fairness of DNNs through increased invariance to undesired factors. Empirical evaluation on a diverse collection of benchmark datasets shows that the presented methods achieve state-of-the-art performance.

    Application of the invariance methods to real-world problems is also presented, demonstrating their practical utility. Specifically, the presented methods improve nuisance-robustness in presentation attack detection and automated speech recognition, fairness in face-based analytics, and generalization in low-data and semi-supervised learning settings.

    Location: Henry Salvatori Computer Science Center (SAL) - 213

    Audiences: Everyone Is Invited

    Contact: Lizsl De Leon

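    As a rough illustration of the invariance idea in the abstract above, the sketch below trains an encoder with a task head while a nuisance head is attached through gradient reversal, one common way to strip nuisance information from a learned representation. This is a generic toy example, not the specific methods of the thesis; the network sizes, data, and labels are placeholders.

    # Generic toy example of adversarial invariance via gradient reversal.
    # Not the thesis's specific methods; sizes, data, and labels are placeholders.
    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, lamb):
            ctx.lamb = lamb
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lamb * grad_output, None   # flip the gradient flowing into the encoder

    encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
    task_head = nn.Linear(16, 2)       # predicts the label of interest
    nuisance_head = nn.Linear(16, 4)   # tries to recover the nuisance/bias factor

    params = list(encoder.parameters()) + list(task_head.parameters()) + list(nuisance_head.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)
    ce = nn.CrossEntropyLoss()

    x = torch.randn(256, 32)           # toy batch of features
    y = torch.randint(0, 2, (256,))    # task labels
    s = torch.randint(0, 4, (256,))    # nuisance/bias labels

    for step in range(100):
        z = encoder(x)
        task_loss = ce(task_head(z), y)
        # The nuisance head learns to predict s from z, while the reversed gradient
        # pushes the encoder to remove that information from the representation.
        nuisance_loss = ce(nuisance_head(GradReverse.apply(z, 1.0)), s)
        loss = task_loss + nuisance_loss
        opt.zero_grad()
        loss.backward()
        opt.step()

    When training succeeds, z retains what is needed to predict y while carrying little information about s, which is the separation of predictive factors from nuisances described above.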