University of Southern California

Events Calendar



Events for July 24, 2017

  • CAIS Seminar: Dr. Yevgeniy Vorobeychik (Vanderbilt University) - The Art and Science of Adversarial Machine Learning

    Mon, Jul 24, 2017 @ 11:00 AM - 12:00 PM

    Thomas Lord Department of Computer Science

    Conferences, Lectures, & Seminars


    Speaker: Dr. Yevgeniy Vorobeychik, Vanderbilt University

    Talk Title: The Art and Science of Adversarial Machine Learning

    Series: Center for AI in Society (CAIS) Seminar Series

    Abstract: The success of machine learning has led to numerous attempts to apply it in adversarial settings such as spam and malware detection. The core challenge in this class of applications is that adversaries are not static data generators, but make a deliberate effort either to evade the classifiers deployed to detect them or to degrade the quality of the data used to train the classifiers. I will discuss our recent research into the problem of adversarial classifier evasion, specifically the theoretical foundations of black-box attacks on classifiers, and several of our efforts in designing evasion-robust classifiers on binary feature spaces, including a principled, theoretically grounded retraining method.
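    As a purely illustrative sketch of the retraining idea (this is not the speaker's actual method; the greedy `evade` attack, the flip budget, and the synthetic binary-feature data are all invented for the example), an adversarial retraining loop might look like:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy dataset: 200 samples over 10 binary features (e.g., presence/absence
# of suspicious attributes), labeled by a hidden linear rule.
X = rng.integers(0, 2, size=(200, 10)).astype(float)
w_true = rng.normal(size=10)
y = (X @ w_true > np.median(X @ w_true)).astype(int)  # 1 = "malicious"

def evade(clf, x, budget=2):
    """Greedy white-box evasion: flip up to `budget` binary features,
    each time choosing the flip that most lowers the classifier score."""
    x = x.copy()
    for _ in range(budget):
        cand = np.tile(x, (len(x), 1))
        idx = np.arange(len(x))
        cand[idx, idx] = 1 - cand[idx, idx]  # every single-feature flip
        j = int(np.argmin(clf.decision_function(cand)))
        x[j] = 1 - x[j]
    return x

# Adversarial retraining: repeatedly attack the current model and fold
# the evasive variants (still labeled malicious) back into training.
X_train, y_train = X.copy(), y.copy()
clf = LogisticRegression().fit(X_train, y_train)
for _ in range(3):
    evasions = np.array([evade(clf, x) for x in X[y == 1]])
    X_train = np.vstack([X_train, evasions])
    y_train = np.concatenate([y_train, np.ones(len(evasions), dtype=int)])
    clf = LogisticRegression().fit(X_train, y_train)
```

    Each round augments the training set with the attacker's best responses to the current model, so the final classifier has been fit against several generations of evasive examples.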

    Second, I will discuss the scientific foundations of classifier evasion modeling. A dominant paradigm in the machine learning community is to model evasion in "feature space," through direct manipulation of classifier features. In contrast, the cybersecurity community has developed several "problem space" attacks, in which actual instances (e.g., malware) are modified and features are then extracted from the evasive instances. I will show, through a case study of PDF malware detection, that feature-space models are a very poor proxy for problem-space attacks. I will then demonstrate a simple "fix": identify a small set of features that are invariant (conserved) with respect to evasion attacks, and constrain these features to remain unchanged in feature-space models. Lastly, I will show that such conserved features exist and cannot be inferred using standard regularization techniques, but can be automatically identified for a given problem-space evasion model.
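    The "fix" of holding conserved features fixed can likewise be sketched in toy form (again an illustration only: the `CONSERVED` index set, the data, and the greedy attack are assumptions made up for this example, not features identified by the speaker's method):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Toy binary-feature data; features 0 and 1 play the role of "conserved"
# features that any functional attack instance must keep.
X = rng.integers(0, 2, size=(150, 8)).astype(float)
y = ((X[:, 0] + X[:, 1]) > X[:, 2]).astype(int)
clf = LogisticRegression().fit(X, y)

CONSERVED = {0, 1}  # indices the feature-space model must hold fixed

def constrained_attack(clf, x, conserved, budget=3):
    """Feature-space evasion that leaves conserved features unchanged,
    approximating problem-space (functionality-preserving) constraints."""
    x = x.copy()
    flippable = [j for j in range(len(x)) if j not in conserved]
    for _ in range(budget):
        best_j, best_s = None, clf.decision_function(x[None, :])[0]
        for j in flippable:
            x2 = x.copy()
            x2[j] = 1 - x2[j]
            s = clf.decision_function(x2[None, :])[0]
            if s < best_s:
                best_j, best_s = j, s
        if best_j is None:  # no remaining flip lowers the score
            break
        x[best_j] = 1 - x[best_j]
    return x

x0 = X[y == 1][0]
adv = constrained_attack(clf, x0, CONSERVED)
```

    Restricting the attack to the flippable features makes the feature-space model respect the invariants that a real problem-space attack cannot break.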

    Biography: Yevgeniy Vorobeychik is an Assistant Professor of Computer Science, Computer Engineering, and Biomedical Informatics at Vanderbilt University. He received his Ph.D. (2008) and M.S.E. (2004) degrees in Computer Science and Engineering from the University of Michigan, and a B.S. degree in Computer Engineering from Northwestern University. His work focuses on adversarial reasoning in AI, computational game theory, security and privacy, network science, and agent-based modeling. He received an NSF CAREER award in 2017 and was an invited Early Career Spotlight speaker at IJCAI 2016.

    Host: Milind Tambe

    Location: Ronald Tutor Hall of Engineering (RTH) - 217

    Audiences: Everyone Is Invited

    Contact: Assistant to CS chair
