BEGIN:VCALENDAR
BEGIN:VEVENT
SUMMARY:PhD Defense - Brandon Paulsen
DESCRIPTION:PhD Candidate: Brandon Paulsen\n
\n
Title: Differential Verification of Deep Neural Networks\n
\n
Time: Monday, June 6, 2022, 9:00AM PDT\n
\n
Location: https://usc.zoom.us/j/94495717078?pwd=WXhJOTN5YVVKNFB3K2ExSVZSakdkZz09\n
\n
Committee:\n
Chao Wang (Advisor)\n
Jyotirmoy Deshmukh\n
Murali Annavaram\n
\n
Abstract:\n
Neural networks have become an integral component of cyber-physical systems, such as autonomous vehicles, automated delivery robots, and factory robots, and they have great potential in many other systems as well. However, flaws in these models are frequently discovered, so in high-stakes applications, ensuring their safety, robustness, and reliability is crucial. While many prior works have been devoted to this problem domain, they are limited in two ways: they address only a few narrowly defined safety properties, and they support only the most common neural network architectures and activation functions.\n
\n
This dissertation addresses these limitations by (1) studying a new class of safety properties -- differential properties -- for neural networks, and (2) developing accurate algorithms for formally proving (or disproving) them that are applicable to general neural network architectures and activation functions. We focus on neural network equivalence as the canonical example of a differential property; however, other safety properties concerning input sensitivity and stability can be cast as differential properties as well.\n
\n
This dissertation makes four key contributions towards developing accurate and general algorithms for proving differential properties. First, we formalize the equivalence problem for neural networks and develop a novel technique based on interval analysis for proving equivalence of any two structurally similar feed-forward neural networks with ReLU activations. The key insight behind this technique is deriving formulas that relate the intermediate computations of the two neural networks, which allows us to accurately bound the maximum difference between the two networks over all inputs. Second, we develop a novel symbolic technique that further improves the accuracy of the analysis.\n
We demonstrate the effectiveness of these two techniques by proving the equivalence of compressed neural networks with respect to the original networks. Finally, we propose two novel techniques for automatically synthesizing linear approximations of arbitrary nonlinear functions, thus allowing our differential techniques to apply to architectures and activation functions beyond feed-forward ReLU networks. We demonstrate that our synthesized linear approximations significantly improve accuracy compared to the best alternative techniques.\n
\n
\n
DTSTART:20220606T090000
LOCATION:https://usc.zoom.us/j/94495717078?pwd=WXhJOTN5YVVKNFB3K2ExSVZSakdkZz09
URL;VALUE=URI:https://usc.zoom.us/j/94495717078?pwd=WXhJOTN5YVVKNFB3K2ExSVZSakdkZz09
DTEND:20220606T110000
END:VEVENT
END:VCALENDAR