Differential Verification of Deep Neural Networks
Tue, Nov 30, 2021 @ 08:30 AM - 09:30 AM
Thomas Lord Department of Computer Science
Student Activity
PhD Candidate: Brandon Paulsen
Title: Differential Verification of Deep Neural Networks
Date & Time: Tuesday, November 30, 2021 at 8:30 AM
Committee: Chao Wang (Advisor), Jyotirmoy Deshmukh, Nenad Medvidovic, William Halfond, Murali Annavaram
Zoom link: https://usc.zoom.us/j/97339789019?pwd=blJoYTg3WXJDZzBUcFVRQzZMNUNpQT09
Abstract:
Deep neural networks (DNNs) have recently found success in a wide variety of application domains such as image recognition, natural language processing, and autonomous vehicle control. However, they are often criticized for their large energy footprint, which limits their use on computationally- and energy-constrained devices. This limitation has been addressed by DNN compression, a technique that reduces the computational and energy requirements of a network by, for example, lowering its floating-point precision. Compression naturally raises the question: is the compressed network equivalent to the original? Answering this question is crucial for safety-critical systems, and desirable in general. Unfortunately, current DNN verification tools are designed to analyze only a single network, rendering them ineffective for this problem.
For my thesis, I address this limitation by formalizing the problem of differential verification of DNNs and then developing a novel approach for reasoning about any pair of structurally similar feed-forward DNNs with ReLU activations. The key insight of my approach is to reason about the two networks simultaneously, which greatly improves the precision of the analysis. While the approach applies to any pair of structurally similar DNNs, I demonstrate its effectiveness in proving that compressed DNNs are equivalent (within a small error bound) to their original networks, and I further show that my new approach outperforms existing DNN verification tools by orders of magnitude in terms of scalability. I then show that this first approach can be greatly improved by a novel fine-grained, symbolic technique that captures the relationships between neurons. Finally, I discuss the challenges of extending differential verification to activation functions beyond ReLU and to other DNN architectures, and propose a solution.
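To illustrate the idea of reasoning about the two networks simultaneously, the following is a minimal sketch (not the candidate's actual tools; the toy architecture, random weights, input box, and rounding-based "compression" are assumptions chosen for illustration). It contrasts a naive bound, obtained by running interval analysis on each network separately and subtracting the output boxes, with a differential bound that propagates intervals on the difference f'(x) - f(x) through the layers, using the fact that ReLU is monotone and 1-Lipschitz.

```python
# Minimal sketch of differential interval propagation for two structurally
# similar ReLU networks (original vs. a "compressed" copy with rounded weights).
import numpy as np

def interval_matvec(W, lo, hi):
    """Sound interval bounds for W @ x with x in [lo, hi] (elementwise)."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi, Wp @ hi + Wn @ lo

def forward_intervals(weights, biases, lo, hi):
    """Standard interval propagation through one ReLU network; returns an output box."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        lo, hi = interval_matvec(W, lo, hi)
        lo, hi = lo + b, hi + b
        if i < len(weights) - 1:                      # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

def diff_intervals(weights, biases, weights2, biases2, lo, hi):
    """Differential propagation: bound f'(x) - f(x) layer by layer."""
    dlo = dhi = np.zeros_like(lo)                     # shared input, so the difference starts at 0
    for i, (W, b, W2, b2) in enumerate(zip(weights, biases, weights2, biases2)):
        # z' - z = (W' - W) x + W' (x' - x) + (b' - b)
        l1, h1 = interval_matvec(W2 - W, lo, hi)
        l2, h2 = interval_matvec(W2, dlo, dhi)
        dlo, dhi = l1 + l2 + (b2 - b), h1 + h2 + (b2 - b)
        # keep concrete bounds of the original network for the next layer
        lo, hi = interval_matvec(W, lo, hi)
        lo, hi = lo + b, hi + b
        if i < len(weights) - 1:
            # coarse but sound ReLU rule: |ReLU(a') - ReLU(a)| <= |a' - a|
            dlo, dhi = np.minimum(dlo, 0.0), np.maximum(dhi, 0.0)
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return dlo, dhi

rng = np.random.default_rng(0)
sizes = [2, 20, 20, 1]                                # toy architecture (assumption)
Ws = [rng.normal(size=(m, n)) for n, m in zip(sizes, sizes[1:])]
bs = [rng.normal(size=m) for m in sizes[1:]]
Ws2 = [np.round(W, 1) for W in Ws]                    # "compressed": weights rounded to 1 decimal
bs2 = [np.round(b, 1) for b in bs]
lo, hi = -np.ones(2), np.ones(2)                      # input box [-1, 1]^2

# Naive: bound each network separately, then subtract the output boxes.
out_lo, out_hi = forward_intervals(Ws, bs, lo, hi)
out_lo2, out_hi2 = forward_intervals(Ws2, bs2, lo, hi)
print("naive difference bound:       ", (out_lo2 - out_hi, out_hi2 - out_lo))
# Differential: bound the difference directly.
print("differential difference bound:", diff_intervals(Ws, bs, Ws2, bs2, lo, hi))
```

On rounded-weight examples like this one, the differential bound is typically far tighter than the naive one, which is the sense in which analyzing the two networks together improves precision; the thesis work develops much more precise symbolic versions of this idea.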
Location: Zoom
WebCast Link: https://usc.zoom.us/j/97339789019?pwd=blJoYTg3WXJDZzBUcFVRQzZMNUNpQT09
Audiences: Everyone Is Invited
Contact: USC Computer Science