BEGIN:VCALENDAR
BEGIN:VEVENT
SUMMARY:Differential Verification of Deep Neural Networks
DESCRIPTION:PhD Candidate: Brandon Paulsen\n\nTitle: Differential Verification of Deep Neural Networks\n\nDate & Time: Tuesday, November 30th at 8:30 AM\n\nCommittee: Chao Wang (Advisor), Jyotirmoy Deshmukh, Nenad Medvidovic, William Halfond, Murali Annavaram\n\nZoom link: https://usc.zoom.us/j/97339789019?pwd=blJoYTg3WXJDZzBUcFVRQzZMNUNpQT09\n\nAbstract:\nRecently, deep neural networks (DNNs) have found success in a wide variety of application domains such as image recognition, natural language processing, and autonomous vehicle control. However, they are often criticized for their large energy footprint, which limits their use on computationally- and energy-constrained devices. This limitation has been addressed by DNN compression -- a technique that reduces the computational and energy requirements by, e.g., reducing the floating-point precision of the neural network -- but this naturally raises the question: is the compressed network equivalent to the original? Answering this question is crucial for safety-critical systems, and desirable in general. Unfortunately, current DNN verification tools are designed to analyze only a single network, rendering them ineffective for this problem.\n\nFor my thesis, I address this limitation by formalizing the problem of differential verification of DNNs, and then developing a novel approach for reasoning about any pair of structurally similar feed-forward DNNs with ReLU activations. The key insight in my approach is to reason about the two networks simultaneously, thus greatly improving the precision of the analysis. While the approach is applicable to any pair of structurally similar DNNs, I demonstrate its effectiveness in proving equivalence (within a small error bound) of compressed DNNs with respect to the original DNN, and I further show that my new approach outperforms existing DNN verification tools by orders of magnitude in terms of scalability. I then show that this first approach can be greatly improved by leveraging a novel fine-grained, symbolic technique that captures the relationships between neurons. Finally, I discuss the challenges of extending differential verification to activation functions beyond ReLU and to other DNN architectures, and propose a solution.
DTSTART:20211130T083000
LOCATION:Zoom
URL;VALUE=URI:https://usc.zoom.us/j/97339789019?pwd=blJoYTg3WXJDZzBUcFVRQzZMNUNpQT09
DTEND:20211130T093000
END:VEVENT
END:VCALENDAR