Faster, more efficient neural network verification

March 11, 2019

WHAT THE RESEARCH IS:

A novel technique that certifies the safety of an application (or identifies potential issues) faster than current state-of-the-art procedures. While deep learning has shown great promise in applications such as computer vision, reinforcement learning, and speech recognition, safety-critical applications like self-driving cars will not benefit from these advances until models can be effectively validated. This new method more efficiently verifies whether a given set of inputs to a deep neural network can ever produce an undesired output, ensuring that no inappropriate decision is taken for any input in that set.

HOW IT WORKS:

First, we construct a convex overapproximation of the set of possible outputs by leveraging the structure of the piecewise linear (in this case, ReLU) activations. This overapproximation can be obtained with linear programming (LP) solvers and allows us to rapidly check whether outputs could potentially be undesirable. To avoid being overly conservative, we refine the overapproximation by partitioning the input set repeatedly into smaller subsets with correspondingly smaller convex overapproximations.
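To make this step concrete, the sketch below builds such an LP for a tiny ReLU network over a box-shaped input set, using the standard "triangle" relaxation of each unstable ReLU (the paper's exact LP formulation may differ). The toy weights, the input box, and the use of scipy's linprog are illustrative assumptions, not details from the paper:

```python
# A minimal sketch of the LP-based convex overapproximation: bound the output
# of y = W2 @ relu(W1 @ x + b1) + b2 over a box of inputs. All weights and
# the choice of scipy's linprog are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

W1 = np.array([[1.0, -1.0], [0.5, 1.0]])
b1 = np.array([0.1, -0.2])
W2 = np.array([1.0, -2.0])
b2 = 0.3
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])  # input box

# Pre-activation bounds for z = W1 @ x + b1 via interval arithmetic.
Wp, Wn = np.maximum(W1, 0), np.minimum(W1, 0)
z_lo = Wp @ lo + Wn @ hi + b1
z_hi = Wp @ hi + Wn @ lo + b1  # here both neurons are unstable (z_lo < 0 < z_hi)

n_in, n_hid = 2, 2             # LP variables: [x (2), z (2), h (2)]
n_var = n_in + 2 * n_hid

# Equality constraints: z - W1 @ x = b1.
A_eq = np.hstack([-W1, np.eye(n_hid), np.zeros((n_hid, n_hid))])

# Triangle relaxation of each unstable ReLU h = relu(z):
#   h >= z              ->  z - h <= 0
#   h <= s * (z - z_lo) ->  h - s * z <= -s * z_lo,  s = z_hi / (z_hi - z_lo)
# (h >= 0 is enforced via the variable bounds below.)
A_ub, b_ub = [], []
for i in range(n_hid):
    s = z_hi[i] / (z_hi[i] - z_lo[i])
    r1 = np.zeros(n_var); r1[n_in + i], r1[n_in + n_hid + i] = 1.0, -1.0
    r2 = np.zeros(n_var); r2[n_in + i], r2[n_in + n_hid + i] = -s, 1.0
    A_ub += [r1, r2]
    b_ub += [0.0, -s * z_lo[i]]

bounds = ([(lo[i], hi[i]) for i in range(n_in)]
          + [(z_lo[i], z_hi[i]) for i in range(n_hid)]
          + [(0.0, None)] * n_hid)

# Upper-bound the output: maximize W2 @ h + b2 (linprog minimizes, so negate).
c = np.zeros(n_var)
c[n_in + n_hid:] = -W2
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=b1, bounds=bounds, method="highs")
print("certified upper bound on y over the input box:", -res.fun + b2)
```

If the certified bound already clears the threshold defining the undesired outputs, the entire input box is verified with a single LP solve; otherwise the overapproximation is too conservative and the input set must be refined, as described next.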

For this type of divide-and-conquer procedure, choosing how to partition the input set can have a significant impact on how long verification takes. Existing techniques also certify a set of inputs by recursively partitioning it into smaller subsets, but they choose the splits without estimating their effect on the resulting bounds. Our proposed technique instead uses the optimal primal and dual variables of the LP (generated during the convex overapproximation step) to compute a measure of sensitivity, the so-called shadow prices, which lets us estimate how a candidate partition will affect subsequent overapproximations, thereby reducing the number of splits. The result is an algorithm that verifies deep neural networks quickly and resource-efficiently, substantially reducing computation time.
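The sketch below extends the toy example into a minimal branch-and-bound loop in the spirit of this heuristic: the dual values (marginals) that scipy's HiGHS backend reports for the input-box constraints serve as shadow prices, and the input dimension with the largest estimated effect on the bound is split first. The network, the safety threshold, and the splitting score are illustrative assumptions, not the paper's exact algorithm:

```python
# A minimal sketch of shadow-price-guided branch-and-bound verification.
# The toy network and threshold are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

W1 = np.array([[1.0, -1.0], [0.5, 1.0]]); b1 = np.array([0.1, -0.2])
W2 = np.array([1.0, 1.0]); b2 = 0.3     # output chosen so the root LP is loose
N_IN, N_HID = 2, 2

def lp_upper_bound(lo, hi):
    """Triangle-relaxation LP bound on max y, plus shadow prices of the input box."""
    Wp, Wn = np.maximum(W1, 0), np.minimum(W1, 0)
    z_lo = Wp @ lo + Wn @ hi + b1
    z_hi = Wp @ hi + Wn @ lo + b1
    n = N_IN + 2 * N_HID                 # LP variables: [x, z, h]
    A_eq = np.hstack([-W1, np.eye(N_HID), np.zeros((N_HID, N_HID))])
    A_ub, b_ub, h_bounds = [], [], []
    for i in range(N_HID):
        r = np.zeros(n); r[N_IN + i], r[N_IN + N_HID + i] = 1.0, -1.0
        if z_hi[i] <= 0:                 # always inactive: h = 0
            h_bounds.append((0.0, 0.0))
        elif z_lo[i] >= 0:               # always active: h = z
            A_ub += [r, -r]; b_ub += [0.0, 0.0]
            h_bounds.append((0.0, None))
        else:                            # unstable: triangle relaxation
            s = z_hi[i] / (z_hi[i] - z_lo[i])
            r2 = np.zeros(n); r2[N_IN + i], r2[N_IN + N_HID + i] = -s, 1.0
            A_ub += [r, r2]; b_ub += [0.0, -s * z_lo[i]]
            h_bounds.append((0.0, None))
    bounds = ([(lo[i], hi[i]) for i in range(N_IN)]
              + [(z_lo[i], z_hi[i]) for i in range(N_HID)] + h_bounds)
    c = np.zeros(n); c[N_IN + N_HID:] = -W2   # maximize W2 @ h
    res = linprog(c, A_ub=np.array(A_ub) if A_ub else None,
                  b_ub=np.array(b_ub) if b_ub else None,
                  A_eq=A_eq, b_eq=b1, bounds=bounds, method="highs")
    # Duals (marginals) of the x variables' bounds act as shadow prices.
    return -res.fun + b2, res.lower.marginals[:N_IN], res.upper.marginals[:N_IN]

def verify(lo, hi, threshold, depth=0, max_depth=12):
    """Certify that y <= threshold on the box, with splits guided by shadow prices."""
    ub, d_lo, d_hi = lp_upper_bound(lo, hi)
    if ub <= threshold:
        return True                      # this subset is certified safe
    if depth == max_depth:
        return False                     # give up (the bound may be spurious)
    # Score each input dimension by shadow price times interval width, i.e.
    # the estimated effect of tightening it, and bisect the most promising one.
    k = int(np.argmax((np.abs(d_lo) + np.abs(d_hi)) * (hi - lo)))
    mid = 0.5 * (lo[k] + hi[k])
    left_hi, right_lo = hi.copy(), lo.copy()
    left_hi[k], right_lo[k] = mid, mid
    return (verify(lo, left_hi, threshold, depth + 1, max_depth)
            and verify(right_lo, hi, threshold, depth + 1, max_depth))

lo0, hi0 = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
print("certified y <= 2.5 on the box:", verify(lo0, hi0, threshold=2.5))
```

A subset is certified as soon as its LP bound clears the threshold; otherwise it is bisected and both halves are re-checked, so a splitting rule that needs fewer partitions translates directly into fewer LP solves.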

Breaking the input set into smaller subsets with the more efficient algorithm. In this instance, no input from the initial set can ever yield an output belonging to the danger set (denoted by an exclamation mark).

WHY IT MATTERS:

These results open the door to improved verification methods for a wide variety of machine learning applications, including vision and control. Efficiently and reliably verifying the behavior learned by deep neural networks is an important step toward integrating these technologies into safety-critical domains. Future work will focus on extending verification to more general classes of neural networks and to input sets with arbitrary topology.

READ THE FULL PAPER:

Fast Neural Network Verification via Shadow Prices