Seminar: Graduate Seminar
Seize the Moment: An Information Gain Guided Split for Scaling Neural Network Verification
Verifying the local robustness of neural networks is crucial for understanding their safety level.
However, a complete analysis has complexity exponential in the number of unstable neurons, which introduce nonlinearity. To scale, many complete verifiers split the verification task into smaller subtasks and select a split by relying on heuristics or learning. We study the problem of finding optimal splits and phrase it as information gain maximization, whose goal is to reduce the number of unstable neurons. The challenge is that the information gain is defined over probabilities that are intractable to compute.
We present SIGNAL, which efficiently computes a split by relying on a differentiable estimate of this information gain. Our key idea is to extend moment propagation to the setting of local robustness verification. We prove that our differentiable estimate converges to the true value as the neurons' input dimensionality grows. We integrate SIGNAL into the alpha-beta-CROWN verifier. For fully-connected networks, whose neurons have high input dimensionality, SIGNAL improves the scalability of prior approaches by at least 1.6x on average. For the task of computing the largest robust $\epsilon$-ball within 2 hours, SIGNAL computes a 1.25x larger radius for fully-connected and convolutional networks.
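To give a flavor of the kind of computation moment propagation enables, the following is a minimal, illustrative sketch (not SIGNAL's actual procedure): assuming a neuron's pre-activation is approximately Gaussian, one can propagate its first two moments through a ReLU in closed form, and use the resulting activation probability to score how "unstable" the neuron is via its binary entropy. The function names and the Gaussian assumption are ours, for illustration only.

```python
import math

def relu_moments(mu, var):
    """Mean and variance of ReLU(z) for z ~ N(mu, var), via standard
    Gaussian moment matching (illustrative; not SIGNAL's exact rule)."""
    sigma = math.sqrt(var)
    if sigma == 0.0:
        return max(mu, 0.0), 0.0
    a = mu / sigma
    pdf = math.exp(-0.5 * a * a) / math.sqrt(2.0 * math.pi)  # standard normal pdf at a
    cdf = 0.5 * (1.0 + math.erf(a / math.sqrt(2.0)))         # standard normal cdf at a
    mean = mu * cdf + sigma * pdf                            # E[ReLU(z)]
    second = (mu * mu + var) * cdf + mu * sigma * pdf        # E[ReLU(z)^2]
    return mean, max(second - mean * mean, 0.0)

def instability_entropy(mu, var):
    """Binary entropy of the neuron's activation probability: near 0 when the
    neuron is almost surely active or inactive (stable), maximal when the
    sign of its pre-activation is uncertain (unstable)."""
    p = 0.5 * (1.0 + math.erf((mu / math.sqrt(var)) / math.sqrt(2.0)))
    eps = 1e-12  # guard against log(0)
    return -(p * math.log(p + eps) + (1.0 - p) * math.log(1.0 - p + eps))
```

A neuron with mean pre-activation far from zero relative to its standard deviation has entropy near zero and contributes no branching, while one centered near zero is maximally uncertain; a split criterion in this spirit would prefer splits that most reduce the total entropy of downstream neurons.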
M.Sc. student under the supervision of Dr. Dana Drachsler-Cohen.

