NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 7897
Title: Computing Linear Restrictions of Neural Networks


This paper proposes an efficient algorithm for computing the restriction of a deep ReLU network to a one-dimensional line in input space; since the network is piecewise affine, this restriction is itself a piecewise-linear function. The authors leverage this algorithm to study adversarial examples and the "integrated gradients" attribution method. Reviewers found the work clearly written and easy to follow. Despite some concerns about the significance of the approach, the reviewer discussion and the author rebuttal revealed clear potential for future use of this technique, which could improve our understanding of large deep neural networks. The AC therefore recommends acceptance of this work.
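For intuition only, the following is a minimal numpy sketch of the underlying idea for a plain fully-connected ReLU network, not the authors' algorithm or code (names such as `line_breakpoints` are hypothetical): on any segment of the line where no neuron changes sign, each layer's pre-activations are affine in the line parameter, so the breakpoints of the restriction can be located one layer at a time by solving for zero crossings on the current segments.

```python
import numpy as np

def _preactivation(weights, biases, layer, x):
    """Forward pass through ReLU layers 0..layer-1, then the affine map of `layer`."""
    h = x
    for k in range(layer):
        h = np.maximum(weights[k] @ h + biases[k], 0.0)
    return weights[layer] @ h + biases[layer]

def line_breakpoints(weights, biases, a, b):
    """Parameters t in [0, 1] at which t -> f((1 - t) * a + t * b) may switch
    affine piece, for a fully-connected ReLU network f (last layer affine, no ReLU)."""
    point = lambda t: (1.0 - t) * a + t * b
    ts = [0.0, 1.0]
    for layer in range(len(weights) - 1):  # only ReLU layers create breakpoints
        segments = sorted(ts)
        new_ts = list(segments)
        for lo, hi in zip(segments[:-1], segments[1:]):
            z_lo = _preactivation(weights, biases, layer, point(lo))
            z_hi = _preactivation(weights, biases, layer, point(hi))
            # On (lo, hi) this layer's pre-activation is affine in t, so each
            # neuron's sign can flip at most once; solve for that crossing.
            flips = (z_lo * z_hi) < 0.0
            t_star = lo + (hi - lo) * z_lo[flips] / (z_lo[flips] - z_hi[flips])
            new_ts.extend(t_star.tolist())
        ts = sorted(set(new_ts))
    return ts

# Illustration on a small random network: between consecutive breakpoints the
# restriction must be affine, so midpoints should match linear interpolation.
rng = np.random.default_rng(0)
sizes = [2, 8, 8, 1]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]
a, b = rng.standard_normal(2), rng.standard_normal(2)

def f(x):
    h = x
    for W, c in zip(weights[:-1], biases[:-1]):
        h = np.maximum(W @ h + c, 0.0)
    return weights[-1] @ h + biases[-1]

ts = line_breakpoints(weights, biases, a, b)
for lo, hi in zip(ts[:-1], ts[1:]):
    mid = 0.5 * (lo + hi)
    lerp = 0.5 * (f((1 - lo) * a + lo * b) + f((1 - hi) * a + hi * b))
    assert np.allclose(f((1 - mid) * a + mid * b), lerp, atol=1e-8)
print(f"{len(ts) - 1} affine segments along the line")
```

This brute-force refinement returns a superset of the true breakpoints and re-runs forward passes per segment, so it only conveys why the restriction is exactly representable; the paper's contribution is an efficient exact procedure.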