NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019 at Vancouver Convention Center
Paper ID: 4082
Title: Lower Bounds on Adversarial Robustness from Optimal Transport


The paper proposes a classifier-independent lower bound on error for binary classification in the adversarial setting. More precisely, Theorem 1 connects the Bayes-optimal adversarial robustness error to a notion of separability, namely the transportation distance between the positive and negative class distributions in feature space, under a cost induced by moving points around according to the attack model (i.e., the constraints on the attacker). The idea of using the Kantorovich-Rubinstein transportation distance (also known as the Wasserstein distance) to reason about robustness is very much in the air at present; this paper shows concretely how it can be used. Notably, the authors also show that their lower bound can be computed efficiently by convex optimization.

The contribution is clearly rooted in learning theory, but it also has interesting empirical validation. The paper is very well written and organised, containing conceptual examples based on multivariate Gaussians, for which concrete computations are carried out. In particular, the authors show how their bound compares against the robust accuracy of an adversarially trained model.

I think the results proposed in this paper will have an impact on the part of the ML community interested in robustness and the prevention of adversarial attacks, and I recommend acceptance.
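To make the convex-optimization point concrete for other readers: on empirical samples, a transport distance with a 0-1 cost induced by the attack model reduces to a linear program. The sketch below is my own illustration, not the authors' code; the l_inf budget eps, the 2*eps ball-overlap test, and the uniform empirical weights are toy assumptions, and I do not reproduce the exact constants that relate this quantity to the bound in Theorem 1.

    # Illustrative sketch (reviewer's own, not from the paper): the transport
    # distance between two empirical classes, under a 0-1 cost induced by an
    # l_inf attack of budget eps, is a linear program.
    import numpy as np
    from scipy.optimize import linprog

    def attack_cost_matrix(X_pos, X_neg, eps):
        # Cost 0 when the eps-balls around a positive and a negative point
        # overlap (an attacker can make the two collide), cost 1 otherwise.
        linf = np.abs(X_pos[:, None, :] - X_neg[None, :, :]).max(axis=2)
        return (linf > 2 * eps).astype(float)

    def transport_distance(M):
        n, m = M.shape
        a = np.full(n, 1.0 / n)  # uniform empirical weights (assumption)
        b = np.full(m, 1.0 / m)
        # Equality constraints on the coupling: row sums = a, column sums = b.
        A_eq = np.zeros((n + m, n * m))
        for i in range(n):
            A_eq[i, i * m:(i + 1) * m] = 1.0
        for j in range(m):
            A_eq[n + j, j::m] = 1.0
        res = linprog(M.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                      bounds=(0, None), method="highs")
        return res.fun

    rng = np.random.default_rng(0)
    X_pos = rng.normal(+1.0, 1.0, size=(50, 2))  # toy 2D Gaussian classes
    X_neg = rng.normal(-1.0, 1.0, size=(50, 2))
    for eps in [0.0, 0.5, 1.0]:
        D = transport_distance(attack_cost_matrix(X_pos, X_neg, eps))
        print(f"eps={eps:.1f}  D={D:.3f}")  # D shrinks as the budget grows

Here D = 1 means the two samples remain fully separable under the attack, and smaller D caps the achievable robust accuracy; in practice the same LP can be solved much faster by a dedicated OT solver such as ot.emd2 from the POT library.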