NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 6609
Title: Are Labels Required for Improving Adversarial Robustness?

Reviewer 1

This paper proposes, based on a theoretical analysis of a simple statistical setting, two strategies for improving adversarial generalization via unlabeled data. The method presented, UAT++, combines these two strategies and improves adversarial generalization significantly using only additional unlabeled data. Overall, the paper is well written, easy to follow, and has solid yet simple theoretical grounding. The experimental analysis is thorough, and all confounding factors (e.g., network architecture) are correctly ablated. The paper compares against a variety of state-of-the-art methods to demonstrate that unsupervised data can indeed improve adversarial robustness.

The paper could be improved by addressing what happens to UAT-OT in the Gaussian setting of Schmidt et al. (either via a theorem, an intuitive explanation, or even an explanation of why the algorithm is harder to analyze). It would also be interesting to see whether the same algorithm can also improve L2 robustness (one would guess so, but if the phenomenon is limited to L-infinity, this would be independently interesting).

Minor comments:
- [1] and [2] are the same citation.
- The graphs are somewhat hard to read (e.g., the red is hard to tell apart from the orange after printing, and the violet lines are rather faint). A different color scheme/formatting would improve readability.
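For concreteness, here is a minimal sketch of the combined unsupervised objective the review describes, assuming a PyTorch-style classifier, a generic 'attack' helper, and pseudo-labels from a standard classifier; all names are illustrative assumptions, not the authors' implementation:

    import torch
    import torch.nn.functional as F

    def uat_pp_loss(model, x_unlab, pseudo_labels, attack, lam=1.0):
        # Illustrative UAT++ objective on an unlabeled batch. 'attack'
        # is assumed to return an adversarially perturbed copy of its
        # input within the threat model; 'pseudo_labels' come from a
        # standard classifier trained on the labeled set.
        x_adv = attack(model, x_unlab, pseudo_labels)
        logits_adv = model(x_adv)

        # Fixed-target term (UAT-FT): cross-entropy against the
        # pseudo-labels at the adversarial point.
        loss_ft = F.cross_entropy(logits_adv, pseudo_labels)

        # Online-target term (UAT-OT): divergence between the clean
        # prediction (detached, acting as a target) and the
        # adversarial prediction.
        with torch.no_grad():
            p_clean = F.softmax(model(x_unlab), dim=1)
        loss_ot = F.kl_div(F.log_softmax(logits_adv, dim=1), p_clean,
                           reduction="batchmean")

        return loss_ft + lam * loss_ot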

Reviewer 2

Originality: The regularization term has been developed before in semi-supervised learning on unlabeled data [26] and in adversarial training as a smoothness term [41]. This paper combines the two and conducts adversarial training by applying the regularization term to unlabeled data. The theoretical and empirical analyses are new.

Quality: The paper is technically sound overall. However, some crucial points are not well addressed. (1) The proper amount of unlabeled data m: the proposed method (UAT++) behaves differently on CIFAR-10 and SVHN with respect to m (Fig. 1). On CIFAR-10 it outperforms the others starting from m >= 4k, whereas on SVHN it performs worse than even the baseline for small m and performs similarly to VAT. Although Theorem 1 addresses the theoretical role of m, no connection is drawn between it and these empirical observations. (2) In Table 2, the performance can drop as m increases. The explanation that the unsupervised data "contains more out-of-distribution images" renders the argument at the beginning of Sec. 4.2 less effective ("robust to distribution shift", "fully leverage data which is not only unlabeled but also uncurated").

Clarity: The paper is clearly written and well organized.

Significance: The idea of improving model robustness using unlabeled data is interesting and is likely to inspire more efforts in this direction. The idea has been empirically verified to some extent; however, the crucial aspects mentioned above may require more effort to be properly addressed.

================ Updates ================
The rebuttal from the authors has addressed my major concerns about the mismatch between the theory and the empirical results, as well as about the claimed robustness to distribution shift vs. the actual results. I believe the investigation of using unlabeled data to improve adversarial robustness is of great importance, and this work makes a useful step towards it.
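For concreteness, a minimal sketch of the kind of smoothness regularizer discussed above (a VAT-style consistency term usable on unlabeled data, with the perturbation found by maximizing the KL divergence from the clean prediction); the hyperparameter names and values are illustrative assumptions, not the paper's, and inputs are assumed to lie in [0, 1]:

    import torch
    import torch.nn.functional as F

    def smoothness_loss(model, x, eps=8/255, step=2/255, n_steps=10):
        # Fixed clean-prediction target (no label needed).
        with torch.no_grad():
            p_clean = F.softmax(model(x), dim=1)

        # Inner maximization: PGD on the KL objective within an
        # L-infinity ball of radius eps.
        x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
        for _ in range(n_steps):
            x_adv = x_adv.detach().requires_grad_(True)
            kl = F.kl_div(F.log_softmax(model(x_adv), dim=1), p_clean,
                          reduction="batchmean")
            grad, = torch.autograd.grad(kl, x_adv)
            x_adv = x_adv + step * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps),
                              x + eps).clamp(0, 1)

        # Outer minimization: penalize the divergence at the
        # worst-case point found.
        return F.kl_div(F.log_softmax(model(x_adv.detach()), dim=1),
                        p_clean, reduction="batchmean")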

Reviewer 3

Quality and clarity: The main idea is good and overall the paper is interesting. In my opinion, some parts are not clear enough (for example, subsection 3.1, when elaborating on the strategies); adding pseudocode might help here (see the sketch below). Originality: This is the first time that using unlabeled data is shown to be effective in the robust learning regime, both theoretically and empirically. Significance: Robust models require more samples in order to generalize. Showing that unlabeled data alleviates this problem is crucial, because unlabeled data is much easier (and cheaper) to collect.
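Along the lines of the pseudocode the reviewer requests, here is a rough sketch of one training step combining the two strategies (fixed pseudo-label targets and online consistency targets) together with standard adversarial training on the labeled batch; 'attack' and 'pseudo_label_fn' are assumed helpers (a PGD attack and a pretrained standard classifier), and this is an illustration, not the paper's code:

    import torch
    import torch.nn.functional as F

    def train_step(model, opt, x_lab, y_lab, x_unlab,
                   pseudo_label_fn, attack, lam=1.0):
        opt.zero_grad()

        # Supervised adversarial training on the labeled batch.
        x_lab_adv = attack(model, x_lab, y_lab)
        loss = F.cross_entropy(model(x_lab_adv), y_lab)

        # Strategy 1 (fixed targets): adversarial cross-entropy
        # against pseudo-labels on the unlabeled batch.
        y_pseudo = pseudo_label_fn(x_unlab)
        logits_adv = model(attack(model, x_unlab, y_pseudo))
        loss = loss + F.cross_entropy(logits_adv, y_pseudo)

        # Strategy 2 (online targets): consistency between clean and
        # adversarial predictions on the same unlabeled batch.
        with torch.no_grad():
            p_clean = F.softmax(model(x_unlab), dim=1)
        loss = loss + lam * F.kl_div(F.log_softmax(logits_adv, dim=1),
                                     p_clean, reduction="batchmean")

        loss.backward()
        opt.step()
        return loss.item()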