NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 3378
Title: Progressive Augmentation of GANs

Reviewer 1

The proposed approach of making the discriminator's task progressively harder to regularize GAN training appears novel and makes sense. I have a relatively large reservation regarding the datasets: all experiments use small images by today's standards, with CELEBA-HQ128 being the largest and the rest at smaller resolutions. Given the success of progressive growing (Karras18) in pushing to high resolutions, I feel we are not seeing the entire picture here. A direct comparison to progressive GAN, and perhaps a combination of the two, would also be interesting. I realize large resolutions mean higher computational demands, but the comparison could also be made at smaller resolutions. Improvements over dropout appear modest, but to the authors' credit they clearly state that they searched for and used the best-performing variant. It would be interesting to know how sensitive the results are to the layer at which dropout is applied. Overall, I think this is a solid contribution, but one that does not appear to bring dramatically new capabilities or understanding.

Reviewer 2

The paper does a nice job of reviewing some of the existing work on improving image quality and stabilizing training. The PA method is novel and surprisingly effective, with performance demonstrated over a wide range of datasets, and it is complementary to existing methods. The theoretical investigation of PA is a welcome addition. I am particularly pleased that the authors developed a means for automatic (metric-based) scheduling of the difficulty progression; this is the kind of detail that is often left to a hand-tweaked schedule, which makes reimplementation and adaptation much more difficult, and the work spent here will definitely benefit future research. The paper is to the point and easy to read. I expect that the method may even have applications outside of those investigated, where the gap in difficulty between the discriminator and the generator is larger.
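To illustrate what such metric-based scheduling might look like in practice, here is a minimal sketch of a KID-plateau trigger. This is not the authors' implementation; the function name, the patience window, and the improvement threshold are all illustrative assumptions.

    # Hypothetical sketch of metric-based difficulty scheduling; not the
    # authors' code. KID values are assumed to be computed elsewhere, one
    # per evaluation step, with lower being better.

    def maybe_increase_level(kid_history, level, patience=3, min_rel_improvement=0.01):
        """Raise the augmentation level when KID stops improving.

        kid_history: list of KID evaluations so far (floats, lower is better).
        level: current augmentation difficulty (e.g. number of random bits).
        """
        if len(kid_history) <= patience:
            return level  # too little history to judge a plateau
        recent_best = min(kid_history[-patience:])
        earlier_best = min(kid_history[:-patience])
        # If the recent best is not noticeably better than the earlier best,
        # assume the discriminator has adapted and make its task harder.
        if recent_best > earlier_best * (1.0 - min_rel_improvement):
            return level + 1
        return level

The appeal of such a rule is that it replaces a hand-tweaked schedule with a single, dataset-agnostic trigger.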

Reviewer 3

This paper introduces a novel regularization method, progressive augmentation, for GANs to keep the discriminator from overshooting and to improve the stability of GAN training. Instead of weakening or regularizing the discriminator, the idea is to augment the data samples or features with random bits so as to increase the difficulty of the discrimination task. In this way, the method prevents the discriminator from becoming overconfident and maintains a healthy competition, which allows the generator to be continuously optimized. The augmentation is progressively levelled up during training by evaluating the kernel inception distance (KID) between synthetic samples and training samples. The proposed method has been demonstrated on different datasets and compared with other regularization techniques. The results show a performance improvement from progressive augmentation (though there is no noticeable increase in visual quality). The paper also shows the flexibility of the proposed method: progressive augmentation can be combined with other regularizers, and the combination performs well. Future work focuses on applications in semi-supervised learning, generative latent modelling, and transfer learning. Overall, the content of this paper is complete and rich, and the technical quality is good. The results are well analysed and evaluated, and the claims of the paper are supported. The clarity is good but could be better. The authors' response has been taken into account.
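To make the random-bit idea concrete, a single-step sketch of the augmentation, as I understand it, might look as follows. This is not the authors' code; the tensor shapes and the way the bits are passed to the discriminator are assumptions.

    import torch

    # Hypothetical sketch of one augmentation step, as I read the method;
    # not the authors' implementation. At level L, L random bits are drawn
    # per sample and XOR-ed into the real/fake target, so the input alone
    # no longer determines the label.

    def augment_batch(x, y, level):
        """x: batch of samples or features, y: 0/1 real-fake labels, level: bit count."""
        bits = torch.randint(0, 2, (x.size(0), level), device=x.device)
        # XOR of the original label with all bits, computed as parity of the sum.
        y_aug = (y.long() + bits.sum(dim=1)) % 2
        # The bits would be fed to the discriminator alongside x, e.g. tiled
        # into constant channels; the exact wiring depends on the architecture.
        return x, bits.float(), y_aug.float()

Under this reading, the discriminator must learn the joint structure of the sample and the bits rather than a shortcut on the sample alone, which is what keeps it from becoming overconfident.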