NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 2946
Title: Time-series Generative Adversarial Networks

Reviewer 1

Originality: The work appears original to me, but it is not the first to apply GANs to temporal data. There is previous work on temporal GANs used to generate video sequences (Masaki Saito et al., Temporal Generative Adversarial Nets with Singular Value Clipping). That work also calls its approach TGAN but appears to be different; it may be better to use a different name for the method presented in this work, and it would be good to point out the differences between the two approaches.

Quality: The submission appears technically sound to me, though I did not check every detail. The evaluation compares against a number of different approaches, all of which perform worse than the newly introduced method. From the description it sounds like the approach is mostly used/useful for datasets with only a small number of variates, and it has not been applied to, e.g., video data. It may be interesting to evaluate the weaknesses of the approach in this regard.

Clarity: The paper is well written, but clarity could be improved in several places:
- I found the notation, and in particular the explicit split of static and temporal features into two variables, confusing, at least initially. In my view this requires more information than the paper provides (what S and X_t are); see the data-layout sketch at the end of this review.
- Even with the pseudocode given in the supplementary material, I do not get the feeling the paper is written to be reproduced. It is written to provide an intuitive understanding of the work, but actually reproducing it would require more details than are provided in either the paper or the supplementary material. This includes, for example, details about the RNN implementation (such as the number of units) and many other technical choices.
- The paper is presented well; e.g., the quality of the graphs is good (though the labels on the graphs in Fig. 3 could be slightly bigger).

Significance:
- From the paper alone: the results would be more interesting (and significant) if there were a way to reproduce the work more easily. At present I cannot see this work being easily taken up by many other researchers, mainly due to the lack of detail in the description. The work is interesting and I like the idea, but with a relatively high-level description in the paper it would take a little more than the pseudocode in the supplementary material to convince me to use it (but see the next point).
- The supplementary material states that the source code will be made available; in combination with the paper and the information in the supplementary material, that level of detail may be just right (though it is hard to say without seeing the code). Given the promising results, I can imagine this approach being useful at least for further research in a similar direction.
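For readers who share the confusion about S and X_t, here is a minimal sketch of the data layout being referred to, assuming the paper's convention that each training instance pairs static attributes S (fixed over time) with a temporal sequence X_{1:T}. All names below (make_instance, n_static, n_temporal, seq_len) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of one (s, x_{1:T}) training instance, assuming the paper's
# static/temporal split. Shapes and feature counts are illustrative only.
import numpy as np

def make_instance(n_static=3, n_temporal=5, seq_len=24, rng=None):
    """Return one (s, x) pair: s has shape (n_static,), x has shape (seq_len, n_temporal)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    s = rng.normal(size=n_static)               # static features, e.g. attributes fixed per sequence
    x = rng.normal(size=(seq_len, n_temporal))  # temporal features, one row per time step
    return s, x

s, x = make_instance()
print(s.shape, x.shape)  # (3,) (24, 5)
```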

Reviewer 2

1. This may be a good idea, but the evaluation is too simple to meet the acceptance threshold. I think this may be a good technique, but without a more extensive evaluation it is not convincing. All of the experiments are performed on UCI/synthetic datasets rather than on larger benchmarks used in previous work, e.g., RCGAN.
2. The main text does not explain what S and X are, and I had to guess. There are many unclear points in the paper that I needed to consult related work to figure out.
3. The evaluation metrics, which rely on two separately trained models, are not convincing. This is a critical issue; please find a way to address it.
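To make the "two trained models" point concrete: the metrics in question appear to be post-hoc scores that each require training an auxiliary model, for example a classifier trained to distinguish real from synthetic sequences. Below is a hedged sketch of such a discriminative score; the logistic-regression classifier, the feature flattening, and the function name discriminative_score are illustrative assumptions rather than the paper's exact protocol.

```python
# Sketch of a post-hoc discriminative score: train a classifier to separate
# real from synthetic sequences and report held-out accuracy. Accuracy near
# 0.5 means the synthetic data is hard to distinguish from the real data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def discriminative_score(real, synthetic, seed=0):
    """real, synthetic: arrays of shape (n_samples, seq_len, n_features)."""
    X = np.concatenate([real, synthetic]).reshape(len(real) + len(synthetic), -1)
    y = np.concatenate([np.ones(len(real)), np.zeros(len(synthetic))])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

rng = np.random.default_rng(0)
real = rng.normal(size=(200, 24, 5))
fake = rng.normal(size=(200, 24, 5))
print(discriminative_score(real, fake))  # ~0.5 here, since both sets are drawn from the same distribution
```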

Reviewer 3

In this paper, the authors present a new generative model for time-series data. The approach is based on GANs, with three key parts: 1) a supervised loss, 2) a reconstruction loss, and 3) joint training of the embedding and adversarial networks. To my knowledge, the TGAN approach is novel. It has the potential to be widely applicable to many time-series problems.

The paper is extremely well written and a pleasure to read. I commend the authors for explaining the technical details in a very clear manner. Figures 1 and 2 are particularly helpful in illustrating the key concepts of the paper.

The evaluation is very well done. A standard evaluation section for a GAN paper often only shows examples of data generated by the model (e.g. images). That typical approach is very qualitative, and it is refreshing to see the authors include quantitative results from different types of experiments. There could be minor quibbles with each type of experimental setup, but as a whole the empirical evidence is compelling.

My main concern with the approach is that training GANs can be challenging. Does the training process for TGAN involve the usual difficulties, such as mode collapse or having to give the discriminator more optimization steps than the generator? In addition, how sensitive is the performance of TGAN to the parameters lambda and nu (see the loss-composition sketch at the end of this review)? Adding more parameters to an already notoriously difficult training optimization makes me nervous. The paper could be strengthened with a brief discussion of these issues.

Comments after author feedback
------------------------------------------
The authors have done a good job of addressing my concerns. My review remains the same, and I still feel the paper should be accepted.
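For context on where these weights enter, here is a minimal sketch of how the three components listed at the top of this review (reconstruction, supervised, and adversarial losses) might be combined, assuming the weighted objectives described in the paper: the embedding/recovery networks minimize the reconstruction loss plus a weight lambda times the supervised loss, while the generator minimizes the adversarial loss plus a second weight (the one referred to as nu above, eta in the paper's notation) times the supervised loss. The dummy loss functions and random tensors below are placeholders, not the paper's architecture.

```python
# Sketch of the weighted loss composition, assuming the objectives described
# in the paper. The loss definitions are simplified stand-ins.
import numpy as np

def reconstruction_loss(x, x_tilde):
    # ||x - recovered(x)||^2 between original and reconstructed sequences
    return float(np.mean((x - x_tilde) ** 2))

def supervised_loss(h, h_hat):
    # one-step-ahead prediction error in latent space: predict h_{t+1} from h_t
    return float(np.mean((h[:, 1:] - h_hat[:, :-1]) ** 2))

def adversarial_loss(d_fake):
    # non-saturating generator loss on the discriminator's outputs for fake data
    return float(-np.mean(np.log(d_fake + 1e-8)))

lam, nu = 1.0, 1.0  # the weighting hyperparameters the review asks about; values here are arbitrary
x = np.random.randn(8, 24, 5); x_tilde = x + 0.1 * np.random.randn(*x.shape)
h = np.random.randn(8, 24, 16); h_hat = h + 0.1 * np.random.randn(*h.shape)
d_fake = np.random.rand(8)

embedder_objective = reconstruction_loss(x, x_tilde) + lam * supervised_loss(h, h_hat)
generator_objective = adversarial_loss(d_fake) + nu * supervised_loss(h, h_hat)
print(embedder_objective, generator_objective)
```

The sensitivity question above amounts to asking how these two scalars trade off the supervised latent-dynamics term against reconstruction on one side and the adversarial signal on the other.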