NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 4929
Title: A Meta-Analysis of Overfitting in Machine Learning


In this paper, the authors perform a large-scale empirical study of overfitting due to test set reuse in the machine learning community. The study covers models from 112 Kaggle competitions and concludes that there is little evidence of substantial overfitting. The reviewers found the topic important, the paper well written and easy to follow, and the experiments well executed.

The AC and SAC had concerns that this submission might violate the slicing-too-thin policy for NeurIPS, as described in the Call for Papers (https://neurips.cc/Conferences/2019/CallForPapers), based on its similarity to another submission (5286) from an overlapping set of authors. Since these concerns arose late in the review process (the reviewers were never given a chance to comment on the two submissions' similarity), the PCs ultimately reviewed the situation. They determined that the submission was sufficiently different from 5286 and could be accepted. [This meta-review was reviewed and revised by the Program Chairs]