NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Paper ID: 1521
Title: On Learning Over-parameterized Neural Networks: A Functional Approximation Perspective


This paper gives convergence guarantees for training neural networks via gradient descent. The approach consists of viewing GD as an operator and analyzing it through its eigenvalues, with the analysis focused on the over-parameterized setting of such networks. This is an interesting paper with interesting theoretical results, and it is clearly above the acceptance threshold. We are not recommending an oral presentation because, as it stands, the paper's proposed improvements are not significant enough compared with the existing literature. In particular, during the discussion phase the reviewers agreed that the authors should cite the papers recommended by Reviewer #3: “A similar perspective, in a somewhat simpler setting, was studied in Vempala and Wilmes (2017). Convergence guarantees for empirical risk minimization were also obtained in the over-parameterized setting in Arora et al. (2019), among others.”