NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 582
Title: General Proximal Incremental Aggregated Gradient Algorithms: Better and Novel Results under General Scheme

Reviewer 1


Originality: The PIAG algorithm has attracted considerable attention in recent years, but few works analyze it via a Lyapunov function. As a result, I think the originality is sufficient for acceptance.

Quality: The main contribution lies in the analysis. The proof is neat, but there are some typos, especially when taking expectations; the authors should spend a few words clarifying which random variable each expectation is taken over. The experiments are weak: there are no baselines for comparison.

Clarity: The clarity is also hurt by the expectation issue, which the authors could fix in the revision. The authors should spend more words on the KL property; there are many standard references on it, e.g. [1][2], but I cannot find any of them cited.

Significance:
i. Convergence rate. The non-asymptotic rate is achieved by assuming 1) boundedness of $\{x_t\}$ and 2) that the objective is semi-algebraic, which is a sufficient condition for the KL property. This is not new in the optimization community.
ii. Lyapunov function. I think the main contribution is that this is the first paper to analyze PIAG via a Lyapunov function.
iii. Experiments. The experiments are not convincing. The authors should include more comparisons in the revision, e.g. [3].
iv. The paper relies heavily on the assumption that $\sum_i \sigma^2_i < \infty$; however, the authors do not provide a method to guarantee this assumption in the stochastic setting. This considerably weakens the contribution of the paper.

Overall, I would upgrade my score if the authors could give a feasible plan for fixing the above issues.

[1] Bolte, Jérôme, Shoham Sabach, and Marc Teboulle. "Proximal alternating linearized minimization for nonconvex and nonsmooth problems." Mathematical Programming 146.1-2 (2014): 459-494.
[2] Attouch, Hédy, et al. "Proximal alternating minimization and projection methods for nonconvex problems: An approach based on the Kurdyka-Łojasiewicz inequality." Mathematics of Operations Research 35.2 (2010): 438-457.
[3] Peng, Wei, Hui Zhang, and Xiaoya Zhang. "Nonconvex Proximal Incremental Aggregated Gradient Method with Linear Convergence." Journal of Optimization Theory and Applications (2018): 1-16.

__________________________________

I have read the authors' feedback. I believe most of the issues will be fixed in the revision. The summable-error assumption is still a challenge; I think it can be made to hold in certain cases (one possible route is sketched in the note below), but more analysis might be required. I think the paper is significant enough for the conference, so I would like to upgrade my score.
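Note on the summable-error assumption: one standard route in the stochastic setting, sketched here under assumptions not checked against the paper, is to let the inexactness come from mini-batch gradient estimates. If the error at iteration $k$ has variance $\sigma_k^2 = O(1/b_k)$ for a mini-batch of size $b_k$, then choosing $b_k \propto k^{1+\delta}$ for some $\delta > 0$ (or geometrically growing batches) gives $\sum_k \sigma_k^2 \le C \sum_k k^{-(1+\delta)} < \infty$, at the cost of increasing per-iteration work. A short remark of this kind would address the feasibility concern.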

Reviewer 2


The submission couples a general scheme with Lyapunov-based methods to build a unified framework for IAG methods. The idea is interesting and appealing. However, the submission suffers from a lack of clarity that undermines these ideas:
- many definitions are missing (e.g., the semi-algebraic property and the distance used at l. 154 should at least be commented on);
- results are commented on, but I feel some assumptions should be discussed (e.g., at l. 192, is this assumption realistic when $\sigma_k = k^{-\eta}$? See the check at the end of this review.);
- plots are hard to read and would benefit from additional comments.

Overall, the submission seems to bring valuable ideas but needs additional work on the discussion of the results and assumptions (the related-work sections are nicely written in that regard).

Updated review: It appears that I did not fully understand the novelty of some points of the paper. I intend to increase my score, but I still think the paper severely lacks clean statements and discussion of the results and their comparison to existing work.
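A quick check of the $\sigma_k = k^{-\eta}$ example, under the assumption that the condition at l. 192 is summability of the squared errors, $\sum_k \sigma_k^2 < \infty$: with $\sigma_k = k^{-\eta}$ one gets $\sum_k \sigma_k^2 = \sum_k k^{-2\eta}$, which converges if and only if $\eta > 1/2$. So the assumption is realistic for $\eta > 1/2$ but fails for slower decay; an explicit remark along these lines would make the assumption easier to assess.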

Reviewer 3


Originality: This paper presents a general PIAG algorithm, an inexact extension of the previously proposed exact PIAG from [1]. The paper analyzes the convergence properties of this algorithm under different convexity assumptions on the component functions, extending [1] and [2], where linear convergence was proven under strong convexity. The paper also adds a line-search dimension and proves that larger stepsizes than before are admissible. The Lyapunov-function-based proof technique has been used earlier in [1] and [2]; however, the perspective of convergence in expectation, almost-sure convergence, and the semi-algebraic property is a new addition.

Quality: The paper contains high-quality results and proofs. Although the primary focus of the paper is on theory, the quality of the numerical section and the figure could be much improved.

Clarity: The paper is overall well written, and the contributions and organization are explained clearly up front. However, there are a few places where concepts or quantities are used without being defined clearly. For example:
* Line 132: can be explained better, or a citation can be added if the result is taken from elsewhere.
* Line 150: the assumption on the sigma_i's can be explained better with some discussion/note (since inexactness is the main difference between the algorithm in this paper and [1]).
* Line 191, Lemma 2: uses a quantity 't' without defining it.
* Line 210: uses "coercivity of f" without defining it.
* Line 439, Theorem 5: uses an incorrect solution of the quadratic equation; the inequality should be LHS <= 0.5*( \sqrt(H^2+4H) - H ) (see the note after this review).

Significance: This paper extends the existing work on the convergence of PIAG and related methods to non-strongly convex and nonconvex settings. The extension with line search makes the algorithm more applicable to real problems, where convergence times (as opposed to theoretical rates) matter more.

[1] https://arxiv.org/pdf/1608.01713.pdf
[2] https://www.di.ens.fr/~fbach/Defazio_NIPS2014.pdf
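A note on the Line 439 / Theorem 5 point above: assuming the stepsize condition reduces to a quadratic inequality of the form $x^2 + Hx - H \le 0$ (an assumption inferred from the stated bound, not checked against the paper), the roots of $x^2 + Hx - H = 0$ are $x = 0.5(-H \pm \sqrt{H^2 + 4H})$, so the admissible nonnegative range is $0 \le x \le 0.5(\sqrt{H^2 + 4H} - H)$, which matches the corrected inequality LHS <= 0.5*( \sqrt(H^2+4H) - H ).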