NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019 at Vancouver Convention Center
Paper ID: 610
Title: Learning to Propagate for Graph Meta-Learning

Reviewer 1


Originality: While the components of the algorithm (i.e., message passing, attention, graph embeddings, etc.) are not new, this particular combination is novel and required nontrivial effort to deploy effectively.

Quality: The work is technically sound, and the evaluation criteria are robust and extensive (though I am curious how this compares with MAML-style meta-learning techniques).

Clarity: The paper is fairly clear, but could be improved considerably by releasing implementation code. (There are also a handful of minor spelling errors throughout.) Graph-based methods and message passing in particular are relatively uncommon, so providing more detail could only help the traction of the work.

Significance: The work seems significant (as evidenced by the clear advantage over the provided baselines), but I will admit to a small amount of bewilderment at the sheer number of separate ways in which people have chosen to test few-shot meta-learning systems. This is partially the community's fault (for failing to align on a clear benchmark), but it would help if the authors spent a bit more time contextualizing their choice of baselines.

Reviewer 2


It's a novel combination of well-known methods. The submission is technically sound. However, it is not very well written: the notation is cluttered, which makes it difficult to follow all the equations and algorithms. The paper is also not well organized and lacks a conclusion. The proposed method achieves state-of-the-art accuracy on the newly introduced datasets. However, I am not sure the comparison with the baseline methods is fair, since the other published methods were not specifically designed for these tasks.

Reviewer 3


1. Originality: The framework proposed by the paper is a novel model (GPN) based on the prototypical network. The prototype embeddings are refined iteratively by combining, via a gating mechanism, the current representation with neighboring prototypes from similar tasks (a rough sketch of my reading of this step is given at the end of this review). The refinement process is similar to the multi-head mechanism but is new and unique.

2. Quality & Clarity: The paper overall gives a clear and concise description of the methodology, which is empirically supported by the experimental results.

3. Significance: The paper compares the proposed GPN model with several state-of-the-art few-shot learning methods, including the prototypical network baseline. The evaluation process on ImageNet is reasonable and demonstrates the effectiveness of the method on both closely related and distant tasks. However, as the data are created by the authors and not released, it may be hard to reproduce the results based on the current information in the submission.

Minor comments: 1. In Line 173, the authors mention that it is possible to use historical prototypes to improve training of the GPN model. I wonder how this general technique could be applied to, and how it would influence, the other baselines.
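For concreteness, here is a minimal PyTorch sketch of the prototype refinement step as I read it. This is my own illustrative reconstruction, not the authors' code: the class name GatedPrototypePropagation, the dot-product attention, and the sigmoid gate are all assumptions based on the description above.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GatedPrototypePropagation(nn.Module):
        """Illustrative gated refinement of class prototypes (not the authors' code)."""
        def __init__(self, dim):
            super().__init__()
            self.attn = nn.Linear(dim, dim, bias=False)  # scores neighboring prototypes
            self.gate = nn.Linear(2 * dim, 1)            # balances self vs. neighbor message

        def forward(self, prototypes, steps=2):
            # prototypes: (C, dim), one row per class prototype.
            p = prototypes
            mask = torch.eye(p.size(0), dtype=torch.bool)
            for _ in range(steps):
                scores = self.attn(p) @ p.t()                     # (C, C) attention scores
                scores = scores.masked_fill(mask, float('-inf'))  # exclude self-attention
                message = F.softmax(scores, dim=-1) @ p           # aggregate neighbor prototypes
                g = torch.sigmoid(self.gate(torch.cat([p, message], dim=-1)))
                p = g * p + (1 - g) * message                     # gated update of each prototype
            return p

    # Example: refine prototypes for a 5-way task with 64-dim embeddings.
    refined = GatedPrototypePropagation(64)(torch.randn(5, 64))

Here every other class in the task serves as a neighbor for simplicity; in the paper the neighborhood presumably comes from the graph over related classes.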