NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 4758
Title: Learning Macroscopic Brain Connectomes via Group-Sparse Factorization

Reviewer 1


- The paper builds on ENCODE, formulating the connectome decomposition problem as a tensor product of $\Phi$ and $D$, where $D$ is a dictionary of dMRI signal predictors. Though this framework is taken from a previous publication, it would be nice to include at least a brief introduction and summary of this $D$: how was it constructed, what do its elements represent, and what is the intuition behind it?
- What is the dimension $N_\theta$? It first appears in Section 2 with no definition.
- How was the regularization parameter $\lambda$ chosen in the empirical example? How sensitive is the algorithm to the choice of this parameter?
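
For context, a sketch of the ENCODE-style factorization the review refers to, with notation assumed from the ENCODE framework (the paper's exact formulation may differ):

```latex
% Assumed ENCODE-style model (a sketch, not the paper's exact notation):
%   Y    \in R^{N_\theta \times N_v} : demeaned dMRI signal, with N_\theta
%        the number of gradient directions measured in each of N_v voxels;
%   D    \in R^{N_\theta \times N_a} : dictionary whose columns are
%        precomputed diffusion-kernel atoms, one per discretized
%        fiber orientation;
%   \Phi \in R^{N_a \times N_v \times N_f} : sparse tensor, where
%        \Phi_{a,v,f} \neq 0 means fascicle f crosses voxel v with
%        orientation atom a.
\[
  Y_{:,v} \;\approx\; \sum_{f=1}^{N_f} D\,\Phi_{:,v,f}
  \qquad \text{for each voxel } v .
\]
```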

Reviewer 2


# Overview

This paper proposes a new technique to reconstruct an entire white matter fasciculus directly from diffusion magnetic resonance imaging (dMRI) data. Moreover, the technique does not rely on an iterative process to generate the fascicles as classic tractography algorithms do. Instead, it learns a sparse 3D tensor that encodes the location (i.e., voxel) and orientation of all (relevant) white matter fascicles in a subject's brain. Fitting such a tensor can be done with gradient-based optimization. However, because it has so many parameters (~10 billion for 823 fascicles, 1057 discretized orientations, and 11,823 voxels) and the problem is highly sparse, the authors propose a screening algorithm that selects a plausible set of orientations for each voxel based solely on diffusion information. To do so, they propose a modification of the Orthogonal Matching Pursuit (OMP) algorithm in which the orthogonality constraint is relaxed in favor of having high multiple correlations and ensuring each orientation is itself useful for explaining the diffusion information. In addition, the authors also propose a group-sparse regularizer to ensure biologically plausible fascicles, i.e., locally, fascicles should be smooth and continuous.

The authors validate their approach on two major white matter structures in the brain that were extracted by a connectome expert. They show the proposed technique can learn those two structures efficiently and yields low reconstruction error with smooth fascicles. The authors also show their proposed screening algorithm is more suitable than the original OMP algorithm in this context.

Overall, I found the paper well written, the proposed technique interesting, and the motivation clear. To the best of my knowledge, what is proposed in this paper is novel. Moreover, the authors do mention that this is preliminary work focused on providing a sound formulation and an initial empirical investigation into the efficacy of the approximations.

# Major concerns

Even though the authors claim it is efficient, I wonder how well the proposed method scales with the number of tracts and the voxel resolution. Traditional whole-brain tractography can easily produce many hundreds of thousands of tracts. That said, one could easily argue those techniques oversample the number of tracts. I am curious to know whether the authors have tried to learn a smaller $\Phi$ (i.e., reducing the size of the 3rd dimension) to see at what point there are not enough tracts to properly explain the diffusion signal.

Visualizing the resulting $\Phi$ is essential for any practical use of this technique. As mentioned in the Appendix, it requires a whole other optimization process in order to place and connect the segments of each tract together. I am concerned about how reliable that process is, and what impact the greedy algorithm has on the result.

# Minor concerns

From the main paper, it is not clear how you limit the tractography to only a particular fasciculus; from Algorithm 3 in the Appendix, it seems you have a set of masks. From the text, it is not clear how $D$ is obtained. In Figure 6, it is not clear how the corresponding ground truth tracts were determined. If I understood correctly, the solution obtained for $\Phi$ could be permuted along its 3rd dimension relative to the ground truth. So, to establish a correspondence between a predicted tract and the ground truth, did you rely on the lowest reconstruction error?
Since source code availability is not mentioned in the paper (though code is included in the supplementary material), I'd invite the authors to release their code.

# Typos

Lines 238 and 239: l1 -> $\ell_1$
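
For reference, a minimal NumPy sketch of the plain OMP baseline that the proposed screening modifies (sizes and names here are hypothetical; per the review, the authors' variant relaxes the orthogonal re-fit step in favor of the correlation criteria described above, so this shows only the starting point, not their method):

```python
import numpy as np

def omp_screen(y, D, k):
    """Plain Orthogonal Matching Pursuit: greedily pick k dictionary
    atoms (columns of D) that best explain the per-voxel signal y.
    The paper's screening variant relaxes the orthogonality step;
    this sketch shows only the baseline it modifies."""
    residual = y.copy()
    selected = []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        scores = np.abs(D.T @ residual)
        scores[selected] = -np.inf          # never reselect an atom
        selected.append(int(np.argmax(scores)))
        # Re-fit y on all selected atoms (the "orthogonal" step).
        coeffs, *_ = np.linalg.lstsq(D[:, selected], y, rcond=None)
        residual = y - D[:, selected] @ coeffs
    return selected

# Hypothetical sizes: 96 gradient directions, 1057 candidate orientations.
rng = np.random.default_rng(0)
D = rng.standard_normal((96, 1057))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
y = rng.standard_normal(96)
print(omp_screen(y, D, k=5))
```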

Reviewer 3


Overall this work has multiple original contributions, the main one being the formulation of tractography estimation as a fully unsupervised learning problem. While the base of the formulation is not particularly novel (minimization of a linear least-squares objective via subgradient descent), the authors introduce a convex regularizer that encourages biologically plausible solutions through continuity in space and orientation of learned fascicles, which is a novel addition that helps yield better solutions specifically in this domain. Furthermore, the use of a custom objective for screening orientations for each voxel by greedy selection is an original contribution that seems crucial for achieving plausible results, as demonstrated by the experiments.

The work is clearly written in general, with each component explained well. One comment is that there seems to be no explanation for the choice of the regularization parameter in Section 5.2; it would be worthwhile to justify this choice, or to explain a proposed method for choosing this parameter in general.

The overall technical quality is good, with convincing empirical evaluation and theoretical claims that are elaborated in the Supplement and appear to be sound. However, one primary shortcoming of the work is that there is no indication of how the specific choice of convex regularizer affects the results, and the empirical evaluation seems to indicate that the combined effect of the greedy selection method and the regularizer is necessary for the learned fascicles to be continuous (i.e., convex regularizer + OMP still leads to subpar results, indicating that the convex regularizer alone may be insufficient). Another missing component was brought up by the authors themselves in the Discussion: "understanding strengths and weaknesses compared to current tractography approaches". An empirical comparison to alternative tractography estimation methods would validate the unsupervised learning approach overall, and not just the use of a custom greedy selection method for screening. Since the missing components described above seem crucial, I lean towards rejecting the submission, though the inclusion of either component would lead me to increase my score.

Other minor comments:
- Line 228: seems like this should be "Figure 5a"
- Line 231: seems like this should be "Figure 5b"
- Lines 251-252: Figure 6 references have typos

Update based on author feedback: Having read the authors' response, I feel that my concern about the effect of the custom regularizer versus the greedy screening method was appropriately addressed, and I no longer consider this an issue. I still feel that the lack of any comparison to other tractography methods is a main shortcoming of the paper, despite the authors' claim that this is outside the scope of this submission. However, the authors' response has led me to lean more towards accepting the paper overall, and I have updated my score to reflect this.
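
For concreteness, the regularized objective under discussion presumably takes the generic group-lasso form below; the exact composition of the groups $\mathcal{G}$ is an assumption here, inferred from the review's description of continuity in space and orientation:

```latex
% Generic group-lasso form (an assumed sketch; the paper's exact groups
% \mathcal{G} may differ). Each group g presumably collects entries of
% \Phi in neighboring voxels with similar orientations, so the
% \ell_2-over-groups penalty zeroes whole groups jointly and the
% surviving nonzeros trace spatially continuous, smoothly turning
% fascicles.
\[
  \min_{\Phi}\;
  \frac{1}{2}\Big\lVert\, Y - \sum_{f=1}^{N_f} D\,\Phi_{:,:,f} \Big\rVert_F^2
  \;+\; \lambda \sum_{g \in \mathcal{G}} \lVert \Phi_g \rVert_2 .
\]
```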