NIPS 2016
Mon Dec 5th through Sun the 11th, 2016 at Centre Convencions Internacional Barcelona
Paper ID: 1074
Title: Bayesian latent structure discovery from multi-neuron recordings

Reviewer 1

Summary

The authors propose a Bayesian model for inferring the structure of functional connectivity between neurons in a GLM framework. They separate the problem into specifying (1) a prior over graphs that describe the presence or absence of connections and (2) the distribution of weights for existing connections.
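In generic notation (my paraphrase; the paper's own symbols may differ), this decomposition reads:

```latex
p(A, W) \;=\; \underbrace{p(A)}_{\text{graph prior}} \;\; \underbrace{p(W \mid A)}_{\text{weights given connections}}
```

where A is the binary adjacency matrix and W the weight matrix.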

Qualitative Assessment

Overall a very solid and well-written paper and a valuable contribution to the field. While I do not have any major objections, in my opinion the weakest point of the paper is the model evaluation on the retinal data. It is reassuring to see that the model indeed recovers ON and OFF cells as well as the distance dependence of couplings, but I think much simpler methods would probably have accomplished the same thing on this dataset. More specifically:

(a) Fig. 4d+f: Simply performing PCA and using the weights of the first PC would quite likely have recovered the two cell types equally well, so I am not sure how informative this analysis is about the authors' model (a sketch of this baseline follows below).

(b) Fig. 4b: It is not clear to me why an LDS with two latent dimensions should recover the distance dependence (maybe provide some intuition/justification). However, given that the distance-dependent model for the weights performs very well, it seems like simply using the inferred weights (e.g. in an L1-regularized GLM) should be fairly predictive of distance as well.

(c) How do other related approaches, such as that of Stevenson et al. [18], compare on this dataset? The fact that this comparison is omitted makes me think that they probably perform equally well.

Minor comments:

- Abstract. The third sentence strikes me as very odd. There are many published approaches that take temporal dependencies in multi-neuron recordings into account and try to account for variability in responses. I am not sure what the authors want to communicate here, but as written it is definitely not correct.

- Figure captions. Please describe concisely what each panel shows (Figs. 2+4). 'See text' is pretty much the worst possible caption. Some of the figures can be shrunk a bit if space is needed.

- l.218-221. The model the authors claim is the best is in fact only the second best according to Fig. 4a (Dist/Dist is the best one). The margin is very small, though, and I wonder whether the authors have a good explanation for why the distance-dependent model performs as well as the SBM for the weights, given that the SBM correctly infers the (ON/OFF) cell types, something the distance-dependent model should not be able to capture well.
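To make point (a) concrete, here is a minimal sketch of the suggested baseline, assuming the inferred coupling weight matrix is available as an N x N numpy array (all variable names are illustrative, not the authors' code):

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_cell_types(W):
    """Split cells into two putative types by the sign of the first PC score."""
    pca = PCA(n_components=1)
    scores = pca.fit_transform(W)           # project each cell's weight vector onto PC 1
    return (scores[:, 0] > 0).astype(int)   # binary labels, e.g. putative ON vs OFF

# Stand-in data; in practice W would be the inferred coupling weight matrix.
rng = np.random.default_rng(0)
W = rng.normal(size=(27, 27))
print(pca_cell_types(W))
```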

Confidence in this Review

3-Expert (read the paper in detail, know the area, quite certain of my opinion)


Reviewer 2

Summary

This paper applies Pólya-gamma augmentation for more tractable estimation of structured functional connectivity between neurons. Overall, this is an excellent, clearly written paper with broad potential impact.

Qualitative Assessment

My only complaint is the lack of detail about the latent distance model and the stochastic block model: were these constrained to a two-dimensional embedding and two clusters, respectively? The comparison with the LDS is a bit of a stretch; given that the LDS models the dynamics rather than the coupling, it would actually be surprising if the 2-d state mapped onto cell position. Vidne et al. (JCNS 2012), for instance, provide some evidence that a higher-dimensional embedding mixes information about type and relative distance to some extent. As the authors may know, there is also relevant work by Linderman et al. on graph priors for GLMs that should likely be cited.
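For concreteness, this is the kind of latent distance prior in question, a minimal sketch assuming a 2-d embedding and a logistic link (variable names are illustrative, not the authors' code):

```python
import numpy as np

def connection_prob(z, bias=0.0):
    """P(A_ij = 1) under a logistic latent distance model: decays with ||z_i - z_j||^2."""
    d2 = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)  # (N, N) squared distances
    return 1.0 / (1.0 + np.exp(d2 - bias))                      # sigmoid(bias - d2)

rng = np.random.default_rng(1)
z = rng.normal(size=(27, 2))      # hypothetical 2-d latent positions for 27 cells
P = connection_prob(z, bias=1.0)  # connection probabilities fall off with distance
```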

Confidence in this Review

3-Expert (read the paper in detail, know the area, quite certain of my opinion)


Reviewer 3

Summary

In this paper the authors extend previous sampling methods to infer structure from multi-neuron recordings. In particular, they extend GLMs with an explicit model for network structure and weights, rather than weights alone. The authors then use MCMC (Gibbs sampling) to infer both functional structure and weights. First, they test their method on simulated data, where it yields better inference of structure than standard GLMs. Finally, a dataset from retinal ganglion cells is used to demonstrate the ability of the framework to infer both location and structure from real data.

Qualitative Assessment

Overall, this is a very good paper, clear and well written. It also highlights the importance of fully Bayesian inference methods for uncovering structure in neural data. However, a few (mostly minor) points could be clarified:

- A graphical representation of the Bayesian network being inferred would greatly help the reader.

- The labels MCMC/MAP in Fig. 2 are not clear enough, since with your Bayesian method you could also use the MAP estimate rather than the expectation. Please change the labels to clarify this. Also, the MAP solution of your method should yield a better description of the data than the expectation. Could you please clarify this?

- It is only in the supplementary material that the authors mention that convergence was monitored. Given that this is an important point for MCMC methods, it would be good to clarify it in the main text. The authors do not state how exactly convergence was monitored. Was it the Gelman-Rubin statistic (Brooks and Gelman, 1998)? Chain autocorrelation? Please clarify (see the sketch after this list).

- When applied to multi-unit recordings, a crucial first step is spike sorting. How were the 27 RGCs identified? Please clarify this.

- Others have used MCMC and Bayesian models for neuronal data (e.g. Pnevmatikakis et al., Neuron 2016, and Costa et al., Frontiers 2013). These studies also highlight the importance of sampling methods for obtaining a complete picture, and the use of such frameworks for optimal experimental design / active learning. Please discuss the role of the experimental design (duration of the recording, spike sorting, etc.) in your work.

- It would be useful to know the total time used for sampling. Please add ticks to the panels of Figures 3 & 4, where relevant.

- It is not clear whether the method used in Fig. 2f is a fair comparison. Is this the current state of the art?

- It is not clear how your method compares with previous methods used to infer functional connectivity from spike trains. Are you the first? For example, Tomm et al. (J Neurophysiol 2014) also inferred structure and weights (using a rather different approach); it might be worth noting the key differences here.

- In Fig. 4c, what are the different shapes supposed to represent, different locations? Why are the same shapes repeated? Please clarify.

- What is the meaning of NGLM (Fig. 4b)? Please clarify. Are the NGLM predictions in Fig. 4b for held-out data as well?
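For reference, a minimal version of the Gelman-Rubin diagnostic mentioned above, for a scalar parameter traced across several chains (illustrative code, not tied to the authors' implementation):

```python
import numpy as np

def gelman_rubin(chains):
    """chains: (m, n) array of m chains with n post-burn-in samples each."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)        # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()  # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n      # pooled posterior variance estimate
    return np.sqrt(var_hat / W)            # R-hat; values near 1 suggest convergence

rng = np.random.default_rng(2)
samples = rng.normal(size=(4, 1000))       # four well-mixed stand-in chains
print(gelman_rubin(samples))               # ~1.0
```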

Confidence in this Review

2-Confident (read it all; understood it all reasonably well)


Reviewer 4

Summary

This work is motivated by the neuroscience problem of inferring network structure. Building on the widely used GLM framework, the authors explicitly model different connectivity patterns, which is novel. They also develop a very efficient and scalable Bayesian approach to infer the network structure and other latent variables of the neurons. The method has been tested on synthetic and experimental data, and it shows very promising applications in neuroscience research.

Qualitative Assessment

This is a very strong paper for studying neural connections in a network. It avoids many of the spurious results of previous methods by placing more informative priors on the network structure. All sections of the paper show rigorous analysis, and I enjoyed the reading. I only have two questions regarding details of the paper:

1. The influence of all neurons' spiking activity on a single neuron's instantaneous firing rate is modeled with an exponential function. This means the neuron's response function is homogeneous across all synaptic inputs and its own spike history. In practice, the influence of a neuron's auto-history is very different from that of the synaptic inputs from other neurons, mainly due to the refractory period. I think it is not hard to change the model accordingly, but it might be worth emphasizing this difference (a sketch of this separation follows below).

2. Neurons' activity is influenced by more than the stimulus and neuronal coupling, so we probably need to model more factors in the GLM when analyzing data. It would be helpful if the authors could discuss the flexibility/scalability of adding more terms to the GLM.
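A minimal sketch of the separation suggested in point 1, assuming an exponential link as in the paper and giving the self-history term its own filter (all names and shapes are illustrative):

```python
import numpy as np

def firing_rate(spikes, W_cross, w_self, bias):
    """Rate of neuron 0 at the current time; spikes is an (N, T) binary array."""
    N, T = spikes.shape
    L = w_self.shape[0]                   # filter length, in bins
    hist = spikes[0, T - L:][::-1]        # neuron 0's own recent history (most recent first)
    cross = spikes[1:, T - L:][:, ::-1]   # other neurons' recent history
    drive = bias + w_self @ hist + np.sum(W_cross * cross)
    return np.exp(drive)                  # exponential link

rng = np.random.default_rng(3)
S = (rng.random((5, 100)) < 0.05).astype(float)
w_self = -2.0 * np.exp(-np.arange(10) / 2.0)  # strong short-lag suppression: refractoriness
W_cross = 0.1 * rng.normal(size=(4, 10))      # separate, unconstrained coupling filters
print(firing_rate(S, W_cross, w_self, bias=-3.0))
```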

Confidence in this Review

3-Expert (read the paper in detail, know the area, quite certain of my opinion)


Reviewer 5

Summary

In their paper "Bayesian latent structure discovery from multi-neuron recordings", the authors propose a novel approach to infer functional latent structure from neural population spike trains. Their approach combines generalized linear models with latent variable network models. The authors then develop a Markov chain Monte Carlo algorithm to tackle Bayesian inference. They demonstrate their method on synthetic data and on data recorded from retinal ganglion cells. In the latter analysis, the authors manage to infer cell types and locations from spike train recordings alone.

Qualitative Assessment

The proposed model and inference algorithm are sound. The results that the authors obtain from the retinal ganglion cell recordings using their methods are truly impressive. In the synthetic data experiments, however, the ground-truth model exactly matches the proposed model, so I do not find the dramatic improvements over alternative models very surprising. Relatedly, it is not clear how well the method would perform on more challenging networks.

I have concerns regarding the robustness of the results. As the authors mention, the log posterior is no longer concave, introducing the problem of local optima. The authors briefly mention variability of the weighted adjacency matrix from one run to the next in the supplementary file. How robust is the recovery of latent structure? The authors do not address this question in their "Synthetic Data Experiments" section, and I would welcome a little space dedicated to this important problem in the main paper (one possible check is sketched below).

To the best of my knowledge, the proposed combination of GLMs and random network models, together with the MCMC algorithm for Bayesian inference of latent structure, is novel and represents a significant contribution. The paper addresses an important and timely challenge, namely the identification of latent structure in large neural recordings. The authors also make a Python implementation of their method available on GitHub, which will aid dissemination of the method.

The paper is very clearly written. All figures and tables complement the text seamlessly. Regarding clarity, my only criticism is that the notation of the parameters in Table 2 is not spelled out clearly enough and requires a little guesswork.
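One possible robustness check, sketched under the assumption that sampled adjacency matrices from two independent runs are available (hypothetical arrays, not the authors' API):

```python
import numpy as np

def posterior_agreement(A_run1, A_run2):
    """Correlation of posterior connection probabilities across two MCMC runs.

    A_run1, A_run2: (S, N, N) arrays of sampled binary adjacency matrices."""
    p1 = A_run1.mean(axis=0)  # posterior P(A_ij = 1) from run 1
    p2 = A_run2.mean(axis=0)  # posterior P(A_ij = 1) from run 2
    return np.corrcoef(p1.ravel(), p2.ravel())[0, 1]

# Stand-in samples; real runs should show agreement well above chance.
rng = np.random.default_rng(5)
A1 = (rng.random((500, 27, 27)) < 0.3).astype(float)
A2 = (rng.random((500, 27, 27)) < 0.3).astype(float)
print(posterior_agreement(A1, A2))
```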

Confidence in this Review

2-Confident (read it all; understood it all reasonably well)


Reviewer 6

Summary

The authors describe a statistical model of neural data that combines random network models (distributions over the connections and weights between neurons) with a standard point-process spiking model, a generalized linear model (GLM). By restricting the spiking process to logistic spike-count models, the authors can apply the Pólya-gamma augmentation trick to perform efficient inference via Gibbs sampling. The authors apply their technique to simulated data (recovering the latent network structure of the simulated network) and to spiking activity of retinal ganglion cells (RGCs). The technique lets the authors reason quantitatively about network connectivity, providing a way to evaluate hypotheses about how network structure relates to recorded neural activity.
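For context, the augmentation rests on the Pólya-gamma integral identity of Polson, Scott, and Windle (stated here in its standard form from the literature, not quoted from the paper):

```latex
\frac{(e^{\psi})^{a}}{(1 + e^{\psi})^{b}}
  = 2^{-b}\, e^{\kappa\psi} \int_{0}^{\infty} e^{-\omega\psi^{2}/2}\, p_{\mathrm{PG}}(\omega \mid b, 0)\, d\omega,
  \qquad \kappa = a - \tfrac{b}{2}
```

Conditioned on the auxiliary variable ω, the logistic likelihood becomes Gaussian in ψ, which is what makes the Gibbs updates conjugate.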

Qualitative Assessment

In general, the motivation, model, and experiments are clearly described and well presented.

The usefulness of the application to retinal data is overstated. In the second paragraph, the authors mention that there is a "plethora of RGC types" that has been identified, and then go on to claim that their method will "automatically discover this structure from correlated patterns of neural activity". This is incorrect for two reasons. First, the authors demonstrate that their method only discovers two RGC types (the standard ON vs. OFF classification); their results do not recover the "plethora of RGC types" mentioned in the introduction. Second, at least in the retina, the network activity is directly tied to the stimulus (and the experimental choice of stimulus), and therefore their method is implicitly tied to the choice of stimulus. For example, consider if the experiment had used a full-field stimulus or had concentrated the stimulus on only a small part of the visual field; the network activity (and the resulting conclusions of the modeling effort) would be markedly different. I am simply pointing out that when analyzing responses of sensory areas, the analysis is not independent of the stimulus, and the authors should discuss and highlight the implications of the experimental stimulus.

Another point about the application to retinal data: the authors make a point of highlighting that their method discovers ON and OFF cell types in the data without knowledge of the stimulus, using a minute of data (first paragraph of section 5). However, experimentally we *do* have access to the stimulus (in fact, we choose it), and it does not make sense to perform stimulus-independent analysis (especially for something like the discovery of ON or OFF cell types, which can be done in seconds using a light and dark flash stimulus). That said, I do recognize that stimulus-independent analysis is very useful in two important contexts: (1) deeper in the brain, where the relevant stimulus may be unknown or hard to control, and (2) in sensory recordings in response to uncontrolled stimuli, such as natural stimuli. These are both scenarios where the techniques put forth in this paper could prove to be powerful ways of identifying latent structure that has been difficult to crack using more conventional stimulus-response modeling approaches. I think making this point explicit in the introduction or conclusion would further strengthen the motivation of the paper.

As a final point on the comparison to retinal data, I would appreciate it if the authors compared their technique to some naive baselines. For example, the correlation between the spike trains of two cells drops off with distance in the retina. If one simply used the spike-train correlation as an estimate of distance, how would that compare to the NGLM approach in Figure 4b (a sketch of this baseline follows below)? In addition, how does the inferred network structure compare to a simpler approach, the MAP estimate of a GLM with l1 or l2 priors on the coupling terms? The authors mention this simple model at the end of section 2.2; it would be nice to know how the network structure inferred from a regularized GLM compares to their NGLM technique.

I would also have appreciated more discussion of the types of network models used (e.g. the stochastic block model and the latent distance model). These were introduced very briefly in section 2.3, and I would have appreciated more explanation of what kinds of latent structure these descriptions can capture and how they relate to existing hypotheses about connectivity in neural systems.

In general, the paper is well written, I enjoyed reading it, and it raises interesting questions about how to model network structure given recorded spiking activity. To really demonstrate the advantages of the approach, more comparisons to simpler and existing approaches are needed. More discussion of how these methods will be relevant for neural systems other than the retina would also be welcome.
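A minimal sketch of the correlation baseline suggested above, assuming binned spike counts (stand-in data; all names are illustrative):

```python
import numpy as np

def correlation_distance_proxy(counts):
    """counts: (N, T) binned spike counts; returns an (N, N) distance proxy."""
    C = np.corrcoef(counts)  # pairwise spike-train correlations
    return 1.0 - C           # higher correlation -> smaller proxy distance

rng = np.random.default_rng(4)
counts = rng.poisson(1.0, size=(27, 6000))  # stand-in: ~1 minute at 10 ms bins
D_proxy = correlation_distance_proxy(counts)
```

One could then regress D_proxy against the pairwise distances predicted in Fig. 4b to see how much of the NGLM's predictive power such a correlation baseline already captures.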

Confidence in this Review

2-Confident (read it all; understood it all reasonably well)