NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 3822
Title: Approximate Bayesian Inference for a Mechanistic Model of Vesicle Release at a Ribbon Synapse

Reviewer 1

[Edit: I read all reviews and the author responses, and still think this is a great work. The author responses answered my questions as well as the points raised by other reviewers, providing additional clarification.]

This paper formulates a fully probabilistic model of the vesicle-release dynamics at the sub-cellular biophysical level in the ribbon synapse. The paper then develops a likelihood-free inference method, tests it on a synthetic dataset, and finally infers the parameters of vesicle release in the ribbon synapse from real data.

Originality: The paper presents a novel combination of biophysical modeling of the ribbon synapse and likelihood-free inference of its parameters. To my knowledge, the fully stochastic modeling of the vesicle-release dynamics is itself new. The likelihood-free inference builds on recent developments in related methods, but includes problem-specific constructions such as the use of certain summary statistics in the loss function.

Quality: The formulation is technically strong; each step is well supported, with the derivations provided in the supplementary material. Modeling assumptions are clearly stated. The overall claim of the paper (the effectiveness of the inference) is first confirmed on a synthetic dataset, and then further supported by the observation of known trends in the real-data application.

Clarity: This paper is superbly written. The material is logically organized, with a good balance between the technical content and the comments/interpretations needed to keep the reader on track and provide relevant insights.

Significance: The work is significant in two important ways. First, it develops a solid, analytic method for approximate Bayesian inference, which may be further extended to other systems in future research. Although success on this particular problem does not guarantee another successful application of the method to a more complicated problem, this is certainly an encouraging first work. Second, the inference result provides a new mechanistic understanding of the release dynamics of the ribbon synapse, to a level that was not possible before; the result will serve as a baseline for further biophysical investigations.

In addition, I would be interested to know the running time / complexity of the inference algorithm.

Minor comments:
- Line 102 / the numbers of vesicles moving between pools: it is important to state explicitly that these are the numbers "per unit time", or, even better, to call them "rates".
- Paragraph "Vesicle release": it is not clear how the correlation \rho is defined.
- Please number the equations.
- Lines 124-126: references for D_max and the (reduced) R_max?

Reviewer 2

This paper introduces a likelihood-free inference algorithm for estimating the parameters governing vesicle release at a ribbon synapse. It is based on a mechanistic model of exocytosis that involves several stages of nonlinear stimulus filtering and synaptic vesicle movement, docking, and release. Bayesian inference in this regime is challenging: due to the great number of model components and biophysical constraints, even evaluating the likelihood function can be intractable. The authors address this problem by adopting a likelihood-free (also known as approximate Bayesian computation, ABC) approach: parameters are repeatedly sampled from a prior distribution, the forward model is simulated many times, the parameters that best describe the observed data are retained and used to update the prior, and the process repeats.

The modelling work is interesting and to a high standard. The writing is clear and motivates the problem well. This work appears to be a novel application of ABC to the problem of vesicle-release modelling, although it is not clear whether the approach is actually better than previous methods, as no comparisons are performed.

Major comments:
- The parameter k is often underestimated (Fig 3B and Fig 8), yet seems to have a minimal effect on the resulting nonlinearity. It would be beneficial to mention in the main text why the slope of the nonlinearity is difficult to obtain.
- What are the different colours in the density plots in Figure 2? Presumably one is the updated prior.
- Should p_d in the equation after line 110 be p_{d_t}? This was confusing on a first read, as I was searching for p_d among the p_t, p_{d_t}, p(d_t | D_{t-1}), etc.
- It would be helpful to state something along the lines that the summation in the equation after line 113 is there to ensure the probabilities sum to 1, as this was not immediately clear to me.
- I am aware that the selection of summary statistics is something of an art in ABC methods, but are the results very sensitive to the scalings (line 377)? The different release counts are scaled differently. Is that to account for the more infrequent release counts?

Minor comments:
- Line 73: is it that the likelihood function cannot be evaluated at all, or more that evaluation of the likelihood function is inefficient/intractable?
- Figure 4A caption: "releaed" -> "released"
- Figure 6 caption: "hold constant" -> "held constant"
- Appendix line 362: "Detailes" -> "Details"
- Line 376: "for for" -> "for"

Update following rebuttal: the authors have addressed all of my concerns, and I maintain my support for accepting this paper.
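The sequential likelihood-free loop summarized in the review (sample from a prior, simulate the forward model, retain the best-matching particles, update the prior, repeat) can be sketched as follows. The toy forward model, summary statistics, and all parameter values here are illustrative assumptions, not the paper's actual vesicle-release model:

```python
# Minimal sketch of a sequential ABC loop with best-j particle retention.
# The simulator below is a stand-in for the mechanistic model.
import numpy as np

rng = np.random.default_rng(0)

def forward_model(theta, n=200):
    """Toy stochastic simulator: mean theta[0], scale exp(theta[1])."""
    return theta[0] + np.exp(theta[1]) * rng.standard_normal(n)

def summary(x):
    """Summary statistics of a simulated (or observed) trace."""
    return np.array([x.mean(), x.std()])

def abc_round(mu, cov, s_obs, n_particles=1000, j=100):
    """One round: sample from the prior, simulate, keep the best j."""
    thetas = rng.multivariate_normal(mu, cov, size=n_particles)
    losses = np.array([np.sum((summary(forward_model(t)) - s_obs) ** 2)
                       for t in thetas])
    keep = thetas[np.argsort(losses)[:j]]       # retain the best-j particles
    return keep.mean(axis=0), np.cov(keep.T)    # refit the Gaussian "prior"

# "Observed" summaries from ground-truth parameters (2.0, log 0.5).
s_obs = summary(forward_model(np.array([2.0, np.log(0.5)])))
mu, cov = np.zeros(2), 4.0 * np.eye(2)          # broad initial prior
for _ in range(5):                               # repeat the loop
    mu, cov = abc_round(mu, cov, s_obs)
```

After a few rounds the refit prior concentrates near the ground-truth parameters, which is the behaviour the review's description of the method relies on.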

Reviewer 3

The paper is an original application of likelihood-free inference to parameter estimation in a mechanistic model of the ribbon synapse. It is not especially advanced on the inference aspects, but it is well executed and described. My feeling is that the community which would be most excited is ribbon-synapse neuroscientists, and they would be most excited about the results of the parameter estimation rather than the method itself. I am not sure the NeurIPS neuroscience community at large would get that much out of the paper which is not already covered by existing (and more advanced) likelihood-free inference papers, such as ref [6] already cited in the paper.

A few questions:
- Is the linear filter really constrained to have a single form with a single stretch parameter? I would have assumed that, given the discrete Gaussian or binary noise stimulus, it would be possible to estimate the full filter directly from the data. Why is the filter constrained in this way?
- It appears that the posterior over model parameters inferred via the method is a factorized distribution. What is the reason for leaving out the correlations? I would have assumed that including them would be relatively straightforward, given that the distributions are all Normal. Is it because of the truncation applied to some of the distributions?
- The acceptance criterion is to accept the best j particles. I wonder how well an acceptance criterion based on the loss value would work. Is it possible that the variance of the posterior is over- or underestimated by the best-j criterion? For instance, if more than j particles are within an acceptable loss value, then we would get an underestimate; conversely, if the worst particles among the best j have unacceptable loss values, then the variance will have been overestimated.
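The reviewer's final point, that best-j retention and loss-threshold acceptance can give different posterior-variance estimates, can be illustrated with a toy comparison. The loss, threshold, and particle counts below are illustrative assumptions, not values from the paper:

```python
# Toy comparison of the two ABC acceptance criteria the reviewer contrasts.
import numpy as np

rng = np.random.default_rng(1)
thetas = rng.normal(0.0, 1.0, size=5000)   # proposed particles
losses = thetas ** 2                        # toy loss: best particles near 0

j, epsilon = 500, 0.05
best_j = thetas[np.argsort(losses)[:j]]     # best-j criterion
accepted = thetas[losses < epsilon]         # loss-threshold criterion

# Here far more than j particles fall under epsilon, so the best-j
# posterior is a strict subset and its variance is underestimated
# relative to the threshold-based posterior, as the reviewer suggests.
print(len(accepted), best_j.var(), accepted.var())
```

Running the sketch shows many more than j = 500 particles under the threshold, and a correspondingly smaller variance for the best-j set, matching the "underestimate" direction of the reviewer's argument.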