Dynamic organization of visual cortical networks revealed by machine learning applied to massive spiking datasets

  1. University of Illinois Urbana-Champaign, Department of Electrical and Computer Engineering

Editors

  • Reviewing Editor
    Peter Latham
    University College London, London, United Kingdom
  • Senior Editor
    Timothy Behrens
    University of Oxford, Oxford, United Kingdom

Reviewer #1 (Public Review):

Summary:

This work proposes a new method, DyNetCP, for inferring dynamic functional connectivity between neurons from spike data. DyNetCP is a neural-network model with a two-stage architecture that estimates static and dynamic functional connectivity.

This work evaluates the accuracy of synaptic connectivity inference and shows, by analyzing simulated spike trains, that DyNetCP can infer excitatory synaptic connectivity more accurately than a state-of-the-art model (GLMCC). Furthermore, it is shown that the inference results obtained by DyNetCP from large-scale in vivo recordings are similar to those obtained by existing methods (jitter-corrected CCG and JPSTH). Finally, this work uses DyNetCP to investigate dynamic connectivity within the primary visual area (VISp) and across visual areas.

Strengths:

The strength of the paper is that it proposes a method to extract the dynamics of functional connectivity from spike trains of multiple neurons. The method is potentially useful for analyzing parallel spike trains in general, as only a few methods (e.g., Aertsen et al., J. Neurophysiol., 1989; Shimazaki et al., PLoS Comput Biol, 2012) infer dynamic connectivity from spikes. Furthermore, the approach of DyNetCP differs from existing methods: whereas the proposed method is based on a neural network, the previous methods are based on either descriptive statistics (JPSTH) or the Ising model.
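
As a point of reference, a minimal sketch of a JPSTH-style computation is given below. It is schematic (it omits the variance normalization of the full Aertsen et al. construction), and the array layout and function name are my own assumptions, not taken from the paper.

```python
import numpy as np

def jpsth(pre, post):
    """Schematic JPSTH for two (n_trials, n_bins) binary spike arrays:
    trial-averaged joint firing minus the product of the two PSTHs
    (no variance normalization, unlike the full Aertsen et al. version)."""
    pre = pre.astype(float)
    post = post.astype(float)
    raw = pre.T @ post / pre.shape[0]                           # joint firing per (bin_i, bin_j) pair
    predictor = np.outer(pre.mean(axis=0), post.mean(axis=0))   # outer product of the two PSTHs
    return raw - predictor                                      # excess joint firing
```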

Weaknesses:

Although the paper proposes a new method, DyNetCP, for inferring dynamic functional connectivity, its strengths are neither clearly stated nor directly demonstrated in this paper; the analyses performed are insufficient to support the usefulness of DyNetCP.

First, this paper attempts to show the superiority of DyNetCP by comparing its performance in synaptic connectivity inference against GLMCC (Figure 2). However, the improvement in synaptic connectivity inference does not seem convincing. Although the comparison is made against a state-of-the-art method (GLMCC), it has several problems. For example:

(1) This paper focuses only on excitatory connections (i.e., inhibitory neurons are ignored).

(2) This paper does not compare with existing neural-network-based methods (e.g., CoNNECT: Endo et al., Sci. Rep., 2021; deep learning: Donner et al., bioRxiv, 2024).

(3) The evaluation used only a single population of neurons generated from a Hodgkin-Huxley model.

Thus, the results in this paper are not sufficient to conclude that DyNetCP is superior in estimating synaptic connections. In addition, this paper compares the proposed method with standard statistical methods, namely the jitter-corrected CCG (Figure 3) and the JPSTH (Figure 4). Unfortunately, these comparisons do not demonstrate the superiority of the proposed method; they only show that its results are consistent with those obtained by the existing methods (CCG or JPSTH).

In summary, although DyNetCP has the potential to infer synaptic connections more accurately than existing methods, the paper does not provide sufficient analysis to support this claim. It also remains unclear whether the proposed method is superior to existing methods for estimating functional connectivity, such as the jitter-corrected CCG and the JPSTH. Thus, the strength of DyNetCP is unclear.

Reviewer #2 (Public Review):

Summary:

Here the authors describe a model for tracking time-varying coupling between neurons from multi-electrode spike recordings. Their approach extends a GLM with static coupling between neurons to include dynamic weights, learned by a long short-term memory (LSTM) model. Each connection has a corresponding LSTM embedding that is read out by a multi-layer perceptron to predict the time-varying weight.
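
To make the architecture concrete, a minimal sketch of this kind of two-stage coupling model is given below. This is not the authors' implementation: the PyTorch layer sizes, class and method names, and the choice to feed the pair's joint spike history into the LSTM are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class DynamicCouplingSketch(nn.Module):
    """Two-stage coupling sketch: static GLM-style filters plus an LSTM/MLP
    readout producing a time-varying weight per neuron pair.
    Hyperparameters and names are illustrative assumptions."""

    def __init__(self, n_neurons, n_lags, hidden=64):
        super().__init__()
        # Stage 1: static coupling filters, one weight per (post, pre, lag) triple.
        self.static_w = nn.Parameter(torch.zeros(n_neurons, n_neurons, n_lags))
        self.bias = nn.Parameter(torch.zeros(n_neurons))
        # Stage 2: an LSTM embeds the joint spiking history of a neuron pair ...
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        # ... and an MLP maps the embedding to a dynamic coupling weight per time bin.
        self.readout = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def dynamic_weight(self, pre_spikes, post_spikes):
        # pre_spikes, post_spikes: (batch, time) binary spike tensors for one pair.
        x = torch.stack([pre_spikes, post_spikes], dim=-1).float()  # (batch, time, 2)
        h, _ = self.lstm(x)                                         # (batch, time, hidden)
        return self.readout(h).squeeze(-1)                          # (batch, time)
```

In a complete model the predicted dynamic weight would modulate the static coupling filters inside a spiking (e.g., Bernoulli) likelihood; the sketch only shows how a per-pair LSTM embedding can be read out into a time-varying weight.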

Strengths:

This is an interesting approach to an open problem in neural data analysis. I think, in general, the method would be interesting to computational neuroscientists.

Weaknesses:

It is somewhat difficult to interpret what the model is doing. I think it would be worthwhile to add some additional results that make it clearer what types of patterns are being described and how.

Major Issues:

Simulation for dynamic connectivity. It certainly seems doable to simulate a recurrent spiking network whose weights change over time, and I think this would be a worthwhile validation for the DyNetCP model. In particular, it would be valuable to understand how much the model overfits and how accurately it can track known changes in coupling strength. If the only goal is "smoothing" time-varying CCGs, there are much simpler statistical methods to do this (cf. McKenzie et al., Neuron, 2021; Ren, Wei, Ghanbari & Stevenson, J Neurosci, 2022), and simulations could be useful to illustrate what the model adds beyond smoothing.
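
A minimal example of the kind of ground-truth simulation I have in mind is sketched below: two Bernoulli-spiking units whose lag-1 coupling switches on halfway through the trial, so that any recovered dynamic weight can be compared against a known schedule. The rates, coupling value, and window size are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, T = 200, 2000            # trials and time bins per trial
base_rate = 0.02                   # baseline spike probability per bin
w_true = np.where(np.arange(T) < T // 2, 0.0, 0.15)   # coupling switches on mid-trial

pre = rng.random((n_trials, T)) < base_rate            # presynaptic spikes
post = np.zeros_like(pre)
for t in range(1, T):
    # Postsynaptic spike probability rises by w_true[t] whenever the
    # presynaptic cell fired in the previous bin.
    post[:, t] = rng.random(n_trials) < base_rate + w_true[t] * pre[:, t - 1]

# Ground truth that any estimated dynamic coupling can be compared against:
# excess lag-1 spike-transmission probability in non-overlapping windows.
win = 200
excess = [post[:, s + 1:s + win][pre[:, s:s + win - 1]].mean() - base_rate
          for s in range(0, T - win + 1, win)]
```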

Stimulus vs. noise correlations. For studying correlations between neurons in sensory systems that are strongly driven by stimuli, it is common to use shuffling over trials to distinguish stimulus correlations from "noise" correlations or putative synaptic connections. This would be a valuable comparison for Figure 5, to show whether these are dynamic stimulus correlations or noise correlations. I would also suggest simply plotting the CCGs calculated with a moving window to better illustrate how (and whether) the dynamic weights differ from the data.
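
For clarity, a minimal sketch of the trial-shuffle (shift-predictor) correction I am referring to is given below, assuming trial-aligned binary spike arrays; the function names and binning are illustrative, not a prescription for the authors' pipeline.

```python
import numpy as np

def ccg(pre, post, max_lag):
    """Trial-summed cross-correlogram of two (n_trials, n_bins) binary spike arrays."""
    n_bins = pre.shape[1]
    lags = np.arange(-max_lag, max_lag + 1)
    counts = np.zeros(lags.size)
    for i, lag in enumerate(lags):
        if lag >= 0:
            counts[i] = (pre[:, :n_bins - lag] * post[:, lag:]).sum()
        else:
            counts[i] = (pre[:, -lag:] * post[:, :n_bins + lag]).sum()
    return counts

def shift_corrected_ccg(pre, post, max_lag):
    """Raw CCG minus a shift predictor (post spikes paired with a different trial),
    which removes the stimulus-locked component and leaves the 'noise' coupling."""
    raw = ccg(pre, post, max_lag)
    predictor = ccg(pre, np.roll(post, 1, axis=0), max_lag)
    return raw - predictor
```

Computing such a shift-corrected CCG in a moving window over trial time would provide a direct comparison with the model's dynamic weights, as suggested above.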
