1. Introduction
Neuronal firing activity in the cortex can be highly irregular (Britten et al., 1993; Shadlen and Newsome, 1998; Compte et al., 2003; London et al., 2010). Because the precise timing of spikes may contain substantial information about the external stimuli, irregular activity may serve as a rich encoding and processing space for neural computation (Hertz and Prügel-Bennett, 1996; Gütig and Sompolinsky, 2006; Sussillo and Abbott, 2009; Monteforte and Wolf, 2012). To understand how the brain processes information, it is important to investigate how such irregularity emerges in the brain.
Some studies conclude that irregular firing may be regarded as noise, thus conveying little information (Shadlen and Newsome, 1994; Han et al., 2015). Meanwhile, other studies show that the timing of spikes and the temporal activity patterns of irregular neuronal firing in vivo can convey specific information (Richmond and Optican, 1990; Pillow et al., 2005; Whalley, 2013). A candidate mechanism underlying irregular activity was proposed in balanced network theory (van Vreeswijk and Sompolinsky, 1996; Troyer and Miller, 1997; Vreeswijk and Sompolinsky, 1998; Vogels et al., 2005; Miura et al., 2007). In a balanced network, sparsely-connected neurons possess strong architectural coupling but weak pairwise correlations in their activity. The excitatory and inhibitory inputs into each neuron, on average, dynamically balance, suppressing the mean of the total input. Consequently, fluctuations of the input become dynamically dominant, giving rise to irregular firing events of each neuron. The hallmarks of a balanced network include a broad and heterogeneous distribution of the single-neuron firing rate and a linear response of the mean population firing rate to the external input (Vreeswijk and Sompolinsky, 1998; Mehring et al., 2003; Renart et al., 2010). Consistent with theoretically predicted scenarios, certain experimental observations have been interpreted as consequences of balanced networks. For example, in vitro, the sustained irregular activity of neurons in slices of the ferret prefrontal and occipital cortex was shown to be driven by the balance of proportional excitation and inhibition (Shu et al., 2003). In vivo, the excitatory and inhibitory inputs to a neuron in the ferret prefrontal cortex were also found to be dynamically balanced (Haider et al., 2006).
As shown in recent experimental data, the structure of developing hippocampal networks in rats and mice conforms to a scale-free (SF) topology, with the number of connections per neuron following a power-law distribution (Bonifazi et al., 2009). Bidirectional and clustered three-neuron connection motifs were experimentally observed to occur with a frequency significantly above chance in the visual system (Song et al., 2005), thus strongly deviating from statistically homogeneous networks. The network in the somatosensory cortex of neonatal animals was found to be a small-world (SW) network (Perin et al., 2011), that is, its connectivity has the properties of high clustering and short average path lengths (Newman, 2003b). These experimental observations show that cortical neuronal connectivity is rather heterogeneous. Therefore, in this work, we investigate the influence of broadly distributed recurrent connectivity on neuronal network dynamics.
In general, it is theoretically challenging to understand the dynamical consequences of these complex network architectures (Boccaletti et al., 2006). Several studies have explored the dynamics of networks with heterogeneous connections. For instance, in Roxin (2011), the role of the broad degree distribution on the correlation of synaptic currents has been investigated. In addition, it has been observed that, in a heterogeneous network, a neuron with more presynaptic connections tends to fire less (Pyle and Rosenbaum, 2016). In a recent study (Landau et al., 2016), it has been found that a heterogeneous network is unbalanced in general because some neurons either never fire or fire fairly regularly in the network. The balanced state of the entire network (the global balanced state) can be achieved by setting strong correlations among the presynaptic excitatory, presynaptic inhibitory, and external inputs for each neuron, or through incorporating adaptation and plasticity into the dynamics of each neuron (Landau et al., 2016).
Theoretical and computational works so far have mainly focused on the global balanced state, i.e., each neuron in the network fires irregularly by receiving balanced excitation and inhibition. However, experimental studies have shown that information is often encoded by the firing of a relatively small set of neurons in the population, whereas other neurons in the network do not fire at all. This phenomenon is often referred to as sparse coding and has been observed in many cortical regions. For instance, sparse firing activity elicited by a variety of stimuli has been observed in the barrel cortex of mice (O'Connor et al., 2010), the auditory cortex of rats (Hromádka et al., 2008), and the primary olfactory cortex of rats (Poo and Isaacson, 2009). Because a large proportion of neurons is silent during information processing, this suggests that the global balanced state may not commonly exist in cortical regions.
Based on all the above observations, there are several important issues that remain to be clarified: whether the small group of active neurons embedded in large heterogeneous neuronal networks can be in a balanced state and, if so, how such a balanced state in the active subnetwork of heterogeneous networks differs from a global balanced state in homogeneous networks; whether the existence of a balanced active subnetwork sensitively depends on the topology of complex networks; what dynamical characteristics those active neurons have in order to maintain a balanced state in heterogeneous networks; how a balanced active subnetwork emerges from heterogeneous network dynamics; and what dynamical implications a balanced active subnetwork has for general complex networks. Below we will address these issues by investigating both SF networks and SW networks with various types of single-node dynamics. Note that the definition of the SF network in our simulations deviates from the exact definition in which a network is called scale-free if its degree distribution exhibits power-law behavior, at least in its upper tail, i.e., P(k) ∝ k−γ as k → +∞ (Reed, 2006). In numerical simulations, the power-law degree distribution we use takes the form P(k) ∝ k−γ for k ∈ [K0, K1]. Here, the degree of each neuron has a lower bound K0 determined as K0 ≈ 0.95%N according to an experimental observation (Bonifazi et al., 2009), where N is the network size. In addition, the degree of each neuron also has an upper bound K1 determined by Equations (8–10) to ensure that the network is sparsely connected. Because the mean connectivity of a sparse network is much smaller than the network size, the value of K1 is smaller than N.
2. Results
To contrast with the networks of heterogeneous topologies below, we first recapitulate the balanced state in a homogeneous network, i.e., an Erdős-Rényi (ER) network of binary neurons (Vreeswijk and Sompolinsky, 1998). In this balanced network, an important feature of its connectivity structure is that neurons are sparsely connected with strong synaptic strength. As discussed in detail in section 4, the average number of connections K to each neuron from both the presynaptic excitatory and presynaptic inhibitory populations is much smaller than the total number of neurons in the network, and the coupling strength is of order 1/√K. This scaling ensures persistent fluctuations of the inputs in the large-K limit.
As shown in Figure S1, the hallmarks of the balanced state in a homogeneous neuronal network are summarized as follows: balanced net input, irregular activity, stationary population-averaged activity, heterogeneity of firing rate, and linear response. A detailed description of these properties can be found in Figure S1 (Supplementary Material). All the balanced phenomena in the binary model can be demonstrated analytically from the standpoint of the classical balanced network theory (Vreeswijk and Sompolinsky, 1998). Note that both the theory and the simulations are based on the assumptions that the network is homogeneous, i.e., of the ER type, and that the neuron is of the binary type. These assumptions are strong simplifications of biological reality. Biological neuronal networks tend not to be homogeneous, e.g., the connections can be of SF (Scannell et al., 1999; Sporns et al., 2004, 2007; Kaiser et al., 2007) or SW type (Sporns and Zwi, 2004; Sporns, 2006; Perin et al., 2011). In general, it is expected that the topology could strongly influence the dynamics of neuronal networks (Shkarayev et al., 2009). A natural and important extension of the theory is to examine the existence of a balanced state in heterogeneous networks. In the following, we first investigate the SF neuronal network and then discuss the case of the SW network. As an extension of the binary neuron model, we use the integrate-and-fire (I&F) model in our simulations (Carandini et al., 1996; Rauch et al., 2003; Cai et al., 2005; Rangan et al., 2005; Zhou et al., 2009, 2013).
2.1. Uncorrelated SF Network With I&F Neurons
In this section, we address the question of whether there exists balanced-network dynamics in an uncorrelated SF network using the current-based I&F neuronal model coupled with delta-pulse synaptic currents. This model is computationally simple but biologically more realistic than the binary model (the model details can be found in section 4).
Here, we focus on the SF topology with uncorrelated in-degree between neighboring neurons, and generate SF networks with a given mean connectivity 2K (each neuron on average has K presynaptic excitatory neurons and K presynaptic inhibitory neurons). A network is called scale-free if its degree distribution exhibits power-law behavior, at least in its upper tail, i.e., P(k) ∝ k−γ as k → +∞ (Reed, 2006). It should be pointed out that the mean connectivity 2K and the decay exponent γ of the power-law distribution are the two main factors that determine the SF network connectivity structure (details can be found in section 4). We again invoke a coupling strength of order 1/√K to ensure that the network is fluctuation-driven when K is large. For each neuron in the network, the number of its presynaptic excitatory neurons is set to be highly correlated with that of its presynaptic inhibitory neurons, consistent with the setting in the classic ER network as well as with the experimental observation (Liu, 2004). The external input to each neuron is allowed to be uncorrelated with the cortical input, which breaks the global balanced state. However, it remains unclear whether a balanced state can exist in the subgroup consisting of the active neurons in such a network.
Our simulation results lead to the conclusion that only a group of neurons in this SF network have firing activity and that their dynamics follow a balanced state with all its hallmarks. In Figures 1A,B, we illustrate an example of the balance between the excitatory and inhibitory synaptic inputs to the firing neurons. We report the synaptic input at each moment by its time average within a small time window (a time bin of 2.5 ms). As shown in Figure S2, we can observe that the net input of each firing neuron can have a relatively small amplitude due to the cancellation of its excitatory and inhibitory parts. In addition, the firing rate of each individual active neuron is linearly correlated with its time-averaged net input, which is consistent with a recent study (Argaman and Golomb, 2018). Just as for neurons in the homogeneous balanced network, the coefficient of variation (CV) of the interspike intervals (ISIs) of each spiking neuron in the SF network is broadly distributed, as shown in Figure 1C. This is consistent with the irregular activity of these neurons with heterogeneous connectivity. As shown in Figure 1D, the population activity is asynchronous and stationary, as the percentage of firing neurons fluctuates in time around a constant with a small amplitude. In Figure 1E, we show that strong heterogeneity is captured by the bimodal distribution of the single-neuron firing rate. Compared with the distribution of firing rates in the homogeneous system (Figure S1E), the firing rate distribution in the SF case manifests a sharp peak near the origin (blue bar). Our result shows that there exists a group of neurons with no firing activity (we will further discuss the significance of this phenomenon below). We point out that a group of neurons with no firing activity has been previously found in other heterogeneous networks, e.g., networks with broad Gaussian distributions of degrees (Landau et al., 2016; Argaman and Golomb, 2018). Finally, in Figure 1F, we show the linear response of both the excitatory and inhibitory populations to the external rate. These features persist asymptotically as the size of the network increases. In particular, the fluctuations of the synaptic currents received by active neurons do not vanish but remain of order one, even for very large network size (Figure S3). To summarize, by the above hallmarks of the balanced state, the stationary state of those neurons with firing activity in the SF I&F neuronal network with delta-pulse synaptic currents can be readily identified as a balanced state.
FIGURE 1
2.2. Quiescent and Active Groups in the SF Network
As shown in Figure 1E, we find that the neuronal dynamics of SF networks separate the neuronal population into two subnetworks in our simulations: one consisting of neurons that fire no spikes (blue bar), which will be referred to as the quiescent group; the other consisting of firing neurons (red bar), referred to as the active group (core). Subsequently, we investigate the mechanism by which the SF network evolves into these two different groups and the dynamical characteristics of the neurons in each group.
In a balanced network, the excitatory and inhibitory inputs to each neuron need to approximately cancel each other. Therefore, the following mean-field balanced conditions must hold in the large-K limit (Vreeswijk and Sompolinsky, 1998):
K J_{EE} m_E + f_E \nu_E = -K J_{EI} m_I, \qquad K J_{IE} m_E + f_I \nu_I = -K J_{II} m_I, \quad (1)
where mα is the mean firing rate of the αth population, Jαβ is the coupling strength from the βth population to the αth population, and fα and να are the strength and the rate of the external Poisson input to the αth population for α, β = E, I. As shown in Figure 2A, the excitatory and inhibitory inputs are indeed proportional to each other in both the active and quiescent groups. This is consistent with a recent experimental observation (Xue et al., 2014). In addition, it can be clearly observed that the quiescent group is strongly inhibited, because the inhibitory input in the quiescent group is more than twice that in the active group for the same excitatory input. When the time-averaged total input to a neuron, normalized by its standard deviation, is denoted by ϑ, it is clear from Figure 2B that the distribution of ϑ has a long negative tail for the quiescent group. Consequently, fluctuations can rarely drive their membrane potentials across the threshold. Note that the distribution of ϑ is concentrated around zero for the active group, indicating that the neurons in the active group have fluctuation-dominated inputs. In addition, the fact that the quiescent group is strongly inhibited is also reflected in the cross-correlation structure between the excitatory and inhibitory synaptic inputs to each neuron in the quiescent group (Roxin, 2011). As shown in Figure S4A, the average cross-correlation is higher for neurons in the quiescent group than for neurons in the active group, indicating that an increase of the excitatory input is quickly followed and canceled by the inhibitory input, such that a neuron in the quiescent group is more inhibited than a neuron in the active group at each moment. Therefore, neurons in the quiescent group are not in the balanced state.
FIGURE 2
Next, we investigate the issue of how the coupling strength of the network affects the emergence of the active group. In particular, we focus on the competition between the excitatory and inhibitory coupling strengths quantified by the ratio ϕ = JEI/JEE with fixed JII/JEI. In our simulation, we fix the network topology while varying the value of ϕ. Note that the degree distribution of the entire network is given by the SF network construction and is thus independent of ϕ, while the degree distribution of the active group depends on ϕ: different coupling strengths give rise to different dynamics, which in turn generate different active subnetworks dynamically. Figure 2C displays the degree distribution of the entire network and those of the quiescent groups for different values of ϕ. It is important to observe that these degree distributions agree with one another in the region of large degrees, that is, the quiescent group tends to be composed of the neurons with a large degree. This is consistent with a recent observation (Pyle and Rosenbaum, 2016) that a neuron with a larger degree tends to fire less. Because the neurons in the quiescent group have large degrees, each pair of them tends to share a large number of common inputs. As shown in Figures S4B,C, by calculating the cross-correlation between the excitatory inputs or inhibitory inputs received by two neurons from the same group, we find that the average cross-correlation value between two inputs of the same type (excitatory or inhibitory) across all pairs of neurons in the quiescent group is indeed significantly greater than that in the active group.
Next, we employ a coarse-grained approach to deepen our understanding of the dynamics of this SF system.
2.3. Fokker-Planck Analysis of the SF Network Dynamics
From the mean-field balanced conditions (1), one can obtain the relationship between the population-averaged mean firing rate and the external drive:
m_E = \frac{1}{K}\,\frac{J_{II} f_E \nu_E - J_{EI} f_I \nu_I}{J_{EI} J_{IE} - J_{II} J_{EE}}, \qquad m_I = \frac{1}{K}\,\frac{J_{IE} f_E \nu_E - J_{EE} f_I \nu_I}{J_{EI} J_{IE} - J_{II} J_{EE}}. \quad (2)
As shown in Figure 3A, the predictions of the balanced conditions Equation (2) cannot adequately capture the linear response of the population-averaged mean firing rates to the external inputs obtained in the simulation. To understand quantitatively the influence of the degree heterogeneity of the SF network, we perform the analysis of the Fokker-Planck (FP) equations corresponding to the network dynamics below.
FIGURE 3
As shown in Figure 3B, we first note that the firing events between neurons are extremely weakly correlated in the SF network. Therefore, the input into each neuron in the system can be regarded as three Poisson trains (Cinlar, 1972): the external, the excitatory, and the inhibitory synaptic inputs. Accordingly, we can derive the FP equation to describe the dynamics of an I&F neuron with Poisson inputs (Brunel, 2000; Cai et al., 2006). To derive the FP equation for a group of coupled neurons, we need to take the structure of the SF network into account (see section 4 for details). By treating all the neurons that possess the same number of presynaptic neurons as one ensemble, we then derive the FP equation for each ensemble and further obtain its stationary-state solution. We find that the mean firing rate mk for the kth ensemble decays exponentially with the neuronal degree k in that ensemble. Consequently, neurons with a sufficiently large degree will not fire or have extremely low firing rates that can barely be detected in numerical results with a finite simulation time, thus they will be classified into the quiescent group. The exponential decay of the firing rate from the FP analysis has been further verified in numerical simulations as shown in Figure 3C.
2.4. Balanced Active Core and Conditions for Its Existence
From the above discussion, it can be clearly seen that the entire SF network is unbalanced, whereas the active subnetwork is balanced. We now focus on the balanced subnetwork that contains only the active neurons and the connectivity structure of these neurons. We will refer to this subnetwork as an active core, which captures the spiking activity and the effective communication of the entire neuronal network.
We first investigate the issue of how to quantitatively characterize the features of the active core. From Figure 4A, it is important to note that the degree distribution of the neurons in the active core is sharply peaked, resembling that of neurons in homogeneous networks. Why does the degree distribution of the active core in the heterogeneous SF network possess the characteristics of a homogeneous ER network? For each neuron, we first examine the fraction of its active presynaptic neurons among all its presynaptic neurons, denoted as p below. The distribution of p, as shown in the inset of Figure 4A, is sufficiently narrow to be approximated as a constant. In general, the value of p is affected by the E-I input strength ratio ϕ and the decay exponent of the degree distribution γ, as shown in Figure S5. Note that, for each neuron, p can also be viewed as the probability of finding one of its presynaptic neurons to be active. The probability of finding a neuron with w active presynaptic neurons can then be derived from the law of total probability, P(w) = Σ_k P(k) P(w|k), where P(k) is the probability of finding a neuron with k presynaptic neurons, whose distribution here follows a power law, P(k) = ck−γ. By ignoring the correlation between the degree distribution of the active core and the formation of the active core, the conditional probability P(w|k) can be approximated by the binomial distribution P(w|k) = C_k^w p^w (1−p)^{k−w}, where C_k^w is the binomial coefficient. Further approximating the binomial distribution by a Gaussian, we can derive an approximation for P(w):
P(w) \approx \sum_k c\, k^{-\gamma} \frac{1}{\sqrt{2\pi k p(1-p)}} \exp\left[-\frac{(w - pk)^2}{2 k p(1-p)}\right]. \quad (3)
FIGURE 4
The probability P(w) is a sum of a series of Gaussian terms, with the coefficient of each term weighted by k−γ. Therefore, a larger value of k makes a smaller contribution to the sum. In particular, for sufficiently large γ, the sum is dominated by a single Gaussian term. When γ is O(1), as set in our simulations, the degree distribution of the active core still resembles a Gaussian and can be captured by Equation (3). As shown in Figure 4A and Figures S5B–D, the prediction of Equation (3) is in very good agreement with the measured degree distribution of the active core for various values of γ.
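The weighted-Gaussian sum in Equation (3) is straightforward to evaluate numerically. The following minimal sketch (Python; the function name and the values of p, γ, K0, and K1 are our illustrative assumptions, not taken from the simulations) tabulates P(w) and locates its peak, which lies near pK0 because the small-degree terms dominate the sum.

```python
import numpy as np

def active_indegree_dist(w, p, gamma, k0, k1):
    """Approximate P(w) from Eq. (3): a power-law-weighted sum of Gaussians."""
    ks = np.arange(k0, k1 + 1, dtype=float)
    c = 1.0 / np.sum(ks ** (-gamma))                  # normalization constant of P(k)
    var = ks * p * (1.0 - p)                          # variance of each Gaussian term
    gauss = np.exp(-(w - p * ks) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
    return np.sum(c * ks ** (-gamma) * gauss)

# Hypothetical parameters: 40% of presynaptic neurons active, gamma = 2.6, degrees in [380, 3000].
ws = np.arange(50, 500)
pw = np.array([active_indegree_dist(w, p=0.4, gamma=2.6, k0=380, k1=3000) for w in ws])
print(ws[np.argmax(pw)])   # the distribution peaks near p * K0 = 152
```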
Denoting the size and mean connectivity (in-degree) in the active core as Nactive and Kactive respectively, we next examine the relationship between Nactive and N as well as Kactive and K. Recall that K is the average presynaptic connectivity of the original SF network. Numerically, as shown in Figure 4B, Nactive and Kactive increase linearly with N and K respectively. As a consequence, when K → +∞, N → +∞, we also have Kactive → +∞, Nactive → +∞. Therefore, the dynamics of the active core possess the same asymptotic behaviors as those of an ER network in the large-K limit.
By considering the active core as a homogeneous network, we can numerically solve its population-averaged mean firing rate from the following equations derived from the balanced condition in the large-K (Kactive) limit
m_{\mathrm{active},E} = \frac{1}{K_{\mathrm{active}}}\,\frac{J_{II} f_E \nu_E - J_{EI} f_I \nu_I}{J_{EI} J_{IE} - J_{II} J_{EE}}, \qquad m_{\mathrm{active},I} = \frac{1}{K_{\mathrm{active}}}\,\frac{J_{IE} f_E \nu_E - J_{EE} f_I \nu_I}{J_{EI} J_{IE} - J_{II} J_{EE}}, \quad (4)
where the averaged connectivity of the active core Kactive is read out from the simulation. As shown in Figure 4C, the linear response property of the active core can be well-captured by the predictions from Equation (4). The successful prediction also suggests the validity of the assumption in the analysis that the active core can be viewed as a balanced homogeneous network.
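As an illustration of Equation (4) (and of Equation 2, which has the same form with K in place of Kactive), the following sketch evaluates the predicted population rates. The parameter values follow the scaling convention of section 4.3 but are otherwise hypothetical, as is the value of Kactive, which in practice is read out from the simulation.

```python
import numpy as np

def balanced_rates(K, J_EE, J_EI, J_IE, J_II, f_E, f_I, nu_E, nu_I):
    """Population-averaged firing rates predicted by the balanced conditions, Eqs. (2)/(4)."""
    det = J_EI * J_IE - J_II * J_EE
    m_E = (J_II * f_E * nu_E - J_EI * f_I * nu_I) / (K * det)
    m_I = (J_IE * f_E * nu_E - J_EE * f_I * nu_I) / (K * det)
    return m_E, m_I

# Hypothetical values: K_active read from simulation, nu0 an external-rate parameter.
K_active, nu0 = 300.0, 5.0
J_EE = J_IE = 1.0 / np.sqrt(K_active)
J_II, J_EI = 1.8 / np.sqrt(K_active), 2.0 / np.sqrt(K_active)
f_E = f_I = 1.0 / np.sqrt(K_active)
m_E, m_I = balanced_rates(K_active, J_EE, J_EI, J_IE, J_II, f_E, f_I,
                          nu_E=nu0 * K_active, nu_I=0.8 * nu0 * K_active)
print(m_E, m_I)   # both equal nu0 for this parameter set
```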
Note that the active core encompasses all spike events in the SF neuronal network, with connectivity similar to that of an ER network. The characteristics of the balanced state persist in the active core, that is, the properties of balanced net input, irregular activity, stationary population-averaged activity, heterogeneity of firing rate, and linear response all hold. Clearly, our results demonstrate that there exists a balanced active core in the SF neuronal network.
Next, through theoretical analysis and numerical simulations, we have found that the following three conditions must hold in order to obtain the balanced active core (a simple numerical check of conditions (1) and (3) is sketched after the list):
(1) The cortical and external input strengths must satisfy the following relation
\frac{f_E \nu_E}{f_I \nu_I} > \frac{J_{EI}}{J_{II}} > 1, \quad (5)
which can be derived from the balanced condition (Equation 4) by requiring the firing rates to be positive values and setting JIE = JEE for simplicity. Note that Equation (5) is consistent with the conditions derived from a homogeneous network (Vreeswijk and Sompolinsky, 1998).
(2) The excitatory and inhibitory in-degrees for each neuron shall be highly correlated. This condition is consistent with the experimental observation that a conserved ratio of the numbers of excitatory and inhibitory synapses has been observed throughout the dendrites of cultured hippocampal neurons (Liu, 2004).
(3) The smallest degree K0 in the network is required to be of the same order as the population-averaged degree K. In fact, by analyzing the FP equation of the neuronal ensemble with the smallest degree K0, we find that the total input to each neuron in this ensemble must be close to zero in order to achieve the balance between the excitatory and inhibitory inputs, i.e.,
f K \nu_0 + J_{\alpha E} \frac{\Gamma}{1+\Gamma} r_E K_0 - J_{\alpha I} \frac{1}{1+\Gamma} r_I K_0 \approx 0, \quad (6)
where f is the external input strength, ν0, rE, and rI are the average firing rates of the presynaptic external, cortical excitatory, and cortical inhibitory neuronal populations, respectively, K is the number of presynaptic external neurons, identical to the mean connectivity of the SF network, and Γ is the ratio of the number of each neuron's presynaptic excitatory neurons to that of its presynaptic inhibitory neurons. Because f, JαE, and JαI are of order O(1/√K), while ν0, rE, rI, and Γ are of order O(1), K0 is required to be of the same order as K in order for the total input to cancel.
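As a simple illustration, the following sketch (Python, with hypothetical parameter values following the scaling of section 4.3) checks the inequality of condition (1), Equation (5), together with a rough order-of-magnitude version of condition (3); condition (2) concerns the wiring itself and is therefore not a parameter check.

```python
import numpy as np

def check_balance_conditions(J_EI, J_II, f_E, f_I, nu_E, nu_I, K0, K):
    """Check condition (1) (Eq. 5) and a rough version of condition (3)."""
    cond1 = (f_E * nu_E) / (f_I * nu_I) > J_EI / J_II > 1.0
    cond3 = 0.1 < K0 / K < 10.0          # heuristic check that K0 is of the same order as K
    return cond1, cond3

K = 400.0                                 # hypothetical mean excitatory in-degree
print(check_balance_conditions(J_EI=2.0 / np.sqrt(K), J_II=1.8 / np.sqrt(K),
                               f_E=1.0 / np.sqrt(K), f_I=1.0 / np.sqrt(K),
                               nu_E=5.0 * K, nu_I=4.0 * K, K0=380.0, K=K))
# prints (True, True) for this parameter set
```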
As one example, shown in Figure 5A, breaking the first condition results in a synchronous state of the network. In addition, as shown in Figure 5B, breaking the second or third condition causes the ratio of the excitatory to the inhibitory current input to neurons in the active group to deviate from unity, which obviously breaks the balanced input condition and, as a consequence, the balanced state of the subnetwork. Keeping excitation and inhibition balanced in the active neurons requires a sufficiently high correlation between the numbers of excitatory and inhibitory presynaptic connections, as shown in Figure 5C. Note that these conditions for the existence of the balanced active core are different from those proposed by previous works for the existence of the global balanced state (Landau et al., 2016), in which the number of cortical inhibitory inputs is required to be correlated with both the number of cortical excitatory inputs and the number of external inputs. In contrast, the existence of a balanced active core only requires that the number of cortical inhibitory inputs be correlated with that of cortical excitatory inputs but not with that of external inputs.
FIGURE 5
2.5. Correlated SF Networks With I&F Neurons
Because the architectural degree-correlation may play an important role in the dynamics of a system (Shkarayev et al., 2009), we generate SF networks with degree-correlation between neighboring nodes using a reshuffling strategy (Xulvi-Brunet and Sokolov, 2005). A balanced active core can still arise in correlated SF neuronal networks. An example is shown in Figure S6, in which the five hallmarks of balanced net input, irregular activity, stationary population-averaged activity, heterogeneity of firing rate, and linear response again persist robustly. The degree distribution exponent γ of the SF network used here is γ = 2.6, which is the same as that of the SF network for the uncorrelated case reported above (Figure 1). The degree correlation coefficient for the SF network in Figure S6 is ρ = 0.03. Similar to an SF network without degree correlation, the distribution of single-neuron firing rates also possesses a high peak at zero. The SF network with degree correlation can also be decomposed into two subnetworks of distinct dynamics characterized by their firing rates. Figure S7B demonstrates that the structure of the corresponding active core also resembles that of homogeneous networks.
We further generate SF networks with different correlation coefficients and with γ = 2.6; all these SF systems exhibit dynamics with a balanced active core. The degree distribution of the active core can be successfully described by Equation (3) for all values of ρ ranging from −0.3 to 0.31, as shown in Figure S7. The properties of the dynamics in these active cores are again similar to those of an ER balanced network.
In summary, our results show that the degree correlation between different nodes does not affect the properties of the balanced active core in SF networks. For SF neuronal networks with degree correlations, the active core persists with a structure similar to that of an ER network and possesses all the characteristics of the balanced state.
3. Discussion
In this work, we have shown that a sparsely but strongly connected SF network of I&F neurons can reach the balanced state in the active subgroup if the network satisfies three conditions: the inequality (Equation 5) relating the external input strengths to the excitatory and inhibitory coupling strengths, a high correlation between the numbers of presynaptic cortical excitatory and presynaptic cortical inhibitory neurons for each neuron, and the smallest degree K0 being of the same order as the population-averaged degree K. Despite the fact that all neurons in the SF network receive external inputs, the network is naturally separated into two subnetworks: one is the quiescent group consisting of silent neurons and the other is the active group consisting of neurons with non-zero firing rates. The separation into active and quiescent subgroups has also been observed in other heterogeneous networks with broad Gaussian degree distributions (Landau et al., 2016; Argaman and Golomb, 2018). The subnetwork consisting of all the active neurons, together with the connections between these neurons, is then defined here as the active core. From our simulation, this active core possesses a degree distribution characteristic of a homogeneous ER network, which can be described well by our theoretical analysis (Equation 3). In addition, the active core displays dynamical properties similar to those of the balanced state of an ER network (Figures 1, 4).
In addition, our results suggest that the balanced active core can be found in various heterogeneous networks, as the results below show. We also find that the silent neurons always possess larger degrees than the active neurons. This can be understood intuitively as follows: if the number of external inputs is fixed, and the ratio of the numbers of excitatory and inhibitory synapses is maintained, neurons in the heterogeneous network that receive a large number of recurrent connections receive effectively more inhibition and are therefore silent.
3.1. Balanced Active Core in Other Networks
In addition to the network of pulse-coupled I&F neurons, the balanced active core can also be found in SF networks of either binary neurons or smooth-current-based I&F neurons, as shown in Figures S8–S10. These results imply that the existence of the balanced active core is robust with respect to the detailed single-neuron dynamics.
It has been shown that different architectural degree-correlations can induce different dynamical properties in SF networks (Krapivsky and Redner, 2001). These correlations can strongly influence the dynamics of the system (Shkarayev et al., 2009). However, as far as the balanced state is concerned, there still exists a balanced active core in SF neuronal networks with degree correlations as shown in Figures S6, S7, in which an ER-like active core controls its dynamics. In addition, the properties of the balanced state have been studied for various SF networks with a different decay exponent of degree distribution γ. As shown in Figures S5, S11–S13, the value of γ does not affect the existence of the balanced active core, but affects the size of the active core.
Since certain neuronal networks in the brain have been shown to exhibit small-world (SW) characteristics (Perin et al., 2011), we have also conducted simulations with SW connectivity. An active core with balanced dynamics is again observed in the SW neuronal network for different rewiring probabilities (Figure S14). The degree distribution of the active core is still close to that of an ER network. Our results suggest that the balanced state embedded in the active core may broadly exist for various heterogeneous networks (Figure 4 and Figures S5, S7, S10, S14).
3.2. Heterogeneity in External Input
Accounting for the fact that the external inputs may vary from neuron to neuron, we have also examined the case of heterogeneous inputs in the simulation. Here, we choose the rate ν_α^i of the external input to the ith neuron in the αth population from a Gaussian probability distribution with mean να and standard deviation CV·να for α = E, I, where CV is the coefficient of variation. As shown in Figure S15, for CV ranging from 0.1 to 0.4, we can still observe the existence of the active core, in which neurons receive balanced excitatory and inhibitory inputs. This indicates that broadly-distributed external input may not affect the existence of the active core (Figure S16).
In addition, Figures S17A,B provide an example in which the strength of the external input to different neurons follows a log-normal distribution (Song et al., 2005) and the rate is uniformly distributed. In this case, the dynamics still manifest a balanced active core whose in-degree distribution is again in excellent agreement with the prediction of Equation (3), as shown in Figure S17C. These results suggest that the active core can exist for various external inputs.
3.3. Biological Relevance of the Balanced Active Core
Many neuronal networks in the brain exhibit statistically heterogeneous connectivity structures. It has been observed that the connections of the neurons in layer 5 of the rat visual cortex display various highly clustered three-neuron connectivity patterns (Song et al., 2005). In addition, neuronal connectivity has been found to possess SF properties in rat hippocampal networks (Bonifazi et al., 2009). The network connectivity between neurons in the somatosensory cortex of neonatal animals possesses the attributes of a SW network (Perin et al., 2011).
In addition, experimental studies have shown that there often exists a small subnetwork of highly active neurons, along with a large proportion of silent neurons, in the neocortex of the brain. For example, during a head-fixed object localization task, only about half of all the neurons in a barrel column have been found to fire (O'Connor et al., 2010). Experimental recordings in the primary auditory cortex of unanesthetized rats have shown that 50% of the neural population failed to respond to any of the simple stimuli (Hromádka et al., 2008). Furthermore, in vivo, each odor evokes activity in only about 10% of neurons in layer 2/3 of the anterior piriform cortex (Poo and Isaacson, 2009).
Our results show that, starting from heterogeneous network connectivity, the emergent network dynamics naturally capture the phenomenon of sparse coding and balanced inputs in a group of neurons. In contrast, in the traditional theory of a balanced network, the majority of neurons are balanced and thus fire actively. Therefore, in such a network, information can hardly be encoded by only a few active neurons while the other neurons remain quiescent. Note that, in order to achieve a balanced active core, our model assumes that the numbers of cortical excitatory and inhibitory inputs should be highly correlated, which is supported by experimental observation (Liu, 2004).
3.4. Comparison With Previous Studies
Several studies have explored the dynamics of networks with heterogeneous connections. For example, in a recent study (Landau et al., 2016), a large fraction of silent neurons was found in networks with sufficiently broad degree distributions. Moreover, it has been demonstrated that a heterogeneous network cannot reach a global balanced state in general when the cortical excitatory, cortical inhibitory, and external in-degrees are uncorrelated, because some neurons either never fire or fire fairly regularly in the network (Landau et al., 2016). To achieve the global balanced state, the authors introduced adaptation, plasticity, or degree correlation into the network. In particular, by setting the number of cortical inhibitory inputs to be correlated with both the number of cortical excitatory inputs and the number of external excitatory inputs, the whole network stays in the global balanced state (Landau et al., 2016), in which all neurons in the network receive balanced input and fire irregularly. Thus the balanced active group in such a case is the entire network. Different from their settings, here we set the correlation only between the numbers of cortical excitatory and inhibitory inputs, leaving the number of external inputs uncorrelated, and clearly show that the mean firing rate of each neuron decays exponentially with its in-degree, giving rise to the emergence of an active core consisting of small-degree neurons. In addition, we further investigate the property of the active core, i.e., the subnetwork composed of the active neurons, and find that the balanced state does exist in the active core of the whole network. Our simulation shows that the balanced active core exists in a variety of networks with different degree distributions (scale-free, small-world, and broad Gaussian), provided that the cortical excitatory inputs are correlated with the cortical inhibitory inputs. Therefore, the existence of the active core seems not to depend on the degree distribution, but rather on the correlation of cortical excitatory and inhibitory inputs. Moreover, by increasing the correlation between the number of external inputs and the number of cortical inputs to each neuron, the size of the active core increases accordingly. Note that the increased correlation level has different effects on the active and quiescent groups. Intuitively, neurons in the active core are already balanced by definition, thus the increase of correlation can only moderate the firing rate of these active neurons but has little effect on the size of the active core. However, for neurons in the quiescent group, such as neurons with degree k, the increase of correlation will drive these neurons toward the balanced state by satisfying the balanced condition, i.e., kfν0 + kJαErE − kJαIrI ≈ 0. Therefore, as the correlation level increases, the size of the active core in the network increases by recruiting more and more neurons that used to be in the quiescent group. Eventually, the size of the active core can be the same as the network size. We note that the balanced active core can also be found in networks with broad Gaussian degree distributions if the numbers of cortical excitatory and cortical inhibitory inputs to each neuron are correlated while remaining uncorrelated with the number of external inputs.
In another study (Argaman and Golomb, 2018), the authors investigated a network of 150 inhibitory neurons in the barrel cortex with heterogeneous connections between them, in which the neurons receive heterogeneous excitatory inputs from thalamic neurons. The network is neither strongly coupled nor sparsely connected. In addition, the number of inhibitory cortical inputs is set to be uncorrelated with the number of excitatory inputs. In this type of modestly-sized network, it has been found that the fraction of silent neurons is very small, and most neurons appear to be in the balanced state. Here, we have investigated strongly coupled but sparsely connected networks consisting of a large number of both excitatory and inhibitory neurons. With these different settings, we have shown that there is a large fraction of silent neurons in the network, and the balanced state only exists for a small fraction of neurons in the entire network, i.e., the active core.
Moreover, we have found that the degree distribution of the active core is close to the homogeneous connectivity structure. We note in passing that the emergence of the balanced active core does not naturally result from the high correlation between the numbers of the cortical excitatory inputs and the cortical inhibitory inputs. This can be illustrated by the following facts: first, large-degree neurons with such high correlation structure fail to reach the balanced state in general; second, small-degree neurons with such high correlation structure also fail to reach the balanced state if the network topology does not satisfy the third condition as discussed in section 4 (also shown in Figure 5). Another study has used a similar theoretical framework to investigate the effective gain in a heterogeneous network (Roxin, 2011). They also treated neurons with the same in-degree as one ensemble. However, they directly used the firing-rate-based neuron model, rather than deriving the FP equation as in our case. Moreover, the balanced property of the network has not been investigated in their work. In summary, the finding of the existence of the balanced active core embedded in a heterogeneous network distinguishes our work from several studies exploring the dynamics of networks with heterogeneous connections. Our work provides a potential dynamical scenario for the emergence of a balanced active core in a heterogeneous network in the brain.
4. Materials and Methods
4.1. Degree Distribution and Degree Correlation
In the study of networks, the degree of a node is the number of connections it has to other nodes. For a directed network, nodes have two different degrees: the in-degree, which is the number of incoming edges to a node, and the out-degree, which is the number of outgoing edges from a node. In this work, we mainly focus on the in-degree distribution and simply use degree instead of in-degree for ease of discussion. The degree distribution P(k) of a network is the probability of finding a k-node, where a k-node is a node of degree k. The degree distribution of a directed ER network follows the Poisson distribution, P(k) = λke−λ/k!, which can be approximated by a Gaussian distribution for large λ (λ ≫ 1), λ being the average degree of the network. The degree distribution of an SF network, by definition, follows a power-law distribution P(k) ∝ k−γ, γ being the decay exponent (Barabási et al., 1999).
Beyond the degree distribution, it is also important to characterize the degree-correlation between neighboring nodes for large networks of complex structure (Pastor-Satorras et al., 2001; Newman, 2003a). In general, a network displays degree-correlations if the wiring probability between high- and low-degree nodes differs significantly from that of independent random wiring between nodes. In our work, the degree-correlation is quantified by the Pearson correlation coefficient between the in-degrees of pairs of nodes linked by a directed edge.
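The following minimal sketch (Python; the function name and the example data are ours) illustrates this measure: for every directed edge it collects the in-degrees of the source and target nodes and returns their Pearson correlation coefficient.

```python
import numpy as np

def in_degree_correlation(edges, n):
    """Pearson correlation between the in-degrees of the endpoints of each directed edge.

    `edges` is an array of (source, target) node pairs; `n` is the number of nodes.
    """
    edges = np.asarray(edges)
    k_in = np.bincount(edges[:, 1], minlength=n)   # in-degree of every node
    k_src = k_in[edges[:, 0]]                      # in-degree of each edge's source node
    k_tgt = k_in[edges[:, 1]]                      # in-degree of each edge's target node
    return np.corrcoef(k_src, k_tgt)[0, 1]

# Example with hypothetical data: a random directed graph gives a coefficient near zero.
rng = np.random.default_rng(1)
edges = rng.integers(0, 200, size=(5000, 2))
print(in_degree_correlation(edges, n=200))
```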
4.2. The Generation of Scale-Free Neuronal Networks
To generate an SF neuronal network, we first generate the in-degree (out-degree) of each neuron in the αth population based on the power-law distribution Pα, in (Pα, out) for α = E, I. For ease of discussion, we set the first NE nodes in the N-node network to be excitatory neurons, and the remaining ones to be inhibitory. Then we generate N in/out-degree pairs (ki, li), one for each neuron, and calculate the sums ∑iki and ∑ili. We use the method in Newman et al. (2001) to enforce the conservation of in-degree and out-degree, i.e., ∑iki = ∑jlj. To be specific, when ∑iki ≠ ∑jlj, we randomly select a neuron i and regenerate a new degree pair (ki, li) from the corresponding degree distributions. We repeat the procedure until ∑iki = ∑jlj. Then, we further define Γi as the ratio of the number of presynaptic excitatory neurons to that of presynaptic inhibitory neurons for the ith neuron, so that ki, E = kiΓi/(1 + Γi) is the number of excitatory incoming connections and ki, I = ki/(1 + Γi) is the number of inhibitory incoming connections for the ith neuron. Various levels of cross-correlation between the number of excitatory cortical inputs {ki, E} and the number of inhibitory cortical inputs {ki, I} can be obtained by choosing different values of {Γi}. Finally, we make directed connections in the network according to {(ki, E, ki, I), li} with the configuration model (Newman et al., 2001; Newman, 2003b). Note that the degrees of the connected nodes in such an SF network are uncorrelated (Aiello et al., 2000; Newman et al., 2001). To generate an SF network with degree correlation, we use a simple edge-node reshuffling strategy, which is a simplified version of the algorithm in Xulvi-Brunet and Sokolov (2005). In our simulations, unless otherwise specified, the decay exponent is chosen to be γ = 2.6, which is within the normal range of γ for real-world SF networks according to Barabási et al. (1999).
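A minimal sketch of this construction is given below (Python). It samples in-degrees from the truncated power law, splits each in-degree into excitatory and inhibitory parts with a prescribed ratio Γ, and draws the presynaptic partners at random. For brevity it prescribes only the in-degree sequence, whereas the full configuration model of Newman et al. (2001) additionally matches the out-degree sequence and enforces the conservation of stub sums; all parameter values in the example (a scaled-down network of N = 2000 neurons) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_powerlaw_degrees(n, gamma, k0, k1):
    """Draw n degrees from P(k) proportional to k^(-gamma) on the integer range [k0, k1]."""
    ks = np.arange(k0, k1 + 1)
    p = ks.astype(float) ** (-gamma)
    return rng.choice(ks, size=n, p=p / p.sum())

def generate_sf_in_edges(n, n_exc, gamma, k0, k1, Gamma=1.0):
    """Return, for every target neuron, its excitatory and inhibitory presynaptic neurons."""
    k_in = sample_powerlaw_degrees(n, gamma, k0, k1)
    exc_ids, inh_ids = np.arange(n_exc), np.arange(n_exc, n)   # first n_exc neurons are excitatory
    pre_E, pre_I = [], []
    for i in range(n):
        k_E = int(round(k_in[i] * Gamma / (1.0 + Gamma)))      # excitatory in-degree, ratio Gamma
        k_I = int(k_in[i]) - k_E                                # inhibitory in-degree
        pre_E.append(rng.choice(exc_ids, size=k_E, replace=False))
        pre_I.append(rng.choice(inh_ids, size=k_I, replace=False))
    return pre_E, pre_I

# Scaled-down, illustrative parameters: N = 2000, gamma = 2.6, K0 = 19 (about 0.95% of N), K1 = 400.
pre_E, pre_I = generate_sf_in_edges(n=2000, n_exc=1000, gamma=2.6, k0=19, k1=400)
print(np.mean([len(e) + len(i) for e, i in zip(pre_E, pre_I)]))   # realized mean in-degree
```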
We denote by C_{ij}^{αβ} the element of the adjacency matrix, with C_{ij}^{αβ} = 1 if there is a directed edge from the jth neuron in the βth population to the ith neuron in the αth population, where α, β = E, I. Each neuron is connected, on average, to K presynaptic excitatory neurons and K presynaptic inhibitory neurons. Because each neuron in the cortex is connected to a large number of presynaptic neurons (Braitenberg and Schüz, 1998), the value of K should be chosen sufficiently large to reflect this fact. In addition, electrophysiological recordings from cortical neurons show that the probability of connection is often rather low, thus yielding a sparse network (Holmgren et al., 2003). Therefore, the value of K should be chosen to be much smaller than the size of the population. Because cells in the primary visual cortex of adult cats were found experimentally to fire much more irregularly in vivo than in vitro when the same stimulus was used (passing the same current through the electrode), fluctuations of the synaptic inputs are particularly important for irregular spiking (Holt et al., 1996). In light of this, we choose the coupling strength to scale as 1/√K, so that fluctuations of order one persist in the total synaptic input to a neuron in the large-K limit (van Vreeswijk and Sompolinsky, 1996; Vreeswijk and Sompolinsky, 1998; Vogels et al., 2005). We adopt this scaling for all the neuron models used in this work.
Next we explain how to find a power-law degree distribution with the decay exponent γ and the mean connectivity 2K for the generation of an SF network. Because the network size is always finite in numerical simulations, the degree of each neuron varies and has a lower bound denoted as K0 and an upper bound denoted as K1. Therefore, the power-law distribution takes the form as
P(k) = C k^{-\gamma} \quad \text{for } k \in [K_0, K_1],
with a normalization constant C. By the definition of probability and its mean, we have
\sum_{k=K_0}^{K_1} P(k) = 1, \qquad \sum_{k=K_0}^{K_1} k P(k) = 2K. \quad (7)
Intuitively, for fixed K and γ in an SF network, the two parameters K0 and K1 cannot both be determined by Equation (7), since there are three unknowns (C, K0, and K1) but only two equations. This can be shown as follows.
(1) For γ > 0 and γ ≠ 1, 2, Equation (7) can be approximated as
1 \approx \int_{K_0}^{K_1} P(k)\, dk = C\, \frac{K_1^{1-\gamma} - K_0^{1-\gamma}}{1-\gamma}, \qquad 2K \approx \int_{K_0}^{K_1} k P(k)\, dk = C\, \frac{K_1^{2-\gamma} - K_0^{2-\gamma}}{2-\gamma}.
Subsequently, we can obtain the following relationship
2K \approx \frac{1-\gamma}{2-\gamma} \cdot \frac{(K_1/K_0)^{2-\gamma} - 1}{(K_1/K_0)^{1-\gamma} - 1}\, K_0. \quad (8)
(2) For γ = 1, Equation (7) can be approximated as
1 \approx \int_{K_0}^{K_1} P(k)\, dk = C \ln(K_1/K_0), \qquad 2K \approx \int_{K_0}^{K_1} k P(k)\, dk = C (K_1 - K_0).
Then, we can obtain
2K \approx \frac{K_1/K_0 - 1}{\ln(K_1/K_0)}\, K_0. \quad (9)
(3) For γ = 2, it can be calculated that
1 \approx \int_{K_0}^{K_1} P(k)\, dk = C \left(\frac{1}{K_0} - \frac{1}{K_1}\right), \qquad 2K \approx \int_{K_0}^{K_1} k P(k)\, dk = C \ln(K_1/K_0).
Similarly, we have
2K \approx \frac{\ln(K_1/K_0)}{1 - K_0/K_1}\, K_0. \quad (10)
Then, given the values of K and γ, we can choose proper K0 and K1 following one of Equations (8)–(10) to ensure that the third condition for the balanced active core (section 2.4) holds. Since the experimentally observed lower cutoff of the power-law degree distribution, normalized by the network size, is about 0.95% (Bonifazi et al., 2009), we accordingly choose K0 ≈ 380 = 0.95% × (4 × 10^4) in many of our simulations, where 4 × 10^4 is the network size. The value of K0 is set to be different from 380 only when we investigate the effect of the network size in Figure 4B. Note that K1 cannot be larger than the network size.
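For a given mean connectivity 2K, lower cutoff K0, and exponent γ, the upper cutoff K1 can be obtained from Equation (8) by a one-dimensional root search. The sketch below (Python, using SciPy's brentq) illustrates this; the value of 2K in the example is hypothetical, since the specific value used in the simulations is not restated here.

```python
import numpy as np
from scipy.optimize import brentq

def upper_cutoff(K, K0, gamma):
    """Solve Eq. (8) for K1, given mean connectivity 2K, lower cutoff K0, and exponent gamma."""
    def mean_degree(r):   # right-hand side of Eq. (8), with r = K1/K0
        return (1.0 - gamma) / (2.0 - gamma) * (r**(2.0 - gamma) - 1.0) / (r**(1.0 - gamma) - 1.0) * K0
    r = brentq(lambda x: mean_degree(x) - 2.0 * K, 1.0 + 1e-6, 1e6)
    return r * K0

# Hypothetical example: K0 = 380 (0.95% of N = 4e4), gamma = 2.6, mean connectivity 2K = 760.
print(upper_cutoff(K=380.0, K0=380.0, gamma=2.6))
```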
4.3. The Current-Based I&F Model With Delta-Pulse Coupling
In our work, the sub-threshold membrane potential of an I&F neuron in a population obeys the following dynamics (Dayan et al., 2001; Newhall et al., 2010; Zhou et al., 2010)
\frac{dv_i^{\alpha}}{dt} = -g_L (v_i^{\alpha} - \epsilon_R) + I_i^{\alpha}(t), \quad (11)
where v_i^α is the membrane potential of the ith neuron in the αth population (α = E, I), gL is the leakage conductance, ϵR is the resting voltage, and I_i^α(t) is the driving current. The voltage v_i^α evolves according to Equation (11) while it remains below the firing threshold ϵT. When v_i^α reaches ϵT, the ith neuron is said to fire a spike, and v_i^α is set to the value of the reset voltage ϵR. Upon resetting, v_i^α is governed by Equation (11) again. At the same time, appropriate currents induced by the spike are injected into all of its postsynaptic neurons. We use physiological values for the parameters: gL = 50 s−1, ϵR = −70 mV, and ϵT = −55 mV. Upon non-dimensionalization, we have normalized ϵT = 1.0 and ϵR = 0.0.
The instantaneous current I_i^α(t) injected into the ith neuron of the αth population has the form I_i^α(t) = I_i^{αE}(t) + I_i^{αI}(t), where I_i^{αI}(t) = −J_{αI} ∑_{j=1}^{N_I} C_{ij}^{αI} ∑_s δ(t − τ_{js}^I) is the inhibitory input and I_i^{αE}(t) = f_α ∑_s δ(t − ς_{is}^α) + J_{αE} ∑_{j=1}^{N_E} C_{ij}^{αE} ∑_s δ(t − τ_{js}^E) is the excitatory input. Here, δ(·) is the Dirac delta function, J_{αβ} is the coupling strength from the βth population to the αth population (α, β = E, I), and f_α is the strength of the external Poisson input to the αth population. The first term in I_i^{αE}(t) corresponds to the current from the external input. The external input to the ith neuron in the αth population is modeled by a Poisson process {ς_{is}^α} with rate ν_α. At the time t = ς_{is}^α of the sth input spike to the ith neuron in the αth population, the neuron's voltage jumps by the amount f_α. The second term in I_i^{αE}(t) and the term in I_i^{αI}(t) correspond to the currents induced by the coupled neurons in the excitatory and inhibitory populations of the network, in which {τ_{js}^E} is the spike train of the jth neuron in the excitatory population, {τ_{js}^I} is the spike train of the jth neuron in the inhibitory population, and s denotes the sth spike in the train.
In the simulation, the values of the parameters in the model are set as follows: J_EE = J_IE = 1.0/√K, J_II = 1.8/√K, J_EI = 2.0/√K, f_E = f_I = 1.0/√K, and ν_E = ν_0K, ν_I = 0.8ν_0K. We vary the value of ν_0 to control the rate of the external input. To perform the numerical simulation of this I&F model, we use an event-driven scheme (Brette et al., 2007), with which the numerical results of the dynamics can be obtained up to machine accuracy.
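The idea of the event-driven scheme can be illustrated on a single neuron: between incoming delta pulses the membrane potential relaxes exactly according to Equation (11), so the state only needs to be updated at input times. The sketch below (Python) implements this for one neuron bombarded by excitatory and inhibitory Poisson trains; the rates and strengths are illustrative (assuming K = 400), and this is not the full network code used in the simulations.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_if_neuron(T, rate_E, rate_I, f, J_I, gL=50.0, eps_R=0.0, eps_T=1.0):
    """Event-driven simulation of one I&F neuron receiving Poisson delta-pulse inputs."""
    t_E = np.cumsum(rng.exponential(1.0 / rate_E, size=int(2 * rate_E * T)))
    t_I = np.cumsum(rng.exponential(1.0 / rate_I, size=int(2 * rate_I * T)))
    events = sorted([(t, +f) for t in t_E if t < T] + [(t, -J_I) for t in t_I if t < T])
    v, t_prev, spikes = eps_R, 0.0, []
    for t, jump in events:
        v = eps_R + (v - eps_R) * np.exp(-gL * (t - t_prev))   # exact leak decay between events
        v += jump                                              # voltage jump at the delta pulse
        if v >= eps_T:                                         # threshold crossing: spike and reset
            spikes.append(t)
            v = eps_R
        t_prev = t
    return np.array(spikes)

# Illustrative rates: roughly balanced excitatory and inhibitory bombardment (hypothetical values).
K = 400.0
spikes = simulate_if_neuron(T=10.0, rate_E=4000.0, rate_I=2000.0,
                            f=1.0 / np.sqrt(K), J_I=2.0 / np.sqrt(K))
print(len(spikes) / 10.0, "Hz")
```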
4.4. Fokker-Planck Equation for a Single Neuron
Under a Poisson external input, the spiking events of a neuron in the network are, in general, not Poissonian, i.e., {τ_{js}^E} and {τ_{js}^I} in the current I_i^α(t) are not Poisson processes for a fixed neuron j. However, the input to the ith neuron is a spike train summed over the output spike trains of many other neurons in the network. If the firing events of the neurons are statistically independent of one another, then the spike train obtained by summing over a large number of output spike trains asymptotically tends to a Poisson process (Cinlar, 1972). In a balanced network, the firing event of each neuron is extremely weakly correlated with, and nearly independent of, other neurons (Vreeswijk and Sompolinsky, 1998). Therefore, for each neuron, the summed incoming spikes from its presynaptic neurons can be approximated by a Poisson train. Under the Poisson approximation, we can obtain the Fokker-Planck (FP) equation corresponding to Equation (11) for each neuron in the population (Cai et al., 2006). For the ith neuron in the αth population, we have
\frac{\partial}{\partial t}\rho_i^{\alpha} = \frac{\partial}{\partial v}\left[(g_L v - \mu_i^{\alpha})\rho_i^{\alpha}\right] + \frac{(\sigma_i^{\alpha})^2}{2}\frac{\partial^2}{\partial v^2}\rho_i^{\alpha}, \quad (12)
where ρ_i^α(v, t) is the probability density at time t of finding the membrane potential of the ith neuron in the αth population at v. Here μ_i^α is the mean total input,
\mu_i^{\alpha} = f_{\alpha}\nu_{\alpha} + J_{\alpha E}\nu_i^{\alpha E} - J_{\alpha I}\nu_i^{\alpha I}, \quad (13)
and (σ_i^α)^2 is the strength of the fluctuations of the total input,
(\sigma_i^{\alpha})^2 = f_{\alpha}^2\nu_{\alpha} + J_{\alpha E}^2\nu_i^{\alpha E} + J_{\alpha I}^2\nu_i^{\alpha I}. \quad (14)
Note that ν_i^{αE} and ν_i^{αI} are the rates of the summed excitatory and inhibitory inputs, respectively, from other neurons in the network, and f_α and ν_α are the strength and rate of the external Poisson input to the αth population, respectively.
Equation (12) can be cast into the conservation form ∂ρ_i^α/∂t + ∂S_i^α/∂v = 0, with S_i^α(v, t) = −((σ_i^α)^2/2) ∂ρ_i^α/∂v − g_L(v − μ_i^α/g_L) ρ_i^α being the probability density flux through v at time t. For Equation (12), we need to specify boundary conditions at v = −∞, at the reset potential ϵR, and at the threshold ϵT. The probability flux through ϵT gives the instantaneous firing rate at time t, m_i^α(t) = S_i^α(ϵT, t). For the I&F neuron, the membrane potential cannot exceed the threshold; therefore, ρ_i^α(v, t) = 0 for v ≥ ϵT. At the reset potential v = ϵR, there is a probability flux coming from the neurons that have just crossed the threshold: what goes out at time t at the threshold must come back at time t at the reset potential, thus S_i^α(ϵR^+, t) − S_i^α(ϵR^−, t) = m_i^α(t). The natural boundary condition at v = −∞ is that ρ_i^α tends to zero sufficiently rapidly to be integrable, lim_{v→−∞} ρ_i^α(v, t) = 0 and lim_{v→−∞} v ρ_i^α(v, t) = 0. By definition, ρ_i^α(v, t) satisfies the normalization condition ∫_{−∞}^{ϵT} ρ_i^α(v, t) dv = 1.
The stationary solution of Equation (12) can be obtained, following Brunel (2000), as
\rho_i^{\alpha}(v) = \frac{2 m_i^{\alpha}}{(\sigma_i^{\alpha})^2}\, \exp\left(-\frac{g_L}{(\sigma_i^{\alpha})^2}\left(v - \frac{\mu_i^{\alpha}}{g_L}\right)^2\right) \int_{v}^{\epsilon_T} \Theta(u - \epsilon_R)\, \exp\left(\frac{g_L}{(\sigma_i^{\alpha})^2}\left(u - \frac{\mu_i^{\alpha}}{g_L}\right)^2\right) du. \quad (15)
Furthermore, by using the normalization condition, the firing rate m_i^α can be obtained as
m_i^{\alpha} = g_L \left\{ \sqrt{\pi} \int_{(\epsilon_R - \mu_i^{\alpha}/g_L)/(\sigma_i^{\alpha}/\sqrt{g_L})}^{(\epsilon_T - \mu_i^{\alpha}/g_L)/(\sigma_i^{\alpha}/\sqrt{g_L})} \exp(x^2)\left[1 + \mathrm{erf}(x)\right] dx \right\}^{-1}, \quad (16)
where erf(x) is the error function.
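Equation (16) is easy to evaluate numerically. The sketch below (Python, using SciPy's erf and quad; the parameter values in the example call are illustrative, not taken from the paper) returns the firing rate in Hz for a given input mean and fluctuation strength, with the non-dimensionalized ϵT = 1 and ϵR = 0.

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import quad

def firing_rate(mu, sigma, gL=50.0, eps_R=0.0, eps_T=1.0):
    """Mean firing rate of an I&F neuron with Poisson input, Eq. (16)."""
    lo = (eps_R - mu / gL) / (sigma / np.sqrt(gL))
    hi = (eps_T - mu / gL) / (sigma / np.sqrt(gL))
    val, _ = quad(lambda x: np.exp(x**2) * (1.0 + erf(x)), lo, hi)
    return gL / (np.sqrt(np.pi) * val)

# Illustrative call: a fluctuation-dominated input (mean below threshold, sizable sigma).
print(firing_rate(mu=30.0, sigma=4.0))
```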
4.5. Fokker-Planck Equation for a Homogeneous Network
For the balanced state in homogeneous neuronal networks, one can reach a probabilistic characterization of the network beyond the dynamics of a single neuron. Because each neuron in the balanced state of a homogeneous network can be regarded as nearly statistically identical in a particular population, the input spike train of each neuron, which is summed from all presynaptic neurons, is Poisson with rate Kαmα(t), by noting that each neuron has KE presynaptic excitatory neurons and KI presynaptic inhibitory neurons on average. Here mα(t) is the population-averaged firing rate for a neuron in the αth population, α = E, I. Then, one can obtain
$$\frac{\partial}{\partial t}\rho_{\alpha}(v,t)+\frac{\partial}{\partial v}S_{\alpha}(v,t)=0, \qquad (17)$$
where $\rho_{\alpha}(v,t)$ is the probability density of finding a neuron of the $\alpha$th population with membrane potential $v$ at time $t$ (Brunel, 2000), and the probability density flux is $S_{\alpha}(v,t)=-\frac{\sigma_{\alpha}^2}{2}\frac{\partial}{\partial v}\rho_{\alpha}-g_L\left(v-\frac{\mu_{\alpha}}{g_L}\right)\rho_{\alpha}$, where the input is characterized by $\mu_{\alpha}=f_{\alpha}\nu_{\alpha}+J_{\alpha E}K_E m_E-J_{\alpha I}K_I m_I$ and $\sigma_{\alpha}^2=f_{\alpha}^2\nu_{\alpha}+J_{\alpha E}^2 K_E m_E+J_{\alpha I}^2 K_I m_I$. By the same argument as for Equation (12), the boundary conditions for Equation (17) can be obtained similarly.
Similar to the single-neuron case, here the mean firing rate over the neuronal population can be obtained in a self-consistent way as
$$m_{\alpha}=g_L\left\{\sqrt{\pi}\int_{\frac{\epsilon_R-\mu_{\alpha}/g_L}{\sigma_{\alpha}/\sqrt{g_L}}}^{\frac{\epsilon_T-\mu_{\alpha}/g_L}{\sigma_{\alpha}/\sqrt{g_L}}}\exp(x^2)\left[1+\operatorname{erf}(x)\right]dx\right\}^{-1}. \qquad (18)$$
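Equation (18), together with the expressions for $\mu_{\alpha}$ and $\sigma_{\alpha}^2$, defines a closed fixed-point problem in $(m_E, m_I)$. A hedged sketch of the self-consistency loop is given below; all coupling strengths, in-degrees, and external-drive values are placeholders rather than the paper's parameters.

```python
# Self-consistent solution of Eq. (18): iterate (m_E, m_I) until the rates
# reproduce themselves through the mean-field input statistics.
import numpy as np
from scipy.integrate import quad
from scipy.special import erfcx   # erfcx(z) = exp(z^2) * erfc(z)

gL, eps_R, eps_T = 0.05, 0.0, 1.0

def rate(mu, sigma):
    """Eq. (18); exp(x^2)[1 + erf(x)] is rewritten as erfcx(-x) for numerical stability."""
    xR = (eps_R - mu / gL) / (sigma / np.sqrt(gL))
    xT = (eps_T - mu / gL) / (sigma / np.sqrt(gL))
    val, _ = quad(lambda x: erfcx(-x), xR, xT)
    return gL / (np.sqrt(np.pi) * val)

K = {"E": 400, "I": 100}                              # mean in-degrees (illustrative)
J = {("E", "E"): 1e-3, ("E", "I"): 4e-3,              # J[(alpha, beta)]: beta onto alpha
     ("I", "E"): 1e-3, ("I", "I"): 4e-3}
f = {"E": 5e-3, "I": 5e-3}                            # external input strengths
nu = {"E": 10.0, "I": 10.0}                           # external Poisson rates

m = {"E": 0.01, "I": 0.01}                            # initial guess for the rates
for _ in range(500):
    new = {}
    for p in ("E", "I"):
        mu_p = f[p] * nu[p] + J[(p, "E")] * K["E"] * m["E"] - J[(p, "I")] * K["I"] * m["I"]
        var_p = (f[p] ** 2 * nu[p] + J[(p, "E")] ** 2 * K["E"] * m["E"]
                 + J[(p, "I")] ** 2 * K["I"] * m["I"])
        new[p] = rate(mu_p, np.sqrt(var_p))
    m = {p: 0.9 * m[p] + 0.1 * new[p] for p in ("E", "I")}   # damped fixed-point update

print("self-consistent population rates:", m)
```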
4.6. Fokker-Planck Equation for a Scale-Free Network
One can further derive the FP equations for an SF network with the following structural property: for each neuron in the network, the ratio of the number of its presynaptic excitatory neurons to the number of its presynaptic inhibitory neurons is nearly constant across the population. Denoting this constant ratio by Γ (Γ = 1 for the SF networks in our simulations), and treating all neurons with the same number of presynaptic neurons as one ensemble, we can derive the FP equation for the kth ensemble (the ensemble of neurons with k presynaptic neurons) in the αth population,
$$\frac{\partial}{\partial t}\rho_k^{\alpha}=\frac{\partial}{\partial v}\left[\left(g_L v-\mu_k^{\alpha}\right)\rho_k^{\alpha}\right]+\frac{(\sigma_k^{\alpha})^2}{2}\frac{\partial^2}{\partial v^2}\rho_k^{\alpha},$$
where $\mu_k^{\alpha}$ is the average total input
$$\mu_k^{\alpha}=f_{\alpha}\nu_{\alpha}+k J_{\alpha E}\frac{\Gamma}{1+\Gamma}r_k^{E}-k J_{\alpha I}\frac{1}{1+\Gamma}r_k^{I}, \qquad (19)$$
and $(\sigma_k^{\alpha})^2$ describes the strength of the fluctuations of the total input
$$(\sigma_k^{\alpha})^2=f_{\alpha}^2\nu_{\alpha}+k J_{\alpha E}^2\frac{\Gamma}{1+\Gamma}r_k^{E}+k J_{\alpha I}^2\frac{1}{1+\Gamma}r_k^{I}. \qquad (20)$$
Here $r_k^{E}=\int_{K_0}^{K_1}T(n|k)\,m_n^{E}\,dn$ and $r_k^{I}=\int_{K_0}^{K_1}T(n|k)\,m_n^{I}\,dn$ are the mean firing rates of the presynaptic excitatory and inhibitory neurons of the $k$th ensemble, respectively, where $T(n|k)$ is the conditional probability of finding a directed connection that originates from an $n$-node (a neuron with $n$ presynaptic neurons) given that it ends at a $k$-node (a neuron with $k$ presynaptic neurons), and $K_0$ ($K_1$) denotes the smallest (largest) degree. In the stationary state, the firing rate of neurons in the $k$th ensemble is obtained as
$$m_k^{\alpha}=g_L\left\{\sqrt{\pi}\int_{\frac{\epsilon_R-\mu_k^{\alpha}/g_L}{\sigma_k^{\alpha}/\sqrt{g_L}}}^{\frac{\epsilon_T-\mu_k^{\alpha}/g_L}{\sigma_k^{\alpha}/\sqrt{g_L}}}\exp(x^2)\left[1+\operatorname{erf}(x)\right]dx\right\}^{-1}. \qquad (21)$$
When there is no degree correlation between nodes, the conditional probability can be calculated as $T(n|k)=nP(n)/(2K)$, independent of $k$, where $P(n)$ is the power-law degree distribution. Then, according to Equations (19) and (20), $\mu_k^{\alpha}$ decreases linearly with the degree $k$ because the network is inhibition dominated, while $(\sigma_k^{\alpha})^2$ increases linearly with $k$. Moreover, for neurons with a large number of presynaptic connections, i.e., large $k$, one finds $\mu_k^{\alpha}\sim -k$ and $\sigma_k^{\alpha}\sim\sqrt{k}$. Therefore, $\epsilon_R-\mu_k^{\alpha}/g_L\gg\sigma_k^{\alpha}/\sqrt{g_L}$ and $\epsilon_T-\mu_k^{\alpha}/g_L\gg\sigma_k^{\alpha}/\sqrt{g_L}$. The mean firing rate can thus be further approximated as
$$m_k^{\alpha}\approx\frac{g_L\left(\epsilon_T-\mu_k^{\alpha}/g_L\right)}{\sqrt{\pi}\left(\epsilon_T-\epsilon_R\right)}\exp\left\{-\left(\frac{\epsilon_T-\mu_k^{\alpha}/g_L}{\sigma_k^{\alpha}/\sqrt{g_L}}\right)^{2}\right\}. \qquad (22)$$
Because $\mu_k^{\alpha}\sim -k$ and $\sigma_k^{\alpha}\sim\sqrt{k}$, according to Equation (22) the firing rate $m_k^{\alpha}$ of a neuron in the $k$th ensemble decays exponentially with $k$. Consequently, neurons with a sufficiently large degree possess a very low firing rate that can barely be detected in numerical simulations of finite duration, and they are therefore classified into the quiescent group.
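This scaling argument can be made concrete with a few lines of code. The sketch below (illustrative only; the couplings, external drive, and presynaptic mean rates $r^E = r^I$ are placeholders treated as given rather than solved self-consistently) evaluates $\mu_k$ and $(\sigma_k)^2$ from Equations (19) and (20) for an inhibition-dominated network and then applies the rate formula of Equation (21), showing how $m_k$ collapses by orders of magnitude as $k$ grows.

```python
# Decay of the ensemble firing rate m_k with in-degree k for an uncorrelated,
# inhibition-dominated scale-free network: mu_k falls linearly in k while
# (sigma_k)^2 grows only linearly, so m_k drops steeply at large k.
import numpy as np
from scipy.integrate import quad
from scipy.special import erfcx

gL, eps_R, eps_T = 0.05, 0.0, 1.0

def rate(mu, sigma):
    """Eq. (21), using exp(x^2)[1 + erf(x)] = erfcx(-x)."""
    xR = (eps_R - mu / gL) / (sigma / np.sqrt(gL))
    xT = (eps_T - mu / gL) / (sigma / np.sqrt(gL))
    val, _ = quad(lambda x: erfcx(-x), xR, xT)
    return gL / (np.sqrt(np.pi) * val)

Gamma = 1.0                         # excitatory-to-inhibitory in-degree ratio
J_E, J_I = 1e-3, 2e-3               # inhibition-dominated couplings (illustrative)
f, nu = 5e-3, 10.0                  # external drive (illustrative)
r_E = r_I = 0.05                    # presynaptic mean rates, taken as given

for k in (100, 200, 400, 800):
    mu_k = f * nu + k * J_E * Gamma / (1 + Gamma) * r_E - k * J_I / (1 + Gamma) * r_I
    var_k = f**2 * nu + k * J_E**2 * Gamma / (1 + Gamma) * r_E + k * J_I**2 / (1 + Gamma) * r_I
    print(f"k = {k:4d}   m_k = {rate(mu_k, np.sqrt(var_k)):.3e}")
```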
We note that, for a homogeneous network with a broad degree distribution, a group of quiescent neurons also exists. As the degree distribution becomes broader, the number of quiescent neurons increases. This phenomenon can also be explained by the corresponding FP analysis in Roxin et al. (2011).
Author Contributions
QG, SL, WD, DZ, and DC conceived and designed the research, performed experiments and analyzed data, and wrote the paper.
Funding
This work is supported by NYU Abu Dhabi Institute G1301 (QG, SL,WD, DZ, and DC); NSFC-11671259, NSFC-11722107, NSFC-91630208, and Shanghai Rising-Star Program-15QA1402600 (DZ); NSFC-31571071 (DC); Shanghai 14JC1403800, 15JC1400104 and SJTU-UM Collaborative Research Program (DZ, DC).
Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Acknowledgments
The authors dedicate this paper to their late co-author and mentor DC. We thank the editor and reviewers for helpful comments on our manuscript.
Supplementary Material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fncom.2018.00109/full#supplementary-material
Abstract
It is hypothesized that cortical neuronal circuits operate in a global balanced state, i.e., the majority of neurons fire irregularly by receiving balanced excitatory and inhibitory inputs. Meanwhile, it has been observed experimentally that sensory information is often sparsely encoded by only a small set of firing neurons, while the neurons in the rest of the network are silent. The phenomenon of sparse coding challenges the hypothesis of a global balanced state in the brain. To reconcile these observations, we address whether a balanced state can exist locally within a small number of firing neurons by taking into account the heterogeneity of network structure, such as scale-free and small-world topologies. We propose necessary conditions and show that, under these conditions, for sparsely but strongly connected heterogeneous networks with various types of single-neuron dynamics, there is a small active subnetwork (active core) inherently embedded within the network, even though the whole network receives external inputs. The neurons in this active core have relatively high firing rates, while the neurons in the rest of the network are quiescent. Surprisingly, although the whole network is heterogeneous and unbalanced, the active core possesses a balanced state and its connectivity structure is close to that of a homogeneous Erdős-Rényi network. The dynamics of the active core can be well predicted using the Fokker-Planck equation. Our results suggest that the balanced state may be maintained by a small group of spiking neurons embedded in a large heterogeneous network in the brain. The existence of the small active core reconciles the balanced state and sparse coding, and also provides a potential dynamical scenario underlying sparse coding in neuronal networks.