1. Introduction
Integrated information (II) [1,2,3,4] is a measure of internal information exchange in complex systems; it has recently attracted much interest because it was initially proposed as a quantifier of consciousness [5]. Although this initial aim is still a matter of research and debate [6,7,8,9], the II concept itself is by now a widely acknowledged tool in the field of complex dynamics analysis [10,11,12]. The general concept gave rise to specific “empirical” formalizations of II [13,14,15,16] aimed at computability from empirical probability distributions based on real data. For a systematic taxonomy of II measures, see [17]; a comparative study of empirical II measures applied to Gaussian autoregressive network models was recently carried out in [18].
Our recent study [19] addressed the role of astrocytic regulation of neurotransmission [20,21,22] in generating positive II via small networks of brain cells—neurons and astrocytes. Empirical “whole minus sum” II, as defined in [13], was calculated in [19] from the time series produced by a biologically realistic model of neuro-astrocytic networks. A simplified, analytically tractable stochastic “spiking–bursting” model (complementing the realistic one) was designed to describe a specific type of activity in neuro-astrocytic networks which manifests itself as a sequence of intermittent system-wide excitations of rapid pulse trains (“bursts”) on the background of random “spiking” activity in the network [23,24]. The spiking–bursting model is a discrete-time, discrete-state stochastic process which mimics the main features of this behavior. The model was successfully used in [19] to produce semi-analytical estimates of II in good agreement with direct computation of II from time series of the biologically realistic network model. We suggested that the generation of positive II may explain why the mammalian brain evolved an astrocytic network overlapping the neuronal one; still, it remained unclear which underlying mechanisms drive complex neural behavior to generate positive II. In this paper we address this challenging question.
The present study aims at creating a theoretical formalism for using the spiking–bursting model of [19] as an analytically tractable reference point for applying integrated information concepts to systems exhibiting similar bursting behavior (in particular, to other neuron–astrocyte networks). The analytical treatment is based on exact and asymptotic expressions for integrated information in terms of exactly known probability distributions for the spiking–bursting model. The model is constructed as the simplest possible (although essentially non-Gaussian) to reflect the features of neuron–astrocyte network dynamics which lead to generating positive II. We also aim at extending the knowledge of comparative features of different empirical II measures, which are currently available mainly in application to Gaussian autoregressive models [17,18], by applying two such measures [13,16] to our discrete-state model.
In Section 2 and Section 3 we specify the definitions of the II measures used and the model. Specific properties of the model which lead to redundancy in its parameter set are addressed in Section 4. In Section 5 we provide an analytical treatment for the empirical “whole minus sum” [13] version of II in application to our model. This choice among other empirical II measures is inherited from the preceding study [19] and is in part due to its easy analytical tractability, and also due to its ability to change sign, which naturally identifies a transition point in the parameter space. This property may be considered a violation of the natural non-negativeness requirement for II [16]; on the other hand, the sign of the “whole minus sum” information has been given interpretation in terms of “net synergy” [25] as a degree of redundancy in the evolution of a system [18]. In this sense this transition may be viewed as a useful marker in its own right in the tool-set of measures for complex dynamics. This motivates our particular focus on identifying the sign transition of the “whole minus sum” information in the parameter space of the model. We also identify a scaling of II with a small parameter which determines time correlations in the bursting (astrocytic) subsystem.
In Section 6 we compare the outcome of the “whole minus sum” II measure [13] to that of the “decoder based” measure Φ*, which was specifically designed in [16] to satisfy the non-negativeness property. We compute Φ* directly by definition from the known probability distributions of the model. Despite their inherent difference (one measure can change sign while the other cannot), the two compared measures are shown to bear similarities in their dependence upon model parameters, including the same scaling with the time correlation parameter.
2. Definition of II Measures in Use
The empirical “whole minus sum” version of II is formulated according to [13] as follows. Consider a stationary stochastic process ξ(t) (a binary vector process), whose instantaneous state is described by N binary digits (bits), each identified with a node of the network (neuron). The full set of N nodes (“system”) can be split into two non-overlapping non-empty subsets (“subsystems”) A and B; such a splitting is referred to as a bipartition AB. Denote by x = ξ(t) and y = ξ(t + τ) two states of the process separated by a specified time interval τ ≠ 0. The states of the subsystems are denoted as xA, xB, yA, yB.
Mutual information between x and y is defined as
$I_{xy} = H_x + H_y - H_{xy},$
where
$H_x = -\sum_x p(x)\,\log_2 p(x)$
is the entropy (the base-2 logarithm gives the result in bits); summation is hereinafter assumed to be taken over the whole range of the index variable (here x). Due to the assumed stationarity, Hy = Hx.
Next, a bipartition AB is considered, and “effective information” Φeff as a function of the particular bipartition is defined as
$\Phi_{\mathrm{eff}}(AB) = I_{xy} - I_{x_A y_A} - I_{x_B y_B}.$
Finally, the “whole minus sum” II, denoted Φ, is defined as the effective information calculated for the specific bipartition AB^MIB (the “minimum information bipartition”) which minimizes the specifically normalized effective information:
$\Phi = \Phi_{\mathrm{eff}}(AB^{\mathrm{MIB}}),$
$AB^{\mathrm{MIB}} = \arg\min_{AB} \frac{\Phi_{\mathrm{eff}}(AB)}{\min\{H(x_A),\, H(x_B)\}}.$
Note that this definition prohibits positive II whenever Φeff turns out to be zero or negative for at least one bipartition AB.
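As a concrete illustration of Equations (1)–(4), the following Python sketch enumerates all bipartitions of a small system, evaluates Φeff for each, and selects the minimum information bipartition. It is a minimal reference implementation under our own conventions (a dense 2^N × 2^N joint table p(x, y), helper names such as whole_minus_sum), not code from [13] or [19].

```python
import numpy as np
from itertools import combinations

def entropy(p):
    """Shannon entropy in bits; zero-probability entries are skipped."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_info(pxy):
    """I_xy of Eq. (1) from a joint table pxy[x, y]."""
    return entropy(pxy.sum(axis=1)) + entropy(pxy.sum(axis=0)) - entropy(pxy)

def marginal(pxy, bits, N):
    """Two-time joint table of the subsystem formed by the given bits."""
    t = pxy.reshape([2] * (2 * N))            # axes 0..N-1: bits of x, N..2N-1: bits of y
    keep = set(bits) | {N + b for b in bits}
    drop = tuple(ax for ax in range(2 * N) if ax not in keep)
    k = len(bits)
    return t.sum(axis=drop).reshape(2 ** k, 2 ** k)

def whole_minus_sum(pxy, N):
    """Phi of Eq. (4): Phi_eff evaluated at the minimum information bipartition."""
    best_score, phi = None, None
    for k in range(1, N // 2 + 1):
        for A in combinations(range(N), k):
            B = tuple(b for b in range(N) if b not in A)
            pA, pB = marginal(pxy, A, N), marginal(pxy, B, N)
            phi_eff = mutual_info(pxy) - mutual_info(pA) - mutual_info(pB)   # Eq. (3)
            norm = min(entropy(pA.sum(axis=1)), entropy(pB.sum(axis=1)))
            if norm == 0:                     # degenerate subsystem, skip
                continue
            score = phi_eff / norm            # normalized effective information, Eq. (4)
            if best_score is None or score < best_score:
                best_score, phi = score, phi_eff
    return phi
```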
We compare the result of the “whole minus sum” effective information (3) to the “decoder based” information measure Φ*, which is modified from its original formulation in [16] by setting the logarithm base to 2 for consistency:
$\Phi^*(AB) = I_{xy} - I^*_{xy}(AB),$
where
$I^*_{xy}(AB) = \max_\beta \left[ -\sum_y p(y)\,\log_2 \sum_x p(x)\, q_{AB}(y|x)^\beta + \sum_{xy} p(xy)\,\log_2 q_{AB}(y|x)^\beta \right],$
$q_{AB}(y|x) = p(y_A|x_A)\, p(y_B|x_B) = \frac{p(x_A y_A)\, p(x_B y_B)}{p(x_A)\, p(x_B)}.$
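A corresponding sketch for Φ* follows, reusing mutual_info() and marginal() from the previous snippet. The mismatched decoder q_AB(y|x) of (5c) is built state by state, and the maximization over β in (5b) is done by a one-dimensional bounded numerical search; the β search interval and the bit-indexing convention are our own pragmatic choices, not prescriptions from [16].

```python
import numpy as np
from scipy.optimize import minimize_scalar

def substate(x, bits, N):
    """Index of the subsystem state obtained by keeping the given bits of the full
    state x (bit 0 is the most significant one, matching marginal() above)."""
    k = len(bits)
    return sum((((x >> (N - 1 - b)) & 1) << (k - 1 - i)) for i, b in enumerate(bits))

def phi_star(pxy, N, A):
    """Decoder-based Phi* (Equation 5) for the bipartition A vs. its complement."""
    B = tuple(b for b in range(N) if b not in A)
    pA, pB = marginal(pxy, A, N), marginal(pxy, B, N)
    pxA, pxB = pA.sum(axis=1), pB.sum(axis=1)
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    n = pxy.shape[0]

    # mismatched decoder q_AB(y|x) = p(yA|xA) p(yB|xB), Equation (5c)
    q = np.zeros_like(pxy)
    for x in range(n):
        xA, xB = substate(x, A, N), substate(x, B, N)
        if pxA[xA] == 0 or pxB[xB] == 0:
            continue
        for y in range(n):
            yA, yB = substate(y, A, N), substate(y, B, N)
            q[x, y] = (pA[xA, yA] / pxA[xA]) * (pB[xB, yB] / pxB[xB])

    def I_star(beta):
        qb = q ** beta
        inner = px @ qb                                   # sum_x p(x) q(y|x)^beta, per y
        t1 = -np.sum(py[inner > 0] * np.log2(inner[inner > 0]))
        mask = pxy > 0                                    # q > 0 wherever p(x, y) > 0
        t2 = np.sum(pxy[mask] * np.log2(qb[mask]))
        return t1 + t2

    # one-dimensional search over beta; the search interval is a pragmatic choice
    res = minimize_scalar(lambda b: -I_star(b), bounds=(1e-6, 10.0), method="bounded")
    return mutual_info(pxy) - I_star(res.x)
```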
3. Spiking–Bursting Stochastic Model
Physiologically, spikes are short (about 1 millisecond in duration) pulses of voltage (action potential) across the neuronal membrane. Bursts are rapid sequences of spikes. The main feature of the neuron–astrocyte network model in [19] is the presence of network-wide coordinated bursts, when all neurons are rapidly spiking in the same time window. Such bursts are coordinated by the astrocytic network and occur on the background of weakly correlated spiking activity of individual neurons. The spiking–bursting model was suggested in [19] as the simplest mathematical description of this behavior. In this model, time is discretized into small bins, and neurons are represented by binary digits taking on values 0 or 1, denoting the absence or the presence of at least one spike within the specific time bin. Respectively, a network-wide burst is represented by a time interval during which all neurons are locked at value 1 (which corresponds to a train of rapid spiking in the underlying biological system). The idea behind the model is illustrated by the graphical representation of its typical time evolution, as shown in Figure 1. The graphs of the model dynamics can be seen as envelopes of respective time recordings of membrane voltage in actual neurons: each short rectangular pulse of the model is assumed to correspond to at least one narrow spike of voltage, and a prolonged pulse (several discrete time bins in duration) represents a spike train (burst).
Mathematically, this “spiking–bursting” model is a stochastic model which produces a binary vector valued, discrete-time stochastic process. In keeping with [19], the model is defined as a combination M = {V, S} of a time-correlated dichotomous component V which turns on and off system-wide bursting (mimicking global bursting of a neuronal network, when each neuron produces a train of pulses at a high rate [19]), and a time-uncorrelated component S describing spontaneous (spiking) activity (corresponding to background random activity in a neural network characterized by relatively sparse random appearance of neuronal pulses—spikes [19]) occurring in the absence of a burst. The model mimics the spiking–bursting type of activity which occurs in a neuro-astrocytic network, where the neural subsystem normally exhibits time-uncorrelated patterns of spiking activity, and all neurons are under the common influence of the astrocytic subsystem, which is modeled by the dichotomous component V and sporadically induces simultaneous bursting in all neurons. A similar network architecture with a “master node” spreading its influence on subordinated nodes was considered, for example, in [1] (Figure 4b therein).
The model is defined as follows. At each instance of (discrete) time the state of the dichotomous component can be either “bursting” with probability pb, or “spontaneous” (“spiking”) with probability ps = 1 − pb. While in the bursting mode, the instantaneous state of the resulting process x = ξ(t) is given by all ones: x = 11..1 (further abbreviated as x = 1). In the spiking mode, the state x is a (time-uncorrelated) random variate described by a discrete probability distribution sx (where an occurrence of “1” in any bit is referred to as a “spike”), so that the resulting one-time state probabilities read
$p(x \ne 1) = p_s\, s_x,$
$p(x = 1) = p_1, \qquad p_1 = p_s\, s_1 + p_b,$
where s1 is the probability of spontaneous occurrence of x = 1 (hereafter referred to as a system-wide simultaneous spike) in the absence of a burst. (In a real network, “simultaneous” implies occurring within the same time discretization bin [19].)
To describe two-time joint probabilities for x = ξ(t) and y = ξ(t + τ), consider a joint state xy which is a concatenation of the bits in x and y. The spontaneous activity is assumed to be uncorrelated in time, which leads to the factorization
$s_{xy} = s_x\, s_y.$
The time correlations of the dichotomous component are described by a 2 × 2 matrix
$\left(p_{qr}\right)_{q \in \{s,b\},\, r \in \{s,b\}} = \begin{pmatrix} p_{ss} & p_{sb} \\ p_{bs} & p_{bb} \end{pmatrix},$
whose components are the joint probabilities to observe the respective spiking (index “s”) and/or bursting (index “b”) states in x and y. (In a neural network these correlations are conditioned by burst duration [19]; e.g., if this (in general, random) duration mostly exceeds τ, then the correlation is positive.) The probabilities obey psb = pbs (due to stationarity), pb = pbb + psb, ps = pss + psb, thereby allowing one to express all one- and two-time probabilities describing the dichotomous component in terms of two independent quantities, which, for example, can be the pair {ps, pss}; then
$p_{sb} = p_{bs} = p_s - p_{ss},$
$p_{bb} = 1 - (p_{ss} + 2 p_{sb}),$
or {pb, ρ} as in [19], where ρ is the Pearson correlation coefficient defined by
$p_{sb} = p_s\, p_b\, (1 - \rho).$
In Section 4 we justify the use of another effective parameter ϵ (13) instead of ρ to determine time correlations in the dichotomous component.
The two-time joint probabilities for the resulting process are then expressed as
$p(x \ne 1,\ y \ne 1) = p_{ss}\, s_x\, s_y, \qquad p(x \ne 1,\ y = 1) = \pi\, s_x, \qquad p(x = 1,\ y \ne 1) = \pi\, s_y, \qquad p(x = 1,\ y = 1) = p_{11},$
$\pi = p_{ss}\, s_1 + p_{sb}, \qquad p_{11} = p_{ss}\, s_1^2 + 2\, p_{sb}\, s_1 + p_{bb}.$
Note that the above notation can be applied to any subsystem instead of the whole system (with the same dichotomous component, as it is system-wide anyway).
The mentioned probabilities can be interpreted in terms of the underlying biological system as follows (see details in [19]): pb is the probability of observing the astrocytic subsystem in the excited (high calcium concentration) state, which induces global bursting activity in all neurons, within a specific time discretization bin; pbb is the probability of observing the mentioned state in two time bins separated by the time lag τ, and ρ is the respective time-delayed Pearson correlation coefficient of the astrocytic activity; sx is the probability of observing a specific spatial pattern of spiking x within one time bin in spontaneous neuronal activity (in the absence of an astrocyte-induced burst), and in particular s1 is the probability that all neurons fire a spike within one time bin in spontaneous activity. In this sense s1 measures the overall strength of spontaneous activity of the neuronal subsystem. When spiking activity is independent across neurons, the set of parameters {s1, pb, ρ} fully determines the “whole minus sum” II in the spiking–bursting model. In [19] these parameters were fitted to match (in the least-squares sense) the two-time probability distribution (11) to the respective “empirical” (numerically obtained) probabilities for the biologically realistic model of the neuron–astrocyte network. This fitting produced the dependence of the spiking–bursting parameters {s1, pb, ρ} upon the biological parameters; see Figure 7 in [19].
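For numerical experiments with the model it is convenient to assemble the one- and two-time probability tables (6) and (11) directly from the parameters {sx, ps, pss}. Below is a minimal sketch of such a constructor; the convention that states are indexed as binary numbers (so that the all-ones state x = 1 occupies the last index) and the function name joint_tables are our own choices.

```python
import numpy as np

def joint_tables(sx, ps, pss):
    """Build p(x) from Eq. (6) and p(x, y) from Eq. (11).
    sx  : spontaneous one-time distribution over 2^N states (all-ones state last)
    ps  : probability of the spiking (non-bursting) mode, pb = 1 - ps
    pss : two-time probability of observing the spiking mode at both instances
    """
    sx = np.asarray(sx, dtype=float)
    pb = 1.0 - ps
    psb = ps - pss                     # Eq. (9)
    pbb = 1.0 - (pss + 2.0 * psb)      # Eq. (9)
    s1 = sx[-1]                        # probability of a system-wide simultaneous spike

    # one-time probabilities, Eq. (6)
    px = ps * sx.copy()
    px[-1] = ps * s1 + pb

    # two-time probabilities, Eq. (11)
    pi = pss * s1 + psb
    p11 = pss * s1**2 + 2.0 * psb * s1 + pbb
    pxy = pss * np.outer(sx, sx)
    pxy[-1, :] = pi * sx
    pxy[:, -1] = pi * sx
    pxy[-1, -1] = p11
    return px, pxy
```

The resulting table pxy can be fed directly into the whole_minus_sum() and phi_star() sketches of Section 2.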
4. Model Parameter Scaling
The spiking–bursting stochastic model, as described in Section 3, is redundant in the following sense. In terms of the model definition, there are two distinct states of the model which equally lead to observing the same one-time state of the resultant process with 1s in all bits: first, a burst, and second, a system-wide simultaneous spike in the absence of a burst; the two are indistinguishable by one-time observations. Two-time observations reveal a difference between system-wide spikes on one hand and bursts on the other, because the latter are assumed to be correlated in time, unlike the former. That said, the “labeling” of bursts versus system-wide spikes exists in the model (by the state of the dichotomous component), but not in the realizations. Proceeding from the realizations, it must be possible to relabel a certain fraction of system-wide spikes into bursts (more precisely, into a time-uncorrelated portion thereof). Such relabeling would change both components of the model {V, S} (the dichotomous and spiking processes), in particular diluting the time correlations of bursts, without changing the actual realizations of the resultant process. This implies the existence of a transformation of model parameters which keeps realizations (i.e., the stochastic process as such) invariant. The derivation of this transformation is presented in Appendix A and leads to the following scaling:
$s_{x \ne 1} = \alpha\, s'_{x \ne 1},$
$1 - s_1 = \alpha\, (1 - s'_1),$
$p_{s'} = \alpha\, p_s,$
$p_{s's'} = \alpha^2\, p_{ss},$
where α is a positive scaling parameter, and all other probabilities are updated according to Equation (9).
The mentioned invariance in particular implies that any characteristic of the process must be invariant to the scaling (12a–d). This suggests a natural choice of a scaling-invariant effective parameter ϵ defined by
$p_{ss} = p_s^2\, (1 + \epsilon)$
to determine time correlations in the dichotomous component. In conjunction with a second independent parameter of the dichotomous process, for which a straightforward choice is ps, and with the full one-time probability table for spontaneous activity sx, these constitute a natural full set of model parameters {sx, ps, ϵ}.
The two-time probability table (8) can be expressed in terms of ps and ϵ by substituting Equation (13) into Equation (9):
$\left(p_{qr}\right)_{q \in \{s,b\},\, r \in \{s,b\}} = \begin{pmatrix} p_s^2 + \epsilon p_s^2 & p_s p_b - \epsilon p_s^2 \\ p_s p_b - \epsilon p_s^2 & p_b^2 + \epsilon p_s^2 \end{pmatrix}.$
The requirement of non-negativeness of probabilities imposes simultaneous constraints
$\epsilon \ge -1$
and
$p_s \le p_s^{\max} = \begin{cases} \dfrac{1}{1+\epsilon}\left(1 - \sqrt{|\epsilon|}\right), & -1 \le \epsilon < 0, \\[1mm] \dfrac{1}{1+\epsilon}, & \epsilon \ge 0, \end{cases}$
or equivalently,
$-\epsilon_{\max}^2 \le \epsilon \le \epsilon_{\max} = \frac{p_b}{p_s}.$
Comparing the off-diagonal term psb in (14) to the definition of the Pearson correlation coefficient ρ in (10), we get
$\epsilon = \rho\, \frac{p_b}{p_s} = \rho\, \epsilon_{\max};$
thus, the sign of ϵ has the same meaning as that of ρ. Hereinafter we limit ourselves to non-negative correlations ϵ ≥ 0.
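The claimed invariance can be checked numerically: rescaling the parameters according to (12a–d) should leave both tables of the resultant process unchanged, while ϵ of (13) stays the same. The snippet below performs this check using the joint_tables() helper sketched in Section 3; all parameter values (N, P, ps, ϵ, α) are arbitrary illustrative choices.

```python
import numpy as np

# Numerical check of the invariance under the scaling (12a-d).
N, P, ps, eps, alpha = 3, 0.3, 0.6, 0.2, 1.1

bits = (np.arange(2**N)[:, None] >> np.arange(N - 1, -1, -1)) & 1
sx = np.prod(np.where(bits == 1, P, 1.0 - P), axis=1)    # independent spiking
pss = ps**2 * (1.0 + eps)                                # Eq. (13)

# rescaled (primed) parameters according to Eqs. (12a-d); eps is untouched
sx_p = sx / alpha
sx_p[-1] = 1.0 - (1.0 - sx[-1]) / alpha
ps_p, pss_p = alpha * ps, alpha**2 * pss

px, pxy = joint_tables(sx, ps, pss)
px_p, pxy_p = joint_tables(sx_p, ps_p, pss_p)
print(np.allclose(px, px_p), np.allclose(pxy, pxy_p))    # expected: True True
```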
5. Analysis of the Empirical “Whole Minus Sum” Measure for the Spiking–Bursting Process
In this Section we analyze the behavior of the “whole minus sum” empirical II [13] defined by Equations (3) and (4) for the spiking–bursting model as a function of the model parameters, particularly focusing on its transition from negative to positive values.
5.1. Expressing the “Whole Minus Sum” Information
Mutual information Ixy for two time instances x and y of the spiking–bursting process is expressed by inserting all one- and two-time probabilities of the process according to (6), (11) into the definition (1), (2). The full derivation is given in Appendix B and leads to an expression which was used in [19]:
$I_{xy} = 2(1 - s_1)\{p_s\} + 2\{p_1\} - (1 - s_1)^2 \{p_{ss}\} - 2(1 - s_1)\{\pi\} - \{p_{11}\},$
where we denote for compactness
$\{q\} = -q\, \log_2 q.$
We exclude from further consideration the following degenerate cases, which automatically give Ixy = 0 by definition (1):
$s_1 = 1, \quad \text{or} \quad p_s = 0, \quad \text{or} \quad p_s = 1, \quad \text{or} \quad \rho = \epsilon = 0,$
where the former two correspond to a deterministic “always 1” state for which all entropies in (1) are zero, and the latter two produce no predictability, which implies Hxy = Hx + Hy.
In the particular case s1 = 0, Equation (18) reduces to
$I_{xy}\big|_{s_1 = 0} = 2\left(\{p_s\} + \{p_b\}\right) - \left(\{p_{ss}\} + 2\{p_{sb}\} + \{p_{bb}\}\right),$
which coincides with the mutual information of the dichotomous component taken alone and can be seen as a function of just two independent parameters of the dichotomous component, for which we chose ps and ϵ as suggested in Section 4. Using the expressions for the two-time probabilities (14), we rewrite (21a) in the form
$I_{xy}\big|_{s_1 = 0} = 2\left(\{p_s\} + \{p_b\}\right) - \left(\{p_s^2 + \epsilon p_s^2\} + 2\{p_s p_b - \epsilon p_s^2\} + \{p_b^2 + \epsilon p_s^2\}\right) = I_0(p_s, \epsilon), \quad \text{where } p_b = 1 - p_s.$
Expression (21b) explicitly defines a function I0(ps, ϵ), which turns out to be a universal function allowing one to express the mutual information (18) and the effective information (3) in terms of the model parameters, as we show below. Typical plots of I0(ps, ϵ) versus ps at several fixed values of ϵ are shown with blue solid lines in Figure 2.
Formula (18) can be recovered from (21a,b) by virtue of the scaling (12a–d): setting s1′ = 0 in (21b) and substituting the corresponding scaled value ps′ = (1 − s1)ps as per (12c) in place of the first argument of the function I0(·, ϵ) defined in (21b), while the parameter ϵ remains invariant under the scaling. This produces the simplified expression
$I_{xy} = I_0\big((1 - s_1)\, p_s,\ \epsilon\big),$
which is exactly equivalent to (18) for any s1. We emphasize that hereinafter expressions containing I0(·,·)—(22), (23), (30b), etc.—imply that ps in (21b) must be substituted with the actual first argument of I0(·,·), e.g., by (1 − s1)ps in (22). The same applies when the approximate expression (35) for I0(·,·) is used.
Given a bipartition AB (see Section 2), this result is applicable as well to any subsystem A (or B), with s1 replaced by sA (sB), which denotes the probability of a subsystem-wide simultaneous spike xA = 1 (xB = 1) in the absence of a burst, and with the same parameters of the dichotomous component (here ps, ϵ). Then the effective information (3) is expressed as
$\Phi_{\mathrm{eff}} = I_0\big((1 - s_1)\, p_s,\ \epsilon\big) - I_0\big((1 - s_A)\, p_s,\ \epsilon\big) - I_0\big((1 - s_B)\, p_s,\ \epsilon\big).$
Hereafter in this section we assume the independence of spontaneous activity across the network nodes (neurons), which implies
$s_A\, s_B = s_1,$
then (23) turns into
$\Phi_{\mathrm{eff}} = f(s_A),$
where
$f(s) = I_0\big((1 - s_1)\, p_s,\ \epsilon\big) - I_0\big((1 - s)\, p_s,\ \epsilon\big) - I_0\big((1 - s_1/s)\, p_s,\ \epsilon\big).$
Essentially, according to (25a,b), the function f(s) shows the dependence of the effective information Φeff upon the choice of the bipartition, which is characterized by the value of sA = s (if A is any non-empty subsystem, then sA is defined as the probability of spontaneous occurrence of 1s in all bits of A in the same instance of the discrete time), while the function parameter s1 determines the intensity of spontaneous spiking activity. Note that the function I0(·,·) in (21b) is defined only when the first argument is in the range (0, 1); thus, the definition domain of f(s) in (25b) is
$s_1 < s < 1.$
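For reference, a direct numerical transcription of the braces notation (19), the universal function I0 of (21b) and the bipartition function f(s) of (25b) may look as follows. This is a sketch with our own function names; note that, per the comment below Equation (22), the first argument of I0 simply replaces ps in (21b).

```python
import numpy as np

def h(q):
    """Braces notation of Eq. (19): {q} = -q*log2(q), with {0} taken as 0."""
    q = np.asarray(q, dtype=float)
    return np.where(q > 0, -q * np.log2(np.where(q > 0, q, 1.0)), 0.0)

def I0(p, eps):
    """Universal function I0 of Eq. (21b); p plays the role of ps."""
    pb = 1.0 - p
    return (2.0 * (h(p) + h(pb))
            - (h(p**2 + eps * p**2)
               + 2.0 * h(p * pb - eps * p**2)
               + h(pb**2 + eps * p**2)))

def f(s, s1, ps, eps):
    """Effective information of Eq. (25b) for a bipartition with sA = s, sB = s1/s."""
    return (I0((1.0 - s1) * ps, eps)
            - I0((1.0 - s) * ps, eps)
            - I0((1.0 - s1 / s) * ps, eps))
```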
5.2. Determining the Sign of the “Whole Minus Sum” Information
According to (4), the necessary and sufficient condition for the “whole minus sum” empirical II to be positive is the requirement that Φeff be positive for any bipartition AB. Due to (25a,b), this requirement can be written in the form
$\min_{s \in \{s_A\}} f(s) > 0,$
where {sA} is the set of sA values over all possible bipartitions AB.
Expanding the set of s in (27) to the whole definition domain of f(s) (26) leads to a sufficient (generally, stronger) condition for positive II:
$f(s) > 0 \quad \text{for all}\ s \in (s_1, 1).$
Note that f(s) by definition (25b) satisfies f(s = s1) = f(s = 1) = 0, f′(s = s1) > 0, and (due to the invariance to mutual renaming of the subsystems A and B) f(s1/s) = f(s). (All mentioned properties and the subsequent reasoning can be observed in Figure 3, which shows a few sample plots of f(s).) The latter symmetry implies that the number of extrema of f(s) on s ∈ (s1, 1) must be odd, one of them always being at s = √s1. If the latter is the only extremum, then it is a positive maximum, and (28) is thus fulfilled automatically. In the case of three extrema, f(√s1) is a minimum, which can change sign. In both these cases the condition (28) is equivalent to the requirement
$f(\sqrt{s_1}) > 0,$
which can be rewritten as
$g(s_1) > 0,$
where
$g(s_1) = f(\sqrt{s_1}) = I_0\big((1 - s_1)\, p_s,\ \epsilon\big) - 2\, I_0\big((1 - \sqrt{s_1})\, p_s,\ \epsilon\big).$
The reasoning above essentially reduces the problem of determining the sign of II to determining the sign of the extremum f(√s1).
The equivalence of (29) to (28) could be broken if f(s) had five or more extrema. As suggested by a numerical calculation on a grid of ps ∈ [0.01, 0.99] and ρ ∈ [0.01, 1], both with step 0.01, this exception never occurs, although we did not prove this rigorously. Based on the reasoning above, in the following we assume the equivalence of (29) (and (30)) to (28).
A typical scenario of the transformation of f(s) with the change of s1 is shown in Figure 3. Here the extremum f(√s1) (shown with a dot) transforms, as s1 decreases, from a positive maximum into a minimum, which in turn decreases from positive through zero to negative values.
Note that, by construction, the function g(s1) defined in (30b) expresses the effective information Φeff from (3) for the hypothetical bipartition characterized by sA = sB = √s1, which may or may not exist in the actual particular system. If such a “symmetric” bipartition exists, then the value sA = √s1 belongs to the set {sA} in (27), which implies that (29) (same as (30)) is equivalent not only to (28), but also to the necessary and sufficient condition (27). Otherwise, (28) (equivalently, (29) or (30)), formally being only sufficient, still may produce a good estimate of the necessary and sufficient condition in cases when {sA} contains values close to √s1 (corresponding to nearly symmetric partitions, if such exist).
Except for the degenerate cases (20), g(s1) is negative at s1 = 0,
$g(s_1 = 0) = -I_0(p_s, \epsilon) < 0,$
and has the limit g(s1 → 1 − 0) → +0 (−0 and +0 denote the respective one-sided limits), because
$\lim_{s_1 \to 1-0} \frac{I_0\big((1 - s_1)\, p_s,\ \epsilon\big)}{2\, I_0\big((1 - \sqrt{s_1})\, p_s,\ \epsilon\big)} = 2;$
hence, g(s1) changes sign at least once on s1 ∈ (0, 1). According to numerical evidence, we assume that g(s1) changes sign exactly once on (0, 1), without providing a rigorous proof for the latter statement (it was confirmed up to machine precision for each combination of ps ∈ [0.01, 0.99] and ρ ∈ [0.01, 1], both with step 0.01; also note that for the asymptotic case (38) this statement is rigorous). In line with the above, the solution to (30a) has the form
$s_1^{\min}(p_s, \epsilon) < s_1 < 1,$
where s1min(ps, ϵ) is the unique root of g(s1) on (0, 1). Several plots of s1min(ps, ϵ) versus ps at fixed ϵ and versus ϵ at fixed ps, obtained by numerically solving for the zero of g(s1), are shown in Figure 4 with blue solid lines.
This result identifies a region in the parameter space of the model where the “whole minus sum” information is positive. From the viewpoint of the underlying biological system, the quantity s1min determines the minimal sufficient intensity of spontaneous neuronal spiking activity for positive II. According to the result in Figure 4, within the assumption of independent spiking across the network (24), values s1 ≳ 0.17 lead to positive II regardless of other parameter values, and this threshold decreases further when ps is increased, which implies decreasing the frequency of occurrence of astrocyte-activated global bursting, pb = 1 − ps.
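Numerically, s1min(ps, ϵ) is obtained by root-finding on g(s1) from (30b), using the I0() sketch above. The bracket of the root search below is narrowed slightly away from the endpoints 0 and 1 to avoid the degenerate limits; this, like the sample parameter values, is our own implementation choice.

```python
import numpy as np
from scipy.optimize import brentq

def g(s1, ps, eps):
    """Eq. (30b): effective information of the symmetric bipartition sA = sB = sqrt(s1)."""
    return I0((1.0 - s1) * ps, eps) - 2.0 * I0((1.0 - np.sqrt(s1)) * ps, eps)

def s1_min(ps, eps):
    """Unique root of g(s1) on (0, 1), i.e., the threshold of Eq. (33)."""
    return brentq(g, 1e-6, 1.0 - 1e-3, args=(ps, eps))

# Illustrative values: the threshold stays below ~0.17 and decreases as ps grows.
print(s1_min(0.05, 0.1), s1_min(0.6, 0.1))
```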
5.3. Asymptotics for Weak Correlations in Time
Further insight into the dependence of the mutual information Ixy (and, consequently, of Φeff and II) upon parameters can be obtained by expanding the definition of I0(ps, ϵ) in (21b) in powers of ϵ (the limit of weak correlations in time), which yields
$I_0(p_s, \epsilon) = \frac{1}{2 \ln 2}\left(\frac{p_s}{1 - p_s}\right)^2 \epsilon^2 + O(\epsilon^3).$
Estimating the residual term (see details in Appendix C) indicates that the approximation by the leading term,
$I_0(p_s, \epsilon) \approx \frac{\epsilon^2}{2 \ln 2}\left(\frac{p_s}{1 - p_s}\right)^2,$
is valid when
$|\epsilon| \ll 1,$
$|\epsilon| \ll \left(\frac{p_b}{p_s}\right)^2 = \epsilon_{\max}^2.$
Solving (36b) for ps rewrites it in the form of an upper bound for ps,
$p_s < \frac{1}{1 + \sqrt{|\epsilon|}}$
(the “≪” sign is not appropriate in (36c), because this inequality does not imply a small ratio between its left-hand and right-hand sides). Note that the inequalities (36b), (36c) are not weaker than the formal upper bounds ϵmax in (16) and psmax in (15), which arise from the definition of ϵ (13) due to the requirement of non-negative probabilities.
Approximation (35) is plotted in Figure 2 with red dashed lines, along with the corresponding upper bounds of the applicability range (36c) denoted by red dots (note that large ϵ violates (36a) anyway, so in this case (36c) has no effect). The mutual information (35) scales with ϵ within the range (36) as ϵ² and vanishes as ϵ → 0. The same holds for the effective information (23). Since the normalizing denominator in (4b) contains one-time entropies which do not depend on ϵ at all, this scaling of Φeff does not change the minimum information bipartition, finally implying that II also scales as ϵ². That said, as the factor ϵ² does not affect the sign of Φeff, the lower bound s1min in (33) exists and is determined only by ps in this limit.
Substituting the approximation (35) for I0(·,·) into the definition of g(s1) in (30b) and simplifying reduces the equation g(s1) = 0 to the following (see the comment below Equation (22)):
$p_s (\sqrt{2} - 1)\, s_1 - \sqrt{s_1} + (1 - p_s)(\sqrt{2} - 1) = 0,$
whose solution in terms of s1 on 0 < s1 < 1 equals s1min, according to the reasoning behind Equation (33). Solving (37) as a quadratic equation in terms of √s1 produces a unique root on (0, 1), which yields
$s_1^{\min}(p_s)\Big|_{\epsilon \to 0} = \left(\frac{1 - \sqrt{1 - 4 p_s (1 - p_s)(\sqrt{2} - 1)^2}}{2\, p_s\, (\sqrt{2} - 1)}\right)^2.$
The result of (38) is plotted in Figure 4 with red dashed lines: in panel (a) as a function of ps, and in panel (b) as horizontal lines whose vertical position is given by (38) and whose horizontal span denotes the estimated applicability range (36b) (note that condition (36a) also applies and becomes stronger than (36b) when ps < 1/2).
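A quick consistency check of (38) against the numerical root of g(s1) from the previous sketch may look as follows (illustrative parameter values, with ϵ kept small so that (36) holds).

```python
import numpy as np

def s1_min_asymptotic(ps):
    """Closed-form threshold of Eq. (38), valid in the limit (36)."""
    r = np.sqrt(2.0) - 1.0
    return ((1.0 - np.sqrt(1.0 - 4.0 * ps * (1.0 - ps) * r**2)) / (2.0 * ps * r))**2

for ps in (0.2, 0.4, 0.6):
    # the exact root and the asymptotic estimate should nearly agree for small eps
    print(ps, s1_min(ps, 0.05), s1_min_asymptotic(ps))
```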
6. Comparison of Integrated Information Measures
In this Section we compare the outcome of two versions of empirical integrated information measures available in the literature: the “whole minus sum” effective information Φeff (3) from [13], which is used elsewhere in this study, and the “decoder based” information Φ* as introduced in [16] and expressed by Equations (5a–c). We calculate both measures by their respective definitions using the one- and two-time probabilities from Equations (6a,b) and (11a–d) for the spiking–bursting model with N = 6 bits, assuming no spatial correlations among bits in spiking activity, with the same spike probability P in each bit. In this case
$s_x = P^{m(x)}\, (1 - P)^{N - m(x)}, \qquad P = s_1^{1/N},$
where m(x) is the number of ones in the binary word x.
We consider only a symmetric bipartition with subsystems A and B consisting of N/2 = 3 bits each. Due to the assumed equal spike probabilities in all bits and the absence of spatial correlations of spiking, this implies complete equivalence between the subsystems. In particular, in the notation of Section 5 we get
$s_1 = s_A\, s_B, \qquad s_A = s_B = \sqrt{s_1}.$
This choice of the bipartition is made, first, because the sign of the effective information for this bipartition determines the sign of the resultant “whole minus sum” II (although the actual value of II is determined by the minimum information bipartition, which may be different). This has been established in Section 5 (see the reasoning behind Equations (27)–(30) and further on); moreover, the function g(s1) introduced in Equation (30b) expresses the effective information for this particular bipartition,
$\Phi_{\mathrm{eff}}(AB) = g(s_1),$
thus the analysis of the sign of the effective information in Section 5 applies to this symmetric bipartition.
Moreover, the choice of the symmetric bipartition is consistent with available comparative studies of II measures [18], where it was substantiated by the conceptual requirement that highly asymmetric partitions should be excluded [2], and by the lack of a generally accepted specification of minimum information bipartition; for further discussion, see [18].
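Putting the above sketches together, the comparison of this Section can be reproduced for one parameter point as follows (N = 6 bits, independent spiking per Equation (39), symmetric bipartition A = {0, 1, 2}). The helpers joint_tables(), marginal(), mutual_info() and phi_star() are those sketched in Sections 2 and 3, and all numeric values are illustrative.

```python
import numpy as np

N = 6
s1, ps, eps = 0.3, 0.6, 0.1
P = s1 ** (1.0 / N)                                        # Eq. (39)
bits = (np.arange(2**N)[:, None] >> np.arange(N - 1, -1, -1)) & 1
sx = np.prod(np.where(bits == 1, P, 1.0 - P), axis=1)      # independent spiking

px, pxy = joint_tables(sx, ps, ps**2 * (1.0 + eps))        # Eqs. (6), (11), (13)
A = (0, 1, 2)
B = tuple(b for b in range(N) if b not in A)
pA, pB = marginal(pxy, A, N), marginal(pxy, B, N)
phi_eff = mutual_info(pxy) - mutual_info(pA) - mutual_info(pB)   # Eq. (41), equals g(s1)
print(phi_eff, phi_star(pxy, N, A))
```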
We have studied the dependence of the mentioned information measures Φeff and Φ* upon spiking activity, which is controlled by s1, at different fixed values of the parameters ps and ϵ characterizing the bursting component. The typical dependence of Φeff and Φ* upon s1, taken at ps = 0.6 with several values of ϵ, is shown in Figure 5, panel (a).
The behavior of the “whole minus sum” effective information Φeff (41) (blue lines in Figure 5) is found to agree with the analytical findings of Section 5:
- Φeff transitions from negative to positive values at a certain threshold value of s1 = s1min, which is well approximated by the formula (38) when ϵ is small, as required by (36a,b); the result of Equation (38) is indicated in each panel of Figure 5 by an additional vertical grid line labeled s1min on the abscissa axis (cf. Figure 4);
- Φeff reaches a maximum on the interval s1min < s1 < 1 and tends to zero (from above) as s1 → 1;
- Φeff scales with ϵ as ϵ² when (36a,b) hold.
To verify the scaling observation, we plot the scaled values of both information measures, Φeff/ϵ² and Φ*/ϵ², in panels (b)–(d) of Figure 5 for several fixed values of ps and ϵ. Expectedly, the scaling fails at ps = 0.7, ϵ = 0.4 in panel (d), as (36b) is not fulfilled in this case.
Furthermore, the “decoder based” information Φ* (plotted with red lines in Figure 5) behaves mostly the same way, apart from being always non-negative (which was one of the key motivations for introducing this measure in [16]). At the same time, the sign transition point s1min of the “whole minus sum” information is associated with a rapid growth of the “decoder based” information. When s1 is increased towards 1, the two measures converge. Remarkably, the scaling as ϵ² is found to be shared by both information measures.
7. Discussion
In general, the spiking–bursting model is completely specified by the combination of a full single-time probability table sx (consisting of 2^N probabilities of all possible outcomes, where N is the number of bits) for the time-uncorrelated spontaneous activity, along with two independent parameters (e.g., ps and ϵ) for the dichotomous component. This combination is, however, redundant in that it admits a one-parameter scaling (12) which leaves the resultant stochastic process invariant.
Condition (30) was derived assuming that spiking activity in the individual bits (i.e., nodes, or neurons) constituting the system is independent among the bits, which implies that the probability table sx is fully determined by N spike probabilities for individual nodes. The condition is formulated in terms of ps, ϵ and a single parameter s1 (the system-wide spike probability) for the spontaneous activity, agnostic of the “internal structure” of the system, i.e., the spike probabilities for individual nodes. This condition provides that the “whole minus sum” effective information is positive for any bipartition, regardless of the mentioned internal structure. Moreover, in the limit (36) of weak correlations in time, the inequality (30a) can be explicitly solved in terms of s1, producing the solution (33), (38).
In this way, the inequality (33) together with the asymptotic estimate (38) supplemented by its applicability range (36) specifies the region in the parameter space of the system, where the “whole minus sum” II is positive regardless of the internal system structure (sufficient condition). The internal structure (though still without spike correlations across the system) is taken into account by the necessary and sufficient condition (27) for positive II.
The mentioned conditions were derived under the assumption of absent correlation between spontaneous activity in individual bits (24). If correlation exists and is positive, then s1 > sA sB, or sB < s1/sA. Then, comparing the expressions for Φeff (23) (general case) and (25) (space-uncorrelated case), and taking into account that I0(·, ϵ) is an increasing function of its first argument, we find Φeff < f(sA), cf. (25a). This implies that any necessary condition for positive II remains as such. Likewise, in the case of negative correlations we get Φeff > f(sA), implying that a sufficient condition remains as such.
8. Conclusions
The present study substantiates, refines and quantifies qualitative observations regarding II in the spiking–bursting model which were initially made in [19]. The existence of a lower bound on spiking activity (characterized by s1) required for positive “whole minus sum” II, which was noticed in [19], is now expressed in the form of an explicit inequality (33) with the estimate (38) for the bound s1min. The observation of [19] that s1min is typically determined mostly by the burst probability and depends weakly upon the time correlations of bursts is also supported by the quantitative result (33), (38). In particular, there is a range of spiking activity intensity s1 ≳ 0.17 where the “whole minus sum” information is positive regardless of other system parameters, provided the spiking activity is spatially uncorrelated or negatively correlated across the system. When the burst probability is decreased (which implies less frequent activation of the astrocyte subsystem), the threshold value for spiking activity s1min also decreases.
We found that II scales as ϵ², where ϵ is proportional (as per Equation (17)) to the Pearson time-delayed correlation coefficient of the bursting component (which essentially characterizes the duration of bursts), for small ϵ (namely, within (36)), when the other parameters (i.e., ps and the spiking probability table sx) are fixed. For the “whole minus sum” information, this is an analytical result. Note that the reasoning behind this result does not rely upon the assumption of spatial non-correlation of spiking activity (between bits), and thus applies to arbitrary spiking–bursting systems. According to a numerical calculation, this scaling approximately holds for the “decoder based” information as well.
Remarkably, II cannot exceed the time-delayed mutual information for the system as a whole, which in the case of the spiking–bursting model in its present formulation is no greater than 1 bit (all time correlations are mediated by the dichotomous component, whose entropy does not exceed 1 bit).
The model provides a basis for possible modifications in order to apply integrated information concepts to systems exhibiting similar, but more complicated behavior (in particular, to neuronal [26,27,28,29] and neuron–astrocyte [24,30] networks). Such modifications might incorporate non-trivial spatial patterns in bursting, and causal interactions within and between the spiking and bursting subsystems.
The model can also be of interest as a new discrete-state test bench for different formalizations of integrated information, while available comparative studies of II measures mainly focus on Gaussian autoregressive models [17,18].
Author Contributions
Formal analysis, software, visualization, writing-original draft preparation, O.K.; conceptualization, methodology, validation, writing-review and editing, O.K., S.G. and A.Z. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the Ministry of Science and Higher Education of the Russian Federation: analytical studies (Section 4 and Section 5) by project number 075-15-2020-808, numerical studies (Section 6) by project number 0729-2020-0061. S.G. thanks the RFBR (grant number 20-32-70081). A.Z. is thankful for the MRC grant MR/R02524X/1. The APC was funded by the Ministry of Science and Higher Education of the Russian Federation (projects 075-15-2020-808 and 0729-2020-0061 in equal shares).
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A. Derivation of the Parameter Scaling of the Spiking-Bursting Model
In order to formalize the reasoning in Section 4, we introduce an auxiliary 3-state process W with the set of one-time states {s', d, b}, where s' and b are always interpreted as the spiking and bursting states in terms of Section 3, and d is another state, which is assumed to produce all bits equal to 1 as in a burst, but in a time-uncorrelated manner (which is formalized by Equation (A4) below), as in a system-wide spike. When W is properly defined (by specifying all necessary probabilities, see below) and supplemented with a time-uncorrelated process S as a source of spontaneous activity for the state s', these together constitute a completely defined stochastic model {W, S}.
This 3-state based model may be mapped onto equivalent (in terms of resultant realizations) 2-state based models of the kind defined in Section 3 in an ambiguous way, because the state d may be equally interpreted either as a system-wide spike or as a time-uncorrelated burst, thus producing two different dichotomous processes (which we denote as V and V') for the equivalent spiking-bursting models. The relationship between the states of W, V and V' is illustrated by the following diagram.
[Diagram (A1): correspondence between the states of the processes. The state W = s' maps to V = s and V' = s'; W = d maps to V = s and V' = b'; W = b maps to V = b and V' = b'.]
As soon as d-states of W are interpreted in V as (spiking) s-states, the spontaneous activity process S accompanying V has to be supplemented with system-wide spikes whenever W = d, in addition to the spontaneous activity process S' for V'. In order to maintain the absence of time correlations in spontaneous activity (which is essential for the analysis in Section 5), we assume a time-uncorrelated choice between W = s' and W = d when V = s (which manifests below in Equation (A4)). Then the difference between the spontaneous components S and S' comes down to a difference in the corresponding one-time probability tables sx and sx'.
In the following, we proceed from the dichotomous process V defined as in Section 3, then define a consistent 3-state process W, and further obtain another dichotomous process V' for an equivalent model. Finally, we establish the relation between the corresponding probability tables of spontaneous activity sx and sx'.
The first dichotomous process V has states denoted by {s, b} and is related to W according to the rule V = s when W = s' or W = d, and V = b whenever W = b (see diagram (A1)). Assume fixed conditional probabilities
$p(W = s' \mid V = s) = \alpha,$
$p(W = d \mid V = s) = \beta = 1 - \alpha,$
which implies the one-time probabilities for W as
$p_{s'} = \alpha\, p_s, \qquad p_d = \beta\, p_s.$
The mentioned requirement of a time-uncorrelated choice between W = s' and W = d when V = s is expressed by the factorized two-time conditional probabilities
$p(W = s's' \mid V = ss) = \alpha^2,$
$p(W = s'd \mid V = ss) = \alpha\beta = p(W = ds' \mid V = ss),$
$p(W = dd \mid V = ss) = \beta^2.$
Given the two-time probability table for V (8) along with the conditional probabilities (A2) and (A4), we arrive at a two-time probability table for W
$\begin{pmatrix} p_{s's'} & p_{s'd} & p_{s'b} \\ p_{ds'} & p_{dd} & p_{db} \\ p_{bs'} & p_{bd} & p_{bb} \end{pmatrix} = \begin{pmatrix} \alpha^2 p_{ss} & \alpha\beta\, p_{ss} & \alpha\, p_{sb} \\ \alpha\beta\, p_{ss} & \beta^2 p_{ss} & \beta\, p_{sb} \\ \alpha\, p_{bs} & \beta\, p_{bs} & p_{bb} \end{pmatrix}$ (rows and columns ordered s', d, b; the cell group formed by the states {s', d} corresponds to the state s of V).
Note that (A5) is consistent both with (A3), which is obtained by summation along the rows of (A5), and with (8), which is obtained by summation within the cell groups of (A5) as indicated in (A6):
$p_{ss} \equiv p_{s's'} + p_{s'd} + p_{ds'} + p_{dd},$
$p_{sb} \equiv p_{s'b} + p_{db},$
$p_{bs} \equiv p_{bs'} + p_{bd},$
$p_{bb} \equiv p_{bb}.$
Consider the other dichotomous process V' with states {s', b'} obtained from W according to the rule V' = b' when W = d or W = b, and V' = s' whenever W = s' (see diagram (A1)). The two-time probability table for V' is obtained by another partitioning of the table (A5),
[Table (A7): the table (A5) with its cells regrouped according to the states s' and b' = {d, b} of V',]
with subsequent summation of cells within groups, which yields
$p_{s's'} = \alpha^2\, p_{ss},$
$p_{s'b'} = \alpha\, (\beta\, p_{ss} + p_{sb}) = p_{b's'},$
$p_{b'b'} = \beta^2\, p_{ss} + 2\beta\, p_{sb} + p_{bb}.$
The corresponding one-time probabilities for V' read
$p_{s'} = \alpha\, p_s,$
$p_{b'} = \beta\, p_s + p_b.$
In order to establish the relation between the one-time probability tables of spontaneous activity sx and sx', we equate the resultant one-time probabilities of observing a given state x as per (6) for the two equivalent models {V, S} and {V', S'}:
$p(x \ne 1) = p_s\, s_x = p_{s'}\, s'_x,$
$p(x = 1) = p_s\, s_1 + p_b = p_{s'}\, s'_1 + p_{b'}.$
Taking into account (A9), we finally get
$s_{x \ne 1} = \alpha\, s'_{x \ne 1},$
$1 - s_1 = \alpha\, (1 - s'_1).$
Equations (A8), (A9) and (A11) fully describe the transformation of the spiking-bursting model which keeps the resultant stochastic process invariant by the construction of the transform. Taking into account that the dichotomous process is fully described by just two independent quantities, e.g., ps and pss, all other probabilities being expressed in terms of these due to normalization and stationarity, the full invariant transformation is uniquely identified by the combination of (A11a,b), (A8a) and (A9a), which together constitute the scaling (12).
Note that the parameter α within its initial meaning (A2) may take on values in the range 0 < α ≤ 1 (the case α = 1 producing the identity transform). That said, in terms of the scaling (12a-d), all values α > 0 are equally possible, so that mutually inverse values α = α1 and α = α2 = 1/α1 produce mutually inverse transforms.
Appendix B. Expressing Mutual Information for the Spiking-Bursting Process
The one-time entropy Hx for the spiking-bursting process is expressed by (2) with the probabilities p(x) taken from (6):
$H_x = \sum_x \{p(x)\} = \sum_x \{p_s s_x\} + \{p_1\} - \{p_s s_1\},$
where the additional terms besides the sum over x account for the specific expression (6b) for p(x = 1). Using the relation
$\{ab\} \equiv a\{b\} + \{a\}\,b,$
which is derived directly from (19), and collecting similar terms, we arrive at
$H_x = p_s H_s - p_s\{s_1\} + (1 - s_1)\{p_s\} + \{p_1\},$
where Hs is the entropy of the spiking component taken alone,
$H_s = \sum_x \{s_x\}.$
The two-time entropy is expressed similarly, by substituting the probabilities p(xy) from (11) into the definition of entropy and taking into account the special cases with x = 1 and/or y = 1:
$H_{xy} = \sum_{xy}\{p(xy)\} = \sum_{xy}\{p_{ss} s_x s_y\} - \sum_x\{p_{ss} s_x s_1\} + \sum_x\{\pi s_x\} - \sum_y\{p_{ss} s_1 s_y\} + \sum_y\{\pi s_y\} + \{p_{ss} s_1^2\} - 2\{\pi s_1\} + \{p_{11}\}.$
Further, applying (A13) and using the notation (A15), we find
$\sum_{xy}\{p_{ss} s_x s_y\} = p_{ss}\sum_{xy}\{s_x s_y\} + \{p_{ss}\}\sum_{xy} s_x s_y = 2\, p_{ss} H_s + \{p_{ss}\},$
where we used the reasoning that ∑xy{sx sy} is the two-time entropy of the spiking component taken alone, which is (due to the postulated absence of time correlations in it) twice the one-time entropy Hs (this of course can equally be found by direct calculation). Similarly, we get
$\sum_x\{p_{ss} s_x s_1\} = p_{ss} s_1 \sum_x\{s_x\} + \{p_{ss} s_1\}\sum_x s_x = p_{ss} s_1 H_s + \{p_{ss} s_1\}$
and exactly the same expression for ∑y{pss s1 sy}, and also
$\sum_y\{\pi s_y\} = \sum_x\{\pi s_x\} = \pi\sum_x\{s_x\} + \{\pi\}\sum_x s_x = \pi H_s + \{\pi\}.$
Substituting (A17a-c) into (A16), using (A13) where applicable, and collecting similar terms with the relation
$p_{ss} + \pi - p_{ss}\, s_1 \equiv p_s$
taken into account, we arrive at
$H_{xy} = 2\, p_s H_s + (1 - s_1)^2\{p_{ss}\} - 2\, p_s\{s_1\} + 2(1 - s_1)\{\pi\} + \{p_{11}\}.$
Finally, the expression (18) for mutual information is obtained by inserting (A14) and (A19) into the definition (1), with the stationarity Hy = Hx taken into account.
Appendix C. Expanding I0 in Powers of ϵ
The Taylor series expansion of a function f(x) up to the quadratic term reads
$f(x_0 + \xi) = f(x_0) + f'(x_0)\,\xi + f''(x_0)\,\frac{\xi^2}{2} + R(\xi).$
The remainder term R(ξ) can be represented in the Lagrange form as
$R(\xi) = f'''(c)\,\frac{\xi^3}{6},$
where c is an unknown real quantity between x0 and x0 + ξ.
The function f(x) can be approximated by omitting R(ξ) in (A20) if R(ξ) is negligible compared to the quadratic term, for which it is sufficient that
$f'''(c)\,\frac{\xi^3}{6} \ll f''(x_0)\,\frac{\xi^2}{2}$
for any c between x0 and x0 + ξ, namely, for
$c \in \begin{cases} (x_0,\ x_0 + \xi), & \text{if}\ \xi > 0, \\ (x_0 - |\xi|,\ x_0), & \text{if}\ \xi < 0. \end{cases}$
Consider the specific case
$f(x) = -x \ln x, \qquad x > 0,$
for which we get
$f'(x) = -\ln x - 1, \qquad f''(x) = -\frac{1}{x}, \qquad f'''(x) = \frac{1}{x^2}.$
Since f'''(x) is a decreasing function for any x > 0, fulfilling (A22a) at the left boundary of (A22b) (at c = x0 if ξ > 0, and at c = x0 − |ξ| if ξ < 0) ensures that (A22a) is fulfilled on the whole interval (A22b). Precisely, the requirement is
$\frac{1}{x_0^2}\,\frac{\xi^3}{6} \ll \frac{1}{x_0}\,\frac{\xi^2}{2}, \quad \text{if}\ \xi > 0,$
$\frac{1}{(x_0 - |\xi|)^2}\,\frac{|\xi|^3}{6} \ll \frac{1}{x_0}\,\frac{\xi^2}{2}, \quad \text{if}\ \xi < 0,$
which in the case ξ > 0 reduces to
$\frac{\xi}{3 x_0} \ll 1,$
and in the case ξ < 0 to
$\frac{1}{3}\,\Phi\!\left(\frac{|\xi|}{x_0}\right) \ll 1,$
where
$\Phi(\zeta) = \frac{\zeta}{(1 - \zeta)^2}.$
Replacing Φ(·) in (A27a) by its linearization Φ(ζ) ≈ ζ for small ζ, we reduce both (A26) and (A27a) to the single condition
$|\xi| \ll 3 x_0.$
We use these considerations to expand the function I0(ps, ϵ) defined in (21) in powers of ϵ, with pss, psb, pbb substituted by their expressions in terms of ϵ according to (14). We note that the braces notation {·} defined in (19) is expressed via the function f(x) from (A23) as
$\{q\} = \frac{f(q)}{\ln 2}.$
Expanding this way the subexpressions of (21)
$\{p_{ss}\} = \{p_s^2 + \epsilon p_s^2\},$
$\{p_{sb}\} = \{p_s p_b - \epsilon p_s^2\},$
$\{p_{bb}\} = \{p_b^2 + \epsilon p_s^2\},$
we find by immediate calculation that the zero-order and linear in ϵ terms vanish, and the quadratic term yields (35). The condition (A28) has to be applied to all three subexpressions (A30a-c). Omitting the insignificant factor 3 in (A28), we obtain the applicability conditions
$|\epsilon\, p_s^2| \ll p_s^2,$
$|\epsilon\, p_s^2| \ll p_s\, p_b,$
$|\epsilon\, p_s^2| \ll p_b^2,$
which are equivalent to
$|\epsilon| \ll 1,$
$|\epsilon| \ll \frac{p_b}{p_s} = \epsilon_{\max},$
$|\epsilon| \ll \epsilon_{\max}^2,$
where the notation ϵmax from (16) is used. We note that when ϵmax < 1, the condition (A32c) is the strongest among (A32a-c); when ϵmax > 1, the condition (A32a) is the strongest. Therefore, in both cases (A32b) can be dropped, thus producing (36).
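The expansion can also be cross-checked with a computer algebra system. The following sketch (our own verification, not part of the original derivation) expands I0 of (21b) in ϵ and compares the coefficients with (35) at an arbitrary sample value of ps.

```python
import sympy as sp

ps, eps = sp.symbols("p_s epsilon", positive=True)
pb = 1 - ps
h = lambda q: -q * sp.log(q, 2)                 # braces notation of Eq. (19)

I0 = (2 * (h(ps) + h(pb))
      - (h(ps**2 + eps * ps**2)
         + 2 * h(ps * pb - eps * ps**2)
         + h(pb**2 + eps * ps**2)))

expansion = sp.series(I0, eps, 0, 3).removeO()
target = (ps / (1 - ps))**2 / (2 * sp.log(2))   # leading term of Eq. (35)
num = {ps: sp.Rational(3, 10)}                  # arbitrary sample point
print([sp.N(expansion.coeff(eps, k).subs(num)) for k in (0, 1)],   # expected: ~0, ~0
      sp.N((expansion.coeff(eps, 2) - target).subs(num)))          # expected: ~0
```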
Abstract
Integrated information has been recently suggested as a possible measure to identify a necessary condition for a system to display conscious features. Recently, we have shown that astrocytes contribute to the generation of integrated information through the complex behavior of neuron–astrocyte networks. Still, it remained unclear which underlying mechanisms governing the complex behavior of a neuron–astrocyte network are essential to generating positive integrated information. This study presents an analytic consideration of this question based on exact and asymptotic expressions for integrated information in terms of exactly known probability distributions for a reduced mathematical model (discrete-time, discrete-state stochastic model) reflecting the main features of the “spiking–bursting” dynamics of a neuron–astrocyte network. The analysis was performed in terms of the empirical “whole minus sum” version of integrated information in comparison to the “decoder based” version. The “whole minus sum” information may change sign, and an interpretation of this transition in terms of “net synergy” is available in the literature. This motivated our particular interest in the sign of the “whole minus sum” information in our analytical considerations. The behaviors of the “whole minus sum” and “decoder based” information measures are found to bear a lot of similarity—they have mutual asymptotic convergence as time-uncorrelated activity increases, and the sign transition of the “whole minus sum” information is associated with a rapid growth in the “decoder based” information. The study aims at creating a theoretical framework for using the spiking–bursting model as an analytically tractable reference point for applying integrated information concepts to systems exhibiting similar bursting behavior. The model can also be of interest as a new discrete-state test bench for different formulations of integrated information.