1. Introduction
Successful agents must predict the behavior of treacherous environments with fidelity in order to optimize their actions. Information theory is the tool of choice for quantifying predictability, so any agent that thinks—an epistemic agent—can be modeled as a computing entity attempting to infer information theoretic measures. Channel capacity, Shannon entropy, mutual information, transfer entropy, and other such measures are defined with respect to empirically inaccessible joint probability distributions [1,2,3,4]. Finite computational and sensory resources, such as memory and sampling frequency, respectively, affect the estimation of these distributions [5,6,7,8,9,10,11,12,13,14,15,16,17]. From a Bayesian perspective, this implies that any agent with finite cognitive resources will make judgements that are governed by such limitations [18,19,20,21]. A group of epistemic agents—a polity—predicated on the pooling and dissemination of information is affected by the architectural heterogeneity of its constituents. In this context, a question of interest is to what extent agreement can be reached within polities given such limitations.
Transfer entropy provides a canonical example of how architecture affects conclusions about the structure of the environment. It is well known that sub-sampling of continuous time series can lead to differing estimates of transfer entropy [22,23,24], a failure circumvented by considering rates in the continuum limit [10,24]. While mathematically satisfying, such considerations are not applicable to physical agents whose information storage and processing capabilities are limited by the universe they find themselves attempting to predict and navigate [25]. Szilard and Landauer revealed that information is physical; there is an energy cost in memory for the read-write cycle [26,27,28]. Bekenstein demonstrated that any finite volume of spacetime has an upper limit to the information it can contain [29,30,31]. Bremermann showed that information cannot be processed at arbitrarily high rates without loss of fidelity [32]. Herein these limitations are not swept under the rug—any finite agent must have computational/cognitive abilities limited by finite memory and finite sampling frequency, which we bundle together under the term cognitive architecture. Embracing the inevitability of differing estimates of information theoretic measures due to variance in cognitive architectures or allotment of cognitive resources gives a different perspective on the origins of disagreement—the Consensus Problem.
To shed light on what this means, we introduce a toy model of a polity, built from the bottom up, in order to show how disagreement amongst agents can emerge in ideal circumstances. We start with an epistemic agent capable of harnessing the correlations in the history of its environment to predict the future. The agent is placed in a simple environment consisting of a pair of stimuli which it samples according to its own limitations. We develop an exactly solvable model for the transfer entropy between the two stimuli and analyze how the inference of information-flow depends on the memory-capacity and temporal sampling of an observing agent [22,23,24]. Finally, a polity of agents, each having access to identical streams of stimuli, is formed and their individual estimates brought together and compared. As a representative illustration, we use time series of CO₂ content and atmospheric temperature taken at Mauna Loa to demonstrate this effect, suggesting that polarization amongst epistemic agents on scientifically charged topics may persist irrespective of the quality of data supporting one particular conclusion.
2. From Agents to Polities
In this section, we construct a model of an epistemic agent (EA). We emphasize epistemic since we are not concerned with the actions of the agent, but with how it uses memories of its environment to predict the future and inform itself of how it should act. The agent is placed into an environment in which it has access to two time series of sensory input, which it samples as coupled qualia and stores in memory. Both sampling rate and storage are fixed—a toy model of the intrinsic cognitive architecture of the agent. We give a brief primer on transfer entropy and then construct a toy model of the qualia coming from a complex environment. With a single EA in hand, we introduce a simple description of a polity, an ensemble of epistemic agents, and show how the consensus problem can emerge in heterogeneous populations of agents.
2.1. Transfer Entropy and Influence
An epistemic agent, simple or complex, exists in a world full of uncertainty. That uncertainty implies that the inference of probability distributions, and the measures derived from them, lies at the core of an agent’s model of the world. Physical limits on processing and/or reaction times can hamper the actions, and existence, of an agent if that model only describes the past states of its environment. A model of the future helps mitigate the vagaries of the world.
Transfer entropy [4,33] is the simplest measure quantifying the extent to which past correlations between two processes reduce the future uncertainty in either process—the extent to which an agent can predict one of the processes, given past knowledge of both. Given two processes, $X_1$ and $X_2$, the transfer entropy is defined as the excess information gained by knowing the past history of process 2 in addition to that of process 1,
$$\mathcal{T}_{2\to1} = H\!\left(X_1^{(t)}\,\middle|\,\overleftarrow{X}_1\right) - H\!\left(X_1^{(t)}\,\middle|\,\overleftarrow{X}_1,\overleftarrow{X}_2\right), \qquad (1)$$
where $H(X)$ is the hidden information, i.e., the Shannon entropy of $X$, and $H(X|Y)$ is the conditional entropy of $X$ conditioned on $Y$ [1]. In what follows, it is useful to rewrite the transfer entropy in terms of the mutual information $I(X;Y) = H(X) - H(X|Y)$, expressed as the Kullback–Leibler divergence between joint and product distributions, $I(X;Y) = D_{\mathrm{KL}}\!\left[p(x,y)\,\|\,p(x)\,p(y)\right]$ [34]. Using this, we can write an alternative expression for the transfer entropy [2,3,35]:
$$\mathcal{T}_{2\to1} = I\!\left(X_1^{(t)};\,\overleftarrow{X}_1,\overleftarrow{X}_2\right) - I\!\left(X_1^{(t)};\,\overleftarrow{X}_1\right). \qquad (2)$$
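To make the definition concrete, the following is a minimal Python sketch of a plug-in estimator of Equation (1) for discretized (symbolic) data; it illustrates the definition only, is not the exact estimator used in this paper (which proceeds analytically for Gaussian processes), and the function name is our own.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, k=1):
    """Plug-in estimate of the transfer entropy T_{Y->X} in nats, with
    history length k, for integer (symbol) sequences x and y.
    Implements Eq. (1): H(x_t | x_past) - H(x_t | x_past, y_past)."""
    def H(*cols):
        # joint Shannon entropy of the aligned symbol columns
        counts = np.array(list(Counter(zip(*cols)).values()), dtype=float)
        p = counts / counts.sum()
        return -np.sum(p * np.log(p))

    x_t = x[k:]
    x_past = [x[i:len(x) - k + i] for i in range(k)]
    y_past = [y[i:len(y) - k + i] for i in range(k)]
    h_given_own = H(x_t, *x_past) - H(*x_past)
    h_given_both = H(x_t, *x_past, *y_past) - H(*x_past, *y_past)
    return h_given_own - h_given_both
```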
For any finite agent, memory limitations preclude storage of the entire history, so a discrete subset of events is taken at times $t_n = t - n\,\delta t$, $n = 1,\dots,N$, where $N$ is taken to represent the number of past snapshots of the world the agent stores in estimating the joint distribution over histories, i.e., memory. A schematic of the information diagram is shown in Figure 1, highlighting the relevant measures. The processes in question can be treated symmetrically—how does knowing how $X_2$'s history decreases the uncertainty in $X_1$'s future compare to knowing how $X_1$'s history reduces the uncertainty in $X_2$'s future? To probe the asymmetry of transfer entropy in these two cases, it is useful to define the ratio of the difference in information flows to the total information flow:
$$\eta = \frac{\mathcal{T}_{2\to1} - \mathcal{T}_{1\to2}}{\mathcal{T}_{2\to1} + \mathcal{T}_{1\to2}}. \qquad (3)$$
This quantity is bounded on the interval $[-1,1]$, saturating iff the transfer entropy vanishes in only one direction, which occurs iff the target process is deterministic. Since the transfer entropy is non-negative, when both directions vanish we set $\eta = 0$. We note that, in certain cases, the transfer entropy reduces to the Granger causality [36,37], and there is a tendency to interpret it as a causal measure. However, one must be cautious employing such interpretations, given that both measures are based entirely on correlations [36,38,39,40]. In passing, we refer to $\eta$ as influence, but only with regard to how correlations help influence an agent’s predictions, as opposed to any sort of causal influence between the actual processes.
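A direct transcription of Equation (3), including the convention for vanishing flows, is the short sketch below; the function name is illustrative.

```python
def influence(te_21, te_12):
    """Influence, Eq. (3): eta = (T_{2->1} - T_{1->2}) / (T_{2->1} + T_{1->2}).
    Bounded on [-1, 1]; by convention eta = 0 when both flows vanish."""
    total = te_21 + te_12
    return 0.0 if total == 0.0 else (te_21 - te_12) / total
```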
2.2. A Toy Model of Qualia
Consider now a simple agent, one whose senses allow it to observe the world through two stimuli. The experiences of these stimuli by the agent are referred to as qualia (singular quale); both terms are used interchangeably. Placing the agent into an environment provides it with these stimuli, albeit noised by whatever else is happening. One could have in mind a thermostat that senses both temperature and pressure, though, in principle, one can imagine a far more complex agent sampling myriad stimuli and ignoring all but two. Like the temperature and the pressure of the thermostat’s environment, the stimuli are coupled together by whatever processes give rise to them in the environment, and the agent will estimate the influence of the two by computing the transfer entropy from its memory and sampling of the environment.
To see this in action, let us consider as our stimuli a pair of positions, $x_1(t)$ and $x_2(t)$, coupled in the environment by the evolution equations
$$\dot{x}_1 = -k_1\left(x_1 - x_2\right) + \xi_1(t), \qquad (4)$$
$$\dot{x}_2 = -k_2\left(x_2 - x_1\right) + \xi_2(t), \qquad (5)$$
and initial conditions $x_1(0)$ and $x_2(0)$. These could be the temperature and pressure of the thermostat, but dimensionalized to facilitate comparisons. Since a metric structure is closely tied to similarity judgements when comparing both auditory and visual qualia, position is an appropriate descriptor [41,42]. Here, $k_1, k_2$ and $D_1, D_2$ parameterize the deterministic and stochastic contributions, respectively, to the equations of motion. The latter are associated with the environment acting as a heat reservoir, $\mathcal{R}$, inducing fluctuations in both processes—$\xi_1$ and $\xi_2$ are independent white noise contributions that satisfy $\langle\xi_i(t)\rangle = 0$ and $\langle\xi_i(t)\,\xi_j(t')\rangle = 2D_i\,\delta_{ij}\,\delta(t - t')$. The dimensions of the parameters are $[k_i] = \mathrm{time}^{-1}$ and $[D_i] = \mathrm{length}^2\,\mathrm{time}^{-1}$. The four coupling constants define natural time and length scales in the model,
$$\tau = \frac{1}{k_1 + k_2}, \qquad \ell = \sqrt{\frac{D_1 + D_2}{k_1 + k_2}}, \qquad (6)$$
which are used to dimensionalize all variables; see Appendix A for more details. The latter describes the size of fluctuations in the separation of the two processes, while the former describes the decay of transients. In the left panel of Figure 1, we plot instances of paths generated by Equations (4) and (5) for several values of the coupling constants. We note that the model has an explicit coupling which introduces correlations between the past and present of both processes. The relevant quantities determining the behavior of the information dynamics of the two processes are
$$a = \frac{k_1 - k_2}{k_1 + k_2}, \qquad b = \frac{D_1 - D_2}{D_1 + D_2}, \qquad (7)$$
with the former being the deterministic and the latter the stochastic asymmetry parameter, both normalized to the range $[-1, 1]$.
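For readers who want to experiment numerically, a minimal Euler–Maruyama integration of the reconstructed Equations (4) and (5) follows; the function name and default arguments are illustrative choices, not taken from the paper.

```python
import numpy as np

def simulate_qualia(k1, k2, D1, D2, x0=(0.0, 10.0), dt=1e-3,
                    n_steps=200_000, seed=0):
    """Euler-Maruyama integration of the coupled Langevin equations (4)-(5):
        dx1 = -k1 (x1 - x2) dt + sqrt(2 D1) dW1
        dx2 = -k2 (x2 - x1) dt + sqrt(2 D2) dW2
    Returns the two paths sampled every dt."""
    rng = np.random.default_rng(seed)
    x = np.empty((n_steps + 1, 2))
    x[0] = x0
    amp = np.sqrt(2.0 * np.array([D1, D2]) * dt)  # per-step noise amplitude
    for n in range(n_steps):
        x1, x2 = x[n]
        drift = np.array([-k1 * (x1 - x2), -k2 * (x2 - x1)])
        x[n + 1] = x[n] + drift * dt + amp * rng.standard_normal(2)
    return x[:, 0], x[:, 1]
```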
2.3. Influence between Qualia
Equations (4) and (5) are diagonalized into a Wiener process for the center of mass motion and an Ornstein–Uhlenbeck process for the separation, giving exact solutions; for a more in-depth derivation, please see Appendix A. Given that both are Gaussian processes, the solution to the corresponding Fokker–Planck equation is a multivariate Gaussian, allowing for an exact analytical form of the transfer entropy. For a Gaussian process, if $X$ and $Y$ have covariances $\Sigma_X$ and $\Sigma_Y$, respectively, and joint covariance $\Sigma_{XY}$, then the mutual information is
$$I(X;Y) = \frac{1}{2}\ln\!\left(\frac{\det\Sigma_X\,\det\Sigma_Y}{\det\Sigma_{XY}}\right).$$
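This standard Gaussian identity is easy to evaluate numerically; a small sketch (our own naming) using log-determinants for stability:

```python
import numpy as np

def gaussian_mutual_information(cov_x, cov_y, cov_joint):
    """Mutual information (nats) between jointly Gaussian blocks X and Y:
    I(X;Y) = 0.5 * ln( det(cov_x) det(cov_y) / det(cov_joint) )."""
    _, ld_x = np.linalg.slogdet(np.atleast_2d(cov_x))
    _, ld_y = np.linalg.slogdet(np.atleast_2d(cov_y))
    _, ld_j = np.linalg.slogdet(np.atleast_2d(cov_joint))
    return 0.5 * (ld_x + ld_y - ld_j)
```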
Combining this with Equation (2), we show the results for different values of $N$ in Figure 3. In the small-asymmetry regime, $|a|, |b| \ll 1$, at small timescales, $\delta t \ll \tau$, with short memory, and in the long-time limit, the influence admits a closed-form expression (see Appendix B). We note that there are many ways in which one can discuss a small-$\delta t$ limit, and this is simply one of them. When the asymmetry switches sign, agents sampling timescales on opposite sides of the crossover conclude in contradiction to one another the direction of influence. Since the transfer entropy for Gaussian processes is equivalent to the Granger causality, this implies that one can confuse the direction of causation, which, as previously mentioned, is a common misinterpretation of a purely correlative measure [36,37]. Figure 2 indicates that this phenomenon is not limited to the linear regime but is a general feature found across parameter space. A longer discussion of how to compute the influence within the qualia model is presented in Appendix B.

It is beneficial to think of a space of all possible cognitive architectures given fixed values of the deterministic and stochastic model parameters. A polity distribution function, discussed in the next section, has this space as its support. If the population has a fairly uniform set of architectures, then this density will be highly peaked. As variance in architecture is introduced, or as cognitive resources are moved around, the polity distribution function spreads out and deforms. When the distribution crosses the vanishing-influence surface, agents on either side will necessarily have differing opinions.
2.4. A Toy Model of Polities
A polity can be thought of as a more complex agent, one whose beliefs are a conglomeration of the beliefs of its constituent agents, and whose actions are determined by that distribution. Pooling of information can be done at multiple levels and in different fashions that distribute weight amongst agent opinions. Here, we consider a simple case where final estimates of an information measure are pooled together, but we examine the effect of different weighting schemes. When bringing together the estimates of many individual agents, the naive expectation is that the law of large numbers might allow the polity to form an estimate with smaller uncertainty. Given a homogeneous population of agents, this would be the case; not so for heterogeneous polities.
Consider then a polity of agents with heterogeneous cognitive architectures, $(N, \delta t)$, described by a population distribution function, $f(N, \delta t)$, giving the fraction of the population with memory $N$ and sampling times between $\delta t$ and $\delta t + \mathrm{d}(\delta t)$. Each agent, assumed independent of the others, has access to the same data, sampling and remembering as their nature allows, and estimates the influence between processes, contributing that result to the polity. The distribution of estimates is
$$P(\eta) = \sum_{N} \int \mathrm{d}(\delta t)\, f(N, \delta t)\, p\!\left(\eta \,\middle|\, N, \delta t\right),$$
where an agent’s uncertainty in their estimate of $\eta$ can be included in the conditional distribution $p(\eta \,|\, N, \delta t)$. More details on the derivation are given in Appendix B.

The distribution $f(N, \delta t)$ describes the heterogeneity of cognitive architectures found in the polity. If the agents are similar enough that they all have near-identical hardware, whether that be biological or otherwise, one would expect the distribution to be peaked at some $(\bar{N}, \bar{\delta t})$, with a large variance. To account for several orders of magnitude in sampling rates, we take the $\delta t$ dependence to be log-normal, with mean $\bar{\delta t}$ and log-normalized root mean square $\sigma$. Meanwhile, the $N$ dependence is taken to be geometric, with mean $\bar{N}$,
$$f(N, \delta t) = p\,(1 - p)^{N-1}\,\frac{1}{\sqrt{2\pi}\,\sigma\,\delta t}\,\exp\!\left[-\frac{\ln^2\!\left(\delta t / \bar{\delta t}\right)}{2\sigma^2}\right], \qquad (8)$$
where $p = 1/\bar{N}$ and $\sigma$ sets the spread over sampling timescales. Correlations between sampling and memory representing constraints, such as favored look-back window times, $T = N\,\delta t$, can easily be incorporated into this framework; see the sketch below for drawing architectures from Equation (8).
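A minimal way to sample a heterogeneous polity from the factorized distribution of Equation (8); function and parameter names are ours.

```python
import numpy as np

def sample_architectures(n_agents, N_bar, dt_bar, sigma, seed=None):
    """Draw (N, dt) cognitive architectures from Eq. (8): geometric memory
    with mean N_bar (success probability p = 1/N_bar) and log-normal
    sampling timescale with median dt_bar and log-RMS sigma."""
    rng = np.random.default_rng(seed)
    N = rng.geometric(1.0 / N_bar, size=n_agents)
    dt = rng.lognormal(mean=np.log(dt_bar), sigma=sigma, size=n_agents)
    return N, dt
```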
3. Results
In this section, we look at the influence measure across a wide range of parameters in our coupled qualia model, as well as differing cognitive architectures. The central contour plot (A) of Figure 2 shows the asymmetry across the full domain of the deterministic asymmetry parameter, $a$, as inferred by agents with a range of sampling timescales, $\delta t$. In the bottom-left surface plot (B) of Figure 2, we see the corresponding transfer entropy from process 1 to process 2 (blue), and vice versa (pink). The dominating process is the one with the weaker deterministic coupling; the other process is pulled towards it, as seen in the left panel of Figure 1. Three slices are taken of these surfaces at constant values of the deterministic asymmetry parameter and displayed in the upper-left three panels (C). For the first (last) of these, the transfer entropy in one direction (the opposite direction) is always larger. In both cases, agents would interpret the information flow as always being unidirectional, irrespective of their sampling times $\delta t$. In the middle panel, however, there is a cross-over of transfer entropy at a particular value of the temporal discretization. The exact value of $\delta t$ at which this occurs is not important in our discussion, as the far-right plot of the influence shows that there is a wide range of deterministic asymmetry parameters that share this feature. In particular, the direction of influence inferred by agents will depend on their sampling of the past: if the agent samples the processes at longer timescales, they will believe that process 2 holds more information about process 1 than the other way around. Conversely, for shorter sampling timescales, the agents will reach the opposite conclusion. Re-framing our discussion to an ensemble of agents that do not have an agreed-upon sampling timescale, there does not exist a singular conclusion concerning information flows across the entire ensemble: samples of agents drawn from this ensemble will reach contradictory conclusions concerning historical correlations between the processes.
The influence cross-over is not simply dependent on the sampling timescale $\delta t$, but also on the memory size of the agents, captured by the parameter $N$. The plot array (D) in Figure 2 shows multiple instances of the central contour plot (A) for varying values of the noise asymmetry parameter, $b$, and memory size, $N$. The black line in these panels indicates the region where the influence flips, and the rows show its dependence as a function of increasing $N$. Since the crossover region is monotonic in $N$, there are points of constant asymmetry parameters and temporal discretization that nonetheless lead to contradictory conclusions due to different memory capacities. We note that the effect of $N$ appears weaker than that of $\delta t$, as the locus of influence flips changes with $N$ and appears to saturate at larger memory sizes.
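The architecture dependence can be probed directly by combining the earlier sketches; the snippet below coarse-grains simulated paths into symbols and scans a few $(N, \delta t)$ pairs. Parameter values are purely illustrative, and the plug-in estimator is badly biased for long histories, so this is a qualitative demonstration only.

```python
import numpy as np

# Scan a few cognitive architectures for the influence sign flip
# (cf. Figure 3), reusing the simulate_qualia, transfer_entropy, and
# influence sketches defined above.
x1, x2 = simulate_qualia(k1=1.0, k2=0.5, D1=0.3, D2=0.7)
edges1 = np.histogram_bin_edges(x1, bins=8)
edges2 = np.histogram_bin_edges(x2, bins=8)
for N, stride in [(1, 10), (1, 1000), (4, 10), (4, 1000)]:
    s1 = np.digitize(x1[::stride], edges1)   # agent's coarse symbols, process 1
    s2 = np.digitize(x2[::stride], edges2)   # agent's coarse symbols, process 2
    te_21 = transfer_entropy(s1, s2, k=N)    # flow 2 -> 1 seen by this agent
    te_12 = transfer_entropy(s2, s1, k=N)    # flow 1 -> 2 seen by this agent
    print(f"N={N}, stride={stride}: eta = {influence(te_21, te_12):+.3f}")
```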
One could imagine that the ways $N$ and $\delta t$ affect the transfer entropies conspire together so that, for a constant look-back window, $T = N\,\delta t$, their effects would cancel out. This is not the case, however, as seen in Figure 3, where the top panel (A) shows the influence for a wide range of look-back windows and temporal discretizations, and the bottom panel (B) explores the space of deterministic and stochastic parameters. The black line represents the cross-over point for the asymmetry in transfer entropy, a general feature over a large number of samples. It is interesting to note that, for small $b$, the cross-over curve hugs the horizontal dashed line across window sizes smaller than the natural timescale, $\tau$, while for larger windows it hugs the diagonal dashed line corresponding to discretizations of the window into increments of size $\tau$. As $b$ grows larger than $a$, the crossover curve moves to the right, yet remains approximately parallel to this latter line for larger windows.
The Consensus Problem
The presence of surfaces of vanishing influence in the space of cognitive architectures seems to be a generic feature across many model parameters. For simplicity, let us assume that the computation of influence depends solely on $\delta t$ and a critical timescale $\delta t_c$. If the agent samples on a timescale shorter than $\delta t_c$, they compute the influence to be $\eta_-$; otherwise, they compute $\eta_+$. This can be modeled by a Heaviside step function, $\Theta$, so that $\eta(\delta t) = \eta_-\,\Theta(\delta t_c - \delta t) + \eta_+\,\Theta(\delta t - \delta t_c)$. Then, the belief distribution over values of influence is bimodal:
$$P(\eta) = w\,\delta(\eta - \eta_-) + (1 - w)\,\delta(\eta - \eta_+), \qquad w = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{\ln\!\left(\delta t_c / \bar{\delta t}\right)}{\sqrt{2}\,\sigma}\right)\right], \qquad (9)$$
with erf the Gaussian error function. Both outcomes in the polity belief distribution are weighed by a prefactor that is determined by what fraction of the polity distribution function lies on either side of the critical sampling time.

A similar result can be found if we allow for uncertainty in the computation of influence, and polity censuring. Consider the case where an agent with memory less than some critical memory size, $N_c$, computes an influence with mean $\eta_-$ and variance $\sigma_-^2$, while an agent with memory above $N_c$ but below a censuring threshold, $N_\times$, computes an influence with mean $\eta_+$ and variance $\sigma_+^2$. Agents with memory above $N_\times$ are censured so that their beliefs do not contribute to the polity. We also assume that the errors are small enough that the influence distributions are Gaussian. The belief distribution in the polity is
$$P(\eta) = w_-\,\mathcal{N}\!\left(\eta;\,\eta_-,\,\sigma_-^2\right) + w_+\,\mathcal{N}\!\left(\eta;\,\eta_+,\,\sigma_+^2\right), \qquad (10)$$
$$w_- = \frac{1 - (1-p)^{N_c}}{1 - (1-p)^{N_\times}}, \qquad w_+ = 1 - w_-. \qquad (11)$$
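A small numerical transcription of this mixture model, under the reconstructed forms of Equations (9)–(11); names are illustrative.

```python
import numpy as np
from math import erf, log, sqrt

def mode_weight(dt_c, dt_bar, sigma):
    """Eq. (9) prefactor: fraction of a log-normal polity sampling faster
    than the critical timescale dt_c."""
    return 0.5 * (1.0 + erf(log(dt_c / dt_bar) / (sqrt(2.0) * sigma)))

def polity_belief_pdf(eta, w_minus, mu_minus, s_minus, mu_plus, s_plus):
    """Bimodal belief distribution of Eqs. (10)-(11): a two-component
    Gaussian mixture weighted by the polity's structural fractions."""
    def gauss(m, s):
        return np.exp(-((eta - m) ** 2) / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))
    return w_minus * gauss(mu_minus, s_minus) + (1.0 - w_minus) * gauss(mu_plus, s_plus)
```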
For more details on either calculation, see Appendix B. Again, we find a bimodal distribution with each mode weighed by the structural and statistical properties of the polity. This is quite general, and creating distributions with more modes is a simple generalization.

A multi-modal polity belief distribution implies that there are camps within the polity with similar beliefs, though not necessarily with similar cognitive architectures. This is not a problem for consensus if the modes all have the same sign for influence—different camps still agree on the direction of influence. However, a problem occurs if the modes have different signs, i.e., the distribution cuts through a line of vanishing influence. In this case, the camps have opposing beliefs, and the polity will not have a consensus belief in the direction of influence. We call this impasse the Consensus Problem, and it should be clear now how the heterogeneity of cognitive architectures within a polity contributes to its emergence. In the next section, we show this emergence in real-world data.
4. An Argument over Climate Data
As an illustration of our framework applicable to empirical data, we use climate data gathered at the Mauna Loa Observatory: CO₂ content and local temperatures [43,44,45]. For clarity, we are not attempting to examine whether carbon dioxide content is driving temperature, or vice versa, but to show that the consensus problem can be identified in data coming from a dynamical system whose dynamics need not be known. These data consist of monthly measurements from 1958 to the present, with accuracy at the p.p.m. and °C levels. We leverage the uncertainty at the level of the data's accuracy to bootstrap a polity of heterogeneous agents with different ages. This is done by taking substreams drawn from the full data, with starting dates uniformly drawn from 1958–2010, and introducing Gaussian noise with standard deviation equal to the significant digit in the original data. We experimented with many homogeneous polities of equal-age agents, each instance using substreams of the same length but not necessarily starting at the present, and found the results robust for lengths up to ~20–30 months, beyond which data volume became an issue. Our bootstrapped results for a polity consisting of many ages are shown in Figure 4.
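A minimal sketch of the bootstrapping step described above; the function name and argument names are ours, and the exact pipeline used by the authors is detailed in Appendix C.

```python
import numpy as np

def bootstrap_agent_stream(data, noise_scale, start_lo, start_hi, seed=None):
    """One agent's substream: uniform random starting index within
    [start_lo, start_hi), then Gaussian noise with standard deviation
    equal to the data's significant digit (noise_scale)."""
    rng = np.random.default_rng(seed)
    start = rng.integers(start_lo, start_hi)
    sub = np.asarray(data[start:], dtype=float)
    return sub + rng.normal(0.0, noise_scale, size=sub.shape)
```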
The analysis was done on the raw data, as well as on detrended data, representing polities aware of long-term trends. Removing the linear/exponential trends increased the transfer entropies by nearly an order of magnitude and tightened the error bars. Because influence is insensitive to scale transformations, the former effect did not change the overall shape of the influence curve significantly. Further detrending by the removal of the highest-power harmonics had almost no effect on either the transfer entropies or the influence. For a more detailed description of the procedure, see Appendix C.
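The following is a simplified stand-in for the boxcar + linear + harmonic detrending of Appendix C (linear fit plus removal of the strongest Fourier modes); it is not the authors' exact pipeline.

```python
import numpy as np

def detrend(series, n_harmonics=0):
    """Remove a linear least-squares trend, then zero out the n_harmonics
    strongest Fourier modes of the residual."""
    t = np.arange(len(series), dtype=float)
    slope, intercept = np.polyfit(t, series, deg=1)
    resid = np.asarray(series, dtype=float) - (slope * t + intercept)
    if n_harmonics > 0:
        spec = np.fft.rfft(resid)
        strongest = np.argsort(np.abs(spec))[-n_harmonics:]  # highest-power modes
        spec[strongest] = 0.0
        resid = np.fft.irfft(spec, n=len(resid))
    return resid
```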
Figure 4B shows that the influence does, indeed, depend on the memory usage/look-back window. For look-back windows of a few months, a period associated with changing weather, an agent as modeled above would infer that temperature influences CO₂ content. For longer windows, a period associated with seasonal changes, an agent would infer the opposite influence. The two data streams yield contradictory conclusions about which process influences which, dependent on the architecture of the agent.
Moving up to polities, consensus on the influence direction for a random sample of agents is difficult unless the sample is drawn so that all the agents have similar memory usage given this monthly sampling strategy. The shape of the curve is similar to the second example discussed in the previous section, so we expect the polity belief distribution to be bimodal, and this is supported by Figure 5. There, we see several polity belief distributions for three different values of the average polity memory size. At intermediate average memory sizes, the median value of the influence becomes 0, implying that the population is equally divided over their opinions. For populations with shorter memories, the belief that temperature is influencing CO₂ is dominant. Populations with longer memories tend to believe the opposite. Once again, we intend this as a demonstration of how the consensus problem might emerge in polities, and understand that the complexities of weather and climate change cannot be boiled down to two simple data streams.
5. Discussion and Conclusions
Under the assumption that information is physical, and that any realistic epistemic agent therefore has bounds on its ability to acquire, store, and process information from the environment stemming from its finite cognitive architecture [26,32,46,47,48,49,50,51,52,53], this paper has reinterpreted known problems with transfer entropy estimation as a source of disagreement within populations of such agents that span a large enough volume of possible cognitive architectures. The consensus problem does not stem from any differences in the sensory data different agents are exposed to; exposing agents to identical data streams does not ensure that all will reach the same conclusion. This result is qualitatively obvious to anyone familiar with large groups of humans and has implications for recent studies on opinion formation and polarization [54,55,56]. The results also have implications for presumably model-agnostic machine learning. ML architectures utilize information theoretic measures to learn from data. However, such learning demands efficient storage of belief distributions in lieu of enormous data sets and is therefore subject to historical sampling. Algorithms with memory usage optimized to specific hardware architectures will likely encounter the consensus problem described here when compared with identical algorithms running on different architectures.
Transfer entropy has recently gained popularity in the analysis of group dynamics [57,58,59]. We hope to investigate whether our results hold for more than two data streams, and what happens as we attempt to move up in scale from small groups of agents to flocks and coarse-grained polities. Extending this work to larger scales will help clarify the dynamics of group formation and models of interaction in social organisms [60,61]. Beyond that, how does the consensus problem manifest due to cognitive architectural choices in the inference of other information theoretic measures? Going back to the notion of agency, epistemic agents have a repertoire of actions available to them, which they must choose from based on the conclusions they draw about their changing environment: in what way does this repertoire affect the availability of sensory data to an agent, introducing new ways for the consensus problem to manifest? These points highlight the need to clarify what is meant by an epistemic agent, as well as to further develop the notion of a polity as a group of agents.
Author Contributions: Conceptualization, D.R.S., A.F. and M.G.; methodology, D.R.S. and J.C.-N.; software, D.R.S. and J.C.-N.; validation, D.R.S. and J.C.-N.; formal analysis, D.R.S. and J.C.-N.; investigation, D.R.S. and J.C.-N.; resources, G.G. and J.D.; data curation, D.R.S.; writing—original draft preparation, D.R.S.; writing—review and editing, A.F., G.G., J.D. and M.G.; visualization, D.R.S. and J.C.-N.; supervision, G.G. and J.D.; project administration, A.F. and M.G. All authors have read and agreed to the published version of the manuscript.
Data Availability Statement: The data for atmospheric CO₂
The authors declare no conflict of interest.
Figure 1. (A) Instances of paths generated by the equations of motion, Equations (4) and (5), with the same seed for each pseudorandom generator used to simulate the heat bath. Axes have been scaled to the natural length and time scales, ℓ and $\tau$, respectively. Paths are initialized 10ℓ units from each other. Note the existence of transient behavior decaying with timescale $\tau$ followed by steady-state behavior dominated by stochasticity. The black line represents the mean of the two processes, while the grey lines of decreasing opacity are an integer number of ℓs away from the mean; (B) an information diagram of transfer entropy in our toy model. To the right, we have our coupled stochastic processes $x_1$ and $x_2$, as well as the heat reservoir, $\mathcal{R}$, through which noise is introduced to both processes. Coupling constants are labelled $k_i$ and $D_i$. The present is at time $t$, and the temporal discretization scale is $\delta t$. To the left, we have an information diagram where each circle represents the entropy of that quantity. The present and past entropies are shown as bubbles, and the mutual information is the intersection of these bubbles. The transfer entropy is labeled in relation to this mutual information. Note that this diagram has a mirror image on the other side of the processes (not shown) that represents the calculation in the reverse direction.
Figure 2. (A) The influence contours in our toy model across the deterministic asymmetry $a$ and sampling timescale $\delta t$ for fixed $b$ and $N$. The three slices are taken at constant values of $a$. (B) The transfer entropy surfaces from which (A) was constructed, with slices shown. (C) The three slices are displayed. Note the crossover in the middle slice. (D) The influence is plotted for other values of $b$ and $N$, with vanishing influence represented by the black line. Each contour plot has the same axes as (A). For negative values of $b$, the diagrams would have a flipped color scheme. Note how for constant $a$ values the sign flip in the asymmetry of transfer entropy is a generic feature across a large portion of the parameter space, typically occurring within an order of magnitude of the natural timescale.
Figure 3. (A) Influence between processes at fixed asymmetry parameters $a$ and $b$. Due to numerical accuracy, the grey regions have been removed from the plot. The horizontal line represents a look-back window equal to the natural timescale, while the diagonal line represents look-back windows that have been cut into intervals of duration equal to the natural timescale of the system, $\tau$ (in dimensionless units). The black curve cutting through the plot is null information flow. For constant look-back window size, $T$, the direction of information flow depends on the discretization of the window; (B) a plot array for several values of the deterministic and stochastic parameters. Each subplot has the same description as the top panel. Note that the sign cross-over is a generic feature in much of the parameter space; furthermore, it is typically found within an order of magnitude of the natural timescale.
Figure 4. (A) Raw data from temperature and CO₂ content sensors taken at Mauna Loa from 1958–2022. Smaller inset plots show detrending procedures applied to the data: first, the exponential and linear trends are removed using a best fit; this is followed by the progressive removal of the harmonics containing the most signal power. (B) The transfer entropies computed from the pairs of data streams as a function of agent memory size. The scale on the vertical axis is irrelevant, as any scaling is removed in the next step. The progressively detrended data result in qualitatively similar transfer entropies compared to the raw data, with a noticeable decrease in variance and a scaling by an order of magnitude after the initial detrending step. (C) The influence between the data streams—since the influence is insensitive to scale changes in the transfer entropy, both raw and detrended data result in nearly identical curves.
Figure 5. The left panel shows all the polity distributions for ensembles of agents with average memory $\bar{N}$. The central curve is the average value of the influence, and the margins are taken at fixed percentile intervals centered on the median. The right panels are the belief distributions for three increasing values of $\bar{N}$, including 6 and 24. The thick dashed line is the mean, and the thin dashed line is the median. The inset pie charts show the fraction of the population that believes one way or the other.
Appendix A. The Sensory Model
Our sensory model consists of two coupled stochastic processes that can be written as a vector Ornstein–Uhlenbeck process:
$$\mathrm{d}\mathbf{x} = -\mathbf{K}\,\mathbf{x}\,\mathrm{d}t + \boldsymbol{\sigma}\,\mathrm{d}\mathbf{W}, \qquad \mathbf{K} = \begin{pmatrix} k_1 & -k_1 \\ -k_2 & k_2 \end{pmatrix}, \qquad \boldsymbol{\sigma} = \begin{pmatrix} \sqrt{2D_1} & 0 \\ 0 & \sqrt{2D_2} \end{pmatrix}.$$
Appendix A.1. Scaling and Dimensionalization
The dimensions of all relevant quantities are $[x_i] = \mathrm{length}$, $[t] = \mathrm{time}$, $[k_i] = \mathrm{time}^{-1}$, and $[D_i] = \mathrm{length}^2\,\mathrm{time}^{-1}$.
Appendix A.2. Diagonalization and Translations
Note that
Using the decomposition
Appendix A.3. Solution by Integration Factor
The equations of motion can be solved using an integration factor,
$$\mathbf{x}(t) = e^{-\mathbf{K}t}\,\mathbf{x}(0) + \int_0^t e^{-\mathbf{K}(t-s)}\,\boldsymbol{\sigma}\,\mathrm{d}\mathbf{W}(s).$$
Appendix B. Information Theory
Appendix B.1. Statistics
Here, we compute the mean and covariance of our vector process. The mean is
$$\langle \mathbf{x}(t) \rangle = e^{-\mathbf{K}t}\,\mathbf{x}(0).$$
Appendix B.2. Belief Distribution
The EA’s belief distribution over possible paths is required. Here, we construct the multivariate Gaussian describing the probability of a finite history appearing in our model. With a time discretization,
Appendix B.3. Information Measures
With the belief distribution, we can exactly compute the transfer entropy. Consider subdividing our history vector into two disjoint parts:
Using Equation (
To compute the transfer entropy in the opposite direction, we note that relabelling the processes, $1 \leftrightarrow 2$, suffices.
Appendix B.4. From Agent to Polity
A polity consists of multiple epistemic agents, which, by pooling measures, can be considered a new type of agent with its own belief distribution. We consider only non-interacting agents whose belief distributions are independent of one another. Each agent is privy to the same data streams, though they may not sample them at the same rates due to differing cognitive architectures. We denote the set of agents with their respective cognitive architectures as $\{(N_i, \delta t_i)\}$.
The polity belief distribution is found using Bayes’ Rule:
$$P(\eta) = \sum_{N} \int \mathrm{d}(\delta t)\, f(N, \delta t)\, p\!\left(\eta \,\middle|\, N, \delta t\right).$$
Given the geometric-lognormal distribution defined in the text, let us work out the first example for
Appendix C. Mauna Loa Data
We acquired our data for atmospheric CO₂
Appendix C.1. Detrending
We wanted to examine the effect of detrending the raw data on our results and were surprised to find only small quantitative changes. Removing trends occurred in steps, and at each step we then bootstrapped the detrended data (see below) and ran our analysis. As a first step, we performed a boxcar average with a window of one year and removed this from the data. This got rid of most of the exponential growth in the dataset but left a remaining linear trend. We performed a linear least squares fit and removed this from the data. The next detrending steps involved the removal of the highest power harmonics. To this end, we took the power spectrum of the data, found the highest peak, and included frequencies up to a
Appendix C.2. Bootstrapping
Note that
References
1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J.; 1948; 27, pp. 379-423. [DOI: https://dx.doi.org/10.1002/j.1538-7305.1948.tb01338.x]
2. Dembo, A.; Cover, T.M.; Thomas, J.A. Information theoretic inequalities. IEEE Trans. Inf. Theory; 1991; 37, pp. 1501-1518. [DOI: https://dx.doi.org/10.1109/18.104312]
3. Cover, T.M.; Thomas, J.A. Information theory and statistics. Elem. Inf. Theory; 1991; 1, pp. 279-335.
4. Schreiber, T. Measuring information transfer. Phys. Rev. Lett.; 2000; 85, 461. [DOI: https://dx.doi.org/10.1103/PhysRevLett.85.461]
5. Lizier, J.T.; Prokopenko, M.; Zomaya, A.Y. Local information transfer as a spatiotemporal filter for complex systems. Phys. Rev. E; 2008; 77, 026110. [DOI: https://dx.doi.org/10.1103/PhysRevE.77.026110] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/18352093]
6. Wibral, M.; Vicente, R.; Lizier, J.T. Directed Information Measures in Neuroscience; Springer: Berlin/Heidelberg, Germany, 2014.
7. Caticha, A. Lectures on probability, entropy, and statistical physics. arXiv; 2008; arXiv: 0808.0012
8. Sowinski, D.R. Complexity and Stability for Epistemic Agents: The Foundations and Phenomenology of Configurational Entropy; Dartmouth College: Hanover, NH, USA, 2016.
9. Ursino, M.; Ricci, G.; Magosso, E. Transfer Entropy as a Measure of Brain Connectivity: A Critical Analysis with the Help of Neural Mass Models. Front. Comput. Neurosci.; 2020; 14, 45. [DOI: https://dx.doi.org/10.3389/fncom.2020.00045]
10. Bossomaier, T.; Barnett, L.; Harré, M.; Lizier, J.T. Transfer entropy. An Introduction to Transfer Entropy; Springer: Berlin/Heidelberg, Germany, 2016; pp. 65-95.
11. Gencaga, D.; Knuth, K.H.; Rossow, W.B. A recipe for the estimation of information flow in a dynamical system. Entropy; 2015; 17, pp. 438-470. [DOI: https://dx.doi.org/10.3390/e17010438]
12. Wolpert, D.H.; Wolf, D.R. Estimating functions of probability distributions from a finite set of samples. Phys. Rev. E; 1995; 52, 6841. [DOI: https://dx.doi.org/10.1103/PhysRevE.52.6841]
13. Agapiou, S.; Papaspiliopoulos, O.; Sanz-Alonso, D.; Stuart, A.M. Importance sampling: Intrinsic dimension and computational cost. Stat. Sci.; 2017; 32, pp. 405-431. [DOI: https://dx.doi.org/10.1214/17-STS611]
14. Aguilera, A.C.; Artés-Rodríguez, A.; Pérez-Cruz, F.; Olmos, P.M. Robust sampling in deep learning. arXiv; 2020; arXiv: 2006.02734
15. Hollingsworth, J.; Ratz, M.; Tanedo, P.; Whiteson, D. Efficient sampling of constrained high-dimensional theoretical spaces with machine learning. arXiv; 2021; arXiv: 2103.06957[DOI: https://dx.doi.org/10.1140/epjc/s10052-021-09941-9]
16. Rotskoff, G.M.; Mitchell, A.R.; Vanden-Eijnden, E. Active Importance Sampling for Variational Objectives Dominated by Rare Events: Consequences for Optimization and Generalization. arXiv; 2021; arXiv: 2008.06334
17. Zhu, J.; Bellanger, J.J.; Shu, H.; Le Bouquin Jeannès, R. Contribution to transfer entropy estimation via the k-nearest-neighbors approach. Entropy; 2015; 17, pp. 4173-4201. [DOI: https://dx.doi.org/10.3390/e17064173]
18. Caticha, A.; Giffin, A. Updating probabilities. AIP Conf. Proc.; 2006; 872, pp. 31-42.
19. Ramsey, F.P. Truth and probability. Readings in Formal Epistemology; Springer: Berlin/Heidelberg, Germany, 2016; pp. 21-45.
20. Caticha, A. Entropic dynamics. AIP Conf. Proc.; 2002; 617, pp. 302-313.
21. Caticha, A. Entropic dynamics. Entropy; 2015; 17, pp. 6110-6128. [DOI: https://dx.doi.org/10.3390/e17096110]
22. Barnett, L.; Seth, A.K. Detectability of Granger causality for subsampled continuous-time neurophysiological processes. J. Neurosci. Methods; 2017; 275, pp. 93-121. [DOI: https://dx.doi.org/10.1016/j.jneumeth.2016.10.016]
23. Spinney, R.E.; Lizier, J.T. Characterizing information-theoretic storage and transfer in continuous time processes. Phys. Rev. E; 2018; 98, 012314. [DOI: https://dx.doi.org/10.1103/PhysRevE.98.012314]
24. Spinney, R.E.; Prokopenko, M.; Lizier, J.T. Transfer entropy in continuous time, with applications to jump and neural spiking processes. Phys. Rev. E; 2017; 95, 032319. [DOI: https://dx.doi.org/10.1103/PhysRevE.95.032319]
25. Prokopenko, M.; Lizier, J.T. Transfer entropy and transient limits of computation. Sci. Rep.; 2014; 4, 5394. [DOI: https://dx.doi.org/10.1038/srep05394]
26. Szilard, L. Über die Entropieverminderung in einem thermodynamischen System bei Eingriffen intelligenter Wesen. Z. Phys.; 1929; 53, pp. 840-856. [DOI: https://dx.doi.org/10.1007/BF01341281]
27. Landauer, R. Irreversibility and heat generation in the computing process. IBM J. Res. Dev.; 1961; 5, pp. 183-191. [DOI: https://dx.doi.org/10.1147/rd.53.0183]
28. Boyd, A.B.; Crutchfield, J.P. Maxwell demon dynamics: Deterministic chaos, the Szilard map, and the intelligence of thermodynamic systems. Phys. Rev. Lett.; 2016; 116, 190601. [DOI: https://dx.doi.org/10.1103/PhysRevLett.116.190601] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/27232011]
29. Bekenstein, J.D. Universal upper bound on the entropy-to-energy ratio for bounded systems. JACOB BEKENSTEIN: The Conservative Revolutionary; World Scientific: Singapore, 2020; pp. 335-346.
30. Bekenstein, J.D. How does the entropy/information bound work?. Found. Phys.; 2005; 35, pp. 1805-1823. [DOI: https://dx.doi.org/10.1007/s10701-005-7350-7]
31. Bekenstein, J.D. Black holes and entropy. JACOB BEKENSTEIN: The Conservative Revolutionary; World Scientific: Singapore, 2020; pp. 307-320.
32. Bremermann, H.J. Minimum energy requirements of information transfer and computing. Int. J. Theor. Phys.; 1982; 21, pp. 203-217. [DOI: https://dx.doi.org/10.1007/BF01857726]
33. Massey, J. Causality, feedback and directed information. Proceedings of the 1990 International Symposium on Information Theory and Its Applications (ISITA-90); Waikiki, HI, USA, 27–30 November 1990; pp. 303-305.
34. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat.; 1951; 22, pp. 79-86. [DOI: https://dx.doi.org/10.1214/aoms/1177729694]
35. Gleiser, M.; Sowinski, D. How we make sense of the world: Information, map-making, and the scientific narrative. The Map and the Territory; Springer: Berlin/Heidelberg, Germany, 2018; pp. 141-163.
36. Barnett, L.; Barrett, A.B.; Seth, A.K. Granger Causality and Transfer Entropy Are Equivalent for Gaussian Variables. Phys. Rev. Lett.; 2009; 103, 238701. [DOI: https://dx.doi.org/10.1103/PhysRevLett.103.238701]
37. Amblard, P.O.; Michel, O.J. The relation between Granger causality and directed information theory: A review. Entropy; 2013; 15, pp. 113-143. [DOI: https://dx.doi.org/10.3390/e15010113]
38. James, R.G.; Barnett, N.; Crutchfield, J.P. Information flows? A critique of transfer entropies. Phys. Rev. Lett.; 2016; 116, 238701. [DOI: https://dx.doi.org/10.1103/PhysRevLett.116.238701]
39. Lizier, J.T.; Prokopenko, M. Differentiating information transfer and causal effect. Eur. Phys. J. B; 2010; 73, pp. 605-615. [DOI: https://dx.doi.org/10.1140/epjb/e2010-00034-5]
40. Ay, N.; Polani, D. Information flows in causal networks. Adv. Complex Syst.; 2008; 11, pp. 17-41. [DOI: https://dx.doi.org/10.1142/S0219525908001465]
41. Oh, S.; Bowen, E.F.; Rodriguez, A.; Sowinski, D.; Childers, E.; Brown, A.; Ray, L.; Granger, R. Towards a Perceptual Distance Metric for Auditory Stimuli. arXiv; 2020; arXiv: 2011.00088
42. Bowen, E.; Rodriguez, A.; Sowinski, D.; Granger, R. Visual stream connectivity predicts assessments of image quality. arXiv; 2020; arXiv: 2008.06939 (accepted in J. Vis., JOV-07873-2021R2).
43. Lawrimore, J.H.; Menne, M.J.; Gleason, B.E.; Williams, C.N.; Wuertz, D.B.; Vose, R.S.; Rennie, J. Global Historical Climatology Network - Monthly (GHCN-M); NOAA National Centers for Environmental Information, NESDIS, NOAA, U.S. Department of Commerce: Washington, DC, USA, 2021; Version 3
44. World Meteorological Organization. Climate Explorer; World Meteorological Organization: Geneva, Switzerland, 2021.
45. Koutsoyiannis, D.; Kundzewicz, Z.W. Atmospheric Temperature and CO2: Hen-Or-Egg Causality?. Sci; 2020; 2, 83. [DOI: https://dx.doi.org/10.3390/sci2040083]
46. Bekenstein, J.D. Black Holes and Entropy. Phys. Rev. D; 1973; 7, pp. 2333-2346. [DOI: https://dx.doi.org/10.1103/PhysRevD.7.2333]
47. Lloyd, S. Ultimate physical limits to computation. Nature; 2000; 406, pp. 1047-1054. [DOI: https://dx.doi.org/10.1038/35023282]
48. Bennett, C.H. The thermodynamics of computation—A review. Int. J. Theor. Phys.; 1982; 21, pp. 905-940. [DOI: https://dx.doi.org/10.1007/BF02084158]
49. Earman, J.; Norton, J.D. EXORCIST XIV: The wrath of Maxwell’s demon. Part I. From Maxwell to Szilard. Stud. Hist. Philos. Sci. Part B Stud. Hist. Philos. Mod. Phys.; 1998; 29, pp. 435-471. [DOI: https://dx.doi.org/10.1016/S1355-2198(98)00023-9]
50. Earman, J.; Norton, J.D. Exorcist XIV: The wrath of Maxwell’s demon. Part II. From Szilard to Landauer and beyond. Stud. Hist. Philos. Sci. Part B Stud. Hist. Philos. Mod. Phys.; 1999; 30, pp. 1-40. [DOI: https://dx.doi.org/10.1016/S1355-2198(98)00026-4]
51. Kim, H.; Davies, P.; Walker, S.I. New scaling relation for information transfer in biological networks. J. R. Soc. Interface; 2015; 12, 20150944. [DOI: https://dx.doi.org/10.1098/rsif.2015.0944]
52. Lizier, J.T.; Mahoney, J.R. Moving frames of reference, relativity and invariance in transfer entropy and information dynamics. Entropy; 2013; 15, pp. 177-197. [DOI: https://dx.doi.org/10.3390/e15010177]
53. Wolpert, D.H. Minimal entropy production rate of interacting systems. New J. Phys.; 2020; 22, 113013. [DOI: https://dx.doi.org/10.1088/1367-2630/abc5c6]
54. Aral, S.; Muchnik, L.; Sundararajan, A. Distinguishing influence-based contagion from homophily-driven diffusion in dynamic networks. Proc. Natl. Acad. Sci. USA; 2009; 106, pp. 21544-21549. [DOI: https://dx.doi.org/10.1073/pnas.0908800106]
55. Mimar, S.; Juane, M.M.; Park, J.; Muñuzuri, A.P.; Ghoshal, G. Turing patterns mediated by network topology in homogeneous active systems. Phys. Rev. E; 2019; 99, 062303. [DOI: https://dx.doi.org/10.1103/PhysRevE.99.062303]
56. Conover, M.; Ratkiewicz, J.; Francisco, M.; Goncalves, B.; Menczer, F.; Flammini, A. Political Polarization on Twitter. Proc. Int. AAAI Conf. Web Soc. Media; 2021; 5, pp. 89-96.
57. Bettencourt, L.M.; Gintautas, V.; Ham, M.I. Identification of functional information subgraphs in complex networks. Phys. Rev. Lett.; 2008; 100, 238701. [DOI: https://dx.doi.org/10.1103/PhysRevLett.100.238701]
58. Brown, J.; Bossomaier, T.; Barnett, L. Information flow in finite flocks. Sci. Rep.; 2020; 10, 3837. [DOI: https://dx.doi.org/10.1038/s41598-020-59080-6]
59. Brown, J.M.; Bossomaier, T.; Barnett, L. Information transfer in finite flocks with topological interactions. J. Comput. Sci.; 2021; 53, 101370. [DOI: https://dx.doi.org/10.1016/j.jocs.2021.101370]
60. Jiang, L.; Giuggioli, L.; Perna, A.; Escobedo, R.; Lecheval, V.; Sire, C.; Han, Z.; Theraulaz, G. Identifying influential neighbors in animal flocking. PLoS Comput. Biol.; 2017; 13, e1005822. [DOI: https://dx.doi.org/10.1371/journal.pcbi.1005822]
61. Vahdati, A.R.; Weissmann, J.D.; Timmermann, A.; Ponce de León, M.S.; Zollikofer, C.P. Drivers of Late Pleistocene human survival and dispersal: An agent-based modeling and machine learning approach. Quat. Sci. Rev.; 2019; 221, 105867. [DOI: https://dx.doi.org/10.1016/j.quascirev.2019.105867]
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Agents interacting with their environments, machine or otherwise, arrive at decisions based on their incomplete access to data and their particular cognitive architecture, including data sampling frequency and memory storage limitations. In particular, the same data streams, sampled and stored differently, may cause agents to arrive at different conclusions and to take different actions. This phenomenon has a drastic impact on polities—populations of agents predicated on the sharing of information. We show that, even under ideal conditions, polities consisting of epistemic agents with heterogeneous cognitive architectures might not achieve consensus concerning what conclusions to draw from data streams. Transfer entropy applied to a toy model of a polity is analyzed to showcase this effect when the dynamics of the environment is known. As an illustration where the dynamics is not known, we examine empirical data streams relevant to climate and show the consensus problem manifest.
1 Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, USA
2 Department of Physics and Astronomy, University of Rochester, Rochester, NY 14627, USA
3 Department of Anthropology, Dartmouth College, Hanover, NH 03755, USA
4 Department of Physics and Astronomy, Dartmouth College, Hanover, NH 03755, USA