1. Introduction
Most statistical analyses refer to the concept of statistical evidence, as in phrases like “the evidence in the data suggests” or “based on the evidence we conclude”, etc. It has long been recognized, however, that the concept itself has never been satisfactorily defined or, at least, no definition has been offered that has met with general approval. This article is about the concept of statistical evidence, outlining some of the key historical aspects of the discussion and the current state of affairs.
One issue that needs to be dealt with right away is whether or not it is even necessary to settle on a clear definition. After all, statistics has been functioning as an intellectual discipline for many years without a resolution. There are at least two reasons why resolving this is important.
First, to be vague about the concept leaves open the possibility of misinterpretation and ambiguity as in “if we don’t know what statistical evidence is, how can we make any claims about what the evidence is saying?” This leads to a degree of adhocracy in the subject where different analysts measure and interpret the concept in different ways. For example, consider the replicability of research findings in many fields where statistical methodology is employed. One cannot claim that being more precise about statistical evidence will fix such problems, but it is reasonable to suppose that establishing a sound system of statistical reasoning, based on a clear prescription of what we mean by this concept, can only help.
Second, the subject of statistics cannot claim that it speaks with one voice on what constitutes a correct statistical analysis. For example, there is the Bayesian versus frequentist divide as well as the split between the evidential versus the decision-theoretic or behavioristic approaches to determining inferences. This diversity of opinion, while interesting in and of itself, does not enhance general confidence in the soundness of the statistical reasoning process.
As such, it is reasonable to argue that settling the issue of what statistical evidence is and how it is to be used to establish inferences is of paramount importance. Statistical reasoning is a substantial aspect of how many disciplines, from anthropology to particle and quantum physics and on to zoology, determine truth in their respective subjects. One can claim that the role of the subject of statistics is to provide these scientists with a system of reasoning that is logical, consistent and sound in the sense that it produces satisfactory results in practical contexts free of paradoxes. Recent controversies over the use of p-values, which have arisen in a number of scientific contexts, suggest that, at the very least, this need is not being met, for example, see [1,2].
It is to be emphasized that this paper is about the evidential approach to inference and does not discuss decision theory. In particular, the concern is with the attempts, within the context of evidential inference, to characterize statistical evidence. In some attempts to accomplish this, concepts from decision theory have been used, and so there is a degree of confounding between the two approaches. For example, it is common when using p-values to invoke an error probability, namely, the size α of a test, to determine when there is evidence against a hypothesis; see Section 3.1. Furthermore, the discussion of which approach leads to more satisfactory outcomes is not part of our aim, and no such comparisons are made. There is indeed a degree of tension between evidential inference and decision theory, and this dates back to debates between Fisher and Neyman concerning which is more appropriate. Some background on this can be found in the references [3,4,5]. A modern treatment of decision theory can be found in [6].
In Section 2, we provide some background. Section 3 discusses various attempts at measuring statistical evidence and why these are not satisfactory. Section 4 discusses what we call the principle of evidence and how it can resolve many of the difficulties associated with defining and using the concept of statistical evidence.
2. The Background
There are basically two problems that the subject of statistics has been developed to address, which we will refer to as the estimation (E) problem and the hypothesis assessment (H) problem. We will denote the data produced by some process by the symbol x and always assume that these have been collected correctly, namely, by random sampling from relevant populations. Obviously, this requirement is not always met in applications, but the focus here is on the ideal situation that can serve as a guide in practical contexts. The fact that data have not been collected properly in an application can be, however, a substantial caveat when interpreting the results of a statistical analysis and at least should be mentioned as a qualification.
Now suppose there is an object or concept of interest, denoted by ψ, in the world whose value is unknown. For example, ψ could be the half-life of some fundamental particle, the rate of recidivism in a certain jurisdiction at some point in time, the average income and assets of voters in a given country, a graph representing the relationships among particular variables measured on each element of a population, etc. A scientist collects data x which they believe contain evidence concerning the answers to the following questions.
- E: What value does ψ take (estimation)?
- H: Does ψ = ψ₀ (hypothesis assessment)?
Actually, there is more to these questions than just providing answers. For example, when quoting an estimate of ψ based on x, there should also be an assessment of how accurate the estimate is believed to be. Also, if a conclusion is reached concerning whether or not ψ = ψ₀ based on x, there should also be an assessment of how strong a basis there is for drawing this conclusion. At least as far as this article goes, statistics as a subject is concerned with E and H. For example, if interest is in predicting a future value of some process based on observing x, then ψ will correspond to this future value.
The difference between the evidential and decision-theoretic approaches can now be made clear. The evidential approach is concerned with summarizing what the evidence in x says about the answers to E and H. For example, for E, a conclusion might be that the evidence in the data implies that the true value of ψ is best estimated by the value ψ(x), together with an assessment of the accuracy of ψ(x). For H, the conclusion would be that there is evidence against/in favor of ψ₀, together with a measure of the strength of this evidence. By contrast, the decision-theoretic approach adds the idea of utility or loss to the problem and searches for answers which optimize the expected utility/loss, as in quoting an optimal estimate or optimally accepting/rejecting the hypothesis ψ = ψ₀. In a sense, the decision-theoretic approach dispenses with the idea of evidence and tries to characterize optimal behavior. A typical complaint about the decision-theoretic approach is that, for scientific problems, where the goal is the truth whatever that might be, there is no role for the concept of utility/loss. That is not to say that there is no role for decision theory in applications generally, but for scientific problems, we want to know as clearly as possible what the evidence in the data is saying. There may indeed be good reasons to do something that contradicts the evidence, now being dictated by utility considerations, but there is no reason not to state what the evidence says and then justify the difference. For example, consider the results of a clinical trial for which strong evidence is obtained that a new drug is efficacious in the treatment of a disease when compared to a standard treatment. Still, from the point of view of the pharmaceutical company involved, there may be good reasons not to market the drug due to costs, potential side effects, etc. As such, the utility considerations of the company outweigh what the evidence indicates.
The other significant division between the approaches to E and H is between frequentist and Bayesian statistics. In fact, these are also subdivisions of the evidential and the decision-theoretic approaches. As they do not generally lead to the same results, this division needs to be addressed as part of our discussion. As a broad characterization, frequentism considers a thought experiment where it is imagined that the observed data x could be observed as a member of a sequence of basically identical situations, say, obtaining x₁, x₂, …, and then how well an inference performs in such sequences, in terms of reflecting the truth concerning ψ, is measured. Inferences with good performance are considered more reliable and thus preferred. As soon as these performance measures are invoked, however, this brings things close to being part of decision theory and, as will be discussed here, there does not seem to be a clear way to avoid this. By contrast, the Bayesian approach insists that all inferences must be dictated by the observed data without taking into account hypothetical replications as a measure of reliability. As will be discussed in Section 3.4, however, it is possible to resolve this apparent disagreement to a great extent.
3. Evidential Inference
Statistical evidence is the central core concept of this approach. It is now examined how this has been addressed by various means and to what extent these have been successful. For this, we need some notation. The basic ingredient of every statistical theory is the sampling model {f_θ : θ ∈ Θ}, a collection of probability density functions on some sample space 𝒳 with respect to some support measure. The variable θ is the model parameter, which takes values in the parameter space Θ, and it indexes the probability distributions in the model. The idea is that one of these distributions in the model produced the data x, and this is represented as x ~ f_θ. The object of interest is represented as a function Ψ of θ, where ψ = Ψ(θ) can be obtained from the distribution f_θ. The questions E and H can be answered categorically if θ becomes known. Note that there is no reason to restrict Θ to be finite dimensional, and so nonparametric models are also covered by this formulation.
A simple example illustrates these concepts and will be used throughout this discussion.
Example 1 (location normal). Suppose x = (x₁, …, xₙ) is an independent and identically distributed sample of size n from a distribution in the family {N(μ, σ₀²) : μ ∈ ℝ}, where N(μ, σ²) denotes the normal distribution with mean μ and variance σ², here with the mean μ unknown and the variance σ₀² known, and the goal is inference about the true value of ψ = Ψ(μ) = μ. While Ψ here is one-to-one, it could be many-to-one, as in Ψ(μ) = |μ|, and such contexts raise the important issue of nuisance parameters and how to deal with them.
3.1. p-Values, E-Values and Confidence Regions
The p-value is the most common statistical approach to measuring evidence associated with H. The p-value is associated with Fisher, see [7], although there are precursors who contributed to the idea such as John Arbuthnot, Daniel Bernoulli, and Karl Pearson; see [8,9]. Our purpose in this section is to show that the p-value and the associated concepts of e-value and confidence region do not provide adequate measures of statistical evidence.
We suppose that there is a statistic T and consider its distribution under the hypothesis H₀ : Ψ(θ) = ψ₀. The idea behind the p-value is to answer the question: is the observed value T(x) something we would expect to see if ψ₀ is the true value of ψ? If T(x) is a surprising value, then this is interpreted as evidence against H₀. A p-value measures the location of T(x) in the distributions of T under H₀, with small values of the p-value being an indication that the observed value is surprising. For example, it is quite common that large values of T are in the tails (regions of low probability content) of each of the distributions of T under H₀, and so, given the observed value T(x),
sup_{θ : Ψ(θ) = ψ₀} P_θ(T ≥ T(x))

is computed as the p-value.

Example 2 (location normal). Suppose that it is required to assess the hypothesis H₀ : μ = μ₀. For this, it is common to use the statistic T(x) = √n |x̄ − μ₀| / σ₀, as this has a fixed distribution (the absolute value of a standard normal variable) under H₀, with the p-value given by

2(1 − Φ(√n |x̄ − μ₀| / σ₀)),

where Φ is the N(0, 1) cdf.

Several issues arise with the use of p-values in general; for example, see Section 3 in [10] for a very thorough discussion. First, what value α is small enough to warrant the assertion that evidence against H₀ has been obtained when the p-value is less than α? It is quite common in many fields to use α = 0.05 as a cut-off, but this is not universal; in particle physics, the five-sigma standard, corresponding to α ≈ 5.7 × 10⁻⁷, is commonly employed. Recently, due to concerns with the replication of results, it has been proposed that the value α = 0.005 be used as a standard, but there are concerns with this as expressed in [11]. The issue of the appropriate cut-off is not resolved.
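To make the computation in Example 2 concrete, here is a minimal sketch in Python; the data summary values are hypothetical, and scipy is assumed to be available.

```python
import numpy as np
from scipy.stats import norm

def p_value_location_normal(xbar, mu0, sigma0, n):
    """Two-sided p-value for H0: mu = mu0 in the N(mu, sigma0^2) model of Example 2."""
    z = np.sqrt(n) * abs(xbar - mu0) / sigma0
    return 2 * (1 - norm.cdf(z))

# hypothetical data summary: n = 25 observations, sample mean 0.5, known sigma0 = 1
print(p_value_location_normal(xbar=0.5, mu0=0.0, sigma0=1.0, n=25))  # about 0.012
```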
It is also not the case that a p-value greater than α is to be interpreted as evidence in favor of H₀ being true. Note that, in the case that T has a single continuous distribution under H₀, the p-value has a uniform distribution on (0, 1) when H₀ is true. This implies that a p-value near 0 has the same probability of occurring as a p-value near 1. It is often claimed that a valid p-value must have this uniformity property. But consider the p-value of Example 2, where, under H₀, x̄ ~ N(μ₀, σ₀²/n), and so, as n rises, the distribution of x̄ becomes more and more concentrated about μ₀. For large enough n, virtually all of the distribution of x̄ under H₀ will be concentrated in the interval (μ₀ − δ, μ₀ + δ), where δ represents a deviation from μ₀ that is of scientific interest, while a smaller deviation is not, e.g., when the measurements are made to an accuracy no greater than δ. Note that in any scientific problem, it is necessary to state the accuracy δ with which the investigator wants to know the object ψ, as this guides the measurement process as well as the choice of sample size. The p-value ignores this fact and, when n is large enough, could record evidence against H₀ when in fact the data are telling us that H₀ is effectively true and evidence in favor should be stated. This distinction between statistical significance and scientific significance has long been recognized (see [12]) and needs to be part of statistical methodology (see Example 6). This fact also underlies a recommendation, although not commonly followed, as it does not really address the problem, that the cut-off α should be reduced as the sample size increases. Perhaps the most important take-away from this is that a value T(x) that lies in the tails of its distributions under H₀ is not necessarily evidence against H₀. It is commonly stated that a confidence region for ψ should also be provided, but this does not tell us anything about δ, which needs to be given as part of the problem.
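The divergence between statistical and scientific significance described above is easy to simulate. The following sketch (illustrative values only) fixes a true mean inside the scientifically irrelevant interval (μ₀ − δ, μ₀ + δ) and shows the p-value typically shrinking toward 0 as n grows.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu0, delta, sigma0 = 0.0, 0.1, 1.0   # deviations smaller than delta are not of scientific interest
mu_true = 0.02                        # inside (mu0 - delta, mu0 + delta): H0 is effectively true

for n in [100, 10_000, 1_000_000]:
    xbar = rng.normal(mu_true, sigma0 / np.sqrt(n))   # simulated sample mean
    z = np.sqrt(n) * abs(xbar - mu0) / sigma0
    print(n, 2 * (1 - norm.cdf(z)))   # shrinks toward 0 even though |mu_true - mu0| < delta
```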
There are many other issues that can be raised about the p-value, where many of these are covered by the word p-hacking and associated with the choice of α. For example, as discussed in [13], suppose an investigator is using a particular α as the cut-off for evidence against H₀ and, based on a data set of size n₁, obtains a p-value p₁ > α, where p₁ − α is small. Since finding evidence against H₀ is often regarded as a positive, as it entails a new discovery, the investigator decides to collect an additional n₂ data values, computes the p-value based on the full n₁ + n₂ data values, and obtains a p-value less than α. But this ignores the two-stage structure of the investigation, and when this is taken into account, the probability of finding evidence against H₀ at level α when H₀ is true, and assuming a single distribution under H₀, equals P(A₁ ∪ A₂) = P(A₁) + P(A₁ᶜ ∩ A₂) = α + P(A₁ᶜ ∩ A₂), where A₁ = {evidence against H₀ found at first stage} and A₂ = {evidence against H₀ found at second stage}. If P(A₁ᶜ ∩ A₂) > 0, then P(A₁ ∪ A₂) > α. So, even though the investigator has done something very natural, the logic of the statistical procedure based on using p-values with a cut-off effectively prevents finding evidence against H₀ in this way, since the realized error probability exceeds the nominal α.
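A simulation makes the inflation explicit. This sketch (hypothetical stage sizes) estimates P(A₁ ∪ A₂) under H₀ for the two-stage procedure and shows it exceeding the nominal α.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
alpha, n1, n2, reps = 0.05, 50, 50, 200_000
crit = norm.ppf(1 - alpha / 2)

xbar1 = rng.normal(0.0, 1.0 / np.sqrt(n1), reps)   # stage-1 sample means under H0
xbar2 = rng.normal(0.0, 1.0 / np.sqrt(n2), reps)   # stage-2 sample means under H0
pooled = (n1 * xbar1 + n2 * xbar2) / (n1 + n2)     # mean of all n1 + n2 values
A1 = np.sqrt(n1) * np.abs(xbar1) > crit            # evidence against at first stage
A2 = np.sqrt(n1 + n2) * np.abs(pooled) > crit      # evidence against after the second look
print((A1 | A2).mean())                            # about 0.08 > alpha = 0.05
```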
Royall in [10] provided an excellent discussion of the deficiencies of p-values and rejection trials (p-values with cut-off α) when considering these as measuring evidence. Sometimes, it is recommended that the p-value itself be reported without reference to a cut-off α, but we have seen already that a small p-value does not necessarily constitute evidence against H₀, and so this is not a solution.
Confidence regions are intimately connected with p-values. Suppose that for each ψ₀ there is a statistic T_{ψ₀} that produces a valid p-value p_{ψ₀}(x) for H₀ : Ψ(θ) = ψ₀, as has been described, and an α cut-off is used. Now, put C(x) = {ψ₀ : p_{ψ₀}(x) > α}. Then, P_θ(Ψ(θ) ∈ C(x)) ≥ 1 − α for every θ, and so C(x) is a (1 − α)-confidence region for ψ = Ψ(θ). As such, ψ₀ ∈ C(x) is equivalent to H₀ : Ψ(θ) = ψ₀ not being rejected at level α, and so all the problematical issues with p-values as measures of evidence apply equally to confidence regions. Moreover, it is correctly stated that, when x has been observed, 1 − α is not a lower bound on the probability that ψ ∈ C(x). Of course, that is what we want, namely, to state a probability that measures our belief that ψ is in this set, and so confidence regions are commonly misinterpreted; see the discussion of bias in Section 3.4.
An alternative approach to using p-values is provided by e-values; see [14,15,16]. An e-variable for a hypothesis H₀ : Ψ(θ) = ψ₀ is a non-negative statistic E that satisfies E_θ(E) ≤ 1 whenever Ψ(θ) = ψ₀. The observed value E(x) is called an e-value, where the “e” stands for expectation, to contrast it with the “p” in p-value, which stands for probability. Also, a cut-off c > 0 needs to be specified such that H₀ is rejected whenever E(x) ≥ c. It is immediate, from Markov’s inequality, that P_θ(E ≥ c) ≤ 1/c whenever Ψ(θ) = ψ₀, and so this provides a rejection trial with cut-off α = 1/c for H₀.
Example 3 (location normal). For the context of Example 2, define E(x) = exp{λ√n(x̄ − μ₀)/σ₀ − λ²/2} for any λ ∈ ℝ. Since √n(x̄ − μ₀)/σ₀ has an N(0, 1) distribution under H₀, it follows that E_{μ₀}(E) = 1, and so it is immediately apparent that E is an e-variable for H₀.
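A quick Monte Carlo check of Example 3 (hypothetical settings): the e-variable averages to 1 under H₀, and the rejection rate respects the Markov bound 1/c.

```python
import numpy as np

rng = np.random.default_rng(2)
n, lam, c, reps = 20, 1.0, 20.0, 200_000
mu0, sigma0 = 0.0, 1.0

xbar = rng.normal(mu0, sigma0 / np.sqrt(n), reps)   # sample means generated under H0
z = np.sqrt(n) * (xbar - mu0) / sigma0              # standard normal under H0
E = np.exp(lam * z - lam**2 / 2)                    # the e-variable of Example 3
print(E.mean())                                     # close to 1, consistent with E_mu0(E) = 1
print((E >= c).mean(), "vs bound", 1 / c)           # rejection rate <= 1/c by Markov
```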
Example 3 contains an example of the construction of an e-variable. Also, likelihood ratios can serve as e-variables and there are a number of other constructions of such variables discussed in the cited literature.
Consider the situation where data are collected sequentially, Eᵢ is an e-variable for H₀ based on the independent data value xᵢ, and there is a stopping time τ, e.g., stop when E₁E₂⋯Eₙ ≥ c. Also, put Mₙ = E₁E₂⋯Eₙ. Then, whenever H₀ is true, we have that

E(Mₙ | E₁, …, E_{n−1}) = M_{n−1} E(Eₙ) ≤ M_{n−1},

and so the process (Mₙ) is a discrete time super-martingale. Moreover, it is clear that Mₙ is an e-variable for H₀. This implies that, under conditions, E(M_τ) ≤ 1, and so the stopped variable M_τ is also an e-variable for H₀. Assuming the stopping time τ is finite with probability 1 when H₀ holds, then by Ville’s inequality, P(sup_n Mₙ ≥ c) ≤ 1/c. This implies that the problem for p-values when sampling until rejecting H₀ at size α is avoided when using e-values.

While e-values have many interesting and useful properties relative to p-values, the relevant question here is whether or not these serve as measures of statistical evidence. The usage of both requires the specification of a cut-off to determine when there are grounds for rejecting H₀. Sometimes, this is phrased instead as “evidence against H₀ has been found”, but, given the arbitrariness of the choice of cut-off and the failure to properly express when evidence in favor of H₀ has been found, neither seems suitable as an expression of statistical evidence. One could argue that the intention behind these approaches is not to characterize statistical evidence but rather to solve the accept/reject problems of decision theory. It is the case, however, at least for p-values, that these are used as if they were proper characterizations of evidence, and this does not seem suitable for purely scientific applications.
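To illustrate the optional stopping property discussed above, the following sketch (illustrative settings) multiplies independent e-variables into the process Mₙ and estimates the probability that the running product ever crosses c under H₀, which stays below 1/c as Ville's inequality guarantees.

```python
import numpy as np

rng = np.random.default_rng(3)
c, lam, max_n, reps = 20.0, 0.5, 200, 20_000

z = rng.normal(0.0, 1.0, (reps, max_n))                # i.i.d. N(0,1) data, H0 true
M = np.cumprod(np.exp(lam * z - lam**2 / 2), axis=1)   # running product of e-variables
print((M.max(axis=1) >= c).mean(), "vs bound", 1 / c)  # crossing probability <= 1/c
```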
Another issue that needs to be addressed in a problem is how to find the statistic T or the e-variable E. A common argument is to use likelihood ratios (see Section 3.2 and Section 3.3), but this does not resolve the problems that have been raised here, and there are difficulties with the concept of likelihood that need to be addressed as well.
3.2. Birnbaum on Statistical Evidence
Alan Birnbaum devoted considerable thought to the concept of statistical evidence in the frequentist context. Birnbaum’s paper [17] contained what seemed like a startling result about the implications to be drawn from the concept, and his final paper [18] contained a proposal for a definition of statistical evidence. Also, see [19] for a full list of Birnbaum’s publications, many of which contain considerable discussion concerning statistical evidence.
In [17], Birnbaum considered various relations, as defined by statistical principles, on the set ℐ of all inference bases, where an inference base I = (M, x) is the pair consisting of a sampling model M and data x supposedly generated from a distribution in the model. A statistical principle R is then a relation defined on ℐ, namely, R is a subset of ℐ × ℐ. There are three basic statistical principles that are commonly invoked as part of evidential inference, namely, the likelihood principle L, the sufficiency principle S, and the conditionality principle C. Inference bases I₁ = (M₁, x₁) and I₂ = (M₂, x₂) for θ satisfy (I₁, I₂) ∈ L if, for some constant c > 0, we have f_{1,θ}(x₁) = c f_{2,θ}(x₂) for every θ; they satisfy (I₁, I₂) ∈ S if the models have equivalent (1-1 functions of each other) minimal sufficient statistics (a sufficient statistic that is a function of every other sufficient statistic) that take equivalent values at the corresponding data values; and they satisfy (I₁, I₂) ∈ C if there is an ancillary statistic a (a statistic whose distribution is independent of θ) such that I₁ can be obtained from I₂ (or conversely) via conditioning on the observed value of a. The basic idea is that, if two inference bases are related by one of these principles, then they contain the same statistical evidence concerning the unknown true value of θ. For this idea to make sense, it must be the case that these principles form equivalence relations on ℐ. In [20], it is shown that L and S do form equivalence relations but C does not, and this latter result is connected with the fact that a unique maximal ancillary (an ancillary which every other ancillary is a function of) generally does not exist.
Birnbaum provided a proof in [17] of the result, known as Birnbaum’s Theorem, that if a statistician accepts the principles S and C, then they must accept L. This is highly paradoxical because frequentist statisticians generally accept both S and C but not L, as L does not permit repeated sampling: two very different sampling models can lead to proportional likelihood functions, so repeated sampling behavior is irrelevant under L. There has been considerable discussion over the years concerning the validity of this proof, but no specific flaw has been found. A resolution of this is provided in [20], where it is shown that C is not an equivalence relation and that what Birnbaum actually proved is that the smallest equivalence relation on ℐ that contains S ∪ C is L. This substantially weakens the result, as there is no reason to accept the additional generated equivalences, and in fact it makes more sense to consider the largest equivalence relation on ℐ that is contained in S ∪ C. Also, as discussed in [20], an argument similar to the one found in [17] establishes that a statistician who accepts C alone must accept L. Again, however, this only means that the smallest equivalence relation on ℐ that contains C is L. As shown in [21], issues concerning C can be resolved by restricting conditioning on ancillary statistics to stable ancillaries (those ancillaries such that conditioning on them retains the ancillarity of all other ancillaries and that similarly have their ancillarity retained when conditioning on any other ancillary), as there always is a maximal stable ancillary. This provides a conditionality principle that is a proper characterization of statistical evidence, but it does not lead to L.
While Birnbaum did not ultimately resolve the issues concerning statistical evidence, [18] made a suggestion as a possible starting point by proposing the confidence concept in the context of comparing two hypotheses H₁ versus H₂. The confidence concept is characterized by two error probabilities, α, the probability of rejecting H₁ when it is true, and β, the probability of accepting H₁ when H₂ is true, and then reporting the pair (α, β), with small values of both interpreted as strong evidence for H₂ as against H₁ when H₁ is rejected.

This clearly results from a confounding of the decision-theoretic (as in Neyman–Pearson) approach to hypothesis testing with the evidential approach, as rejection trials similarly do. Also, this does not give a general definition of what is meant by statistical evidence, as it really only applies in the simple versus simple hypothesis testing context, and it suffers from many of the same issues as discussed concerning p-values. From reading Birnbaum’s papers, it seems he had largely despaired of ever finding a fully satisfactory characterization of statistical evidence in the frequentist context. It will be shown in Section 3.4, however, that frequentist error probabilities do play a key role in a development of the concept.
3.3. Likelihood
Likelihood is another statistical concept initiated by Fisher; see [22]. While the likelihood function plays a key role in frequentism, a theory of inference, called pure likelihood theory, based solely on the likelihood function, is developed in [10,23]. The basic axiom is thus the likelihood principle L of Section 3.2, which says that the likelihood function L(θ | x) = c f_θ(x) from inference base I completely summarizes the evidence given in I concerning the true value of θ. As such, two inference bases with proportional likelihoods must give identical inferences, and so repeated sampling does not play a role. In particular, the ratio L(θ₁ | x)/L(θ₂ | x) provides the evidence for θ₁ relative to the evidence for θ₂, and this is independent of the arbitrary constant c.
While this argument for the ratios seems acceptable, a problem arises when we ask: does the value L(θ₁ | x)/L(θ₂ | x) provide evidence in favor of or against θ₁ being the true value? To avoid the arbitrary constant, Royall in [10] replaced the likelihood function by the relative likelihood function given by L(θ | x)/L(θ̂(x) | x), where θ̂(x) is the maximum likelihood estimate (MLE). The relative likelihood always takes values in [0, 1]. Note that, if the relative likelihood at θ₀ equals 1/r, then L(θ | x) ≤ r L(θ₀ | x) for all θ, and so no other value is supported by more than r times the support accorded to θ₀. Royall then argues, based on a simple urn experiment, for relative likelihood above 1/8 to represent very strong evidence in favor and for relative likelihood above 1/32 to represent quite strong evidence in favor of θ₀. Certainly, these values seem quite arbitrary and again, as with p-values, we do not have a clear cut-off between evidence in favor and evidence against. Whenever values like this are quoted, there is an implicit assumption that there is a universal scale on which evidence can be measured. Currently, there are no developments that support the existence of such a scale; see the discussion of the Bayes factor in Section 3.4. For estimation, it is natural to quote the MLE θ̂(x) and report a likelihood region such as {θ : L(θ | x)/L(θ̂(x) | x) ≥ 1/r} for some r as an assessment of the accuracy of θ̂(x).
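As an illustration of likelihood regions, the sketch below (hypothetical data summary) computes the relative likelihood for the location normal model of Example 1 and extracts the region where it exceeds 1/8.

```python
import numpy as np

xbar, sigma0, n = 0.8, 1.0, 25                 # hypothetical data summary; the MLE is xbar
grid = np.linspace(-1.0, 3.0, 4001)
rel_lik = np.exp(-n * (grid - xbar) ** 2 / (2 * sigma0**2))   # L(mu|x)/L(muhat|x)
region = grid[rel_lik >= 1 / 8]                # likelihood region {mu : RL >= 1/8}
print(region.min(), region.max())              # roughly xbar plus or minus 0.41 here
```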
Another serious objection to pure likelihood theory, and to the use of the likelihood function to determine inferences more generally, arises when nuisance parameters are present, namely, when we want to make inference about ψ = Ψ(θ), where Ψ is not 1-1. In general, there does not appear to be a way to define a likelihood function for the parameter of interest, based on the inference base I only, that is consistent with the motivation for using likelihood in the first place, namely, that the likelihood is proportional to the probability of the observed data as a function of the parameter. It is common in such contexts to use the profile likelihood

L_P(ψ | x) = sup_{θ : Ψ(θ) = ψ} L(θ | x)
as a likelihood function. There are many examples that show that a profile likelihood is not a likelihood, and so such usage is inconsistent with the basic idea underlying likelihood methods. An alternative to the profile likelihood is the integrated likelihood

L_I(ψ | x) = ∫_{Ψ⁻¹{ψ}} L(θ | x) Π(dθ | ψ),

where the Π(· | ψ) are probability measures on the pre-images Ψ⁻¹{ψ}; see [24]. The integrated likelihood is a likelihood with respect to the sampling model given by the densities m(· | ψ) = ∫_{Ψ⁻¹{ψ}} f_θ(·) Π(dθ | ψ), but this requires adding the Π(· | ψ) to the inference base.

Example 4 (location normal). Suppose we want to estimate ψ = |μ|, so the nuisance parameter is the sign of μ, with the pre-image of ψ given by {−ψ, ψ}. The likelihood function is given by L(μ | x) = exp{−n(x̄ − μ)²/(2σ₀²)}. Since (x̄ − μ)² is minimized over μ ∈ {−ψ, ψ} by μ = ψ when x̄ > 0 and by μ = −ψ when x̄ < 0, the profile likelihood for ψ is L_P(ψ | x) = exp{−n(|x̄| − ψ)²/(2σ₀²)}, and this depends on the data only through |x̄|. To see that L_P is not a likelihood function, observe that the density of |x̄| is proportional to

exp{−n(|x̄| − ψ)²/(2σ₀²)} + exp{−n(|x̄| + ψ)²/(2σ₀²)},    (1)

which is not proportional to L_P(ψ | x). For the integrated likelihood, we need to choose the Π(· | ψ), and taking Π({ψ} | ψ) = Π({−ψ} | ψ) = 1/2 gives

L_I(ψ | x) = (1/2)[exp{−n(x̄ − ψ)²/(2σ₀²)} + exp{−n(x̄ + ψ)²/(2σ₀²)}].    (2)

Note that (2) is not obtained by integrating (1), which is appropriate because |x̄| is not a minimal sufficient statistic. The profile and integrated likelihoods are to be used just as the original likelihood is used for inferences about ψ, even though the profile likelihood is not a likelihood. The profile likelihood leads immediately to the estimate |x̄| for ψ, while (2) needs to be maximized numerically. Although the functional forms look quite different, the profile and integrated likelihoods here give almost identical numerical results. Figure 1 is a plot of the two relative likelihoods for data generated from a normal distribution with true value ψ = 2, and the resulting estimates of ψ are effectively identical. There is some disagreement in the left tail, so some of the reported intervals will differ, but as n increases, such differences disappear. This agreement is also quite robust against the choice of Π(· | ψ).
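The near agreement of the two likelihoods in Example 4 can be checked numerically; the data summary below is hypothetical.

```python
import numpy as np

xbar, sigma0, n = 1.8, 1.0, 10           # hypothetical data summary, psi = |mu|
psi = np.linspace(0.0, 4.0, 401)

profile = np.exp(-n * (abs(xbar) - psi) ** 2 / (2 * sigma0**2))
integrated = 0.5 * (np.exp(-n * (xbar - psi) ** 2 / (2 * sigma0**2))
                    + np.exp(-n * (xbar + psi) ** 2 / (2 * sigma0**2)))
integrated /= integrated.max()           # rescale to a relative likelihood for comparison

print(psi[profile.argmax()], psi[integrated.argmax()])  # both estimates near |xbar|
print(np.abs(profile - integrated).max())                # small, mostly in the left tail
```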
Another issue arises with the profile likelihood, namely, the outcome differs in contexts which naturally should lead to equivalent results.
Example 5 (prediction with scale normal). Suppose x = (x₁, …, xₙ) is an independent and identically distributed sample from a distribution in the family {N(0, σ²) : σ² > 0}. Based on the observed data, the likelihood equals L(σ² | x) = (σ²)^{−n/2} exp{−nσ̂²/(2σ²)}, where σ̂² = n⁻¹ Σᵢ xᵢ², and the MLE of σ² is σ̂². Now suppose, however, that the interest is in predicting k future (or occurred but concealed) independent values y = (y₁, …, y_k) from the same distribution. Perhaps the logical predictive likelihood to use is

L(σ², y | x) = (σ²)^{−(n+k)/2} exp{−(nσ̂² + Σⱼ yⱼ²)/(2σ²)},

where now (σ², y) is the unknown. Profiling σ² out leads to the profile MLE of y equaling 0 for all x, as might be expected. Profiling y out of L(σ², y | x), however, leads to the profile likelihood (σ²)^{−(n+k)/2} exp{−nσ̂²/(2σ²)} for σ², and so the profile MLE of σ² equals nσ̂²/(n + k) rather than σ̂². When interest is in σ², an integrated likelihood can be formed by integrating y out using probability measures Π(· | σ²). When interest is in y, then integrating out σ², after placing a probability distribution on it, produces the MLE 0 for y, although the form of the integrated likelihood will depend on the particular distribution chosen.
The profile and integrated likelihoods certainly do not always lead to roughly equivalent results as in Example 4, particularly as the dimension of the nuisance parameters rises. While the profile likelihood has the advantage of not having to specify the measures Π(· | ψ), it suffers from a lack of a complete justification, at least in terms of the likelihood principle, and, as Example 5 demonstrates, it can produce unnatural results. In Section 3.4, it is shown that the integrated likelihood arises from a very natural principle characterizing evidence.
The discussion here has been mostly about pure likelihood theory, where repeated sampling does not play a role. The likelihood function is also an important aspect of frequentist inference. Such usage, however, does not lead to a resolution of the evidence problem, namely, providing a proper characterization of statistical evidence. In fact, frequentist likelihood methods use the p-value for H. There is no question that the likelihood contains the evidence in the data about θ, but questions remain as to how to characterize that evidence, whether in favor of or against a particular value, and also how to express the strength of the evidence. More discussion of the likelihood principle can be found in [25].
3.4. Bayes Factors and Bayesian Measures of Statistical Evidence
Bayesian inference adds another ingredient to the inference base, namely, the prior probability measure Π, with density π, on Θ, so now I = (M, Π, x). The prior represents beliefs about the true value of θ. Note that (M, Π) is equivalent to a joint distribution for (θ, x) with density π(θ) f_θ(x). Once the data x have been observed, a basic principle of inference is then invoked.
Principle of Conditional Probability: for probability model (Ω, ℱ, P), if C ∈ ℱ is observed to be true, where P(C) > 0, then the initial belief that A ∈ ℱ is true, as given by P(A), is replaced by the conditional probability P(A | C) = P(A ∩ C)/P(C).
So, we replace the prior π by the posterior π(θ | x) = π(θ) f_θ(x)/m(x), where m(x) = ∫_Θ π(θ) f_θ(x) dθ is the prior predictive density of the data, to represent beliefs about θ. While at times the posterior is taken to represent the evidence about θ, this confounds two distinct concepts, namely, beliefs and evidence. It is clear, however, that the evidence in the data is what has changed our beliefs, and it is this change that leads to the proper characterization of statistical evidence through the following principle.
Principle of Evidence: for probability model (Ω, ℱ, P), if C is observed to be true, where P(C) > 0, then there is evidence in favor of A being true if P(A | C) > P(A), evidence against A being true if P(A | C) < P(A), and no evidence either way if P(A | C) = P(A).
Therefore, in the Bayesian context, we compare the posterior π(θ | x) with the prior π(θ) to determine whether or not there is evidence one way or the other concerning θ being the true value. This seems immediate in the context where Π is a discrete probability measure. It is also relevant in the continuous case, where densities are defined via limits, as in π(θ) = lim_{ε→0} Π(N_ε(θ))/μ(N_ε(θ)), where N_ε(θ) is a sequence of neighborhoods of θ converging nicely to {θ} as ε → 0, and Π is absolutely continuous with respect to the support measure μ. This leads to the usual expressions for densities; see [26] (Appendix A) for details.
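In the discrete case, the principle of evidence can be applied directly by comparing posterior with prior probabilities. A small sketch with a made-up three-point parameter and binomial data:

```python
import numpy as np
from scipy.stats import binom

# made-up model: theta in {1, 2, 3}, success probability theta/4, observe 4 successes in 5 trials
thetas = np.array([1, 2, 3])
prior = np.array([1 / 3, 1 / 3, 1 / 3])
likelihood = binom.pmf(4, 5, thetas / 4)
posterior = prior * likelihood / np.sum(prior * likelihood)

for t, pr, po in zip(thetas, prior, posterior):
    verdict = "in favor" if po > pr else ("against" if po < pr else "no evidence")
    print(f"theta = {t}: evidence {verdict} (posterior {po:.3f} vs prior {pr:.3f})")
```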
The Bayesian formulation has a very satisfying consistency property. If interest is in the parameter ψ = Ψ(θ), then the nuisance parameters can be integrated out using the conditional priors Π(· | ψ), and the inference base is replaced by (M_ψ, Π_Ψ, x), where M_ψ is the model given by the densities m(· | ψ) of Section 3.3 and Π_Ψ is the marginal prior on ψ, with density π_Ψ. Applying the principle of evidence here means to compare the posterior density

π_Ψ(ψ | x) = π_Ψ(ψ) m(x | ψ)/m(x)    (3)

to the prior density π_Ψ(ψ). So, there is evidence in favor of ψ₀ being the true value whenever π_Ψ(ψ₀ | x) > π_Ψ(ψ₀), etc.

In many applications, we need more than the simple characterization of evidence that the principle of evidence gives us. A valid measure of evidence is then any real-valued function of (ψ, x) that satisfies the existence of a cut-off c such that the function taking a value greater than c corresponds to evidence in favor, etc. One very simple function satisfying this is the relative belief ratio
RB_Ψ(ψ | x) = π_Ψ(ψ | x)/π_Ψ(ψ) = m(x | ψ)/m(x),    (4)

where the second equality follows from (3); this ratio is sometimes referred to as the Savage–Dickey ratio; see [27]. Using RB_Ψ(· | x), the values of ψ are now totally ordered with respect to evidence, as when RB_Ψ(ψ₁ | x) > RB_Ψ(ψ₂ | x), there is more evidence in favor of ψ₁ than for ψ₂, etc.

This suggests an immediate answer to E, namely, record the relative belief estimate ψ(x) = arg sup_ψ RB_Ψ(ψ | x), as this value has the maximum evidence in its favor. Also, to assess the accuracy of ψ(x), record the plausible region Pl_Ψ(x) = {ψ : RB_Ψ(ψ | x) > 1}, the set of values with evidence in their favor, together with its posterior content Π_Ψ(Pl_Ψ(x) | x), as this measures the belief that the true value is in Pl_Ψ(x). Note that while ψ(x) depends on the choice of the relative belief ratio to measure evidence, the plausible region only depends on the principle of evidence. This suggests that any other valid measure of evidence can be used instead to determine an estimate, as it will lie in Pl_Ψ(x). A γ-credible region for ψ can also be quoted based on any valid measure of evidence, provided γ ≤ Π_Ψ(Pl_Ψ(x) | x), as, otherwise, the region would contain a value of ψ for which there is evidence against being the true value. For example, a relative belief γ-credible region takes the form C_Ψ,γ(x) = {ψ : RB_Ψ(ψ | x) ≥ c_γ(x)}, where c_γ(x) is the largest value of c such that Π_Ψ({ψ : RB_Ψ(ψ | x) ≥ c} | x) ≥ γ.
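A sketch of these estimation quantities over a discretized grid, with made-up prior and posterior probabilities (in practice these come from the model, prior, and data):

```python
import numpy as np
from scipy.stats import norm

grid = np.arange(-6.0, 6.0, 0.05)                             # discretized values of psi
prior = norm.pdf(grid, 0.0, 2.0);  prior /= prior.sum()       # made-up prior probabilities
post = norm.pdf(grid, 1.2, 0.3);   post /= post.sum()         # made-up posterior probabilities

rb = post / prior                                  # relative belief ratios
estimate = grid[rb.argmax()]                       # relative belief estimate
plausible = grid[rb > 1]                           # plausible region
content = post[rb > 1].sum()                       # its posterior content
print(estimate, (plausible.min(), plausible.max()), round(content, 3))
```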
For H, the value RB_Ψ(ψ₀ | x) tells us immediately whether there is evidence in favor of or against H₀ : Ψ(θ) = ψ₀. To measure the strength of the evidence concerning H₀, there is the posterior probability Π_Ψ({ψ₀} | x), as this measures our belief in what the evidence says. If RB_Ψ(ψ₀ | x) > 1 and Π_Ψ({ψ₀} | x) is large, then there is strong evidence that H₀ is true, while, when RB_Ψ(ψ₀ | x) < 1 and Π_Ψ({ψ₀} | x) is small, then there is strong evidence that H₀ is false. Often, however, Π_Ψ({ψ₀} | x) will be small, even 0 in the continuous case, so it makes more sense to measure the strength of the evidence in such a case by Π_Ψ(RB_Ψ(ψ | x) ≤ RB_Ψ(ψ₀ | x) | x), as, when RB_Ψ(ψ₀ | x) > 1 and this probability is large, there is small belief that the true value of ψ has more evidence in its favor than ψ₀, etc.
Recalling the discussion in Section 3.1 about the difference δ that matters, it is relatively easy to take this into account in this context, at least when ψ is real valued. For this, we consider a grid of values ψᵢ separated by δ and then conduct the inferences using the relative belief ratios of the intervals [ψᵢ − δ/2, ψᵢ + δ/2). In effect, ψ is now discrete.
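Such a discretized relative belief analysis is easy to carry out. The sketch below does this for ψ = |μ| in the location normal model with a normal prior on μ; all numerical settings are hypothetical.

```python
import numpy as np
from scipy.stats import norm

n, sigma0, xbar, tau0, delta = 10, 1.0, 1.8, 2.0, 0.1   # hypothetical; prior mu ~ N(0, tau0^2)
post_var = 1 / (n / sigma0**2 + 1 / tau0**2)
post_mean = post_var * n * xbar / sigma0**2

edges = np.arange(0.0, 6.0 + delta, delta)              # psi-intervals of length delta
lo, hi = edges[:-1], edges[1:]

def interval_prob(mean, sd):
    # P(psi in [lo, hi)) when psi = |mu| and mu ~ N(mean, sd^2)
    return (norm.cdf(hi, mean, sd) - norm.cdf(lo, mean, sd)
            + norm.cdf(-lo, mean, sd) - norm.cdf(-hi, mean, sd))

rb = interval_prob(post_mean, np.sqrt(post_var)) / interval_prob(0.0, tau0)
print(lo[rb.argmax()], lo[rb > 1].min(), lo[rb > 1].max())  # RB estimate and plausible interval
```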
Example 6 (location normal). Suppose that we add the prior μ ~ N(μ₀, τ₀²) to form a Bayesian inference base. The posterior distribution of μ is then

N((n/σ₀² + 1/τ₀²)⁻¹(n x̄/σ₀² + μ₀/τ₀²), (n/σ₀² + 1/τ₀²)⁻¹).
Suppose our interest is inference about ψ = |μ|. Figure 2 is a plot of the relative belief ratio of ψ based on the data in Example 4 (generated from a normal distribution with true value ψ = 2) and using a prior with μ₀ = 0. From (4), it is seen that maximizing RB_Ψ(ψ | x) is equivalent to maximizing the integrated likelihood, and in this case, the prior is such that μ = ±ψ each holds with conditional prior probability 0.5, so indeed the relative belief estimate agrees with the maximizer of the integrated likelihood of Example 4. To assess the accuracy of the estimate, we record the plausible interval together with its posterior content. There is evidence in favor of ψ = 2, since RB_Ψ(2 | x) > 1, but with only moderate strength, so there is only moderate evidence in favor of 2 being the true value of ψ. Of course, these results improve with sample size n: with more data, the plausible interval shortens considerably and its posterior content increases substantially, but the strength of the evidence in favor of ψ = 2 need not increase by much. It is important to note that, because we are employing a discretization with a fixed δ, and since the posterior is inevitably virtually completely concentrated in an interval of length δ about the true value as n → ∞, the strength converges to 1 at any of the values in this interval and to 0 at values outside of it.

There are a number of optimality properties for the relative belief inferences, as discussed in [26]. While this is perhaps the first attempt to build a theory of inference based on the principle of evidence, there is considerable literature on this idea in the philosophy of science, where it underlies what is known as confirmation theory; see [28]. For example, the philosopher Popper in [29] (Appendix ix) writes
If we are asked to give a criterion of the fact that the evidence y supports or corroborates a statement x, the most obvious reply is: that y increases the probability of x.
One issue with philosophers’ discussions is that these are never cast in a statistical context. Many of the anomalies raised in those discussions, such as Hempel’s Raven Paradox, can be resolved when formulated as statistical problems. Also, the relative belief ratio itself has appeared elsewhere, although under different names, as a natural measure of evidence.
There are indeed other valid measures of evidence besides the relative belief ratio, for example, the Bayes factor, which originated with Jeffreys, as in [30]; see [6,31] for discussion. For probability model (Ω, ℱ, P), with 0 < P(A) < 1 and P(C) > 0, the Bayes factor in favor of A being true, after observing that C is true, is given by

BF(A | C) = [P(A | C)/(1 − P(A | C))] / [P(A)/(1 − P(A))],
the ratio of the posterior odds in favor of A to the prior odds in favor of A. It is immediate that BF(A | C) is a valid measure of evidence, because BF(A | C) > 1 if and only if P(A | C) > P(A). Also, BF(A | C) = RB(A | C)/RB(Aᶜ | C), and so, because RB(A | C) > 1 if and only if RB(Aᶜ | C) < 1, this implies BF(A | C) ≥ RB(A | C) > 1 when there is evidence in favor and BF(A | C) ≤ RB(A | C) < 1 when there is evidence against. These inequalities are important because it is sometimes asserted that the Bayes factor is not only a measure of evidence but also that its value is a measure of the strength of that evidence. Table 1 gives a scale, due to Jeffreys (see [32] (Appendix B)), which supposedly can be used to assess the strength of evidence as given by the Bayes factor. Again, this is an attempt to establish a universal scale on which evidence can be measured, and currently no grounds exist for claiming that such a scale exists. If we consider that the strength of the evidence is to be measured by how strongly we believe what the evidence says, then simple examples can be constructed to show that such a scale is inappropriate. Effectively, the strength of the evidence is context dependent and needs to be calibrated.

Example 7. Suppose Ω is a finite set with #(Ω) = N. A value ω is generated uniformly from Ω and partially concealed, but it is noted that ω ∈ C, where #(C) = n < N. It is desired to know if ω ∈ A, where A ⊂ C and #(A) = 1. Then, we have that RB(A | C) = P(A | C)/P(A) = N/n, and so there is evidence in favor of A since N/n > 1. The posterior belief in what the evidence is saying is, however, P(A | C) = 1/n, which can be very small, say when n = 10³. But, if also N = 10⁶, then BF(A | C) = (N − 1)/(n − 1) ≈ 10³, which is well into the range where Jeffreys’ scale says it is decisive evidence in favor of A. Clearly, there is only very weak evidence in favor of A being true here. Also, note that the posterior probability 1/n by itself does not indicate evidence in favor, although the observation that C is true must be evidence in favor of A, because A ⊂ C implies P(A | C) = P(A)/P(C) > P(A). The example can obviously be modified to show that any such scale is not appropriate.
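A worked numerical check of this example, with the illustrative choices N = 10⁶ and n = 10³:

```python
# Worked check: Omega has N points, C has n points, A is a single point inside C.
N, n = 10**6, 10**3
P_A, P_A_given_C = 1 / N, 1 / n
rb = P_A_given_C / P_A                                      # relative belief ratio = N/n
bf = (P_A_given_C / (1 - P_A_given_C)) / (P_A / (1 - P_A))  # Bayes factor = (N-1)/(n-1)
print(rb, round(bf, 1), P_A_given_C)  # BF ~ 1000, "decisive" on Jeffreys' scale, belief only 0.001
```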
There is another issue with the current usage of the Bayes factor that needs to be addressed. This arises when interest is in assessing the evidence for or against H₀ : Ψ(θ) = ψ₀ when Π_Ψ({ψ₀}) = 0. Clearly, the Bayes factor is not defined in such a context. An apparent solution is provided by choosing a prior of the form p δ_{ψ₀} + (1 − p) Π_Ψ, where p is a prior probability assigned to ψ₀, δ_{ψ₀} is the point mass at ψ₀, and Π_Ψ is a prior probability measure on the set {ψ : ψ ≠ ψ₀}. Since the prior probability of ψ₀ is now p > 0, the Bayes factor is defined. Of some concern now is how Π_Ψ should be chosen. A natural choice is to take Π_Ψ to be the original prior, as otherwise there is a contradiction between the beliefs expressed by the mixture prior and by the original prior. It follows very simply, however, that in the continuous case, when this choice is made, the Bayes factor equals the relative belief ratio RB_Ψ(ψ₀ | x) computed from the original prior. Moreover, if instead of using such a mixture prior, we define the Bayes factor via

BF(ψ₀ | x) = lim_{ε→0} BF(N_ε(ψ₀) | x),

where N_ε(ψ₀) is a sequence of neighborhoods of ψ₀ converging nicely to {ψ₀}, then, under weak conditions (π_Ψ is positive and continuous at ψ₀), BF(ψ₀ | x) = RB_Ψ(ψ₀ | x). So, there is no need to introduce the mixture prior to get a valid measure of evidence. Furthermore, the Bayes factor is then available for E as well as H, as surely any proper characterization of statistical evidence must be, and the relevant posterior for ψ is obtained from Π_Ψ rather than from the mixture.

Other Bayesian measures of evidence have been proposed. For example, it is proposed in [33] to measure the evidence concerning H₀ : Ψ(θ) = ψ₀ by computing the set T(x) = {ψ : π_Ψ(ψ | x) > π_Ψ(ψ₀ | x)} and then using the posterior tail probability Π_Ψ(T(x) | x). If this probability is large, this is evidence against ψ₀, while, if it is small, it is evidence in favor of ψ₀. Note that this quantity is sometimes referred to as an e-value, but this is different from the e-values discussed in Section 3.1. Further discussion and development of this concept can be found in [34]. Clearly, this is building on the idea that underlies the p-value, namely, providing a measure that locates a point in a distribution and using this to assess evidence. It does not, however, conform to the principle of evidence.
Bias and Error Probabilities
One criticism that is made of Bayesian inference is that there are no measures of reliability of the inferences, which is an inherent part of frequentism. It is natural to add such measures, however, to assess whether or not the specified model and prior could potentially lead to misleading inferences. For example, suppose it could be shown that evidence in favor of H₀ would be obtained with prior probability near 1, for a data set of a given size. It seems obvious that, if we did collect this amount of data and obtained evidence in favor of H₀, then this fact would undermine our confidence in the reliability of the reported inference.
Example 8 (Jeffreys–Lindley paradox). Suppose we have the location normal model and the value of x̄ is such that the p-value 2(1 − Φ(√n|x̄ − μ₀|/σ₀)) is very small, so there would appear to be definitive evidence that H₀ : μ = μ₀ is false. Suppose the prior μ ~ N(μ₀, τ₀²) is used, and the analyst chooses τ₀² to be very large to reflect the fact that little is known about the true value of μ. It can be shown (see [26]), however, that RB(μ₀ | x) → ∞ as τ₀² → ∞. Therefore, for a very diffuse prior, evidence in favor of H₀ will be obtained, and so the frequentist and the Bayesian will disagree. Note that the Bayes factor equals the relative belief ratio in this context. A partial resolution of this contradiction is obtained by noting that the strength of this evidence, Π(RB(μ | x) ≤ RB(μ₀ | x) | x), remains small as τ₀² → ∞, and so the Bayesian measure of evidence is only providing very weak evidence in favor when the p-value is small. This anomaly occurs even when the true value of μ is indeed far away from μ₀, and so the fault here does not lie with the p-value.
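The growth of the relative belief ratio (equivalently, the Bayes factor here) with the prior variance is easy to see numerically; the sketch below uses hypothetical data with a small p-value.

```python
import numpy as np
from scipy.stats import norm

n, sigma0, mu0, xbar = 100, 1.0, 0.0, 0.25    # hypothetical data: z = 2.5, p-value about 0.012
for tau2 in [1.0, 1e2, 1e4, 1e6]:
    post_var = 1 / (n / sigma0**2 + 1 / tau2)
    post_mean = post_var * (n * xbar / sigma0**2 + mu0 / tau2)
    rb = norm.pdf(mu0, post_mean, np.sqrt(post_var)) / norm.pdf(mu0, mu0, np.sqrt(tau2))
    print(tau2, rb)                           # RB at mu0 grows without bound as tau2 increases
```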
Prior error probabilities associated with Bayesian measures of evidence can, however, be computed, and these lead to a general resolution of the apparent paradox. There are two error probabilities for H, which we refer to as the bias against H₀ and the bias in favor of H₀, as given by the following two prior probabilities:

bias against H₀ = M(RB_Ψ(ψ₀ | x) ≤ 1 | ψ₀),
bias in favor of H₀ = M(RB_Ψ(ψ₀ | x) ≥ 1 | ψ∗) for ψ∗ with d(ψ∗, ψ₀) ≥ δ,

where M(· | ψ) denotes the conditional prior predictive distribution of the data given Ψ(θ) = ψ.
Both of these biases are independent of the valid measure of evidence used, as they only depend on the principle of evidence as applied to the model and prior chosen. The bias against is the prior probability of not getting evidence in favor of ψ₀ when it is true, and it plays a role similar to that of the type I error. The bias in favor is the prior probability of not obtaining evidence against ψ₀ when it is meaningfully false, and it plays a role similar to that of the type II error. Here, δ is the deviation from ψ₀ which is of scientific interest, as determined by some measure of distance d on Ψ(Θ). As discussed in [26], these biases cannot be controlled by, for example, the choice of prior, as a prior that reduces the bias against causes the bias in favor to increase and vice versa. The proper control of these quantities is through the sample size, as both converge to 0 as n → ∞.
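Both biases can be estimated by straightforward simulation from the conditional prior predictives. A sketch for the location normal problem (hypothetical settings, with RB(μ₀ | x) computed as in the previous sketch):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n, sigma0, mu0, tau2, delta, reps = 10, 1.0, 0.0, 1.0, 0.5, 100_000

def rb_at_mu0(xbar):                           # RB(mu0|x) for the prior mu ~ N(mu0, tau2)
    post_var = 1 / (n / sigma0**2 + 1 / tau2)
    post_mean = post_var * (n * xbar / sigma0**2 + mu0 / tau2)
    return norm.pdf(mu0, post_mean, np.sqrt(post_var)) / norm.pdf(mu0, mu0, np.sqrt(tau2))

xbar_true = rng.normal(mu0, sigma0 / np.sqrt(n), reps)            # data generated with H0 true
xbar_false = rng.normal(mu0 + delta, sigma0 / np.sqrt(n), reps)   # data from a meaningfully false value
print("bias against:", (rb_at_mu0(xbar_true) <= 1).mean())
print("bias in favor:", (rb_at_mu0(xbar_false) >= 1).mean())      # both shrink as n grows
```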
Example 9 (Jeffreys–Lindley paradox, continued). In this case, the bias against H₀ converges to 0 and the bias in favor of H₀ converges to 1 as τ₀² → ∞. This leads to an apparent resolution of the issue: do not choose the prior to be arbitrarily diffuse to reflect noninformativeness; rather, choose a prior that is sufficiently diffuse to cover the interval where it is known μ must lie, e.g., the interval where the measurements lie, and then choose n so that both biases are suitably small. While the Jeffreys–Lindley paradox arises because diffuse priors induce bias in favor, an overly concentrated prior induces bias against, but again this bias can be controlled via the amount of data collected.
There are also two biases associated with E, obtained by averaging the biases for H using the prior Π_Ψ. These can also be expressed in terms of coverage probabilities of the plausible region Pl_Ψ(x). The biases for E are given by

∫ M(RB_Ψ(ψ | x) ≤ 1 | ψ) Π_Ψ(dψ) and ∫ M(RB_Ψ(ψ∗ | x) ≥ 1 | ψ) Π_Ψ(dψ), with d(ψ∗, ψ) ≥ δ,

where Im_Ψ(x) = {ψ : RB_Ψ(ψ | x) < 1} is the implausible region, namely, the set of values for which evidence against has been obtained. So, the first quantity is the prior probability that the true value is not in the plausible region, and 1 minus this is the prior coverage probability (Bayesian confidence) of Pl_Ψ(x). It is of interest that this Bayesian confidence is an average of the frequentist coverage probabilities M(ψ ∈ Pl_Ψ(x) | ψ), and so there is typically a value of ψ at which it gives a lower bound on the frequentist confidence of Pl_Ψ(x) with respect to the model obtained from the original model by integrating out the nuisance parameters. In both cases, the Bayesian confidences are average frequentist confidences with respect to the original model. The second quantity, the bias in favor for E, is the prior probability that a meaningfully false value is not in the implausible region. Again, both of these biases do not depend on the valid measure of statistical evidence used, and both converge to 0 with increasing sample size.

With the addition of the biases, a link is established between Bayesianism and frequentism: inferences are Bayesian, but the reliability of the inferences is assessed via frequentist criteria. More discussion on bias in this context can be found in [35].
4. Conclusions
This paper is a review of various conceptions of statistical evidence, as discussed in the literature. It is seen that precisely characterizing evidence is still an unsolved problem in purely frequentist inference and in pure likelihood theory, but the Bayesian framework provides a natural way to do this through the principle of evidence. One often hears complaints about the need to choose a prior, but this can be carried out properly through elicitation; see [36]. Furthermore, priors are falsifiable via checking for prior-data conflict; see [26]. Finally, the effects of priors can be mitigated by the control of the biases, which are directly related to the principle of evidence.
The code used for the computations is available from the author.
Many thanks to Qiaoyu Liang for a number of helpful comments.
The author’s research is concerned with the need to be precise about the concept of statistical evidence and with the construction of a theory of statistical inference based on this. Parts of this theory are described in [26].
Figure 1. Relative likelihoods for ψ computed from the profile and integrated likelihoods for the data generated from a normal distribution in Example 4.
Figure 2. Relative belief ratio for ψ in Example 6.
Table 1. Jeffreys’ Bayes factor scale for measuring the strength of the evidence in favor (the strength of evidence against is measured by the reciprocals).
Bayes Factor | Strength
---|---
1 to 10^(1/2) | Barely worth mentioning
10^(1/2) to 10 | Substantial
10 to 10^(3/2) | Strong
10^(3/2) to 100 | Very Strong
> 100 | Decisive
References
1. Ionides, E.L.; Giessing, A.; Ritov, Y.; Page, S.E. Response to the ASA’s Statement on p-Values: Context, Process, and Purpose. Am. Stat.; 2017; 71, pp. 88-89. [DOI: https://dx.doi.org/10.1080/00031305.2016.1234977]
2. Wasserstein, R.L.; Lazar, N.A. The ASA Statement on p-Values: Context, process, and purpose. Am. Stat.; 2016; 70, pp. 129-133. [DOI: https://dx.doi.org/10.1080/00031305.2016.1154108]
3. Lehmann, E.L. Neyman’s statistical philosophy. Selected Works of E. L. Lehmann; Springer: Boston, MA, USA, 1995; pp. 1067-1073.
4. Neyman, J. “Inductive Behavior” as a basic concept of philosophy of science. Rev. Int. Stat. Inst.; 1957; 25, pp. 7-22. [DOI: https://dx.doi.org/10.2307/1401671]
5. Neyman, J. Foundations of behavioristic statistics. Foundations of Statistical Inference, a Symposium; Godambe, V.P.; Sprott, D.A. Holt, Rinehart and Winston of Canada: Toronto, ON, Canada, 1971; pp. 1-13.
6. Berger, J.O. Statistical Decision Theory and Bayesian Analysis; 2nd ed. Springer: Berlin/Heidelberg, Germany, 2006.
7. Fisher, R.A. Statistical Methods for Research Workers; 14th ed. Hafner Press: Royal Oak, MI, USA, 1925.
8. Stigler, S.M. The History of Statistics; Belknap Press: Cambridge, MA, USA, 1986.
9. Stigler, S.M. The Seven Pillars of Statistical Wisdom; Harvard University Press: Cambridge, MA, USA, 2016.
10. Royall, R. Statistical Evidence: A likelihood paradigm; Chapman & Hall: London, UK, 1997.
11. Ioannidis, J.P.A. The proposal to lower p-value thresholds to .005. J. Am. Med. Assoc.; 2018; 319, pp. 1429-1430. [DOI: https://dx.doi.org/10.1001/jama.2018.1536] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29566133]
12. Boring, E.G. Mathematical vs. scientific significance. Psychol. Bull.; 1919; 16, pp. 335-338. [DOI: https://dx.doi.org/10.1037/h0074554]
13. Cornfield, J. Sequential trials, sequential analysis and the likelihood principle. Am. Stat.; 1966; 20, pp. 18-23. [DOI: https://dx.doi.org/10.1080/00031305.1966.10479786]
14. Grunwald, P.; de Heide, R.; Koolen, W.M. Safe testing. J. R. Stat. Soc. Ser. B; 2024; in press [DOI: https://dx.doi.org/10.1093/jrsssb/qkae011]
15. Shafer, G.; Shafer, A.; Vereshchagin, N.; Vovk, V. Test martingales, Bayes factors and p-values. Stat. Sci.; 2011; 26, pp. 84-101.
16. Vovk, V.; Wang, R. Confidence and discoveries with e-values. Stat. Sci.; 2023; 38, pp. 329-354. [DOI: https://dx.doi.org/10.1214/22-STS874]
17. Birnbaum, A. On the foundations of statistical inference (with discussion). J. Am. Stat. Assoc.; 1962; 57, pp. 269-326. [DOI: https://dx.doi.org/10.1080/01621459.1962.10480660]
18. Birnbaum, A. The Neyman-Pearson theory as decision theory, and as inference theory; with criticism of the Lindley-Savage argument for Bayesian theory. Synthese; 1977; 36, pp. 19-49. [DOI: https://dx.doi.org/10.1007/BF00485690]
19. Giere, R.N. Publications by Allan Birnbaum. Synthese; 1977; 36, pp. 15-17. [DOI: https://dx.doi.org/10.1007/BF00485689]
20. Evans, M. What does the proof of Birnbaum’s theorem prove?. Electron. J. Stat.; 2013; 7, pp. 2645-2655. [DOI: https://dx.doi.org/10.1214/13-EJS857]
21. Evans, M.; Frangakis, C. On resolving problems with conditionality and its implications for characterizing statistical evidence. Sankhya A; 2023; 85, pp. 1103-1126. [DOI: https://dx.doi.org/10.1007/s13171-022-00295-2]
22. Fisher, R.A. On the mathematical foundations of theoretical statistics. Philos. Trans. R. Soc. London. Ser. A; 1922; 222, pp. 309-368.
23. Edwards, A.W.F. Likelihood; 2nd ed. Johns Hopkins University Press: Baltimore, MD, USA, 1992.
24. Berger, J.O.; Liseo, B.; Wolpert, R.L. Integrated likelihood methods for eliminating nuisance parameters. Stat. Sci.; 1999; 14, pp. 1-22. [DOI: https://dx.doi.org/10.1214/ss/1009211804]
25. Berger, J.O.; Wolpert, R.L. The Likelihood Principle: A Review, Generalizations, and Statistical Implications. IMS Lecture Notes Monogr. Ser.; 1988; 6, 208. [DOI: https://dx.doi.org/10.1214/lnms/1215466210]
26. Evans, M. Measuring Statistical Evidence Using Relative Belief. CRC Press: Boca Raton, FL, USA, 2015.
27. Dickey, J. The weighted likelihood ratio, linear hypotheses on normal location parameters. Ann. Stat.; 1971; 42, pp. 204-223. [DOI: https://dx.doi.org/10.1214/aoms/1177693507]
28. Salmon, W. Confirmation. Sci. Am.; 1973; 228, pp. 75-81. [DOI: https://dx.doi.org/10.1038/scientificamerican0573-75]
29. Popper, K. The Logic of Scientific Discovery. Harper Torchbooks Routledge: New York, NY, USA, 1968.
30. Jeffreys, H. Some tests of significance, treated by the theory of probability. Math. Proc. Camb. Philos.; 1935; 31, pp. 203-222. [DOI: https://dx.doi.org/10.1017/S030500410001330X]
31. Kass, R.E.; Raftery, A.E. Bayes factors. J. Am. Stat. Assoc.; 1995; 90, pp. 773-795. [DOI: https://dx.doi.org/10.1080/01621459.1995.10476572]
32. Jeffreys, H. Theory of Probability; 3rd ed. Oxford University Press BPS: Oxford, UK, 1961.
33. De Bragança Pereira, C.A.; Stern, J.M. Evidence and credibility: Full Bayesian significance test for precise hypotheses. Entropy; 1999; 1, pp. 99-110. [DOI: https://dx.doi.org/10.3390/e1040099]
34. Stern, J.M.; Pereira, C.A.D.B.; Lauretto, M.D.S.; Esteves, L.G.; Izbicki, R.; Stern, R.B.; Diniz, M. The e-value and the full Bayesian significance test: Logical properties and philosophical consequences. São Paulo J. Math. Sci.; 2022; 16, pp. 566-584.
35. Evans, M.; Guo, Y. Measuring and controlling bias for some Bayesian inferences and the relation to frequentist criteria. Entropy; 2021; 23, 190. [DOI: https://dx.doi.org/10.3390/e23020190] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33557320]
36. O’Hagan, A.; Buck, C.E.; Daneshkhah, A.; Eiser, J.R.; Garthwaite, P.H.; Jenkinson, D.J.; Oakley, J.E.; Rakow, T. Uncertain Judgements: Eliciting Expert Probabilities; Wiley: Hoboken, NJ, USA, 2006.
Abstract
One can argue that one of the main roles of the subject of statistics is to characterize what the evidence in the collected data says about questions of scientific interest. There are two broad questions that we will refer to as the estimation question and the hypothesis assessment question. For estimation, the evidence in the data should determine a particular value of an object of interest together with a measure of the accuracy of the estimate, while for the hypothesis assessment, the evidence in the data should provide evidence in favor of or against some hypothesized value of the object of interest together with a measure of the strength of the evidence. This will be referred to as the evidential approach to statistical reasoning, which can be contrasted with the behavioristic or decision-theoretic approach where the notion of loss is introduced, and the goal is to minimize expected losses. While the two approaches often lead to similar outcomes, this is not always the case, and it is commonly argued that the evidential approach is more suited to scientific applications. This paper traces the history of the evidential approach and summarizes current developments.