About the Authors:
Ana C. Guedes
Roles: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Software, Supervision, Validation, Visualization, Writing – original draft
Affiliation: Departamento de Estatística, Universidade Federal de Pernambuco, Recife, PE, Brazil
Francisco Cribari-Neto
Roles: Conceptualization, Data curation, Formal analysis, Investigation, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – review & editing
Affiliation: Departamento de Estatística, Universidade Federal de Pernambuco, Recife, PE, Brazil
Patrícia L. Espinheira
Roles: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Software, Supervision, Validation, Visualization, Writing – review & editing
* E-mail: [email protected]
Affiliation: Departamento de Estatística, Universidade Federal de Pernambuco, Recife, PE, Brazil
ORCID: https://orcid.org/0000-0002-9150-8330
Introduction
Regression models are useful for gaining knowledge of how different variables (known as regressors, covariates or independent variables) impact the mean behavior of a variable of interest (known as the dependent variable or response). The beta regression model is the most commonly used model for responses that are doubly bounded, in particular for responses that assume values in the standard unit interval, (0, 1). It was introduced by [1], who used an alternative parameterization of the beta density, which is indexed by mean (μ) and precision (ϕ) parameters. Let Y be a beta-distributed random variable. Its density is

f(y; μ, ϕ) = [Γ(ϕ)/(Γ(μϕ)Γ((1 − μ)ϕ))] y^(μϕ−1) (1 − y)^((1−μ)ϕ−1), 0 < y < 1, (1)

where 0 < μ < 1, ϕ > 0 and Γ(⋅) is the gamma function. Such a law is quite flexible in the sense that the density in (1) can assume different shapes depending on the parameter values. It was used by [1] as the underlying foundation for a regression model in which y1, …, yn are independent random variables such that yi is beta-distributed with mean μi (i.e. IE(yi) = μi) and precision parameter ϕ, for i = 1, …, n. They showed that the variance of yi is μi(1 − μi)/(1 + ϕ) which, for a given μi, is decreasing in ϕ. The model is thus heteroskedastic since the variance of yi changes with μi. The response means are modeled using a set of covariates and ϕ is assumed constant across observations. This model became known as the fixed precision beta regression model.
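As an aside for readers who wish to experiment with this parameterization, here is a minimal R sketch (the parameter values are illustrative): if Y is beta-distributed with mean μ and precision ϕ, then in R's shape parameterization shape1 = μϕ and shape2 = (1 − μ)ϕ.

```r
# Beta(mu, phi) in the mean-precision parameterization corresponds to
# shape1 = mu * phi and shape2 = (1 - mu) * phi in R's dbeta/rbeta.
mu  <- 0.3; phi <- 5
set.seed(123)
y <- rbeta(1e5, shape1 = mu * phi, shape2 = (1 - mu) * phi)
mean(y)                      # close to mu
var(y)                       # close to mu * (1 - mu) / (1 + phi)
mu * (1 - mu) / (1 + phi)    # = 0.035
```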
A more general beta regression formulation was considered by [2] and formally introduced by [3] who allowed the precision parameter to vary across observations, i.e. yi is beta-distributed with mean μi and precision ϕi, i = 1, …, n. More flexibility can be achieved in some situations by allowing the precision parameter to be impacted by some covariate values. In such a more general formulation, the variance of yi is no longer restricted to be a multiple of μi(1 − μi). The model includes two separate regression submodels, one for the mean and another for the precision, and became known as the variable precision beta regression model. The fixed precision beta regression model is a particular case of the variable precision counterpart; it is obtained by setting ϕ1 = ⋯ = ϕn = ϕ.
Fixed and varying precision beta regression modeling has been used in many different fields. A beta regression analysis of the effects of sexual maturity on space use in Atlantic salmon (Salmo salar) parr can be found in [4]. In [5], the beta regression model is used to segment and describe the container shipping market by analyzing the relationships between service attributes and the likelihood of customer retention in the container shipping industry. Some applications of beta regression modeling in ecology can be found in [6]. In [7], a statistical downscaling model based on beta regression is developed that allows the precipitation state in a river basin to be calculated. The beta regression model is used by [8] to model global solar radiation. For a beta regression analysis of ischemic stroke volume, see [9].
In both variants of the beta regression model (fixed and variable precision), parameter estimation is carried out by maximum likelihood. It is common practice to perform testing inferences via likelihood ratio and z-tests. The latter are Wald-type tests and are typically less accurate than the former; see [10]. Point estimation and testing inferences are usually accurate when the sample size (n) is large. In some applications, nonetheless, the number of data points is small and it is recommended to make use of inferential tools that are expected to yield reliable inferences in small samples. For instance, [11] obtained modified parameter estimates that display smaller biases in fixed and variable precision linear beta regression models.
The likelihood ratio test, which is commonly used in beta regression empirical analyses, employs an asymptotic approximation: the critical values used in the test are obtained from the test statistic’s asymptotic null distribution, which is known to be χ² with l degrees of freedom, where l is the number of restrictions under evaluation. An asymptotic approximation is used because the test statistic’s exact null distribution is unknown. In large samples, the test typically delivers accurate inferences since there is good control of the type I error frequency. In contrast, when the number of data points is small, size distortions can be large. In particular, the test tends to be liberal (oversized): the effective null rejection rates tend to be considerably larger than the selected significance level. When the sample size is quite small, the test’s effective null rejection rate can be much larger than the nominal significance level, as shown by the numerical evidence we report. A Bartlett correction to the likelihood ratio test was derived by [12]. A major shortcoming of their result, however, is that it only holds for the fixed precision beta regression model. In this paper, we overcome such a shortcoming by deriving the Bartlett correction for varying precision beta regressions, which are more commonly used by practitioners. The derivation of the correction becomes more challenging in this more general setting because the parameters that index the two submodels are not orthogonal, in the sense that Fisher’s information matrix is not block diagonal, and that renders the derivation of the quantities involved in the Bartlett correction lengthier and more complex. We consider three Bartlett-corrected test statistics. It is noteworthy that the size distortions of such tests vanish faster than those of the standard likelihood ratio test as the sample size increases, and thus the new tests are expected to outperform the likelihood ratio test in small samples. In particular, the likelihood ratio test’s size distortions are O(n−1) whereas those of the Bartlett-corrected tests are O(n−2).
To motivate our analysis, consider the following important issue in behavioral biometrics: the impact of average intelligence on the prevalence of religious disbelievers. Suppose there is interest in measuring such a net impact using data on n nations. The variable of interest (response) is the proportion of atheists in each country and the covariates include average intelligence and other control variables. [13] carried out varying precision beta regression analyses and produced estimates of such an impact under different scenarios, each corresponding to a particular choice of countries. We consider the scenario that uses data on 50 countries. We show that by using corrected likelihood ratio tests we arrive at a varying precision beta regression model different from that used by the authors. It is noteworthy that our model yields a better fit than theirs. We also note that, in low income nations, the maximal estimated impact of intelligence on religious disbelief obtained from our model is considerably larger than that computed from the model in [13]. Our results also reveal that, as countries become more developed, the maximal impact of intelligence on the prevalence of atheists weakens and the impact becomes, in the plausible range of average intelligence values, more symmetric. To the best of our knowledge, this is the first analysis of how the maximal impact of average intelligence on the prevalence of atheists is affected by economic development. This illustrates the importance of using tests with good small sample performance when performing beta regression analyses with samples of small to moderate sizes.
The remainder of the paper is structured as follows. In the first section that follows this introduction, we present the variable precision beta regression model. In the second section, we derive the Bartlett correction to the likelihood ratio test in varying precision beta regressions and use it in three modified test statistics. Our main contribution is that we obtain closed-form expressions for the quantities that allow improved testing inferences to be carried out in varying precision beta regressions. Additionally, we briefly review an alternative small sample correction that is already available in the literature. Unlike the correction we derive, however, it does not yield an improvement in the rate at which size distortions vanish: the size distortions of our corrected tests vanish at rate O(n−2) whereas those of the alternative tests we consider do so at rate O(n−1). Monte Carlo simulation evidence is presented in the third section. An empirical application that addresses an important issue in behavioral biometrics is presented and discussed in the fourth section. The fifth section contains some concluding remarks. Technical details related to the derivation of the quantities involved in the Bartlett correction are presented in the Appendix.
The beta regression model
Let y = (y1, …, yn)⊤ be a vector of independent random variables such that yi follows the beta distribution with mean μi and precision ϕi, i = 1, …, n. Such parameters are modeled as

g1(μi) = ηi = xi1β1 + ⋯ + xipβp and g2(ϕi) = ζi = hi1δ1 + ⋯ + hiqδq,

where β = (β1, …, βp)⊤ ∈ IRp and δ = (δ1, …, δq)⊤ ∈ IRq are unknown regression parameters (p + q < n), ηi and ζi are linear predictors, xi1 ≡ hi1 ≡ 1 ∀i, xi2, …, xip and hi2, …, hiq are mean and precision covariates, respectively, and g1: (0, 1) ↦ IR and g2: (0, ∞) ↦ IR are strictly monotonic and twice-differentiable link functions. Common choices for g1 are the logit, probit, loglog, cloglog and Cauchy links, and common choices for g2 are the log and square root links; see [14].
Let θ = (β⊤, δ⊤)⊤ be the vector containing all regression coefficients. The log-likelihood function is

ℓ(β, δ) = ∑ᵢ ℓi(μi, ϕi), (2)

where ℓi(μi, ϕi) = log Γ(ϕi) − log Γ(μiϕi) − log Γ((1 − μi)ϕi) + (μiϕi − 1) log yi + ((1 − μi)ϕi − 1) log(1 − yi), with μi = g1⁻¹(ηi) and ϕi = g2⁻¹(ζi). The maximum likelihood estimators of β and δ solve U = ∂ℓ(β, δ)/∂θ = (Uβ(β, δ)⊤, Uδ(β, δ)⊤)⊤ = 0p+q, where 0p+q is a (p + q)-vector of zeros. They cannot be expressed in closed-form. Maximum likelihood estimates can be obtained by numerically maximizing the model log-likelihood function using a Newton or quasi-Newton optimization algorithm such as the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm; see [15].
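As an illustration, here is a minimal R sketch of such numerical maximization, assuming logit and log links and simulated data; all names and values below are our own illustrative choices, not the authors' code.

```r
# Varying precision beta regression log-likelihood (logit mean link,
# log precision link); X and H are design matrices with intercepts.
loglik <- function(theta, y, X, H) {
  p   <- ncol(X)
  mu  <- plogis(X %*% theta[1:p])        # g1^{-1}: inverse logit
  phi <- exp(H %*% theta[-(1:p)])        # g2^{-1}: inverse log
  sum(dbeta(y, mu * phi, (1 - mu) * phi, log = TRUE))
}

set.seed(1)
n <- 50
X <- cbind(1, runif(n)); H <- cbind(1, runif(n))
mu  <- plogis(X %*% c(-1, 2)); phi <- exp(H %*% c(3, 1))
y <- rbeta(n, mu * phi, (1 - mu) * phi)

# BFGS maximization (fnscale = -1 turns optim's minimizer into a maximizer)
fit <- optim(rep(0, 4), loglik, y = y, X = X, H = H,
             method = "BFGS", control = list(fnscale = -1))
fit$par   # estimates of (beta1, beta2, delta1, delta2)
```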
For a recent overview of the beta regression model, see [6]. Practitioners can perform beta regression analyses using the betareg package developed for the R statistical computing environment; see [14].
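Continuing the sketch above, the same model can be fitted with betareg; covariates listed after the "|" in the model formula enter the precision submodel.

```r
library(betareg)
d <- data.frame(y = y, x2 = X[, 2], h2 = H[, 2])
fit_br <- betareg(y ~ x2 | h2, data = d, link = "logit", link.phi = "log")
summary(fit_br)   # mean and precision submodel estimates
```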
Improved likelihood ratio tests in beta regressions
At the outset, we consider a general setup. Suppose the interest lies in testing a null hypothesis (H0) that imposes l restrictions on the k-dimensional parameter vector θ = (β⊤, δ⊤)⊤, where k = p + q. To that end, we write θ = (ψ⊤, λ⊤)⊤, where ψ = (ψ1, …, ψl)⊤ is the vector of parameters of interest and λ = (λ1, …, λs)⊤ is the vector of nuisance parameters, so that l + s = p + q. We wish to test H0: ψ = ψ(0) against H1: ψ ≠ ψ(0), where ψ(0) is a given l-vector. The likelihood ratio test statistic is

ω = 2{ℓ(ψ̂, λ̂) − ℓ(ψ(0), λ̃)},

where (ψ̂⊤, λ̂⊤)⊤ and (ψ(0)⊤, λ̃⊤)⊤ are the unrestricted and restricted maximum likelihood estimators of (ψ⊤, λ⊤)⊤, respectively. Under the null hypothesis, ω is asymptotically distributed as χ² with l degrees of freedom. The test is usually performed using critical values obtained from such an asymptotic null distribution, the approximation error being of order O(n−1). That is, under the null hypothesis, Pr(ω > χ²l,1−α) = α + O(n−1), where α ∈ (0, 1) is the test significance level and χ²l,1−α is the (1 − α)th quantile of the χ² distribution with l degrees of freedom. The chi-squared approximation to the null distribution of ω may be poor when the sample size is small and, as a result, large size distortions may take place.
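In practice, ω is computed from two maximum likelihood fits. A minimal R sketch, assuming a hypothetical data frame d with response y, mean covariates x2 and x3, and precision covariate h2, for the null hypothesis that the coefficient of x3 in the mean submodel is zero:

```r
library(betareg)
fit_full <- betareg(y ~ x2 + x3 | h2, data = d)   # unrestricted fit
fit_rest <- betareg(y ~ x2      | h2, data = d)   # fit under H0
w <- 2 * (as.numeric(logLik(fit_full)) - as.numeric(logLik(fit_rest)))
l <- 1                                  # number of restrictions
pchisq(w, df = l, lower.tail = FALSE)   # asymptotic chi-squared p-value
```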
A correction that became known as ‘the Bartlett correction’ was developed to improve the likelihood ratio test’s small sample behavior. It uses the fact that, under H0, IE(ω) = l + b + O(n−2), where b = b(θ) is O(n−1). Using such a result, it is possible to define the corrected test statistic ωb1 = ω/c, whose expected value equals l when terms of order O(n−2) are neglected. The quantity c = 1 + b/l became known as ‘the Bartlett correction factor’. A general approach for obtaining the Bartlett correction factor in statistical models was developed by [16]. His approach requires the derivation of log-likelihood cumulants. The expected value of ω, under the null hypothesis, can be expressed as

IE(ω) = l + εk − εk−l + O(n−2),

where εk and εk−l are of order O(n−1). Here,

εk = ∑ (λrstu − λrstuvw), (3)

where λrstu and λrstuvw involve, respectively, products of two and three elements of the inverse of Fisher’s information matrix with joint cumulants of log-likelihood derivatives up to fourth order; their explicit forms are given in [16]. The cumulants (κ’s) are defined in the Appendix. The indices r, s, t, u, v and w vary over all k parameters in the summation in (3). The Bartlett correction factor can then be written as

c = 1 + (εk − εk−l)/l.

[16] also showed that all cumulants of the Bartlett-corrected test statistic agree with those of the reference chi-squared distribution with error of order O(n−3/2), which indicates that its null distribution is expected to be well approximated by the limiting chi-squared distribution. [17] obtained an asymptotic expansion for the null distribution of ω; see also [18–20]. [21] showed that the size distortions of Bartlett-corrected tests are of order O(n−2), and not of order O(n−3/2), as previously believed.
In what follows, we shall obtain the Bartlett correction factor for the class of varying precision beta regressions. We only present the main result; details on the derivation can be found in the Appendix. It is noteworthy that β and δ are not orthogonal (i.e., Fisher’s information matrix is not block diagonal), unlike what happens in the class of generalized linear models. As a consequence, the derivation of the Bartlett correction factor becomes lengthier and more challenging. We use the main result in [22], who wrote the general adjustment factor in matrix form. At the outset, we define k × k matrices A(tu), P(t) and Q(u), for t, u = 1, …, k, whose (r, s) elements are functions of the log-likelihood cumulants. We derived the log-likelihood cumulants up to fourth order for the class of varying precision beta regression models; these cumulants are presented in the Appendix. Using such results, we obtain the matrices A(tu), P(t) and Q(u). It is then possible to write εk in matrix form, as given in Eq (4), an expression that involves the trace operator tr(⋅) and k × k matrices L, M and N whose (r, s) elements, r, s = 1, …, k, are also functions of the cumulants. Also, εk−l is obtained from (4) by only considering the nuisance parameters.
The corrected statistic ωb1 is the standard Bartlett-corrected likelihood ratio test statistic. In addition to it, we shall also consider two other Bartlett-corrected test statistics that are used in [23]. Writing b = εk − εk−l, so that c = 1 + b/l, the three test statistics are equivalent up to order O(n−1) and are given by

ωb1 = ω/c, ωb2 = ω exp(−b/l), ωb3 = ω(1 − b/l).

We shall refer to the three corrected test statistics above as ‘ratio-like’, ‘exponentially adjusted’ and ‘multiplicative-like’, respectively. An advantage of ωb2 is that it is always positive-valued. In order to use the above test statistics in a given class of models, it is necessary to obtain closed-form expressions for εk and εk−l that are valid for such models. For varying precision beta regressions, these quantities can be computed using Eq (4), which is our main result. For details on Bartlett corrections, we refer readers to [24, 25].
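Once ω, εk and εk−l are available, the corrected statistics are immediate to compute. A minimal R sketch under the conventions above, with b = εk − εk−l and c = 1 + b/l; the numeric inputs are purely illustrative:

```r
# Bartlett-corrected statistics from omega and the O(1/n) terms.
bartlett_correct <- function(w, eps_k, eps_kl, l) {
  b <- eps_k - eps_kl
  c(wb1 = w / (1 + b / l),    # 'ratio-like'
    wb2 = w * exp(-b / l),    # 'exponentially adjusted' (always positive)
    wb3 = w * (1 - b / l))    # 'multiplicative-like'
}
bartlett_correct(w = 5.2, eps_k = 0.9, eps_kl = 0.4, l = 1)
```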
An alternative correction to the likelihood ratio test statistic was proposed by [26], who generalized previous results in [27]. His main results relate to those in [28, 29]. The author in [26] proposed using the following two modified test statistics: ωa1 = ω − 2 log ξ and ωa2 = ω(1 − ω−1 log ξ)2, the latter having the advantage of always being positive-valued. Here, ξ is a function of several model-based quantities (score function, expected information, observed information, etc.). Closed-form expressions for ξ were derived by several authors considering different underlying models. In particular, for models tailored to doubly limited responses, they were derived by [30] for unit gamma regressions, by [31] for varying precision beta regressions, and by [32] for beta regressions with a parametric mean link function. The finite sample performances of such corrected tests when used in beta regressions were numerically evaluated by [10].
It is noteworthy that the size distortions of the three Bartlett-corrected tests vanish at a faster rate than those of ω, ωa1 and ωa2 as the sample size increases: O(n−2) versus O(n−1).
Finally, we note that there are alternative strategies for achieving accurate hypothesis testing inferences in small samples. For instance, [33] proposed a numerical approach for estimating the Bartlett correction factor and [34] obtained the Bartlett correction for generalized linear models using a modified version of the likelihood function that accounts for the impact of nuisance parameters on the inference made on the parameters of interest. We shall not pursue these approaches since, as we shall see, the standard Bartlett-corrected test is able to deliver extremely accurate inferences in small samples in varying precision beta regressions even when the number of nuisance parameters is large.
Numerical evidence
In what follows we shall present Monte Carlo simulation results on the finite sample performances of six tests in varying precision beta regressions, namely: ω, ωb1 (‘ratio-like’), ωb2 (‘exponentially adjusted’), ωb3 (‘multiplicative-like’), ωa1 and ωa2. All reported results are based on 10,000 replications and were obtained using the R statistical computing environment; see [35]. Log-likelihood maximization was performed using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm with analytical first derivatives. Starting values for β and δ were computed as described in Appendix A of [36] with minor tweaks; the computation of such starting values entails the estimation of two linear regressions. We consider the varying precision beta regression model log(μi/(1 − μi)) = β1 + β2xi2 + β3xi3 + β4xi4 and log(ϕi) = δ1 + δ2hi2 + δ3hi3, i = 1, …, n. All covariate values were obtained as random draws and remained constant for all replications performed for a given sample size. We consider three scenarios. In the first scenario, we test H0: β4 = 0, and hence l = 1 (one restriction). The true parameter values are β1 = 1.0, β2 = 1.7, β3 = 3.5, β4 = 0, δ1 = 3.7, δ2 = 1.5 and δ3 = 0.9. In the second scenario, the interest lies in testing H0: β3 = β4 = 0, thus l = 2 (two restrictions). The parameter values in this case are β1 = 1.0, β2 = 1.7, β3 = β4 = 0, δ1 = 3.7, δ2 = 1.5 and δ3 = 0.9. In the third and final scenario, the null hypothesis under evaluation is H0: δ2 = δ3 = 0, and hence l = 2 (two restrictions). The parameter values are β1 = 1.0, β2 = 1.7, β3 = 2.5, β4 = −3.0, δ1 = 3.7 and δ2 = δ3 = 0. We computed the tests’ null rejection rates at the α = 10%, 5% and 1% significance levels for different sample sizes (n ∈ {15, 20, 30, 40}). They are presented in Table 1 (first scenario), Table 2 (second scenario) and Table 3 (third scenario); all entries are percentages.
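A condensed R sketch of the size simulation for the first scenario is given below. It uses far fewer replications than the 10,000 used here and covers only the uncorrected test ω; the uniform covariate draws and the use of try() to skip rare convergence failures are our own illustrative choices.

```r
# Condensed size simulation for scenario 1 (H0: beta4 = 0), alpha = 5%.
# Covariates are drawn once and held fixed across replications.
library(betareg)
set.seed(2021)
n <- 20; R <- 1000
x2 <- runif(n); x3 <- runif(n); x4 <- runif(n)
h2 <- runif(n); h3 <- runif(n)
mu  <- plogis(1.0 + 1.7 * x2 + 3.5 * x3)   # beta4 = 0 under H0
phi <- exp(3.7 + 1.5 * h2 + 0.9 * h3)
rej <- 0
for (r in 1:R) {
  y <- rbeta(n, mu * phi, (1 - mu) * phi)
  full <- try(betareg(y ~ x2 + x3 + x4 | h2 + h3), silent = TRUE)
  rest <- try(betareg(y ~ x2 + x3      | h2 + h3), silent = TRUE)
  if (inherits(full, "try-error") || inherits(rest, "try-error")) next
  w <- 2 * (as.numeric(logLik(full)) - as.numeric(logLik(rest)))
  rej <- rej + (w > qchisq(0.95, df = 1))
}
rej / R   # estimated null rejection rate of the uncorrected test
```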
Table 1. Null rejection rates (%), H0: β4 = 0.
https://doi.org/10.1371/journal.pone.0253349.t001
Table 2. Null rejection rates (%), H0: β3 = β4 = 0.
https://doi.org/10.1371/journal.pone.0253349.t002
Table 3. Null rejection rates (%), H0: δ2 = δ3 = 0.
https://doi.org/10.1371/journal.pone.0253349.t003
The tests’ null rejection rates for the first scenario are, as noted, displayed in Table 1. At the outset, we note that the likelihood ratio test ω is considerably liberal, that is, it rejects the null hypothesis too often when it is true. For instance, when n = 15 and α = 10%, its null rejection rate exceeds 30%, i.e. it is over three times larger than the nominal significance level. When n = 20, it equals 25.2%. The test is considerably oversized even when n = 40 (null rejection rate > 15%). The corrected tests display much better control of the type I error frequency, especially the third Bartlett-corrected test, that based on ωb3 (‘multiplicative-like’). For example, when n = 20 and α = 10%, its null rejection rate is 10.4% whereas those of ωb1, ωb2, ωa1 and ωa2 are, respectively, 16.7%, 14.6%, 14.7% and 17.2%. All modified tests display small size distortions when n = 40; again, ωb3 (‘multiplicative-like’) is the best performer. Interestingly, ωb3 is the only conservative test when the sample size is very small (n = 15).
Fig 1 contains quantile-quantile (QQ) plots of three test statistics, namely: the likelihood ratio test statistic, the best performing Bartlett-corrected test statistic (ωb3, ‘multiplicative-like’) and the best performing test statistic obtained from the alternative finite sample correction (ωa1). We plot the exact quantiles of the three test statistics against their asymptotic counterparts (obtained from the χ² distribution with one degree of freedom). The included 45° line indicates perfect agreement between the exact and asymptotic null distributions. The left and right panels are for n = 15 and n = 20, respectively. In both plots, the line that corresponds to ω is considerably above the 45° line, which indicates that the test statistic’s exact quantiles are much larger than the asymptotic quantiles; that translates into liberal test behavior, i.e., the test tends to overreject the null hypothesis. The exact quantiles of ωa1 also exceed those from the chi-squared distribution, but less dramatically. The null distribution of ωb3, the ‘multiplicative-like’ Bartlett-corrected test statistic, is very well approximated by the limiting chi-squared distribution since the dashed line is very close to the 45° line.
Fig 1. Quantile-quantile plots, H0: β4 = 0.
https://doi.org/10.1371/journal.pone.0253349.g001
Table 2 contains simulation results for the second scenario, i.e. for testing that β3 and β4 are jointly equal to zero; here, l = 2. Again, the likelihood ratio test is markedly oversized when the sample size is small, even more so than in the previous scenario. For instance, when α = 10% and n = 20, the estimated size of the test equals 31.1%, i.e. the test’s empirical size is over three times larger than the nominal significance level. The corrected tests perform much more reliably. Again, overall, the best performing test is that based on our third Bartlett-corrected test statistic, ωb3 (‘multiplicative-like’). For instance, when n = 20 and α = 10%, its null rejection rate is 10.7%; the corresponding figures for ωa1 and ωa2 (the two alternative corrected tests) are 14.9% and 17.5%, respectively.
Fig 2 contains QQ plots for the second scenario. As in the previous scenario, the null distribution of ω is poorly approximated by the limiting chi-squared distribution, and the approximation works better for ωb3 (the ‘multiplicative-like’ Bartlett-corrected test statistic) than for ωa1.
Fig 2. Quantile-quantile plots, H0: β3 = β4 = 0.
https://doi.org/10.1371/journal.pone.0253349.g002
We shall now consider tests on the coefficients of the precision submodel. The null rejection rates for the third scenario are in Table 3. We test the null hypothesis of fixed precision, i.e. we test H0: δ2 = δ3 = 0, which is equivalent to testing H0: ϕ1 = ⋯ = ϕn (the precision parameter is constant across observations). The figures in Table 3 indicate, once again, that testing inferences based on ω can be quite unreliable when n is small. Overall, the third Bartlett-corrected (‘multiplicative-like’) test outperforms all other corrected tests. For instance, when n = 20 and α = 10%, its null rejection rate is 9.5% whereas those of ωb1 (‘ratio-like’), ωb2 (‘exponentially adjusted’), ωa1 and ωa2 are 17.1%, 14.4%, 11.7% and 14.1%, respectively. We do not present QQ plots for brevity. We note, however, that they show that the null distribution of ωb3 (‘multiplicative-like’) is well approximated by the limiting χ² distribution.
We also performed simulations using a data generating process that differs from the estimated model, that is, we estimated the tests’ non-null rejection rates (powers). We restrict attention to the likelihood ratio test (ω), the best performing Bartlett-corrected test (ωb3, ‘multiplicative-like’) and the best performing test obtained using the alternative finite sample correction (ωa1). We consider two sample sizes (n ∈ {20, 40}) and two significance levels (α = 10%, 5%). We test H0: β4 = 0 (first scenario), but the data are generated using a value of β4 that is different from zero; we denote such a value by γ. The null hypothesis is thus false. Since some tests are oversized, all testing inferences are carried out using exact critical values (estimated from the size simulations); see the sketch below. The tests’ estimated powers for different values of γ are presented in Table 4. As expected, the tests become more powerful when the sample size is larger and also as the value of γ moves away from zero. Overall, the three tests display similar non-null rejection rates.
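The use of exact (simulation-based) critical values can be sketched as follows; w_null and w_alt are hypothetical vectors of test statistics simulated under the null and the alternative hypothesis, respectively.

```r
# Size-corrected power: reject when the statistic exceeds the empirical
# 95% quantile of its simulated null distribution, not the chi-squared one.
exact_cv <- quantile(w_null, probs = 0.95)
mean(w_alt > exact_cv)   # estimated power at alpha = 5%
```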
Table 4. Nonnull rejection rates (%), H0: β4 = 0.
https://doi.org/10.1371/journal.pone.0253349.t004
We shall now return to the evaluation of the tests’ null performances. First, we investigate the impact of the number of nuisance parameters on the tests’ null behavior. We set the sample size at n = 40 and consider a model whose mean submodel contains p coefficients, log(μi/(1 − μi)) = β1 + β2xi2 + ⋯ + βpxip, coupled with a log-linear precision submodel, i = 1, …, 40. We test H0: β2 = 0 against H1: β2 ≠ 0. The covariate x2 is a dummy variable that equals 1 for the first twenty observations and 0 otherwise. The values of all other covariates were obtained as random draws. Table 5 contains the tests’ null rejection rates for p = 3, 4, 5, 6. The results show that the likelihood ratio test tends to become progressively more liberal as the number of nuisance parameters increases. In contrast, the corrected tests are much less sensitive to the number of nuisance parameters, especially ωb3 (‘multiplicative-like’), which is the best performing test. Its null rejection rates for the different values of p at the 10% significance level range from 9.9% to 10% whereas those of ω range between 14.1% (p = 3) and 17.1% (p = 6).
Table 5. Null rejection rates (%), H0: β2 = 0, varying number of nuisance parameters.
https://doi.org/10.1371/journal.pone.0253349.t005
Second, we shall evaluate the tests’ finite sample performances when the null hypothesis includes restrictions on the parameters of both submodels simultaneously. The data generating process is log(μi/(1 − μi)) = β1 + β2xi2 + β3xi3 and log(ϕi) = δ1 + δ2hi2 + δ3hi3, i = 1, …, n. We consider two different null hypotheses, namely: (i) H0: β2 = 0, δ3 = 0 (l = 2) and (ii) H0: β2 = 0, δ2 = δ3 = 0 (l = 3). The corresponding parameter values are (i) β1 = 1.0, β2 = 0, β3 = 3.0, δ1 = 1.7, δ2 = 0.7, δ3 = 0 and (ii) β1 = 1.5, β2 = 0, β3 = −1.4, δ1 = 1.5, δ2 = δ3 = 0. The covariate values were obtained as random draws and n ∈ {15, 20, 30, 40}. Table 6 contains the tests’ null rejection rates. The test based on the Bartlett-corrected test statistic ωb3 (‘multiplicative-like’) is the best performer in both cases. For instance, when l = 2 and n = 15, its null rejection rate at the 10% significance level is 9.5% whereas those of the competing tests range from 11.1% to 25.0%.
Table 6. Null rejection rates (%), H0: β2 = 0, δ3 = 0 (l = 2) and H0: β2 = 0, δ2 = δ3 = 0 (l = 3).
https://doi.org/10.1371/journal.pone.0253349.t006
Finally, we shall evaluate the impact of different levels of correlation between regressors on the tests’ small sample performance. The model is log(μi/(1 − μi)) = β1 + β2xi2 + β3xi3 and log(ϕi) = δ1 + δ2hi2, i = 1, …, n. The values of the two mean submodel regressors are obtained as random draws from the bivariate normal distribution with mean (0, 0)⊤ and covariance matrix Σ. The diagonal and off-diagonal elements of Σ are, respectively, 1 and ρ; hence, ρ is the correlation coefficient between x2 and x3. We test H0: β2 = β3 = 0 (l = 2). Data generation was carried out using β1 = 1.0, β2 = β3 = 0, δ1 = 1.7 and δ2 = 0.1. Different correlation strengths were considered, ranging from very low to very strong: ρ ∈ {0.1, 0.5, 0.75, 0.95}. The sample sizes are n ∈ {15, 20, 30, 40}. Table 7 contains the tests’ null rejection rates. Again, the likelihood ratio test ω is quite liberal when n is small, slightly more so under very strong correlation between the two regressors. The Bartlett-corrected tests perform very well for all correlation values, especially ωb3 (‘multiplicative-like’); its null rejection rates are once again very close to α. For instance, when ρ = 0.75, n = 15 and α = 10% (5%), the test’s null rejection rate is 9.7% (4.4%) whereas that of the uncorrected test (ω) is 22.3% (14.0%) and those of the alternative tests ωa1 and ωa2 are 18.4% (9.9%) and 20.5% (13.4%), respectively. It is noteworthy that the null rejection rates of the three Bartlett-corrected tests are insensitive to the level of correlation between the regressors. For example, when n = 15 and α = 10%, the null rejection rates of ωb1 (‘ratio-like’), ωb2 (‘exponentially adjusted’) and ωb3 (‘multiplicative-like’) across ρ ∈ {0.1, 0.5, 0.75, 0.95} lie in [13.3%, 13.8%], [11.9%, 12.2%] and [9.4%, 10.6%], respectively.
Table 7. Null rejection rates (%), H0: β2 = β3 = 0 (l = 2); varying correlation between regressors.
https://doi.org/10.1371/journal.pone.0253349.t007
Behavioral biometrics: Intelligence and atheism
We shall now address the behavioral biometrics issue briefly outlined in the Introduction. The interest lies in modeling the impact of average intelligence on the prevalence of religious disbelievers. General intelligence relates to the ability to reason deductively or inductively, think abstractly, use analogies, synthesize information, and apply it to new domains. It is typically measured by the intelligence quotient (IQ), which is a score obtained from standardized tests. Average IQ scores have been computed for a large number of countries; see e.g. [37, 38]. There is evidence that intelligence negatively correlates with religious belief at the individual level; see e.g. [39]. The negative correlation holds even when religiosity and performance on analytic thinking are measured in separate sessions; see [40]. It also holds when computed from a cross section of nations and from the U.S. states; see [41, 42]. There are evolutionary reasons for the inverse relationship between intelligence and religious belief. For instance, according to the Savanna-IQ Interaction Hypothesis, more intelligent individuals are more likely to acquire and espouse evolutionarily novel values and preferences than less intelligent individuals; see [43]. One such evolutionarily novel value is religious disbelief.
Several regression analyses were performed to measure the net impact of changes in intelligence levels on the prevalence of atheists; see [44] for details. A beta regression analysis was carried out by [13]. They used data on 124 nations and showed that the net impact of average intelligence on the prevalence of religious disbelievers is always positive, gains strength up to a certain level of average intelligence and then weakens. The same data set (n = 124) was analyzed by [32] using a beta regression model that includes a parametric mean link function and by [30] using the unit gamma regression model. In what follows, we shall consider a different data set. On page 487 of their paper, [13] briefly mention a beta regression analysis that was performed using data on the fifty countries with the largest prevalence of atheists (n = 50), which they call ‘scenario 3’. Since our interest lies in small sample inferences, we shall pursue that modeling. A novel feature of such data is that they do not include countries for which the prevalence of atheists is very small (close to zero).
The response variable (y) is the proportion of atheists in each country and the covariates are: average intelligence quotient (x2), average intelligence quotient squared (x3), life expectancy in 2007 in years (x4), the logarithm of the ratio between trade volume (the sum of imports and exports) and gross national product (x5), and per capita income adjusted for purchasing power parity (x6). Additionally, the following interactions are used: x7 = x5 × x6 and x8 = x4 × x5; the latter was not considered by the original authors. Except for x8, these are the same variables used by [13]. Average intelligence is the independent variable of main interest and the remaining regressors are control variables. Also, n = 50 (fifty countries with the largest prevalence of religious disbelievers). The data and computer code used in the empirical analysis that follows can be obtained at https://github.com/acguedes/beta-Bartlett.
[13] fitted the following beta regression model to the data (Model M1):

log(μi/(1 − μi)) = β1 + β2xi2 + β3xi3 + β4xi4 + β5xi5 + β6xi6 + β7xi7,
log(ϕi) = δ1 + δ2xi2 + δ3xi4 + δ4xi5.

We noticed that an improved fit according to standard model selection criteria and the pseudo-R2 (see below) can be achieved by adding x8 to the mean submodel and by only using x2 in the precision submodel, since x4 and x5 lose statistical significance in the precision submodel when the mean submodel includes the interaction between these two variables. Our model (Model M2) is then

log(μi/(1 − μi)) = β1 + β2xi2 + β3xi3 + β4xi4 + β5xi5 + β6xi6 + β7xi7 + β8xi8,
log(ϕi) = δ1 + δ2xi2.

All parameter estimates of the above model are statistically significant at the 5% significance level according to the z test, and its pseudo-R2, as defined by [1], is superior to that of the model fitted by [13]: 0.3719 vs 0.3216. Model M2 is also favored by the three most commonly used model selection criteria when compared to Model M1: AIC (−61.0555 vs −55.3188), AICC (−55.4144 vs −48.3714) and BIC (−41.9352 vs −34.2865). We shall investigate whether x4 and x5 should be excluded from our model by testing whether β4 and β5 equal zero (individually and jointly). We shall use three tests, namely: the likelihood ratio test (ω), the best performing Bartlett-corrected test (ωb3, ‘multiplicative-like’) and the best performing test based on the alternative small sample correction (ωa1).
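In betareg notation, and assuming a hypothetical data frame atheism whose columns match the variable names above (this is our own sketch, not the authors' code), the two competing specifications and the criteria reported here could be reproduced as follows.

```r
library(betareg)
# Covariates after "|" enter the precision submodel.
m1 <- betareg(y ~ x2 + x3 + x4 + x5 + x6 + x7      | x2 + x4 + x5,
              data = atheism)
m2 <- betareg(y ~ x2 + x3 + x4 + x5 + x6 + x7 + x8 | x2,
              data = atheism)
c(AIC(m1), AIC(m2))                           # smaller is better
c(BIC(m1), BIC(m2))
c(m1$pseudo.r.squared, m2$pseudo.r.squared)   # larger is better
```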
At the outset, we test the exclusion of x4 from Model M2, that is, we test H0: β4 = 0. The p-values of the ω, ωb3 and ωa1 tests are 0.0258, 0.0443 and 0.0295, respectively. The first and third tests clearly reject the null hypothesis at α = 5%, whereas the p-value of the Bartlett-corrected test is very close to 0.05, which renders uncertainty about the exclusion of x4 from the model. Next, we test H0: β5 = 0. We obtain the following p-values for ω, ωb3 and ωa1: 0.0303, 0.0505 and 0.0332, respectively. The first and third tests clearly reject the removal of x5 from the model at the 5% significance level; the null hypothesis is not rejected by the Bartlett-corrected test. Finally, we test the joint exclusion of both covariates, i.e. we test H0: β4 = β5 = 0, and obtain the following p-values for ω, ωb3 and ωa1: 0.0726, 0.1121 and 0.0840, respectively. The null hypothesis is not rejected by the three tests at the 5% nominal level, but only the Bartlett-corrected test maintains that inference at the 10% nominal level. That is, such a test provides more evidence in favor of the removal of x4 and x5 from the mean submodel.
Based on the above testing inferences, we arrive at the following reduced model (Model M3), which is our final model:

log(μi/(1 − μi)) = β1 + β2xi2 + β3xi3 + β4xi6 + β5xi7 + β6xi8,
log(ϕi) = δ1 + δ2xi2.

The estimates of β1, …, β6 (standard errors in parentheses) are, respectively, 22.9423 (7.6472), −0.7583 (0.1942), 0.0044 (0.0011), 0.1866 (0.0545), −0.0483 (0.0136) and 0.0265 (0.0055). For the precision submodel, the standard errors of the estimates of δ1 and δ2 are 5.0312 and 0.0498, respectively. The model’s pseudo-R2 is 0.3455; it is higher than that of the model estimated by [13]. Additionally, AIC = −59.8094, AICC = −56.2972 and BIC = −44.5133. It is noteworthy that these criteria clearly favor our reduced model relative to the model presented in [13]; recall that for that model, AIC = −55.3188, AICC = −48.3714 and BIC = −34.2865. The differences in AIC, AICC and BIC in favor of Model M3 are of nearly 5, nearly 8 and over 10 points, respectively. When the difference in AIC values exceeds 4, one can conclude that there is considerably less support for the model with the larger AIC; see [45]. The evidence in favor of our reduced model is thus strong.
Asymptotic confidence intervals with nominal coverage (1 − α) × 100% for the parameters of Model M3 can be obtained using the asymptotic normality of the corresponding maximum likelihood estimators. In particular, for j = 1, …, 6 and k = 1, 2, β̂j ± z1−α/2 × se(β̂j) and δ̂k ± z1−α/2 × se(δ̂k) are asymptotic confidence intervals for βj and δk with nominal coverage (1 − α) × 100%, respectively, the asymptotic standard errors, se, being obtained from the inverse of Fisher’s information matrix evaluated at the maximum likelihood estimates. Here, z1−α/2 denotes the 1 − α/2 standard normal quantile. Table 8 contains the lower and upper limits (LLCIa and ULCIa) of such intervals for the parameters that index Model M3 for 1 − α = 0.95. Following [46, Section 3], we also computed approximate confidence intervals based on the test statistics ω, ωb3 and ωa1, which was done by finding, for each parameter and each test statistic, the set of parameter values for which the test statistic is smaller than the (1 − α) quantile of the χ² distribution with one degree of freedom. Such intervals are also presented in Table 8. For instance, the confidence intervals for β5 constructed using ωb3 and ωa1 are [−0.0796, −0.0169] and [−0.0781, −0.0145], respectively; the corresponding asymptotic interval estimate is [−0.0749, −0.0216]. It is noteworthy that none of the reported confidence intervals contains the value zero.
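The test-inversion intervals can be sketched generically in R. Here stat_fn is a hypothetical user-supplied function that refits the model under the point restriction H0: βj = b0 and returns the chosen test statistic; grid is a vector of candidate values for βj.

```r
# Confidence interval by test inversion: keep all values b0 for which
# H0: beta_j = b0 is NOT rejected at level alpha (one restriction, df = 1).
invert_ci <- function(stat_fn, grid, alpha = 0.05) {
  keep <- sapply(grid, function(b0) stat_fn(b0) < qchisq(1 - alpha, df = 1))
  range(grid[keep])   # approximate lower and upper confidence limits
}
```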
Table 8. Lower (LLCI) and upper (ULCI) confidence interval limits for the parameters of Model M3: the standard asymptotic confidence interval and the confidence intervals constructed using the test statistics ω, ωb3 and ωa1.
https://doi.org/10.1371/journal.pone.0253349.t008
The model used by [13] (Model M1) and our reduced model (Model M3) are non-nested. In order to distinguish between them using a hypothesis test, we performed the J test as outlined by [47]. When the test is applied to two non-nested models, say Models m1 and m2, each model is sequentially tested against the other, i.e. we test Model m1 against Model m2, and then we test Model m2 against Model m1. It is thus possible to accept one model as the true model and reject the alternative model, to accept both models (i.e. to conclude that the two models are empirically indistinguishable) or to reject both models. Since the J testing inference is reached using the likelihood ratio test, we have also performed the test using the two corrected tests. We first test Model M1, i.e. the model fitted by [13], against our reduced model (Model M3). The p-values of the tests based on ω, ωb3 and ωa1 are, respectively, 0.0036, 0.0825 and 0.0233. All tests reject Model M1 (i.e. the model used by the authors) at the 10% significance level; the tests based on ωa1 and ω yield rejection at α = 5% and α = 1%, respectively. Next, Model M3 is tested against Model M1. The p-values of the tests that use ω, ωb3 and ωa1 are 0.0364, 0.1100 and 0.2001, respectively. Interestingly, our model is rejected at the 5% significance level by the likelihood ratio test, whereas that inference is reversed when the small sample corrections are applied: the two corrected tests do not reject the model, not even at α = 10%. That is, our model is not rejected by the two corrected tests.
It is noteworthy that, considering the three sets of tests, i.e. the tests of H0: β4 = 0, H0: β5 = 0 and H0: β4 = β5 = 0, the Bartlett-corrected test was the test that most emphatically suggested the removal of both x4 and x5 from the mean submodel of Model M2.
We constructed a residual normal probability plot with simulated envelopes using the combined residuals of [48] from our fitted model (Model M3); see Fig 3. The envelope bands were constructed using 100 replications. The plot shows that there is no evidence against the correct specification of our model since all points lie inside the two envelope bands.
Fig 3. Residual normal probability plot.
https://doi.org/10.1371/journal.pone.0253349.g003
In Fig 4 of [13], the authors plot an estimate of ∂μi/∂xi2 against a sequence of values of average intelligence, setting all other covariates at their median values. In Fig 4 we present a panel of similar plots, each containing two impact curves, namely: (i) that obtained from our reduced model (‘new’) and (ii) that obtained using the model fitted by [13] (‘old’). That is, ‘new’ and ‘old’ in Fig 4 refer to Models M3 and M1, respectively. Instead of only fixing the covariates other than average intelligence at their median values, we do so at four different quantiles: 0.10, 0.25, 0.50 (the median) and 0.75. We note that the two impact curves become more similar (more dissimilar) as the quantile at which the regressor values are set increases (decreases). Such covariates tend to assume larger values for more developed nations since they relate to per capita income, life expectancy and integration into international trade; in particular, the former two variables are highly correlated with economic development. It then follows that one gets a somewhat different functional form for the impact of average intelligence on the prevalence of religious disbelievers in lower income countries when our reduced model is used relative to the model used by the original authors. At the lowest quantile (0.10), the maximal impact computed from our model (Model M3) is over 11% larger than that obtained using the alternative model (Model M1). When the covariate values are set at their medians, the figure drops to nearly 4%. To the best of our knowledge, our analysis provides the first measure of the decline in the maximal impact of average intelligence on the prevalence of religious disbelievers, and also of the changes in the functional form of such an impact, as nations become more developed.
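Under a logit mean link and with x3 = x2², the impact curve has the closed form ∂μ/∂x2 = μ(1 − μ)(β2 + 2β3x2), by the chain rule applied to μ = g1⁻¹(η). A minimal R sketch follows; the rounded coefficient values and the offset eta_rest, which stands in for the contributions of the covariates held fixed, are illustrative, not the fitted values of Model M3.

```r
# Impact of average IQ (x2) on mu under a logit link with x3 = x2^2:
# dmu/dx2 = mu * (1 - mu) * (beta2 + 2 * beta3 * x2).
impact <- function(x2, beta2, beta3, eta_rest) {
  eta <- eta_rest + beta2 * x2 + beta3 * x2^2
  mu  <- plogis(eta)
  mu * (1 - mu) * (beta2 + 2 * beta3 * x2)
}
curve(impact(x, beta2 = -0.76, beta3 = 0.0044, eta_rest = 23),
      from = 60, to = 110,
      xlab = "average IQ", ylab = "estimated impact on prevalence")
```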
Fig 4. Impact curves.
https://doi.org/10.1371/journal.pone.0253349.g004
Concluding remarks
The beta regression model is widely used to model responses that assume values in (0, 1). In the initial formulation of the model, the precision parameter was assumed constant for all observations, i.e. all responses in the sample share the same precision. This model became known as the fixed precision beta regression model. A more general and more flexible formulation was later proposed that allows both distribution parameters to vary across observations. Most empirical applications employ this version of the model, which is known as the varying precision beta regression model. It contains two submodels, one for the mean and another for the precision.
In both variants of the regression model, testing inferences are usually performed using the likelihood ratio test. Such a test employs an asymptotic approximation and, as a consequence, can be quite size distorted when the sample size is small. In particular, it tends to be liberal (oversized), i.e. it overrejects the null hypothesis when such a hypothesis is true. Since many applications of the beta regression model are based on samples of small to moderate sizes, it is important to develop alternative tests with superior finite sample behavior, i.e. tests that yield better control of the type I error frequency. [12] derived a Bartlett correction to the likelihood ratio test that can be used to achieve more accurate testing inferences. Their result, nonetheless, only applies to the more restrictive model formulation, namely the fixed precision model. Since many applications employ the varying precision formulation of the beta regression model, their small sample correction cannot be used in such cases. In this paper we derived the Bartlett correction to the likelihood ratio test in full generality. Our correction can thus be used to construct modified likelihood ratio tests for varying precision beta regression analyses. We considered three Bartlett-corrected tests. Monte Carlo simulation evidence revealed that one of these tests typically delivers very accurate inferences even when the sample size is quite small. Its small sample performance was numerically compared to those of two tests that are based on an alternative correction. Overall, the results favor the test that employs the Bartlett correction. A novel feature of our Bartlett-corrected tests is that their size distortions are guaranteed to vanish at a faster rate than that of the likelihood ratio test: O(n−2) vs O(n−1).
We presented and discussed an empirical application that involved an important issue in evolutionary biometrics, namely: the relationship between average intelligence and the prevalence of religious disbelievers. Using data on 50 countries, we showed that by using our Bartlett-corrected testing inferences we arrive at a beta regression model slightly different from that previously used in the literature. It is noteworthy that our model displays superior fit and yields a noticeably different functional form of the impact of intelligence on religious disbelief in low income countries. This empirical application illustrates the usefulness of the Bartlett correction derived in our paper.
A direction for future research is the extension of our analytical results to testing inferences in inflated beta regression models, introduced by [49], which include both continuous and discrete components and thus allow for response values that are exactly equal to 0 or 1.
Appendix: Varying precision beta regression log-likelihood cumulants
We shall now present the varying precision beta regression model log-likelihood cumulants up to fourth order. We use lower and upper case letters to index derivatives of (2) with respect to the components of β and δ, respectively. We use tensor notation: κrs = IE(∂2ℓ(θ)/∂βr∂βs), κrst = IE(∂3ℓ(θ)/∂βr∂βs∂βt), κrstu = IE(∂4ℓ(θ)/∂βr∂βs∂βt∂βu), etc., r, s, t, u = 1, …, k. Additionally, we denote derivatives of the above cumulants by parenthesized superscripts, e.g. κrs(t) = ∂κrs/∂βt, κrs(tu) = ∂2κrs/∂βt∂βu, etc.
It can be shown that
Let wi = ψ′(μiϕi) + ψ′((1 − μi)ϕi) and mi = ψ″(μiϕi) − ψ″((1 − μi)ϕi), where ψ′(⋅) and ψ″(⋅) denote the trigamma and tetragamma functions, respectively. The following derivatives are needed for obtaining the log-likelihood cumulants:
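As a computational aside, the trigamma and tetragamma functions are available in base R through psigamma(), which returns derivatives of the digamma function; a minimal sketch of wi and mi with illustrative parameter values:

```r
# psigamma(x, deriv = 1) is psi'(x) (trigamma);
# psigamma(x, deriv = 2) is psi''(x) (tetragamma).
mu <- 0.3; phi <- 8
w_i <- psigamma(mu * phi, 1) + psigamma((1 - mu) * phi, 1)
m_i <- psigamma(mu * phi, 2) - psigamma((1 - mu) * phi, 2)
c(w_i = w_i, m_i = m_i)
```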
The log-likelihood derivatives with respect to the components of θ = (β⊤, δ⊤)⊤ are given by
Using the above results, we arrive, after long derivations, at the following expressions for the relevant varying precision beta regression model cumulants:
Also, we obtained the following expressions for the first order derivatives of the log-likelihood cumulants:
The second order derivatives of the log-likelihood cumulants can be expressed as follows:
Acknowledgments
We thank two anonymous referees for comments and suggestions that led to a much improved manuscript.
Citation: Guedes AC, Cribari-Neto F, Espinheira PL (2021) Bartlett-corrected tests for varying precision beta regressions with application to environmental biometrics. PLoS ONE 16(6): e0253349. https://doi.org/10.1371/journal.pone.0253349
1. Ferrari SLP, Cribari-Neto F. Beta regression for modelling rates and proportions. Journal of Applied Statistics. 2004;31(7):799–815.
2. Kieschnick R, McCullough BD. Regression analysis of variates observed on (0, 1): Percentages, proportions and fractions. Statistical Modelling. 2003;3(3):193–213.
3. Simas AB, Barreto-Souza W, Rocha AV. Improved estimators for a general class of beta regression models. Computational Statistics & Data Analysis. 2010;54(2):348–366.
4. Bouchard C, Lange F, Guéraud F, Rives J, Tentelier C. Sexual maturity increases mobility and heterogeneity in individual space use in Atlantic salmon (Salmo salar) parr. Journal of Fish Biology. 2020;96(4):925–938. pmid:32048290
5. Chen KK, Chiu RH, Chang CT. Using beta regression to explore the relationship between service attributes and likelihood of customer retention for the container shipping industry. Transportation Research Part E: Logistics and Transportation Review. 2017;104:1–16.
6. Douma JC, Weedon JT. Analysing continuous proportions in ecology and evolution: A practical introduction to beta and Dirichlet regression. Methods in Ecology and Evolution. 2019;10(9):1412–1430.
7. Mandal S, Srivastav RK, Simonovic SP. Use of beta regression for statistical downscaling of precipitation in the Campbell River Basin, British Columbia, Canada. Journal of Hydrology. 2016;538:49–62.
8. Mullen L, Marshall L, McGlynn B. A beta regression model for improved solar radiation predictions. Journal of Applied Meteorology and Climatology. 2013;52(8):1923–1938.
9. Swearingen CJ, Tilley BC, Adams RJ, Rumboldt Z, Nicholas JS, Bandyopadhyay D, et al. Application of beta regression to analyze ischemic stroke volume in NINDS rt-PA clinical trials. Neuroepidemiology. 2011;37(2):73–82. pmid:21894044
10. Cribari-Neto F, Queiroz MPF. On testing inference in beta regressions. Journal of Statistical Computation and Simulation. 2014;84(1):186–203.
11. Ospina R, Cribari-Neto F, Vasconcellos KLP. Improved point and interval estimation for a beta regression model. Computational Statistics & Data Analysis. 2006;51(2):960–981.
12. Bayer FM, Cribari-Neto F. Bartlett corrections in beta regression models. Journal of Statistical Planning and Inference. 2013;143(3):531–547.
13. Cribari-Neto F, Souza TC. Religious belief and intelligence: Worldwide evidence. Intelligence. 2013;41:482–489.
14. Cribari-Neto F, Zeileis A. Beta regression in R. Journal of Statistical Software. 2010;34(2):1–24.
15. Nocedal J, Wright SJ. Numerical Optimization. 2nd ed. New York: Springer; 2006.
16. Lawley DN. A general method for approximating to the distribution of likelihood ratio criteria. Biometrika. 1956;43(3-4):295–303.
17. Hayakawa T. The likelihood ratio criterion and the asymptotic expansion of its distribution. Annals of the Institute of Statistical Mathematics. 1977;29(1):359–378.
18. Chesher A, Smith RJ. Bartlett corrections to likelihood ratio tests. Biometrika. 1995;82(2):433–436.
19. Cordeiro GM. On the corrections to the likelihood ratio statistics. Biometrika. 1987;74(2):265–274.
20. Harris P. A note on Bartlett adjustments to likelihood ratio tests. Biometrika. 1986;73(3):735–737.
21. Barndorff-Nielsen OE, Hall P. On the level-error after Bartlett adjustment of the likelihood ratio statistic. Biometrika. 1988;75(2):374–378.
22. Cordeiro GM. General matrix formulae for computing Bartlett corrections. Statistics & Probability Letters. 1993;16(1):11–18.
23. Lemonte AJ, Ferrari SLP, Cribari-Neto F. Improved likelihood inference in Birnbaum-Saunders regressions. Computational Statistics & Data Analysis. 2010;54(5):1307–1316.
24. Cordeiro GM, Cribari-Neto F. An Introduction to Bartlett Correction and Bias Reduction. New York: Springer; 2014.
25. Cribari-Neto F, Cordeiro GM. On Bartlett and Bartlett-type corrections. Econometric Reviews. 1996;15(4):339–367.
26. Skovgaard IM. Likelihood asymptotics. Scandinavian Journal of Statistics. 2001;28(1):3–32.
27. Skovgaard IM. An explicit large-deviation approximation to one-parameter tests. Bernoulli. 1996;2(2):145–165.
28. Barndorff-Nielsen OE. Inference on full or partial parameters based on the standardized signed log likelihood ratio. Biometrika. 1986;73(2):307–322.
29. Barndorff-Nielsen OE. Modified signed log likelihood ratio. Biometrika. 1991;78(3):557–563.
30. Guedes AC, Cribari-Neto F, Espinheira PL. Modified likelihood ratio tests for unit gamma regressions. Journal of Applied Statistics. 2020;47(9):1562–1586.
31. Ferrari SLP, Pinheiro EC. Improved likelihood inference in beta regression. Journal of Statistical Computation and Simulation. 2011;81(4):431–443.
32. Rauber C, Cribari-Neto F, Bayer FM. Improved testing inferences for beta regressions with parametric mean link function. AStA Advances in Statistical Analysis. 2020;104(4):687–717.
33. Rocke DM. Bootstrap Bartlett adjustment in seemingly unrelated regression. Journal of the American Statistical Association. 1989;84(406):598–601.
34. Ferrari SLP, Lucambio F, Cribari-Neto F. Improved profile likelihood inference. Journal of Statistical Planning and Inference. 2005;134(2):373–391.
35. R Core Team. R: A Language and Environment for Statistical Computing; 2020. Available from: http://www.R-project.org/.
36. Ferrari SLP, Espinheira PL, Cribari-Neto F. Diagnostic tools in beta regression with varying dispersion. Statistica Neerlandica. 2011;65(3):337–351.
37. Lynn R, Meisenberg G. National IQs calculated and validated for 108 nations. Intelligence. 2010;38(4):353–360.
38. Lynn R, Vanhanen T. IQ and the Wealth of Nations. Augusta: Washington Summit Publishers; 2006.
39. Ganzach Y, Gotlibovski C. Intelligence and religiosity: Within families and over time. Intelligence. 2013;41(5):546–552.
40. Pennycook G, Ross RM, Koehler DJ, Fugelsang JA. Atheists and agnostics are more reflective than religious believers: Four empirical studies and a meta-analysis. PLOS ONE. 2016;11(4):e0153039. pmid:27054566
41. Lynn R, Harvey J, Nyborg H. Average intelligence predicts atheism rates across 137 nations. Intelligence. 2009;37(1):11–15.
42. Reeve CL, Basalik D. A state level investigation of the associations among intellectual capital, religiosity and reproductive health. Intelligence. 2011;39(1):64–73.
43. Kanazawa S. Why liberals and atheists are more intelligent. Social Psychology Quarterly. 2010;73(1):33–57.
44. Zuckerman M, Silberman J, Hall JA. The relation between intelligence and religiosity: A meta-analysis and some proposed explanations. Personality and Social Psychology Review. 2013;17(4):325–354. pmid:23921675
45. Burnham KP, Anderson DR. Multimodel inference: Understanding AIC and BIC in model selection. Sociological Methods Research. 2004;33(2):261–304.
46. Das U, Dhar SS, Pradhan V. Corrected likelihood-ratio tests in logistic regression using small-sample data. Communications in Statistics—Theory and Methods. 2018;47(17):4272–4285.
47. Cribari-Neto F, Lucena SEF. Non-nested hypothesis testing in the class of varying dispersion beta regressions. Journal of Applied Statistics. 2015;42(5):967–985.
48. Espinheira PL, Santos EG, Cribari-Neto F. On nonlinear beta regression residuals. Biometrical Journal. 2017;59(3):445–461. pmid:28128858
49. Ospina R, Ferrari SLP. A general class of zero-or-one inflated beta regression models. Computational Statistics & Data Analysis. 2012;56(6):1609–1623.
© 2021 Guedes et al. This is an open access article distributed under the terms of the Creative Commons Attribution License: http://creativecommons.org/licenses/by/4.0/ (the “License”), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Abstract
Beta regressions are commonly used with responses that assume values in the standard unit interval, such as rates, proportions and concentration indices. Hypothesis testing inferences on the model parameters are typically performed using the likelihood ratio test. It delivers accurate inferences when the sample size is large, but can otherwise lead to unreliable conclusions. It is thus important to develop alternative tests with superior finite sample behavior. We derive the Bartlett correction to the likelihood ratio test under the more general formulation of the beta regression model, i.e. under varying precision. The model contains two submodels, one for the mean response and a separate one for the precision parameter. Our interest lies in performing testing inferences on the parameters that index both submodels. We use three Bartlett-corrected likelihood ratio test statistics that are expected to yield superior performance when the sample size is small. We present Monte Carlo simulation evidence on the finite sample behavior of the Bartlett-corrected tests relative to the standard likelihood ratio test and to two improved tests that are based on an alternative approach. The numerical evidence shows that one of the Bartlett-corrected tests typically delivers accurate inferences even when the sample size is quite small. An empirical application related to behavioral biometrics is presented and discussed.