Abstract
Normed and nonnormed fit indexes are frequently used as adjuncts to chi-square statistics for evaluating the fit of a structural model. A drawback of existing indexes is that they estimate no known population parameters. A new coefficient is proposed to summarize the relative reduction in the noncentrality parameters of two nested models. Two estimators of the coefficient yield new normed (CFI) and nonnormed (FI) fit indexes. CFI avoids the underestimation of fit often noted in small samples for Bentler and Bonett's (1980) normed fit index (NFI). FI is a linear function of Bentler and Bonett's nonnormed fit index (NNFI) that avoids the extreme underestimation and overestimation often found in NNFI. Asymptotically, CFI, FI, NFI, and a new index developed by Bollen are equivalent measures of comparative fit, whereas NNFI measures relative fit by comparing noncentrality per degree of freedom. All of the indexes are generalized to permit use of Wald and Lagrange multiplier statistics. An example illustrates the behavior of these indexes under conditions of correct specification and misspecification. The new fit indexes perform very well at all sample sizes.
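For concreteness, the following is a minimal Python sketch of the standard chi-square-based estimators of the indexes named in the abstract, assuming a target model with statistic χ²_M on df_M degrees of freedom and a null (baseline) model with χ²_0 on df_0. The function name and the example input values are illustrative, not from the paper.

```python
def comparative_fit_indexes(chi2_m, df_m, chi2_0, df_0):
    """Chi-square-based fit indexes for a target model (chi2_m, df_m)
    evaluated against a more restricted null model (chi2_0, df_0)."""
    # Normed fit index (Bentler & Bonett, 1980): relative reduction
    # in the test statistic itself.
    nfi = (chi2_0 - chi2_m) / chi2_0

    # Nonnormed fit index: compares fit per degree of freedom, so it
    # can stray outside the 0-1 range in small samples.
    nnfi = (chi2_0 / df_0 - chi2_m / df_m) / (chi2_0 / df_0 - 1.0)

    # Noncentrality parameter estimates: statistic minus its degrees of freedom.
    d_m = chi2_m - df_m
    d_0 = chi2_0 - df_0

    # FI: relative reduction in estimated noncentrality (unnormed).
    fi = 1.0 - d_m / d_0

    # CFI: normed version, truncating the noncentrality estimates at zero
    # so the index stays within [0, 1].
    denom = max(d_0, d_m, 0.0)
    cfi = 1.0 if denom == 0.0 else 1.0 - max(d_m, 0.0) / denom

    return {"NFI": nfi, "NNFI": nnfi, "FI": fi, "CFI": cfi}


# Illustrative values only: a well-fitting model against an independence baseline.
print(comparative_fit_indexes(chi2_m=85.2, df_m=80, chi2_0=900.0, df_0=105))
```

Note how CFI coincides with FI whenever both noncentrality estimates are positive; the two differ only when truncation at zero is needed, which is exactly the small-sample situation the abstract describes.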
As is well known, the goodness-of-fit test statistic T used in evaluating the adequacy of a structural model is typically referred to the chi-square distribution to determine acceptance or rejection of a specific null hypothesis, Σ = Σ(θ). In the context of covariance structure analysis, Σ is the population covariance matrix and θ is a vector of more basic parameters, for example, the factor loadings, factor intercorrelations, and unique variances in a confirmatory factor analysis. The statistic T reflects the closeness of Σ̂ = Σ(θ̂), based on the estimator θ̂, to the sample matrix S, the sample covariance matrix in covariance structure analysis, in the chi-square metric. Acceptance or rejection of the null hypothesis via a test based on T may be inappropriate or incomplete in model evaluation for several reasons:
- Some basic assumptions underlying T may be false, and the distribution of the statistic may not be robust to violations of these assumptions.