van Rijn et al. Large-scale Assessments in Education (2016) 4:10. DOI 10.1186/s40536-016-0025-3

Assessment of fit of item response theory models used in large-scale educational survey assessments

Peter W. van Rijn1, Sandip Sinharay2*, Shelby J. Haberman3 and Matthew S. Johnson4

*Correspondence: ssinharay@pacificmetrics.com
2 Pacific Metrics Corporation, Monterey, CA, USA. Full list of author information is available at the end of the article.
Introduction
Several large-scale educational survey assessments (LESAs), such as the United States National Assessment of Educational Progress (NAEP), the International Adult Literacy Study (IALS; Kirsch 2001), the Trends in Mathematics and Science Study (TIMSS; Martin and Kelly 1996), and the Progress in International Reading Literacy Study (PIRLS; Mullis et al. 2003), involve the use of item response theory (IRT) models for score-reporting purposes (e.g., Beaton 1987; Mislevy et al. 1992; Von Davier and Sinharay 2014).
Standard 4.10 of the Standards for Educational and Psychological Testing (American Educational Research Association 2014) recommends obtaining evidence of model fit when an IRT model is used to make inferences from a data set. In addition, because of the importance of the LESAs in educational policy-making in the U.S. and abroad, it is essential to assess the fit of the IRT models used in these assessments. Although several researchers have examined the fit of the IRT models in the context of LESAs (for example, Beaton 2003; Dresher and Thind 2007; Sinharay et al. 2010), there is substantial scope for further research on the topic (e.g., Sinharay et al. 2010).
This paper suggests two types of residuals to assess the fit of IRT models used in LESAs. One of them can be used to assess item fit and the other can be used to
assess other aspects of fit of these models. These residuals are computed for several simulated data sets and four operational NAEP data sets. The focus in the remainder of this paper will mostly be on NAEP.
The next section provides some background, describing the current NAEP IRT model and the existing NAEP IRT model-fit procedures. The Methods section describes our suggested residuals. The Data section describes data from four NAEP assessments. The Simulation section describes a simulation study that examined the Type I error rate and power of the suggested methods. The next section involves the application of the suggested residuals to the four NAEP data sets. The last section includes conclusions and suggestions for future research.
Background
The IRT model used in NAEP
Consider a NAEP assessment that was administered to students i, i = 1, 2, ..., N, with corresponding sampling weights W_i. The sampling weight W_i represents the number of students in the population that student i represents (e.g., Allen et al. 2001, pp. 161–225). Denote the p-dimensional latent proficiency variable for student i by θ_i = (θ_i1, θ_i2, ..., θ_ip). In NAEP assessments, p is between 1 and 5. Denote the vector of item scores for student i by y_i = (y_i1, y_i2, ..., y_ip), where the sub-vector y_ik contains the item scores y_ijk, j ∈ J_ik, of student i on the items (indexed by j) corresponding to the kth dimension/subscale that were presented to student i. For example, y_ik could be the scores of student i on the algebra questions presented to her on a mathematics test, and θ_ik could represent the student's proficiency variable for algebra. Because of the use of matrix sampling (a design in which each student is presented only a subset of all available items) in NAEP assessments, the algebra questions administered to student i are a subset J_ik of the set J_k of all available algebra questions on the test. The item score y_ijk can be an integer between 0 and r_jk > 0, where r_jk = 1 for dichotomous items and an integer larger than 1 for polytomous items with three or more score categories.
Let β_k denote the vector of item parameters of the items related to the kth subscale, and let β = (β_1, β_2, ..., β_p). For item j in J_ik, let f_jk(y|θ_ik, β_k) be the probability, given θ_ik and β_k, that y_ijk = y, 0 ≤ y ≤ r_jk, and let

\[ f_k(y_{ik} \mid \theta_{ik}, \beta_k) = \prod_{j \in J_{ik}} f_{jk}(y_{ijk} \mid \theta_{ik}, \beta_k). \tag{1} \]

Then f_k(y_ik|θ_ik, β_k) is the conditional likelihood of an examinee on the items corresponding to the kth subscale. Because of the between-item multidimensionality of the items (which refers to each item measuring only one subscale; e.g., Adams et al. 1997) used in NAEP, the conditional likelihood for a student i given θ_i is given by

\[ f(y_i \mid \theta_i, \beta) = \prod_{k=1}^{p} f_k(y_{ik} \mid \theta_{ik}, \beta_k) \equiv L(\theta_i; \beta; y_i). \tag{2} \]

In Eq. 2, the expressions f_k(y_ik|θ_ik, β_k) are defined by the particular IRT model used for analysis, which is usually the three-parameter logistic (3PL) model (Birnbaum 1968) for multiple-choice items, the two-parameter logistic (2PL) model (Birnbaum 1968) for dichotomous constructed-response items, and the generalized partial-credit model (GPCM; Muraki 1992) for polytomous constructed-response items. Parameter identification requires some linear constraints on the item parameters β or on the regression parameters Γ and Σ.
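To make the structure of Eqs. 1 and 2 concrete, the following minimal Python sketch evaluates the conditional likelihood of one response pattern under a between-item multidimensional model with 3PL items. It is only an illustration of the product structure above; the function names (p_3pl, conditional_likelihood) and all parameter values are hypothetical and are not taken from NAEP or from the paper's software.

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """3PL probability of a correct response at proficiency theta."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def conditional_likelihood(theta, items, responses):
    """Eq. (2): product over presented items of f_jk(y_ijk | theta_k, beta_k).

    theta     : array of length p (one proficiency per subscale)
    items     : list of dicts with keys 'k' (subscale index), 'a', 'b', 'c'
    responses : list of 0/1 scores for the presented items only
    """
    lik = 1.0
    for item, y in zip(items, responses):
        p = p_3pl(theta[item["k"]], item["a"], item["b"], item["c"])
        lik *= p if y == 1 else (1.0 - p)   # Eq. (1), dichotomous case
    return lik

# Illustrative example: two subscales, three presented dichotomous items
items = [{"k": 0, "a": 1.2, "b": -0.5, "c": 0.2},
         {"k": 0, "a": 0.8, "b":  0.3, "c": 0.2},
         {"k": 1, "a": 1.5, "b":  1.0, "c": 0.1}]
print(conditional_likelihood(np.array([0.4, -0.2]), items, [1, 0, 1]))
```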
Associated with student i is a vector z_i = (z_i1, z_i2, ..., z_iq) of background variables. It is assumed that y_i and z_i are conditionally independent given θ_i and, for some p by q matrix Γ and some p by p positive-definite symmetric matrix Σ, the latent regression assumption is made that

\[ \theta_i \mid z_i \sim N_p(\Gamma z_i, \Sigma), \tag{3} \]

where N_p denotes the p-dimensional normal distribution (e.g., Beaton 1987; Von Davier and Sinharay 2014). Let φ(θ; Γz_i, Σ) denote the density at θ of the above normal distribution.
Weighted maximum marginal likelihood estimates may then be computed for the item parameters β, the matrix Γ, and the matrix Σ by maximizing the weighted log likelihood

\[ \ell(\beta, \Gamma, \Sigma) = \sum_{i=1}^{N} W_i \log \int L(\theta; \beta; y_i)\, \varphi(\theta; \Gamma z_i, \Sigma)\, d\theta. \tag{4} \]
The resulting estimates are β̂ = (β̂_1, ..., β̂_p), Γ̂, and Σ̂. The model described by Eqs. 1–4 will henceforth be referred to as the NAEP operational model or the NAEP model. The model given by Eqs. 1 and 2 is often referred to as the NAEP measurement model or the IRT part of the NAEP model.
In this report, a limited version of the NAEP model is used in which no background variables are employed, and θ_i is assumed to have a multivariate normal distribution with means equal to 0 and constraints on the covariance matrix, so that the variances are equal to 1 if the covariances are all zero. Consideration of only this limited model allows us to focus on the fit of the IRT part (given by Eqs. 1 and 2) of the NAEP model rather than on the latent regression part (given by Eq. 3). If z_i has a normal distribution, then this limited model is consistent with the general NAEP model. The suggested residuals can be extended to the case in which background variables are considered. These extensions are already present in the software employed in the analysis in this paper. For example, residuals involving item responses and background variables can indicate whether the covariances of item responses and background variables are consistent with the IRT model. The extensions can also examine whether problems with residual behavior found when ignoring background variables are removed by including background variables in the model. This possibility is present if background variables that are not normally distributed are related to latent variables.
Existing tools for assessment of fit of the NAEP IRT model
The assessment of fit of the NAEP IRT model is more complicated than that of the IRT models used in other testing programs because of several unique features of NAEP. Two such major features are:
- Because NAEP involves matrix sampling, common model-fit tools such as the item-fit statistics of Orlando and Thissen (2000), which are appropriate when all items are administered to all examinees, cannot be applied without modification.
- Complex sampling involves both the sampling weights W_i and departures from customary assumptions of independent examinees, due to sampling from finite populations and due to first sampling schools and then sampling students within schools.
However, there exist several approaches for assessing the fit of the NAEP IRT model.
The primary model-fit tools used in the NAEP operational analyses are graphical item-fit analyses using residual plots and a related χ²-type item-fit statistic (Allen et al. 2001, p. 233), which provide guidelines for treating the items (such as collapsing categories of polytomous items, treating adjacent-year data separately in concurrent calibration, or dropping items from the analysis). However, the null distributions¹ of these residuals and of the χ²-type statistic are unknown (Allen et al. 2001, p. 233).

¹ The null distribution refers to the distribution under perfect model fit.

Differential item functioning (DIF) analyses are also used in NAEP operational analysis to examine one aspect of multidimensionality (Allen et al. 2001, p. 233). In addition, the differences between the observed and model-predicted proportions of students obtaining a particular score on an item (Rogers et al. 2006) are also examined in NAEP operational analyses. However, the standard deviation of the difference is not used to evaluate whether a difference can be considered large enough to matter. It would be useful to incorporate this variability in the comparison of the observed and predicted proportions. As will become clear later, our proposed approach addresses this issue.
Beaton (2003) suggested the use of item-fit measures involving weighted sums and weighted sums of squared residuals obtained from the responses of the students to each question of NAEP. Let μ̂_ijk be the estimated conditional expectation of y_ijk given z_i based on the parameter estimates β̂, Γ̂, and Σ̂, and let σ̂_ijk be the corresponding estimated conditional standard deviation of y_ijk. Let us consider item j that measures the kth subscale, and let K_jk be the set of students i who were presented the item. Beaton's fit indices are of the forms

\[ \sum_{i \in K_{jk}} W_i \frac{y_{ijk} - \hat{\mu}_{ijk}}{\hat{\sigma}_{ijk}} \quad \text{and} \quad \sum_{i \in K_{jk}} W_i \frac{(y_{ijk} - \hat{\mu}_{ijk})^2}{\hat{\sigma}^2_{ijk}}. \]

The bootstrap method (e.g., Efron and Tibshirani 1993) is used to approximate the null distribution of these statistics. Li (2005) applied Beaton's statistics to operational NAEP data sets to determine the effect of accommodations for students with disabilities. Dresher and Thind (2007) applied Beaton's statistics to 2003 NAEP and 1999 TIMSS data. They also employed the χ²-type item-fit statistic provided by the NAEP version of the PARSCALE program, but they computed the null distribution of all statistics from their values for one simulated data set. However, these methods have their limitations: one simulated data set is inadequate to reflect a null distribution, and the bootstrap method involved in the approach of Li (2005) is computationally intensive.
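As an illustration, the following minimal Python sketch computes the two indices displayed above for a single item; the inputs (estimated conditional expectations, standard deviations, and sampling weights) are assumed to be supplied by a fitted model, and all names and values are hypothetical. As described above, the null distribution of the indices would still have to be approximated, for example by the bootstrap.

```python
import numpy as np

def beaton_indices(y, mu_hat, sigma_hat, w):
    """Beaton-type fit indices for one item: weighted sum of standardized
    residuals and weighted sum of squared standardized residuals, computed
    over the students who were presented the item."""
    z = (y - mu_hat) / sigma_hat            # standardized residuals
    return np.sum(w * z), np.sum(w * z ** 2)

# Illustrative values for five students presented with one dichotomous item
y         = np.array([1, 0, 1, 1, 0])                 # observed scores y_ijk
mu_hat    = np.array([0.8, 0.4, 0.6, 0.7, 0.3])       # estimated conditional expectations
sigma_hat = np.sqrt(mu_hat * (1 - mu_hat))            # estimated conditional SDs (dichotomous case)
w         = np.array([1.2, 0.8, 1.0, 1.1, 0.9])       # sampling weights W_i
print(beaton_indices(y, mu_hat, sigma_hat, w))
```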
Sinharay et al. (2010) suggested a simulation-based model-fit technique similar to the bootstrap method (e.g., Efron and Tibshirani 1993) to assess the fit of the NAEP statistical model. However, their suggested statistics were computed at the booklet level rather than for the whole data set, and the p-values of the statistics under the null hypothesis of no misfit did not always follow a uniform distribution and were smaller than expected.
The above review shows that there is a need for further research on the assessment of fit of the NAEP IRT model. We address that need by suggesting two new types of residuals to assess the fit of the NAEP IRT model.
Methods
We suggest two types of methods for the assessment of absolute fit of the NAEP IRT model: (1) item-fit analysis using residuals and (2) generalized residual analysis. We also report results from comparisons between different IRT models. For comparisons between IRT models, one can use the estimated expected log penalty per presented item, which is given by

\[ \mathrm{PE} = \frac{-\ell(\hat{\beta})}{\sum_{i=1}^{N} W_i \sum_{k=1}^{p} c(J_{ik})}, \tag{5} \]

where ℓ(β̂) is the log-likelihood of the measurement model and c(J_ik) is the number of items in J_ik. We make use of a slightly different version developed by Gilula and Haberman (1995), which is given by

\[ \mathrm{PE\text{-}GH} = \frac{-\ell(\hat{\beta}) + \mathrm{tr}(\hat{V}\hat{H}^{-1})}{\sum_{i=1}^{N} W_i \sum_{k=1}^{p} c(J_{ik})}, \tag{6} \]

where Ĥ is the estimated Hessian matrix of the weighted log likelihood, V̂ is the estimated covariance matrix of the weighted log likelihood, and tr(M) denotes the trace of the matrix M. The matrices Ĥ and V̂ are based on the parameter estimates β̂, Γ̂, and Σ̂. A smaller value of PE or PE-GH indicates better model performance in terms of prediction of the observed response patterns. For a discussion of the interpretation of differences between estimated expected log penalties per presented item for different models, see Gilula and Haberman (1994). In applications comparable to those in this paper, changes in value of 0.001 are small, a change of 0.01 is of moderate size, and a change of 0.1 is quite large.
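A minimal sketch of Eqs. 5 and 6 in Python is given below. It applies the formulas literally; the log likelihood, Hessian, and covariance matrix are assumed to be supplied by the estimation software, the sign convention of the Hessian must match the one intended in Eq. 6, and all names and numbers are illustrative rather than taken from the paper's software.

```python
import numpy as np

def log_penalties(loglik, hessian, cov, weights, n_items_presented):
    """Estimated expected log penalties per presented item, Eqs. (5)-(6).

    loglik            : maximized weighted log likelihood of the measurement model
    hessian, cov      : estimated Hessian and covariance matrix of the weighted
                        log likelihood at the parameter estimates (sign convention
                        assumed to match Eq. 6)
    weights           : sampling weights W_i
    n_items_presented : number of presented items per student, sum_k c(J_ik)
    """
    denom = np.sum(np.asarray(weights, float) * np.asarray(n_items_presented, float))
    pe = -loglik / denom
    pe_gh = (-loglik + np.trace(cov @ np.linalg.inv(hessian))) / denom
    return pe, pe_gh

# Illustrative call with made-up values
H = np.eye(3) * 50.0      # stand-in for the estimated Hessian-based information
V = np.eye(3) * 55.0      # stand-in for the estimated covariance matrix
print(log_penalties(loglik=-3200.0, hessian=H, cov=V,
                    weights=np.ones(100), n_items_presented=np.full(100, 40)))
```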
In the case of complex sampling, σ̂² is evaluated based on variance formulas appropriate for the sampling method used. For a random variable X with sampled values X_i, 1 ≤ i ≤ N, and sample weights W_i > 0, 1 ≤ i ≤ N, let W₊ = Σ_{i=1}^N W_i. Let the asymptotic variance σ²(X̄) of the weighted average

\[ \bar{X} = W_+^{-1} \sum_{i=1}^{N} W_i X_i \]

be estimated by σ̂²(X̄). For example, in simple random sampling with replacement and sampling weights W_i = 1, σ̂²(X̄) can be set to n^{-2} Σ_{i=1}^N (X_i − X̄)² or [n(n−1)]^{-1} Σ_{i=1}^N (X_i − X̄)². With the sampling weights W_i randomly drawn together with the X_i, σ̂²(X̄) is given by

\[ \hat{\sigma}^2(\bar{X}) = W_+^{-2} \sum_{i=1}^{N} W_i^2 (X_i - \bar{X})^2. \]

Numerous other formulas of this kind are available in sampling theory (Cochran 1977). Attention is confined here to standard cases in which [X̄ − E(X)]/σ̂(X̄) has a standard normal distribution in large samples. The software used in this paper (Haberman 2013) treats simple random sampling with replacement, simple stratified random sampling with replacement, two-stage random sampling with both stages with replacement, and stratified random sampling in which, within each stratum, two-stage random sampling is employed with both stages with replacement. To analyze NAEP data, the software (Haberman 2013) computes the asymptotic variance under the assumption that the sampling procedure is two-stage sampling with schools as primary sampling units;² variance formulas for this case can be found, for example, in Cochran (1977, pp. 301–309).

² The software does not employ a finite population correction that is typically used when the sampling is without replacement, as in NAEP; this is a possible area of future research. It is anticipated that the finite population correction would not affect our results because of the large sample sizes in NAEP.
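A minimal Python sketch of the two variance estimators just described follows. It assumes with-replacement sampling and ignores stratification, clustering, and finite-population corrections; the function name and example values are illustrative.

```python
import numpy as np

def weighted_mean_variance(x, w=None):
    """Estimate the (weighted) mean X_bar and its asymptotic variance.

    With all weights equal to 1 this uses the simple-random-sampling estimator
    [n(n-1)]^{-1} * sum (X_i - X_bar)^2; with weights drawn together with the
    X_i it uses W_+^{-2} * sum W_i^2 (X_i - X_bar)^2.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    if w is None:
        xbar = x.mean()
        return xbar, np.sum((x - xbar) ** 2) / (n * (n - 1))
    w = np.asarray(w, dtype=float)
    w_plus = w.sum()
    xbar = np.sum(w * x) / w_plus
    return xbar, np.sum(w ** 2 * (x - xbar) ** 2) / w_plus ** 2

# Illustrative call with made-up data and weights
print(weighted_mean_variance([0.4, 0.7, 0.2, 0.9], [1.2, 0.8, 1.5, 1.0]))
```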
Item-fit analysis using residuals
To assess item fit, Bock and Haberman (2009) and Haberman et al. (2013) employed a form of residual analysis in the context of regular IRT applications (which do not involve complex sampling or matrix sampling) that involves a comparison of two approaches to the estimation of the item response function.
For item j in J_k and a non-negative response value y ≤ r_jk, let f̂_jk(y|θ) denote the value of f_jk(y|θ_k, β_k) with β_k replaced by β̂_k, for θ = (θ_1, ..., θ_p). For example, for the two-parameter logistic (2PL) model, f̂_jk(1|θ) is equal to

\[ \frac{\exp[\hat{a}_{jk}(\theta_k - \hat{b}_{jk})]}{1 + \exp[\hat{a}_{jk}(\theta_k - \hat{b}_{jk})]}, \]

where â_jk and b̂_jk are the respective estimated item discrimination and difficulty parameters for the item. Let π̂_i(θ) be the estimated posterior density at θ of θ_i given y_i and z_i. Let ỹ_ijk be 1 if y_ijk = y and 0 otherwise, and let

\[ \tilde{f}_{jk}(y \mid \theta) = \frac{\sum_{i \in K_{jk}} W_i\, \hat{\pi}_i(\theta)\, \tilde{y}_{ijk}}{\sum_{i \in K_{jk}} W_i\, \hat{\pi}_i(\theta)}. \tag{7} \]

Thus, as in Haberman et al. (2013), f̂_jk(1|θ) can be considered an estimated unconditional expectation of the item score at θ, and f̃_jk(y|θ) can be considered an estimated conditional expectation of the item score at θ, conditional on the data. If the IRT model fits the data, then both f̂_jk(y|θ) and f̃_jk(y|θ) converge to f_jk(y|θ_k, β_k) as the sample size increases.
Then the residual of item j at θ, which measures the standardized difference between f̂_jk(y|θ) and f̃_jk(y|θ), is defined as

\[ t_{jk}(y \mid \theta) = \frac{\tilde{f}_{jk}(y \mid \theta) - \hat{f}_{jk}(y \mid \theta)}{s_{jk}(y \mid \theta)}, \tag{8} \]

where s_jk(y|θ) is found by use of gradients of components of the log likelihood. Let q be a positive integer, and let T be a nonempty open set of q-dimensional vectors. Let continuously differentiable functions β(·), Γ(·), and Σ(·) on T and unique τ and τ̂ in T exist such that β(τ) = β, Γ(τ) = Γ, Σ(τ) = Σ, β(τ̂) = β̂, Γ(τ̂) = Γ̂, and Σ(τ̂) = Σ̂. For each student i, let h_i be a continuously differentiable function on T such that, for each τ in T, h_i(τ) = log ∫ L(θ; β(τ); y_i) φ(θ; Γ(τ)z_i, Σ(τ)) dθ. Let ∇h_i be the gradient function of h_i, and let g_i = ∇h_i(τ̂). Let γ̂_yjk(θ) and δ̂_yjk(θ) minimize the residual sum of squares

\[ \sum_{i \in K_{jk}} W_i\, [d_{yijk}(\theta)]^2 \quad \text{for} \quad d_{yijk}(\theta) = \hat{\pi}_i(\theta)\,[\tilde{y}_{ijk} - \hat{f}_{jk}(y \mid \theta)] - \gamma_{yjk}(\theta) - [\delta_{yjk}(\theta)]' g_i \]

(Haberman et al. 2013, Eq. 46). Then s_jk(y|θ) is the estimated standard deviation σ̂(X̄) for X_i = d_yijk(θ) for i in K_jk and X_i = 0 for i not in K_jk.
If the model holds and the sample size is large, then t_jk(y|θ) has an approximate standard normal distribution. Arguments used in Haberman et al. (2013) for simple random sampling without replacement (where all W_i = 1, p = 1, and all J_ik are equal) apply virtually without change to the sampling procedures under study. The asymptotic variance estimate σ̂²(X̄) is simply computed for X_i = d_yijk(θ) for i in K_jk and X_i = 0 for i not in K_jk, based on the complex sampling procedure used for the data. If the model does not fit the data and the sample is large, then the proportion of statistically significant residuals t_jk(y|θ) will be much larger than the nominal level.
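To fix ideas, the simplified Python sketch below contrasts the two estimates of the item response function at a grid of θ values: the model-based f̂_jk(1|θ) under a 2PL and the posterior-weighted f̃_jk(1|θ) of Eq. 7. It assumes the posterior densities π̂_i(θ) at the grid points are available from the fitted model, and it uses a naive binomial-type standard error in place of the gradient-adjusted s_jk(1|θ) described above, so the resulting residuals are only indicative; all names and the example data are hypothetical.

```python
import numpy as np

def icc_2pl(theta, a, b):
    """Model-based f_hat_jk(1|theta) under the 2PL."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_fit_residuals(theta_grid, post, y, w, a, b):
    """Compare model-based and posterior-weighted ICCs at each grid point.

    theta_grid : (Q,) grid of proficiency values
    post       : (n, Q) estimated posterior densities pi_hat_i(theta) at the grid
    y          : (n,) 0/1 scores of the students presented with the item
    w          : (n,) sampling weights
    a, b       : estimated discrimination and difficulty of the item
    """
    f_hat = icc_2pl(theta_grid, a, b)                      # estimated ICC
    num = (w[:, None] * post * y[:, None]).sum(axis=0)
    den = (w[:, None] * post).sum(axis=0)
    f_tilde = num / den                                    # Eq. (7)
    # Naive standard error; the paper's s_jk uses a gradient-based adjustment instead.
    n_eff = den ** 2 / (w[:, None] ** 2 * post ** 2).sum(axis=0)
    se = np.sqrt(f_tilde * (1 - f_tilde) / n_eff)
    return (f_tilde - f_hat) / se                          # rough analogue of Eq. (8)

# Illustrative call with simulated stand-ins for posterior densities and data
rng = np.random.default_rng(0)
theta_grid = np.linspace(-3, 3, 31)
post = rng.dirichlet(np.ones(31), size=200)
y = rng.integers(0, 2, size=200)
w = rng.uniform(0.5, 1.5, size=200)
print(item_fit_residuals(theta_grid, post, y, w, a=1.1, b=0.2).round(2))
```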
As in Haberman et al. (2013), one can create plots of item fit using the above residuals. Figure 1 shows examples of such plots for two dichotomous items. In each case, p = 1. For each item, the examinee proficiency θ is plotted along the x-axis, the solid line denotes the values of the estimated ICC, that is, f̂_j1(1|θ) from Eq. 8 for the item and for the vector θ with the single element θ_1, and the two dashed lines denote a pointwise 95% confidence band consisting of the values f̃_j1(1|θ) − 2s_j1(1|θ) and f̃_j1(1|θ) + 2s_j1(1|θ), where f̃_j1(1|θ) is given by Eq. 7. If the solid line falls outside this confidence band, that indicates a statistically significant residual. These plots are similar to the plots of item fit provided by IRT software packages such as PARSCALE (Du Toit 2003). In Fig. 1, the right panel corresponds to an item for which substantial misfit is observed, and the left panel corresponds to an item for which no statistically significant misfit is observed (the solid line almost always lies within the 95% confidence band).
The ETS mirt software (Haberman 2013) was used to compute the residuals for item fit for the NAEP data sets. The program is available on request for noncommercial use.
This item-fit analysis can be considered a more sophisticated version of the graphical item-fit analysis operationally employed in NAEP. While the asymptotic distribution of the residuals is not known in the analysis employed operationally, it is known in our proposed item-fit analysis.
Generalized residual analysis
Generalized residual analysis for assessing the fit of IRT models in regular applications (which do not involve complex sampling or matrix sampling) was suggested by Haberman (2009) and Haberman and Sinharay (2013). The methodology is very general, and a variety of model-based predictions can be examined within this framework.
For a version of generalized residuals suitable for applications in NAEP, for student i, let Y_i be the set of possible values of y_i and let e_i(y, z_i) be a real number, where z_i is the vector of covariates. Let ỹ_i be a random variable with values in Y_i such that y_i and ỹ_i are conditionally independent given θ_i and have the same conditional distribution. Let

\[ O = W_+^{-1} \sum_{i=1}^{N} W_i\, e_i(y_i, z_i), \]

let ε̂_i(y_i, z_i) be the estimated conditional expectation of e_i(ỹ_i, z_i) given y_i and z_i, and let

\[ E = W_+^{-1} \sum_{i=1}^{N} W_i\, \hat{\varepsilon}_i(y_i, z_i). \]

Let γ̂ and δ̂ minimize Σ_{i=1}^N W_i d̂_i² for

\[ \hat{d}_i = e_i(y_i, z_i) - \hat{\varepsilon}_i(y_i, z_i) - \hat{\gamma} - \hat{\delta}' g_i, \]

where g_i = ∇h_i(τ̂) is defined as in the previous subsection. Let s² be the estimate σ̂²(X̄) for X_i = d̂_i. Then the generalized residual is

\[ t = (O - E)/s. \]

Under very general conditions, if the model holds, then t has an asymptotic standard normal distribution (Haberman and Sinharay 2013). A fundamental requirement is that the dimension q is small relative to N. A normal approximation must be appropriate for d̄, where d̄ has values d_i, 1 ≤ i ≤ N, such that

\[ d_i = e_i(y_i, z_i) - E(e_i(\tilde{y}_i, z_i) \mid y_i, z_i) - \gamma - \delta' \nabla h_i(\tau), \]

where (γ, δ) minimizes Σ_{i=1}^N E(W_i d_i²). A statistically significant absolute value of the generalized residual t indicates that the IRT model does not adequately predict the statistic O.
The method is quite flexible. Several common data summaries, such as the item proportion correct, the proportion simultaneously correct for a pair of items, and the observed score distribution, can be expressed as the statistic O by defining e_i(y_i, z_i) appropriately. For example, to study the number-correct score or the first-order marginal distribution of a dichotomous item j related to skill k, let

\[ e_i(y_i, z_i) = \begin{cases} 1 & \text{if } i \text{ is in } K_{jk} \text{ and } y_{ijk} = 1, \\ 0 & \text{otherwise.} \end{cases} \]

Then O is the weighted proportion of students who correctly answer item j of subscale k, and t indicates whether O is consistent with the IRT model. Because generalized residuals for marginal distributions include variability computations based on the IRT model employed, they provide a more rigorous comparison of observed and model-predicted proportions of students obtaining a particular score on an item than that provided in Rogers et al. (2006).
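A simplified Python sketch of such a first-order-marginal residual is given below: O is the weighted proportion correct among the students presented with the item, E is the corresponding model-based (posterior-expected) proportion, and the standard error uses the weighted-mean variance formula given earlier while omitting the regression adjustment on the log-likelihood gradients described above, so it is only an approximation of the paper's residual; all names and values are hypothetical.

```python
import numpy as np

def generalized_residual_first_order(y, e_hat, w):
    """Simplified generalized residual t = (O - E)/s for one dichotomous item.

    y     : (n,) 0/1 observed scores for students presented with the item
    e_hat : (n,) estimated conditional expectations of the item score given y_i, z_i
    w     : (n,) sampling weights
    """
    w_plus = w.sum()
    O = np.sum(w * y) / w_plus            # observed weighted proportion correct
    E = np.sum(w * e_hat) / w_plus        # model-predicted proportion
    d = y - e_hat                         # crude residuals (no gradient adjustment)
    s = np.sqrt(np.sum(w ** 2 * (d - np.sum(w * d) / w_plus) ** 2)) / w_plus
    return (O - E) / s

# Illustrative call with made-up values
y     = np.array([1, 0, 1, 1, 0, 1])
e_hat = np.array([0.7, 0.4, 0.8, 0.6, 0.5, 0.9])
w     = np.array([1.0, 1.3, 0.7, 1.1, 0.9, 1.0])
print(generalized_residual_first_order(y, e_hat, w))
```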
For an example of a pairwise number-correct score or the second-order marginal distribution, if j and j′ are distinct dichotomous items related to skill k, let

\[ e_i(y_i, z_i) = \begin{cases} 1 & \text{if } i \text{ is in } K_{jk} \cap K_{j'k} \text{ and } y_{ijk} = y_{ij'k} = 1, \\ 0 & \text{otherwise.} \end{cases} \]

Then O is the weighted proportion of students who correctly answer both item j and item j′, and t indicates whether O is consistent with the IRT model. Residuals for the second-order marginal may be used to detect violations of the conditional independence assumption made by IRT models (Haberman and Sinharay 2013).
It is possible to create graphical plots using these generalized residuals. For example, one can create a plot showing the values of O and a 95% confidence interval given by E ± 1.96s. A value of O lying outside this confidence interval would indicate a generalized residual significant at the 5% level. The ETS mirt software (Haberman 2013) was used to perform the computations for the generalized residuals.
Assessment of practical significance of misfit of IRT models
George Box commented that "all models are wrong" (Box and Draper 1987, p. 74). Similarly, Lord and Novick (1968, p. 383) wrote that it can be taken for granted that every model is false and that we can prove it so if we collect a sufficiently large sample of data. According to them, the key question, then, is the practical utility of the model, not its ultimate truthfulness. Sinharay and Haberman (2014) therefore recommended the assessment of the practical significance of misfit, which comprises the determination of the extent to which the decisions made from the test scores are robust against the misfit of the IRT models. We assess the practical significance of misfit in all of our data examples.
The quantities of most practical interest among those that are operationally reported in NAEP are the subgroup means and the percentages at different proficiency levels. We examine the effect of misfit on these quantities.
Data
We next describe data from four NAEP assessments that are used to demonstrate our suggested residuals. These data sets represent a variety of NAEP assessments.
NAEP 2004 and 2008 long-term trend mathematics assessment at age 9
The long-term trend Mathematics assessment at age 9 (LTT Math Age 9; see, e.g., Rampey et al. 2009) is supposed to measure students'
- knowledge of basic mathematical facts,
- ability to carry out computations using paper and pencil,
- knowledge of basic measurement formulas as they are applied in geometric settings, and
- ability to apply mathematics to daily-living skills (such as those related to time and money).
The assessment has a computational focus and contained a total of 161 dichotomous multiple-choice and constructed-response items divided over nine booklets. For example, a multiple-choice question in the assessment, which was answered correctly by 44% of the examinees, is "How many fifths are equal to one whole?" This assessment has many more items per student than the usual NAEP assessments. The items covered the following topics: numbers and numeration; measurement; shape, size, and position; probability and statistics; and variables and relationships. The data set included about 16,000 examinees, most of whom were in Grade 4, with about 7300 students in the 2004 assessment and about 8600 students in the 2008 assessment. It is assumed in the operational analysis that there is only one skill underlying the items.
NAEP 2002 and 2005 reading at grade 12
The NAEP Reading Grade 12 assessment (e.g., Perie et al. 2005) measures the reading and comprehension skills of students in grade 12 by asking them to read selected grade-appropriate passages and answer questions based on what they have read. The assessment measures three contexts for reading: reading for literary experience, reading for information, and reading to perform a task. The assessment contained a total of 145 multiple-choice and constructed-response items divided over 38 booklets. Multiple-choice items were designed to test students' understanding of the individual texts, as well as their ability to integrate and synthesize ideas across the texts. Constructed-response items were based on consideration of the texts the students read. Each student read approximately two passages and responded to questions about what he or she read. The data set included about 26,800 examinees, with 14,700 students from the 2002 sample and 12,100 students from the 2005 sample. It is assumed that there are three skills (or subscales) underlying the items, one corresponding to each of the three contexts.
NAEP 2007 and 2009 mathematics at grade 8
The NAEP Mathematics Grade 8 assessment measures students' knowledge and skills in mathematics and students' ability to apply their knowledge in problem-solving situations. It is assumed that each item measures one of the five following skills (subscales): number properties and operations; measurement; geometry; data analysis, statistics, and probability; and algebra. This Mathematics Grade 8 assessment (e.g., National Center for Education Statistics 2009) included 231 multiple-choice and constructed-response items divided over 50 booklets. The full data set included about 314,700 examinees, with 153,000 students from the 2007 sample and 161,700 students from the 2009 sample.
NAEP 2009 science at grade 12
The NAEP 2009 Science Grade 12 assessment (e.g., National Center for Education Statistics 2011) included 185 multiple-choice and constructed-response items on physical science, life science, and earth and space science divided over 55 booklets. It is assumed that there is one skill underlying the items. The data set included about 11,100 examinees.
Results for simulated data
In order to check the Type I error of the item-fit residuals and of the generalized residuals for first-order and second-order marginals, we simulated data that look like the above-mentioned NAEP data sets but fit the model perfectly. The simulations were performed on a subset of examinees for Mathematics Grade 8 because of the huge sample size for that test. We used the item-parameter estimates from our analyses of the NAEP data sets using the constrained 3PL/GPCM (see Table 4). Values of θ were drawn from a normal distribution with unit variance and with separate population means for the assessments with two years. The original booklet design was used, but sampling weights and primary sampling units were not used.
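A minimal sketch of the data-generating step in such a Type I error study is given below: proficiencies are drawn from a normal distribution with a year-specific mean and unit variance, and item scores are generated from the fitted item response functions for the items in each student's booklet. The 2PL is used here only for brevity, and sampling weights and primary sampling units are omitted, as in the simulation described above; all parameter values and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_booklet_data(a, b, booklet_items, n_per_booklet, mean=0.0):
    """Simulate dichotomous 2PL responses under a matrix-sampling design.

    a, b          : arrays of estimated discrimination and difficulty parameters
    booklet_items : list of arrays, the item indices contained in each booklet
    n_per_booklet : number of simulated students per booklet
    mean          : population mean of theta (unit variance assumed)
    """
    data = []
    for items in booklet_items:
        theta = rng.normal(mean, 1.0, size=n_per_booklet)
        p = 1.0 / (1.0 + np.exp(-a[items] * (theta[:, None] - b[items])))
        data.append(rng.binomial(1, p))     # scores for presented items only
    return data

# Illustrative design: 20 items spread over 3 overlapping booklets
a = rng.uniform(0.5, 2.0, size=20)
b = rng.normal(0.0, 1.0, size=20)
booklets = [np.arange(0, 10), np.arange(5, 15), np.arange(10, 20)]
sim = simulate_booklet_data(a, b, booklets, n_per_booklet=100)
print([block.shape for block in sim])
```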
Item-fit analysis using residuals
Type I error rates
For the item-fit residuals, the average Type I error rates at the 5% level of significance over 25 replications, evaluated at either 11 or 31 points, are shown in Table 1. The rates are considerably larger than the nominal level.
To explore the inflated Type I error rates of the item-fit residuals, Fig. 2 shows the Type I error rates as a function of θ, together with the test information functions, for the four NAEP data sets.
Table 1  Average Type I error for generalized residuals of item response functions

Assessment          Average Type I error
                    11 points   31 points
LTT math age 9          12          10
Reading grade 12        10           9
Math grade 8             9           8
Science grade 12        14          14
The item-fit residuals were computed at either 11 or 31 points between −3 and 3. It can be seen that more residuals are significant for larger absolute values of θ, which is in line with earlier results (see, e.g., Haberman et al. 2013; Fig. 2). In addition, there is a relationship between the Type I error rates and the test information function; Type I error rates become larger as information goes down. Obviously, there is a relationship between the Type I error rates and the sample sizes. This is best seen for the Science Grade 12 data set (last row), which is the smallest data set (for which the sample size is about 11,000, with an average of about 1900 responses per item); first, the Type I error rates for item-fit residuals computed at 11 points get closer to the nominal level of .05 than those at 31 points; second, the peak of the test information function is between θ = 1 and θ = 2, indicating that the items are relatively difficult (note that the mean and standard deviation of θ are fixed to zero and one for model identification purposes). Given that there are not many students with θ > 2, and that even for these students the items can still be relatively difficult, the Type I error rate shows a steep incline between θ = 2 and θ = 3.
Thus, it can be concluded that the Type I error rates for the item-fit residuals are close to their nominal value if there are enough students and if there is substantial information in the ability range of interest.
Power
The samples of all four NAEP data sets are large and, therefore, the power to detect misfit is generally expected to be large; however, we performed an additional power analysis for the item-fit residuals using the item parameters of the LTT Math Age 9 assessment. Note that this assessment consists of dichotomous items only.
For the item-response functions, we consider four bad/misfitting item types (see, e.g., Sinharay 2006, p. 441):

1. Non-monotone for low θ: p(Y = 1|θ) = 0.25 logit⁻¹(−4.25(θ + 0.5)) + logit⁻¹(4.25(θ − 1)).
2. Upper asymptote smaller than 1: p(Y = 1|θ) = 0.7 logit⁻¹(3.4(θ + 0.5)).
3. Flat for mid θ: p(Y = 1|θ) = 0.55 logit⁻¹(5.95(θ + 1)) + 0.45 logit⁻¹(5.95(θ − 2)).
4. Wiggly, non-monotone curve: p(Y = 1|θ) = 0.65 logit⁻¹(1.5θ) + 0.35 logit⁻¹(sin(3θ)).

The item response functions for these four item types are shown in Fig. 3. The LTT Math Age 9 assessment consists of 161 items. We assigned 16 items to be bad items, with each type associated with four items. The simulations are set up in the same way as before, but with the item probabilities for the bad items determined by the equations above.
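A short Python sketch of the four misfitting response functions as written above follows (expit plays the role of logit⁻¹); it is intended only to make the shapes of the curves easy to inspect, and it follows the formulas as given above.

```python
import numpy as np

def expit(x):
    """expit(x) = logit^{-1}(x)."""
    return 1.0 / (1.0 + np.exp(-x))

def bad_item_1(theta):
    """Non-monotone for low theta."""
    return 0.25 * expit(-4.25 * (theta + 0.5)) + expit(4.25 * (theta - 1.0))

def bad_item_2(theta):
    """Upper asymptote smaller than 1."""
    return 0.7 * expit(3.4 * (theta + 0.5))

def bad_item_3(theta):
    """Flat for mid theta."""
    return 0.55 * expit(5.95 * (theta + 1.0)) + 0.45 * expit(5.95 * (theta - 2.0))

def bad_item_4(theta):
    """Wiggly, non-monotone curve."""
    return 0.65 * expit(1.5 * theta) + 0.35 * expit(np.sin(3.0 * theta))

theta = np.linspace(-3, 3, 31)
for f in (bad_item_1, bad_item_2, bad_item_3, bad_item_4):
    p = f(theta)
    assert np.all((p >= 0) & (p <= 1))   # each curve is a valid probability
```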
The number of replications is 25. The item response functions are again evaluated at 31 points between −3 and 3.
The power for the four bad item types is shown in Table 2. We computed two values of power for each bad item type: one for the 31 points between θ = −3 and θ = 3 and one for the 21 points between θ = −2 and θ = 2. As expected, both values of power are satisfactory for each bad item type. For bad items of type 4, the values of power are smaller than for the other bad item types, but this is due to the fact that the IRT model can approximate the wiggly curve reasonably well.

Table 2  Power of the item-fit residuals for LTT math age 9 simulations

Item type     Mean (−3 to 3)   Mean (−2 to 2)
Bad item 1          74               84
Bad item 2          96               94
Bad item 3          86               90
Bad item 4          52               68

In addition, we simulated data under the 1PL, 2PL, and 3PL models and fitted the 1PL model to all three data sets, the 2PL model to the data generated under the 2PL and 3PL, and the 3PL model to the data generated under the 3PL only. This setup gives us additional Type I error rates for other model types and power for the situation in which the fitted model is simpler than the data-generating model (see, e.g., Sinharay 2006; Table 1).
The results of these simulations are shown in Table 3. In this table, the diagonals indicate Type I error and the off-diagonals indicate power. The Type I error rates are inflated, which is in line with the previous results. The power to detect misfit of the 1PL is very reasonable, but it is quite low for the 2PL.

Table 3  Type I error (diagonals) and power (off-diagonals) of item-fit residuals for different model combinations for LTT math age 9

                 Data-generating model
Fitted model      1PL    2PL    3PL
1PL                 9     65     66
2PL                       11     27
3PL                              20
Generalized residual analysis
For first-order marginals, or the (weighted) proportion of students who correctly answer the dichotomous items or receive a specific score on a polytomous item, we used 25 replications for each of the four NAEP data sets. For second-order marginals,³ however, we used only five replications, because the computation of these residuals for a single data set is very time consuming (several hours).

³ Or the weighted proportion of students who correctly answer a pair of dichotomous items or receive a specific pair of scores on a pair of items one of which is polytomous.
For the generalized residuals for first-order marginals, the average Type I error rates at the 5% level are 7% for LTT Math Age 9, 1% for Reading Grade 12, 0% for Math Grade 8, and 6% for Science Grade 12. Note that most of the Type I error rates for the first-order marginals are rather meaningless, because IRT models with item-specific parameters should be able to predict observed item score frequencies well. For the generalized residuals for second-order marginals, the average Type I error rates at the 5% level are 6% for LTT Math Age 9, 5% for Reading Grade 12, and 6% for Science Grade 12. Thus, the Type I error rates of the generalized residuals for the second-order marginals are close to the nominal level and seem to be satisfactory.
Results for the NAEP data
We fitted the 1-parameter logistic (1PL) model, the 2PL model, the 3PL model with constant guessing (C3PL), and the 3PL model to the dichotomous items, and the partial credit model (PCM) and the GPCM to the polytomous items, for each of the above-mentioned data sets. For the LTT Math Age 9, Reading Grade 12, and Math Grade 8 data, which had two assessment years, a dummy predictor was used so that the population means for the two years are allowed to differ. The ETS mirt software (Haberman 2013) was used to perform all the computations, including the fitting of the IRT models and the computation of the residuals.
Table 4 shows the estimated expected log penalty per presented item based on the Gilula–Haberman approach. The two missing values in the last row denote that the corresponding assessments (LTT math and science) involve only one subscale. Note that the multidimensional models are so-called simple-structure or between-item multidimensional models (e.g., Adams et al. 1997). Before discussing the results, we stress that serious model identification issues were encountered with the 3PL models for all four data sets. First, good starting values needed to be provided in order to find a solution. Second, parameters with very large standard errors were found for the 3PL model. For example, the standard error of the logit of the guessing parameter for one item in the Math Grade 8 data was as high as 20.04, while about 31,200 students answered this item. Note that these issues were not encountered with the C3PL model. Now, we can make two observations based on the results in Table 4. First, there is some improvement in fit when item-specific slope (discrimination) parameters are used: for all four NAEP data sets, the biggest improvement in fit was seen between the unidimensional 1PL/PCM and 2PL/GPCM. Second, the improvement in fit beyond the 2PL/GPCM seems to be small.
Table 4  Relative model fit statistics (PE-GH) for unidimensional (1D) and multidimensional (MD) models

Model            LTT math age 9   Reading grade 12   Math grade 8   Science grade 12
1D 1PL/PCM           0.465             0.634            0.607            0.643
1D 2PL/GPCM          0.456             0.629            0.601            0.636
1D C3PL/GPCM         0.455             0.629            0.600            0.634
1D 3PL/GPCM          0.454             0.629            0.600            0.634
MD 3PL/GPCM                            0.628a           0.600b

a Three-dimensional model
b Five-dimensional model
That is, the addition of guessing parameters and of multidimensional simple structures leads to only very small improvements in fit.
The estimated correlations between the three (latent) dimensions in the multidimensional 3PL/GPCM are .86, .80, and .80 for the Reading assessment. The estimated correlations between the five (latent) dimensions in the multidimensional 3PL/GPCM for the Math Grade 8 data are shown in Table 5.

Table 5  Correlations between dimensions in five-dimensional 3PL/GPCM for math grade 8 data

                                        2      3      4      5
1. Number properties and operations    .97    .93    .96    .95
2. Measurement                                .96    .96    .94
3. Geometry                                          .93    .92
4. Data analysis and probability                            .94
5. Algebra

We next summarize the results from the application of our suggested IRT model-fit tools to data from the four above-mentioned NAEP assessments using the unidimensional model combinations.
Item-fit analysis using residuals
For each of the four NAEP data sets, we computed the item-fit residuals at 31 equally spaced values of the proficiency scale between −3 and 3 for each score category (except the lowest score category) of each item. Haberman et al. (2013) recommended the use of 31 values. Further, some limited analysis showed that the use of a different number of values does not change the conclusions. This resulted in, for example, 31 residuals for a binary item and 62 residuals for a polytomous item with three score categories.
The results are shown in Table 6. The percentages of significant results for the item-fit residuals are all much larger than the nominal level of 5%. There is a substantial drop in the percentages from the 1PL/PCM to the other three models. However, the percentages show a steady decrease with increasing model complexity only for the LTT Math Age 9 assessment (note that this assessment contains the largest proportion of MC items). For the other three data sets, the percentages of significant residuals are similar after the 2PL/GPCM.

Table 6  Percent significant residuals under different unidimensional models

Residual                Model       LTT math age 9   Reading grade 12   Math grade 8   Science grade 12
Item-fit residual       1PL/PCM           67                43               75               47
                        2PL/GPCM          40                26               64               33
                        C3PL/GPCM         35                27               64               28
                        3PL/GPCM          28                29               64               33
First-order marginal    1PL/PCM            0                 0                0                0
                        2PL/GPCM           0                 0                0                0
                        C3PL/GPCM         29                 5                0               19
                        3PL/GPCM           0                 0                0                0
Second-order marginal   1PL/PCM           47                15               27               18
                        2PL/GPCM          31                13               19               15
                        C3PL/GPCM         31                13               19               15
                        3PL/GPCM          31                14               19               15
In the operational analysis, the numbers of items that were found to be misfitting and removed from the final computations were two, one, zero, and six, respectively, for the four assessments.
Figure 4 shows the item-fit plots and residuals for a constructed-response item from the 2005 Reading Grade 12 assessment. In the top panel, the solid line shows the estimated ICC of the item (f̂_j1(1|θ) from Eq. 8) and the dashed lines show a corresponding pointwise 95% confidence band (f̃_j1(1|θ) − 2s_j1(1|θ) and f̃_j1(1|θ) + 2s_j1(1|θ)). The bottom panel shows the residuals. The figure shows that several residuals are statistically significant. In fact, except for three or four residuals, all the residuals are statistically significant. In addition, several of them are larger than 5. Thus, the 2PL model does not fit the item. Figure 5 shows the operational item-fit plot (PARPLOT) for the same item. The plot shows the estimated ICC using a solid line. The triangles indicate the empirical ICC and the inverted triangles indicate the empirical ICC during the previous administration of the same item (in 2004). In the NAEP operational analysis, the misfit of the item was not found to be serious, so the item was retained.
The item-fit residuals can also be used to check whether the item response function of a particular item provides sufficient fit across all the different booklets that contain the item. This provides an opportunity to study item misfit due to, for example, item position effects (e.g., items that are at the end of the booklet can be more difficult due to either speededness or fatigue effects; see, for example, Debeer and Janssen 2013). Figure 6 shows the item-fit residuals per booklet for a multiple-choice LTT Math Age 9 item. It can be seen that the fit residuals for lower abilities improve if the 3PL is used instead of the 2PL. Interestingly, the residuals for booklet 8 are more extreme than those for the other three booklets.
Generalized residual analysis
The second block of Table 6 shows the percentage of statistically significant generalized residuals for the first-order marginal without any adjustment (that is, residuals larger than 1.96 in absolute value) for all data sets and different models. The percentages are all zero except for the C3PL/GPCM. This can be explained by the fact that all but the C3PL/GPCM have item-specific parameters that can predict the observed proportions of item scores quite well. Only the C3PL/GPCM can have issues with this prediction, for example, if there is variation in guessing behaviors. The latter seems to be the case for LTT Math Age 9 but not for Math Grade 8.
The third block of Table 6 shows the percentage of statistically significant generalized residuals for the second-order marginals. The percentages are considerably larger than the nominal level (and also larger than the Type I error rates found in the simulation study) and show that the NAEP model does not adequately account for the association among the items. The misfit is most apparent for LTT Math Age 9.
Figure 7 shows a histogram of the generalized residuals for the second-order marginal for the 2009 Science Grade 12 data. Several generalized residuals are smaller than −10, which provides strong evidence of misfit of the IRT model to the second-order marginals.
Researchers such as Bradlow et al. (1999) noted that if the IRT model does not properly account for the dependence between the items in a pair, then the precision of proficiency estimates will be overestimated, and showed that accounting for the dependence using, for example, the testlet model avoids this overestimation of the precision of proficiency estimates. Their result implies that if we found too many significant generalized residuals for second-order marginals for items belonging to a common stimulus (also referred to as testlets by, for example, Bradlow et al. 1999), then application of a model like the testlet model (Bradlow et al. 1999) would lead to better fit to the NAEP data. However, we found that the proportion of significant generalized residuals for second-order marginals for item pairs belonging to testlets is roughly the same as that for item pairs not belonging to testlets. Thus, there does not seem to be an easy way to rectify the misfit of the NAEP IRT model to the second-order marginals.
Assessment of practical significance of misfit
To assess the practical significance of item misfit for the four assessments, we obtained the overall and subgroup means and the percentage of examinees at different proficiency levels (we considered the percentages at basic or above and at proficient or above) from the operational analysis. These quantities are reported as rounded integers in operational NAEP reports (e.g., Rampey et al. 2009). Note that these quantities were computed after omitting the items that were found misfitting in the operational analysis (2, 1, 0, and 6 such items for the four assessments). Then, for any assessment, we found the nine items that had the largest number of statistically significant item-fit residuals. For example, for the 2008 long-term trend Mathematics assessment at age 9, nine items with respectively 19, 19, 19, 19, 18, 18, 17, 17, and 16 statistically significant item-fit residuals (out of a total of 31 each) were found.
For each assessment, we omitted scores on the nine misfitting items and reran the NAEP operational analysis to recompute the subgroup means (rounded and converted to the NAEP operational score scale) and the percentage of examinees at different proficiency levels. We compared these recomputed values to the corresponding original (and operationally reported) quantities.
Interestingly, in 48 such comparisons of means and percentages for each of the four data sets, there was no difference in 44, 36, 32, and 47 cases, respectively, for the long-term trend, reading, math, and science data sets. For example, the overall average score is 243 (on a 0–500 scale) and the overall percent scoring 200 or above is 89 in both of these analyses for the 2008 long-term trend Mathematics assessment at age 9. In the cases where there was a difference, the difference was one in absolute value. For example, the operationally reported overall percent at 250 or above is 44, while the percent at 250 or above after removing the 9 misfitting items is 45, for the 2008 long-term trend Mathematics assessment at age 9.
Thus, the practical significance of the item misfit seems to be negligible for the four data sets.
Conclusions
The focus of this paper was on the assessment of misfit of the IRT model used in large-scale survey assessments such as NAEP, using data from four NAEP assessments. Two sets of recently suggested model-fit tools, the item-fit residuals (Bock and Haberman 2009; Haberman et al. 2013) and generalized residuals (Haberman and Sinharay 2013), were modified for application to NAEP data.
Keeping in mind the importance of NAEP in educational policy-making in the U.S., this paper promises to make a significant contribution by performing a rigorous check of the fit of the NAEP model. Replacement of the current NAEP item-fit procedure by our suggested procedure would make the NAEP statistical toolkit more rigorous. Because several other assessments, such as IALS, TIMSS, and PIRLS, use essentially the same statistical model as NAEP, the findings of this paper will be relevant to those assessments as well.
An important finding in this paper is that statistically significant misfit (in the form of significant residuals) was found for all the data sets. This finding concurs with the statement of George Box that all models are wrong (Box and Draper 1987, p. 74) and a similar statement by Lord and Novick (1968, p. 383). However, the observed misfit was not practically significant for any of the data sets. For example, the item-fit residuals were statistically significant for several items, but the removal of some of these items led to negligible differences in the reported outcomes such as subgroup means and percentages at different proficiency levels. Therefore, the NAEP operational model seems to be useful even though it is wrong (in the sense that the model was found misfitting to the NAEP data using the suggested residuals) from the viewpoint of George Box. It is possible that the lack of practical significance of the misfit is due to the thorough test development and review procedures used in NAEP, which may filter out any serious IRT-model-fit issues. The finding of the lack of practical significance of the misfit is similar to the finding in Sinharay and Haberman (2014) that the misfit of the operational IRT model used in several large-scale high-stakes tests is not practically significant.
Several issues can be examined in future research. First, one could apply our suggested methods to data sets from other large-scale educational survey assessments such as TIMSS, PIRLS, and IALS. Second, Haberman et al. (2013) provided detailed simulation results demonstrating that the Type I error rates of their item-fit residuals in regular IRT applications are quite close to the nominal level as the sample size increases; those results are expected to hold for our suggested item-fit residuals (which are extensions of the residuals of Haberman et al. 2013) as well, but it is possible to perform simulation studies to verify that. It is also possible to perform simulations to find out the extent of model misfit that would be practically significant. Third, we studied the practical consequences of item misfit in this paper; it is possible in future research to study the practical consequences of multidimensionality; for example, there is a close relationship between
DIF and multidimensionality (e.g., Camilli 1992), and it would be of interest to study the practical consequences of multidimensionality on DIF. Fourth, it is possible to further explore the reasons why the first-order marginal was not useful in our analysis. Sinharay et al. (2011) also found the generalized residuals of Haberman and Sinharay (2013) for the first-order marginal to be not useful in assessing the fit of regular IRT models. These residuals might be more useful for detecting differential item functioning (DIF). For example, the generalized residuals for the first-order marginals for males and females can be used to study gender-based DIF (although the Type I error might be low, the power would be larger). Finally, several students taking the NAEP, especially those in twelfth grade, lack motivation (e.g., Pellegrino et al. 1999). It would be interesting to examine whether that lack of motivation affects the model fit in any manner.
Authors' contributions
PWVR carried out most of the computations and wrote a major part of the manuscript. SS wrote the first draft of the manuscript and performed some of the computations. SJH suggested the mathematical results. MSJ wrote some parts of the manuscript and performed some computations. All authors read and approved the final manuscript.
Author details
1 ETS Global, Amsterdam, Netherlands. 2 Pacific Metrics Corporation, Monterey, CA, USA. 3 ETS, Princeton, NJ, USA.
4 Columbia University, New York, USA.
Acknowledgements
The authors thank the editor Matthias von Davier and the two anonymous reviewers for helpful comments. The research reported here was partially supported by the Institute of Education Sciences, U.S. Department of Education, through Grant R305D120006 to Educational Testing Service as part of the Statistical and Research Methodology in Education Initiative.
Competing interests
The authors declare that they have no competing interests.
Received: 24 September 2015 Accepted: 7 June 2016
References
Adams, R. J., Wilson, M. R., & Wang, W. C. (1997). The multidimensional random coefficients multinomial logit model.
Applied Psychological Measurement, 21, 1–23.
Allen, N. A., Donoghue, J. R., & Schoeps, T. L. (2001). The NAEP 1998 technical report (NCES 2001-452). Washington, DC:
United States Department of Education, Institute of Education Sciences, Department of Education, Office for Educational Research and Improvement.
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
Beaton, A. E. (1987). Implementing the new design: The NAEP 1983–84 technical report (Tech. Rep. No. 15-TR-20). Princeton, NJ: ETS.
Beaton, A. E. (2003). A procedure for testing the fit of IRT models for special populations: Draft. Unpublished manuscript.
Birnbaum, A. (1968). Some latent trait models and their use in inferring an examinee's ability. In F. M. Lord & M. R. Novick (Eds.), Statistical theories of mental test scores (pp. 397–479). Reading: Addison-Wesley.
Bock, R. D., & Haberman, S. J. (2009). Confidence bands for examining goodness-of-fit of estimated item response functions. Paper presented at the annual meeting of the Psychometric Society, Cambridge, UK.
Box, G. E. P., & Draper, N. R. (1987). Empirical model-building and response surfaces. New York, NY: Wiley.
Bradlow, E. T., Wainer, H., & Wang, X. (1999). A Bayesian random effects model for testlets. Psychometrika, 64, 153–168.
Camilli, G. (1992). A conceptual analysis of differential item functioning in terms of a multidimensional item response model. Applied Psychological Measurement, 16, 129–147.
Cochran, W. G. (1977). Sampling techniques (3rd ed.). New York: Wiley.
Debeer, D., & Janssen, R. (2013). Modeling item-position effects within an IRT framework. Journal of Educational Measurement, 50, 164–185.
Dresher, A. R., & Thind, S. K. (2007). Examination of item fit for individual jurisdictions in NAEP. Paper presented at the annual meeting of the American Educational Research Association, Chicago, IL.
du Toit, M. (2003). IRT from SSI. Lincolnwood, IL: Scientific Software International.
Efron, B., & Tibshirani, R. J. (1993). An introduction to the bootstrap. New York: Chapman and Hall.
Gilula, Z., & Haberman, S. J. (1995). Prediction functions for categorical panel data. The Annals of Statistics, 23, 1130–1142.
Gilula, Z., & Haberman, S. J. (1994). Models for analyzing categorical panel data. Journal of the American Statistical Association, 89, 645–656.
Haberman, S. J. (2009). Use of generalized residuals to examine goodness of fit of item response models (ETS Research Report RR-09-15). Princeton: ETS.
Haberman, S. J. (2013). A general program for item-response analysis that employs the stabilized Newton–Raphson algorithm (ETS Research Report RR-13-32). Princeton: ETS.
Haberman, S. J., & Sinharay, S. (2013). Generalized residuals for general models for contingency tables with application to item response theory. Journal of the American Statistical Association, 108, 1435–1444.
Haberman, S. J., Sinharay, S., & Chon, K. H. (2013). Assessing item fit for unidimensional item response theory models using residuals from estimated item response functions. Psychometrika, 78, 417–440.
Kirsch, I. S. (2001). The International Adult Literacy Survey (IALS): Understanding what was measured (ETS Research Report RR-01-25). Princeton: ETS.
Li, J. (2005). The effect of accommodations for students with disabilities: An item fit analysis. Paper presented at the annual meeting of the National Council on Measurement in Education, Montreal, Canada.
Lord, F. M., & Novick, M. R. (1968). Statistical theories of mental test scores. Reading: Addison Wesley.
Martin, M. O., & Kelly, D. L. (1996). Third international mathematics and science study technical report volume 1: Design and development. Chestnut Hill: Boston College.
Mislevy, R. J., Johnson, E. G., & Muraki, E. (1992). Scaling procedures in NAEP. Journal of Educational Statistics, 17, 131–154.
Mullis, I., Martin, M., & Gonzalez, E. (2003). PIRLS 2001 international report: IEA's study of reading literacy achievement in primary schools. Chestnut Hill: Boston College.
Muraki, E. (1992). A generalized partial credit model: Application of an EM algorithm. Applied Psychological Measurement, 16, 159–176.
National Center for Education Statistics. (2009). The nation's report card: Mathematics 2009 (Tech. Rep. No. NCES 2010-451). Washington, DC: Institute of Education Sciences, U.S. Department of Education.
National Center for Education Statistics. (2011). The nation's report card: Science 2009 (Tech. Rep. No. NCES 2011-451). Washington, DC: Institute of Education Sciences, U.S. Department of Education.
Orlando, M., & Thissen, D. (2000). Likelihood-based item-fit indices for dichotomous item response theory models. Applied Psychological Measurement, 24, 50–64.
Pellegrino, J. W., Jones, L. R., & Mitchell, K. J. (1999). Grading the nation's report card: Evaluating NAEP and transforming the assessment of educational progress. Washington, DC: National Academy Press.
Perie, M., Grigg, W., & Donahue, P. (2005). The nation's report card: Reading 2005 (Tech. Rep. No. NCES 2006-451). Washington, DC: U.S. Government Printing Office, U.S. Department of Education, National Center for Education Statistics.
Rampey, B. D., Dion, G. S., & Donahue, P. L. (2009). NAEP 2008 trends in academic progress (Tech. Rep. No. NCES 2009-479). Washington, DC: National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education.
Rogers, A., Gregory, K., Davis, S., & Kulick, E. (2006). User's guide to NAEP model-based p-value programs. Unpublished manuscript. Princeton: ETS.
Sinharay, S. (2006). Bayesian item fit analysis for unidimensional item response theory models. British Journal of Mathematical and Statistical Psychology, 59, 429–449.
Sinharay, S., Guo, Z., von Davier, M., & Veldkamp, B. P. (2010). Assessing fit of latent regression models. IERI Monograph Series, 3, 35–55.
Sinharay, S., & Haberman, S. J. (2014). How often is the misfit of item response theory models practically significant? Educational Measurement: Issues and Practice, 33(1), 23–35.
Sinharay, S., Haberman, S. J., & Jia, H. (2011). Fit of item response theory models: A survey of data from several operational tests (ETS Research Report No. RR-11-29). Princeton: ETS.
Von Davier, M., & Sinharay, S. (2014). Analytics in international large-scale assessments: Item response theory and population models. In L. Rutkowski, M. Von Davier, & D. Rutkowski (Eds.), Handbook of international large-scale assessment: Background, technical issues, and methods of data analysis (pp. 155–174). Boca Raton: CRC.
Abstract
Latent regression models are used for score-reporting purposes in large-scale educational survey assessments such as the National Assessment of Educational Progress (NAEP) and Trends in International Mathematics and Science Study (TIMSS). One component of these models is based on item response theory. While there exists some research on assessment of fit of item response theory models in the context of large-scale assessments, there is a scope of further research on the topic. We suggest two types of residuals to assess the fit of item response theory models in the context of large-scale assessments. The Type I error rates and power of the residuals are computed from simulated data. The residuals are computed using data from four NAEP assessments. Misfit was found for all data sets for both types of residuals, but the practical significance of the misfit was minimal.