mirt {mirt} | R Documentation
mirt fits a maximum likelihood (or maximum a posteriori) factor analysis model
to any mixture of dichotomous and polytomous data under the item response theory paradigm
using Cai's (2010) Metropolis-Hastings Robbins-Monro (MH-RM) algorithm, the
EM algorithm approach outlined by Bock and Aitkin (1981) with rectangular or
quasi-Monte Carlo integration grids, or the stochastic EM algorithm (i.e., the first two stages
of the MH-RM algorithm). Models containing 'explanatory' person- or item-level predictors
can only be fit using the mixedmirt function, though latent
regression models can be fit using the formula input in this function.
Tests that form a two-tier or bi-factor structure should be estimated with the
bfactor function, which uses a dimension-reduction EM algorithm for
modeling item parcels.  Multiple-group analyses (useful for DIF and DTF testing) are
also available via the multipleGroup function.
mirt(
  data,
  model = 1,
  itemtype = NULL,
  guess = 0,
  upper = 1,
  SE = FALSE,
  covdata = NULL,
  formula = NULL,
  itemdesign = NULL,
  item.formula = NULL,
  SE.type = "Oakes",
  method = "EM",
  optimizer = NULL,
  dentype = "Gaussian",
  pars = NULL,
  constrain = NULL,
  calcNull = FALSE,
  draws = 5000,
  survey.weights = NULL,
  quadpts = NULL,
  TOL = NULL,
  gpcm_mats = list(),
  grsm.block = NULL,
  rsm.block = NULL,
  monopoly.k = 1L,
  key = NULL,
  large = FALSE,
  GenRandomPars = FALSE,
  accelerate = "Ramsay",
  verbose = TRUE,
  solnp_args = list(),
  nloptr_args = list(),
  spline_args = list(),
  control = list(),
  technical = list(),
  ...
)
data
 a matrix or data.frame of item responses, where missing data are coded as NA
model
 a string to be passed to (or an object returned from) mirt.model, declaring how
 the IRT model is to be estimated. A single numeric value indicating the number of
 factors to extract is also supported (the default, 1, fits a unidimensional model)
itemtype
 type of items to be modeled, declared as either a) a single value to be
 recycled for each item, b) a vector for each respective item, or c) if applicable,
 a matrix with columns equal to the number of items and rows equal to the number of
 latent classes. The default (NULL) selects the '2PL' model for dichotomous items and
 the 'graded' model for polytomous items. Additionally, user-defined item classes can
 be created using the createItem function
guess
 fixed pseudo-guessing parameters. Can be entered as a single value to assign a
 global guessing parameter, or as a numeric vector corresponding to each item
upper
 fixed upper bound parameters for the 4-PL model. Can be entered as a single value
 to assign a global upper bound parameter, or as a numeric vector corresponding to each item
SE
 logical; estimate the standard errors by computing the parameter information matrix?
 See the SE.type argument for the available estimation methods
covdata
 a data.frame of data used for latent regression models
formula
 an R formula (or list of formulas) indicating how the latent traits
 can be regressed using the external covariates in covdata
itemdesign
 a data.frame of item-level covariates (one row per item) used in conjunction with
 item.formula to decompose the item intercepts
item.formula
 an R formula used to specify any intercept decomposition (e.g., the LLTM;
 Fischer, 1983). Note that only the right-hand side of the formula is required for
 compensatory models
SE.type
 type of estimation method to use for calculating the parameter information matrix
 for computing standard errors. The default is 'Oakes' (Chalmers, 2018); alternatives
 such as 'SEM' and 'Richardson' are demonstrated in the examples below
method
 a character object specifying the estimation algorithm to be used. The default is
 'EM'. Other options include 'QMCEM' for EM with quasi-Monte Carlo integration,
 'MHRM' for the Metropolis-Hastings Robbins-Monro algorithm, and 'SEM' for the
 stochastic EM algorithm
optimizer
 a character indicating which numerical optimizer to use. By default, the EM
 algorithm uses the 'BFGS' optimizer. Other options include the Newton-Raphson
 ('NR') optimizer. Additionally, estimation subroutines from the Rsolnp and nloptr
 packages are supported via the solnp_args and nloptr_args arguments
dentype
 type of density form to use for the latent trait parameters. Current options include
 'Gaussian' (the default) and 'Davidian-#' for Davidian curves (Woods & Lin, 2009),
 where '#' is the number of Davidian parameters to estimate
pars
 a data.frame with the structure of how the starting values, parameter numbers,
 estimation logical values, etc., are defined. The user may observe how the model
 defines these values by passing pars = 'values', and the resulting data.frame can be
 modified and supplied back as input
constrain
 a list of user-declared equality constraints. To see how to label the
 parameters correctly, first inspect the parameter layout via pars = 'values'
calcNull
 logical; calculate the null model for additional fit statistics (e.g., TLI)? Only
 applicable if the data contain no NAs and are not overly sparse
draws
 the number of Monte Carlo draws to estimate the log-likelihood for the MH-RM
 algorithm. Default is 5000
survey.weights
 an optional numeric vector of survey weights to apply for each case in the
 data (EM estimation only). If not specified, all cases are weighted equally (the
 standard IRT approach). The sum of the survey.weights must equal the total sample size
quadpts
 number of quadrature points per dimension (must be larger than 2). By default, the
 number of quadrature points decreases as the number of dimensions increases (e.g.,
 61 points are used for unidimensional models, as shown in the examples below)
TOL
 convergence threshold for EM or MH-RM; defaults are .0001 and .001, respectively. If
 SE.type = 'SEM' and TOL is not specified, a lower threshold is used instead
 (1e-5 in the examples below)
gpcm_mats
 a list of matrices specifying how the scoring coefficients in the (generalized)
 partial credit model should be constructed. If omitted, the standard gpcm format will
 be used (i.e., ordered scoring coefficients 0:(k-1) for an item with k categories)
grsm.block
 an optional numeric vector indicating where the blocking should occur when
 using the grsm; NA represents items that do not belong to the grsm block (other items
 that may be estimated in the test data). For example, to specify two blocks of 3 with
 a 2PL item for the last item: grsm.block = c(rep(1, 3), rep(2, 3), NA)
rsm.block
 same as grsm.block, but for rsm blocks
monopoly.k
 a vector of values (or a single value repeated for each item) indicating
 the degree of the monotone polynomial fitted, where the order of the fitted polynomial
 corresponds to 2 * monopoly.k + 1 (e.g., monopoly.k = 1 fits a cubic polynomial).
 Default is 1
key
 a numeric vector of the response scoring key. Required when using nested logit item
 types, and must be the same length as the number of items used. Items that are not
 nested logit will ignore this vector, so use NA (or any other value) as a placeholder
large
 a logical indicating whether the collapsed table of unique response-pattern
 frequencies should be computed prior to estimation to save computations. Alternatively,
 if the collapsed table of frequencies is desired for the purpose of saving computations
 (i.e., computing the collapsed frequencies for the data only one time), then the
 character value 'return' can be passed with this argument, and the returned object
 supplied back through large in subsequent calls
GenRandomPars
 logical; generate random starting values prior to optimization instead of using the
 fixed internal starting values?
accelerate
 a character vector indicating the type of acceleration to use. Default is
 'Ramsay' (Ramsay, 1975), while 'squarem' (Varadhan & Roland, 2008) and 'none' are
 also available
verbose
 logical; print the observed-data (EM) or complete-data (MHRM) log-likelihood after
 each iteration cycle? Default is TRUE
solnp_args
 a list of arguments to be passed to the solnp() function for equality- and
 inequality-constrained optimization
nloptr_args
 a list of arguments to be passed to the nloptr() function for constrained
 optimization
spline_args
 a named list of lists containing information to be passed to the bs() (default)
 or ns() functions from the splines package. This input changes the basis used to
 construct the spline design matrix
control
 a list passed to the respective optimizers (e.g., optim(), nlminb(), etc.)
technical
 a list containing lower-level technical parameters for estimation (e.g., the
 maximum number of estimation cycles)
...
 additional arguments to be passed
The function returns an object of class SingleGroupClass
(see SingleGroupClass-class).
Specification of the confirmatory item factor analysis model follows many of
the rules in the structural equation modeling framework for confirmatory factor analysis. The
variances of the latent factors are automatically fixed to 1 to help
facilitate model identification. All parameters may be fixed to constant
values or set equal to other parameters using the appropriate declarations.
Confirmatory models may also contain 'explanatory' person or item level predictors, though
including predictors is currently limited to the mixedmirt function.
When specifying a single number greater than 1 as the model input to mirt,
an exploratory IRT model will be estimated. Rotation and target matrix options are available
when passed to generic functions such as summary-method and
fscores. Factor means and variances are fixed to ensure proper identification.
If the model is an exploratory item factor analysis, estimation will begin
by computing a matrix of quasi-polychoric correlations. A
factor analysis with nfact factors is then extracted, and item parameters are
estimated by a_{ij} = f_{ij}/u_j, where f_{ij} is the factor
loading for the jth item on the ith factor, and u_j is
the square root of the factor uniqueness, \sqrt{1 - h_j^2}. The
initial intercept parameters are determined by calculating the inverse
normal of the item facility (i.e., item easiness), q_j, to obtain
d_j = q_j / u_j. A similar implementation is also used for obtaining
initial values for polytomous items.
Internally the g and u parameters are transformed using a logit
transformation (log(x/(1-x))), and can be reversed by using 1 / (1 + exp(-x))
following convergence. This also applies when computing confidence intervals for these
parameters, and is done automatically when coef(mod, rawug = FALSE) is used.
As such, when applying prior distributions to these parameters it is recommended to use a prior
that ranges from negative infinity to positive infinity, such as the normally distributed
prior via the 'norm' input (see mirt.model).
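As a quick numeric check of the transformation described above (sketched in Python rather than R, with an arbitrary illustrative value of .2), the logit and its inverse round-trip a bounded parameter exactly:

```python
import math

def logit(x):
    # forward transformation applied internally to the bounded g and u parameters
    return math.log(x / (1 - x))

def inv_logit(x):
    # reverse transformation, applied after convergence (cf. coef(mod, rawug = FALSE))
    return 1 / (1 + math.exp(-x))

g = 0.2                  # an arbitrary guessing value on the probability scale
raw = logit(g)           # the unbounded value on which estimation (and priors) operate
assert abs(inv_logit(raw) - g) < 1e-12
```

This is also why a normal prior with mean -1.5 targets a guessing value near .18, since 1 / (1 + exp(1.5)) is approximately .182.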
Unrestricted full-information factor analysis is known to have problems with convergence, and some items may need to be constrained or removed entirely to allow for an acceptable solution. As a general rule dichotomous items with means greater than .95, or items that are only .05 greater than the guessing parameter, should be considered for removal from the analysis or treated with prior parameter distributions. The same type of reasoning is applicable when including upper bound parameters as well. For polytomous items, if categories are rarely endorsed then this will cause similar issues. Also, increasing the number of quadrature points per dimension, or using the quasi-Monte Carlo integration method, may help to stabilize the estimation process in higher dimensions. Finally, solutions that are not well defined also will have difficulty converging, and can indicate that the model has been misspecified (e.g., extracting too many dimensions).
For the MH-RM algorithm, when the number of iterations grows very large (e.g., greater than 1500),
or when Max Change = .2500 values are repeatedly printed
to the console (indicating that the parameters are being constrained because they are
naturally moving in steps greater than 0.25), the model may either be ill defined or have a
very flat likelihood surface, and genuine maximum-likelihood parameter estimates may be difficult
to find. Standard errors can be computed following model convergence by passing
SE = TRUE, which performs an additional MH-RM stage treating the maximum-likelihood
estimates as fixed points.
Additional functions are available in the package which can be useful pre- and post-estimation. These are:
mirt.model: Define the IRT model specification using special syntax. Useful for defining between- and within-group parameter constraints, prior parameter distributions, and specifying the slope coefficients for each factor
coef-method: Extract raw coefficients from the model, along with their standard errors and confidence intervals
summary-method: Extract standardized loadings from the model. Accepts a rotate argument for exploratory item response models
anova-method: Compare nested models using likelihood ratio statistics as well as information criteria such as the AIC and BIC
residuals-method: Compute pairwise residuals between each item using methods such as the LD statistic (Chen & Thissen, 1997), as well as response pattern residuals
plot-method: Plot various types of test-level plots, including the test score and information functions, and more
itemplot: Plot various types of item-level plots, including the score, standard error, and information functions, and more
createItem: Create a customized itemtype that does not currently exist in the package
imputeMissing: Impute missing data given some computed Theta matrix
fscores: Compute predicted scores for the latent traits using estimation methods such as EAP, MAP, ML, WLE, and EAPsum
wald: Compute Wald statistics following the convergence of a model with a suitable information matrix
M2: Limited-information goodness-of-fit test statistic to determine how well the model fits the data
itemfit and personfit: Goodness-of-fit statistics at the item and person levels, such as the S-X2, infit, outfit, and more
boot.mirt: Compute estimated parameter confidence intervals via bootstrap methods
mirtCluster: Define a cluster for the package functions to capitalize on multi-core architectures when possible. Helps to decrease estimation times for tasks that can be run in parallel
The parameter labels use the following convention, here using two factors and K as the total
number of categories (with k indexing specific category instances).
For the Rasch model, only one intercept is estimated, and the latent variance of \theta is
freely estimated. If the data have more than two categories then a partial credit model is
used instead (see 'gpcm' below).
P(x = 1|\theta, d) = \frac{1}{1 + exp(-(\theta + d))}
Depending on the model, u may be fixed to 1 (e.g., 3PL), g may be fixed to 0 (e.g., 2PL),
or the slopes a may be fixed to 1 (e.g., 1PL).
P(x = 1|\theta, \psi) = g + \frac{(u - g)}{
      1 + exp(-(a_1 * \theta_1 + a_2 * \theta_2 + d))}
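A minimal numeric sketch of this 2-4PL family (in Python rather than R; the parameter values below are made up for illustration):

```python
import math

def p_4pl(theta, a, d, g=0.0, u=1.0):
    # P(x = 1) = g + (u - g) / (1 + exp(-(a'theta + d)))
    z = sum(ai * ti for ai, ti in zip(a, theta)) + d
    return g + (u - g) / (1 + math.exp(-z))

# the 2PL fixes g = 0 and u = 1; the 3PL frees g; the 4PL additionally frees u
p2 = p_4pl(theta=[0.0], a=[1.0], d=0.0)           # 2PL at theta = 0 gives .5
p3 = p_4pl(theta=[0.0], a=[1.0], d=0.0, g=0.2)    # lower asymptote raised to .2
```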
The 5PL model, currently restricted to unidimensional models, takes the form
P(x = 1|\theta, \psi) = g + \frac{(u - g)}{
      (1 + exp(-(a_1 * \theta_1 + d)))^S}
where S allows for asymmetry in the response function and is constrained to be
greater than 0 via transformation (i.e., log(S) is estimated rather than S)
Complementary log-log model (see Shim, Bonifay, and Wiedermann, 2022)
P(x = 1|\theta, b) = 1 - exp(-exp(\theta - b))
Currently restricted to unidimensional dichotomous data.
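A quick numeric illustration of the asymmetry of this link (in Python rather than R):

```python
import math

def p_cll(theta, b):
    # complementary log-log response function: 1 - exp(-exp(theta - b))
    return 1 - math.exp(-math.exp(theta - b))

# unlike the logistic, the cloglog link is asymmetric: at theta = b the
# probability is 1 - exp(-1), roughly .632, rather than .5
p = p_cll(theta=0.0, b=0.0)
```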
The graded model consists of sequential 2PL models,
P(x = k | \theta, \psi) = P(x \ge k | \theta, \phi) - P(x \ge k + 1 | \theta, \phi)
Note that P(x \ge 1 | \theta, \phi) = 1 while P(x \ge K + 1 | \theta, \phi) = 0
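The differencing of adjacent cumulative curves can be sketched numerically (in Python rather than R; the intercepts below are hypothetical):

```python
import math

def cum_2pl(theta, a, d):
    # one cumulative boundary curve P(x >= k)
    return 1 / (1 + math.exp(-(a * theta + d)))

def graded_probs(theta, a, d_vec):
    # pad the boundary curves with P(x >= lowest) = 1 and P(x >= K + 1) = 0,
    # then difference adjacent curves to obtain the category probabilities
    cum = [1.0] + [cum_2pl(theta, a, d) for d in d_vec] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]

# a hypothetical 4-category item; intercepts must be ordered (decreasing)
probs = graded_probs(theta=0.0, a=1.5, d_vec=[2.0, 0.0, -2.0])
```

Because the category probabilities are differences of a telescoping sequence, they always sum to 1.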
The unipolar log-logistic model (ULL; Lucke, 2015) is defined the same as the graded response model, however
P(x \le k | \theta, \psi) = \frac{\lambda_k\theta^\eta}{1 + \lambda_k\theta^\eta}
Internally the \lambda parameters are exponentiated to keep them positive, and
the reported estimates should therefore be interpreted in log units
A more constrained version of the graded model where graded spacing is equal across item
blocks and only adjusted by a single 'difficulty' parameter (c) while the latent variance
of \theta is freely estimated (see Muraki, 1990 for this exact form).
This is restricted to unidimensional models only.
For the gpcm the d values are treated as fixed and ordered values
from 0:(K-1) (in the nominal model d_0 is also set to 0). Additionally, for
identification in the nominal model ak_0 = 0, ak_{(K-1)} = (K - 1).
P(x = k | \theta, \psi) =
    \frac{exp(ak_{k-1} * (a_1 * \theta_1 + a_2 * \theta_2) + d_{k-1})}
    {\sum_{k=1}^K exp(ak_{k-1} * (a_1 * \theta_1 + a_2 * \theta_2) + d_{k-1})}
For the partial credit model (when itemtype = 'Rasch'; unidimensional only) the above
model is further constrained so that ak = (0,1,\ldots, K-1), a_1 = 1, and the
latent variance of \theta_1 is freely estimated. Alternatively, the partial credit model
can be obtained by constraining all the slope parameters in the gpcms to be equal.
More specific scoring functions may be included by passing a suitable list of matrices
to the gpcm_mats input argument.
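The shared gpcm/nominal kernel above can be checked numerically (in Python rather than R; the slope, scoring, and intercept values are hypothetical):

```python
import math

def nominal_probs(theta, a, ak, d):
    # P(x = k) proportional to exp(ak_k * (a'theta) + d_k); the gpcm is the
    # special case with fixed ordered scoring ak = (0, 1, ..., K - 1) and d_0 = 0
    z = sum(ai * ti for ai, ti in zip(a, theta))
    nums = [math.exp(akk * z + dk) for akk, dk in zip(ak, d)]
    total = sum(nums)
    return [n / total for n in nums]

# gpcm-style scoring for a hypothetical 4-category item
probs = nominal_probs(theta=[0.5], a=[1.2], ak=[0, 1, 2, 3], d=[0.0, 1.0, 0.5, -0.5])
```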
In the nominal model this parametrization helps to identify the empirical ordering of the
categories by inspecting the ak values. Larger values indicate that the item category
is more positively related to the latent trait(s) being measured. For instance, if an item
was truly ordinal (such as a Likert scale), and had 4 response categories, we would expect
to see ak_0 < ak_1 < ak_2 < ak_3 following estimation. If on the other hand
ak_0 > ak_1 then it would appear that the second category is less related to the
trait than the first, and therefore the second category should be understood as the
'lowest score'.
NOTE: The nominal model can become numerically unstable if poor choices for the high and low
values are chosen, resulting in ak values greater than abs(10) or more. It is
recommended to choose high and low anchors that cause the estimated parameters to fall
between 0 and K - 1 either by theoretical means or by re-estimating
the model with better values following convergence.
The gpcmIRT model is the classical generalized partial credit model for unidimensional response
data. It will obtain the same fit as the gpcm presented above; however, the parameterization
allows for the Rasch/generalized rating scale model as a special case.
E.g., for a K = 4 category response model,
P(x = 0 | \theta, \psi) = exp(0) / G
P(x = 1 | \theta, \psi) = exp(a(\theta - b1) + c) / G
P(x = 2 | \theta, \psi) = exp(a(2\theta - b1 - b2) + 2c) / G
P(x = 3 | \theta, \psi) = exp(a(3\theta - b1 - b2 - b3) + 3c) / G
where
G = exp(0) + exp(a(\theta - b1) + c) + exp(a(2\theta - b1 - b2) + 2c) +
       exp(a(3\theta - b1 - b2 - b3) + 3c)
Here a is the slope parameter, the b parameters are the threshold
values for each adjacent category, and c is the so-called difficulty parameter when
a rating scale model is fitted (otherwise, c = 0 and it drops out of the computations).
The gpcmIRT can be constrained to the partial credit IRT model by either constraining all the slopes to be equal, or setting the slopes to 1 and freeing the latent variance parameter.
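The gpcmIRT numerators shown above can be sketched directly (in Python rather than R; the threshold values are hypothetical):

```python
import math

def gpcmIRT_probs(theta, a, b, c=0.0):
    # numerator terms exp(a * (k*theta - b_1 - ... - b_k) + k*c), with k = 0 giving exp(0)
    K = len(b) + 1                      # number of response categories
    logits = [a * (k * theta - sum(b[:k])) + k * c for k in range(K)]
    G = sum(math.exp(v) for v in logits)
    return [math.exp(v) / G for v in logits]

# a hypothetical 4-category item; c = 0 corresponds to the plain gpcmIRT
probs = gpcmIRT_probs(theta=0.3, a=1.0, b=[-1.0, 0.0, 1.0])
```

Note that when theta equals every threshold contribution (e.g., all b's zero at theta = 0 with c = 0), every numerator is exp(0) and the categories are equiprobable.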
Finally, the rsm is a more constrained version of the (generalized) partial credit model where the spacing is equal across item blocks and only adjusted by a single 'difficulty' parameter (c). Note that this is analogous to the relationship between the graded model and the grsm (with an additional constraint regarding the fixed discrimination parameters).
The multidimensional sequential response model has the form
P(x = k | \theta, \psi) = \prod (1 - F(a_1 \theta_1 + a_2 \theta_2 + d_{sk}))
      F(a_1 \theta_1 + a_2 \theta_2 + d_{jk})
where F(\cdot) is the cumulative logistic function.
The Tutz variant of this model (Tutz, 1990) (via itemtype = 'Tutz')
assumes that the slope terms are all equal to 1 and the latent
variance terms are estimated (i.e., is a Rasch variant).
The ideal point model has the form, with the upper bound constraint on d set to 0:
P(x = 1 | \theta, \psi) = exp(-0.5 * (a_1 * \theta_1 + a_2 * \theta_2 + d)^2)
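A small numeric check of the ideal point form (in Python rather than R; the parameter values are hypothetical):

```python
import math

def p_ideal(theta, a, d):
    # response probability peaks at 1 where a'theta + d = 0 (d is constrained <= 0)
    z = sum(ai * ti for ai, ti in zip(a, theta)) + d
    return math.exp(-0.5 * z ** 2)

# a person located exactly at the item's ideal point endorses with probability 1
p_max = p_ideal(theta=[1.0], a=[1.0], d=-1.0)
```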
Partially compensatory models consist of the product of 2PL probability curves.
P(x = 1 | \theta, \psi) = g + (1 - g) (\frac{1}{1 + exp(-(a_1 * \theta_1 + d_1))})^{c_1} *
    (\frac{1}{1 + exp(-(a_2 * \theta_2 + d_2))})^{c_2}
where c_1 and c_2 are binary indicator variables reflecting whether the item includes the respective compensatory component (1) or not (0). Note that constraining the slopes to be equal across items will reduce the model to Embretson's (a.k.a. Whitely's, 1980) multicomponent model.
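The product structure means a low standing on either included trait suppresses the response probability, which can be verified numerically (in Python rather than R; parameter values are hypothetical):

```python
import math

def p_partcomp(theta, a, d, g=0.0, c=(1, 1)):
    # product of 2PL traces; c_k = 0 drops component k (trace ** 0 == 1)
    prod = 1.0
    for t, ai, di, ci in zip(theta, a, d, c):
        trace = 1 / (1 + math.exp(-(ai * t + di)))
        prod *= trace ** ci
    return g + (1 - g) * prod

# both components included: the item demands ability on both traits
p = p_partcomp(theta=[0.0, 0.0], a=[1.0, 1.0], d=[0.0, 0.0])   # .5 * .5 = .25
```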
Nested logistic curves for modeling distractor items. Requires a scoring key. The model is broken into two components for the probability of endorsement. For successful endorsement the probability trace is the 1-4PL model, while for unsuccessful endorsement:
P(x = 0 | \theta, \psi) =
    (1 - P_{1-4PL}(x = 1 | \theta, \psi)) * P_{nominal}(x = k | \theta, \psi)
which is the product of the complement of the dichotomous trace line with the nominal
response model. In the nominal model, the slope parameters defined above are constrained
to be 1's, while the last value of the ak is freely estimated.
The (multidimensional) generalized graded unfolding model is a class of ideal point models useful for ordinal response data. The form is
P(z=k|\theta,\psi)=\frac{exp\left[\left(z\sqrt{\sum_{d=1}^{D}
    a_{id}^{2}(\theta_{jd}-b_{id})^{2}}\right)+\sum_{k=0}^{z}\psi_{ik}\right]+
    exp\left[\left((M-z)\sqrt{\sum_{d=1}^{D}a_{id}^{2}(\theta_{jd}-b_{id})^{2}}\right)+
    \sum_{k=0}^{z}\psi_{ik}\right]}{\sum_{w=0}^{C}\left(exp\left[\left(w
    \sqrt{\sum_{d=1}^{D}a_{id}^{2}(\theta_{jd}-b_{id})^{2}}\right)+
    \sum_{k=0}^{z}\psi_{ik}\right]+exp\left[\left((M-w)
    \sqrt{\sum_{d=1}^{D}a_{id}^{2}(\theta_{jd}-b_{id})^{2}}\right)+
    \sum_{k=0}^{z}\psi_{ik}\right]\right)}
where \theta_{jd} is the location of the jth individual on the dth dimension,
b_{id} is the difficulty location of the ith item on the dth dimension,
a_{id} is the discrimination of the ith item on the dth dimension
(where the discrimination values are constrained to be positive),
\psi_{ik} is the kth subjective response category threshold for the ith item,
assumed to be symmetric about the item and constant across dimensions, where
\psi_{ik} = \sum_{d=1}^D a_{id} t_{ik}
z = 1,2,\ldots, C (where C is the number of categories minus 1),
and M = 2C + 1.
Spline response models attempt to model the response curves using non-linear and potentially non-monotonic patterns. The form is
P(x = 1|\theta, \eta) = \frac{1}{1 + exp(-(\eta_1 * X_1 + \eta_2 * X_2 + \cdots + \eta_n * X_n))}
where the X_n are from the spline design matrix X organized from the grid of \theta
values. B-splines with a natural or polynomial basis are supported, and the intercept input is
set to TRUE by default.
Monotone polynomial model for polytomous response data of the form
P(x = k | \theta, \psi) =
    \frac{exp(\sum_{v=1}^k (m^*(\psi) + \xi_{v-1}))}
    {\sum_{c=1}^C exp(\sum_{v=1}^{c} (m^*(\psi) + \xi_{v-1}))}
where m^*(\psi) is the monotone polynomial function without the intercept.
To access examples, vignettes, and exercise files that have been generated with knitr please visit https://github.com/philchalmers/mirt/wiki.
Phil Chalmers rphilip.chalmers@gmail.com
Andrich, D. (1978). A rating scale formulation for ordered response categories. Psychometrika, 43, 561-573.
Bock, R. D., & Aitkin, M. (1981). Marginal maximum likelihood estimation of item parameters: Application of an EM algorithm. Psychometrika, 46(4), 443-459.
Bock, R. D., Gibbons, R., & Muraki, E. (1988). Full-Information Item Factor Analysis. Applied Psychological Measurement, 12(3), 261-280.
Bock, R. D. & Lieberman, M. (1970). Fitting a response model for n dichotomously scored items. Psychometrika, 35, 179-197.
Cai, L. (2010a). High-Dimensional exploratory item factor analysis by a Metropolis-Hastings Robbins-Monro algorithm. Psychometrika, 75, 33-57.
Cai, L. (2010b). Metropolis-Hastings Robbins-Monro algorithm for confirmatory item factor analysis. Journal of Educational and Behavioral Statistics, 35, 307-335.
Chalmers, R. P. (2012). mirt: A Multidimensional Item Response Theory Package for the R Environment. Journal of Statistical Software, 48(6), 1-29. doi:10.18637/jss.v048.i06
Chalmers, R. P. (2015). Extended Mixed-Effects Item Response Models with the MH-RM Algorithm. Journal of Educational Measurement, 52, 200-222. doi:10.1111/jedm.12072
Chalmers, R. P. (2018). Numerical Approximation of the Observed Information Matrix with Oakes' Identity. British Journal of Mathematical and Statistical Psychology. doi:10.1111/bmsp.12127
Chalmers, R. P. & Flora, D. (2014). Maximum-likelihood Estimation of Noncompensatory IRT Models with the MH-RM Algorithm. Applied Psychological Measurement, 38, 339-358. doi:10.1177/0146621614520958
Chen, W. H. & Thissen, D. (1997). Local dependence indices for item pairs using item response theory. Journal of Educational and Behavioral Statistics, 22, 265-289.
Embretson, S. E. (1984). A general latent trait model for response processes. Psychometrika, 49, 175-186.
Falk, C. F. & Cai, L. (2016). Maximum Marginal Likelihood Estimation of a Monotonic Polynomial Generalized Partial Credit Model with Applications to Multiple Group Analysis. Psychometrika, 81, 434-460.
Fischer, G. H. (1983). Logistic latent trait models with linear constraints. Psychometrika, 48, 3-26.
Lord, F. M. & Novick, M. R. (1968). Statistical theory of mental test scores. Addison-Wesley.
Lucke, J. F. (2015). Unipolar item response models. In S. P. Reise & D. A. Revicki (Eds.), Handbook of item response theory modeling: Applications to typical performance assessment (pp. 272-284). New York, NY: Routledge/Taylor & Francis Group.
Ramsay, J. O. (1975). Solving implicit equations in psychometric data analysis. Psychometrika, 40, 337-360.
Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests. Danish Institute for Educational Research.
Roberts, J. S., Donoghue, J. R., & Laughlin, J. E. (2000). A General Item Response Theory Model for Unfolding Unidimensional Polytomous Responses. Applied Psychological Measurement, 24, 3-32.
Shim, H., Bonifay, W., & Wiedermann, W. (2022). Parsimonious asymmetric item response theory modeling with the complementary log-log link. Behavior Research Methods, 55, 200-219.
Maydeu-Olivares, A., Hernandez, A. & McDonald, R. P. (2006). A Multidimensional Ideal Point Item Response Theory Model for Binary Data. Multivariate Behavioral Research, 41, 445-471.
Muraki, E. (1990). Fitting a polytomous item response model to Likert-type data. Applied Psychological Measurement, 14, 59-71.
Muraki, E. (1992). A generalized partial credit model: Application of an EM algorithm. Applied Psychological Measurement, 16, 159-176.
Muraki, E. & Carlson, E. B. (1995). Full-information factor analysis for polytomous item responses. Applied Psychological Measurement, 19, 73-90.
Samejima, F. (1969). Estimation of latent ability using a response pattern of graded scores. Psychometrika Monographs, 34.
Suh, Y. & Bolt, D. (2010). Nested logit models for multiple-choice item response data. Psychometrika, 75, 454-473.
Sympson, J. B. (1977). A model for testing with multidimensional items. Proceedings of the 1977 Computerized Adaptive Testing Conference.
Thissen, D. (1982). Marginal maximum likelihood estimation for the one-parameter logistic model. Psychometrika, 47, 175-186.
Tutz, G. (1990). Sequential item response models with ordered response. British Journal of Mathematical and Statistical Psychology, 43, 39-55.
Varadhan, R. & Roland, C. (2008). Simple and Globally Convergent Methods for Accelerating the Convergence of Any EM Algorithm. Scandinavian Journal of Statistics, 35, 335-353.
Whitely, S. E. (1980). Multicomponent latent trait models for ability tests. Psychometrika, 45(4), 470-494.
Wood, R., Wilson, D. T., Gibbons, R. D., Schilling, S. G., Muraki, E., & Bock, R. D. (2003). TESTFACT 4 for Windows: Test Scoring, Item Statistics, and Full-information Item Factor Analysis [Computer software]. Lincolnwood, IL: Scientific Software International.
Woods, C. M. & Lin, N. (2009). Item Response Theory With Estimation of the Latent Density Using Davidian Curves. Applied Psychological Measurement, 33(2), 102-117.
bfactor,  multipleGroup,  mixedmirt,
expand.table, key2binary, mod2values,
extract.item, iteminfo, testinfo,
probtrace, simdata, averageMI,
fixef, extract.mirt, itemstats
# load LSAT section 7 data and compute 1 and 2 factor models
data <- expand.table(LSAT7)
itemstats(data)
## $overall
##     N mean_total.score sd_total.score ave.r  sd.r alpha SEM.alpha
##  1000            3.707          1.199 0.143 0.052 0.453     0.886
## 
## $itemstats
##           N  mean    sd total.r total.r_if_rm alpha_if_rm
## Item.1 1000 0.828 0.378   0.530         0.246       0.396
## Item.2 1000 0.658 0.475   0.600         0.247       0.394
## Item.3 1000 0.772 0.420   0.611         0.313       0.345
## Item.4 1000 0.606 0.489   0.592         0.223       0.415
## Item.5 1000 0.843 0.364   0.461         0.175       0.438
## 
## $proportions
##            0     1
## Item.1 0.172 0.828
## Item.2 0.342 0.658
## Item.3 0.228 0.772
## Item.4 0.394 0.606
## Item.5 0.157 0.843
(mod1 <- mirt(data, 1))
## 
## Call:
## mirt(data = data, model = 1)
## 
## Full-information item factor analysis with 1 factor(s).
## Converged within 1e-04 tolerance after 28 EM iterations.
## mirt version: 1.43 
## M-step optimizer: BFGS 
## EM acceleration: Ramsay 
## Number of rectangular quadrature: 61
## Latent density type: Gaussian 
## 
## Log-likelihood = -2658.805
## Estimated parameters: 10 
## AIC = 5337.61
## BIC = 5386.688; SABIC = 5354.927
## G2 (21) = 31.7, p = 0.0628
## RMSEA = 0.023, CFI = NaN, TLI = NaN
coef(mod1)
## $Item.1
##        a1     d g u
## par 0.988 1.856 0 1
## 
## $Item.2
##        a1     d g u
## par 1.081 0.808 0 1
## 
## $Item.3
##        a1     d g u
## par 1.706 1.804 0 1
## 
## $Item.4
##        a1     d g u
## par 0.765 0.486 0 1
## 
## $Item.5
##        a1     d g u
## par 0.736 1.855 0 1
## 
## $GroupPars
##     MEAN_1 COV_11
## par      0      1
summary(mod1)
##           F1    h2
## Item.1 0.502 0.252
## Item.2 0.536 0.287
## Item.3 0.708 0.501
## Item.4 0.410 0.168
## Item.5 0.397 0.157
## 
## SS loadings:  1.366 
## Proportion Var:  0.273 
## 
## Factor correlations: 
## 
##    F1
## F1  1
plot(mod1)
plot(mod1, type = 'trace')
(mod2 <- mirt(data, 1, SE = TRUE)) #standard errors via the Oakes method
## 
## Call:
## mirt(data = data, model = 1, SE = TRUE)
## 
## Full-information item factor analysis with 1 factor(s).
## Converged within 1e-04 tolerance after 28 EM iterations.
## mirt version: 1.43 
## M-step optimizer: BFGS 
## EM acceleration: Ramsay 
## Number of rectangular quadrature: 61
## Latent density type: Gaussian 
## 
## Information matrix estimated with method: Oakes
## Second-order test: model is a possible local maximum
## Condition number of information matrix =  30.23088
## 
## Log-likelihood = -2658.805
## Estimated parameters: 10 
## AIC = 5337.61
## BIC = 5386.688; SABIC = 5354.927
## G2 (21) = 31.7, p = 0.0628
## RMSEA = 0.023, CFI = NaN, TLI = NaN
(mod2 <- mirt(data, 1, SE = TRUE, SE.type = 'SEM')) #standard errors with SEM method
## 
## Call:
## mirt(data = data, model = 1, SE = TRUE, SE.type = "SEM")
## 
## Full-information item factor analysis with 1 factor(s).
## Converged within 1e-05 tolerance after 74 EM iterations.
## mirt version: 1.43 
## M-step optimizer: BFGS 
## EM acceleration: none 
## Number of rectangular quadrature: 61
## Latent density type: Gaussian 
## 
## Information matrix estimated with method: SEM
## Second-order test: model is a possible local maximum
## Condition number of information matrix =  30.12751
## 
## Log-likelihood = -2658.805
## Estimated parameters: 10 
## AIC = 5337.61
## BIC = 5386.688; SABIC = 5354.927
## G2 (21) = 31.7, p = 0.0628
## RMSEA = 0.023, CFI = NaN, TLI = NaN
coef(mod2)
## $Item.1
##            a1     d  g  u
## par     0.988 1.856  0  1
## CI_2.5  0.639 1.599 NA NA
## CI_97.5 1.336 2.112 NA NA
## 
## $Item.2
##            a1     d  g  u
## par     1.081 0.808  0  1
## CI_2.5  0.755 0.629 NA NA
## CI_97.5 1.407 0.987 NA NA
## 
## $Item.3
##            a1     d  g  u
## par     1.707 1.805  0  1
## CI_2.5  1.086 1.395 NA NA
## CI_97.5 2.329 2.215 NA NA
## 
## $Item.4
##            a1     d  g  u
## par     0.765 0.486  0  1
## CI_2.5  0.500 0.339 NA NA
## CI_97.5 1.030 0.633 NA NA
## 
## $Item.5
##            a1     d  g  u
## par     0.736 1.854  0  1
## CI_2.5  0.437 1.630 NA NA
## CI_97.5 1.034 2.079 NA NA
## 
## $GroupPars
##         MEAN_1 COV_11
## par          0      1
## CI_2.5      NA     NA
## CI_97.5     NA     NA
(mod3 <- mirt(data, 1, SE = TRUE, SE.type = 'Richardson')) #with numerical Richardson method
## 
## Call:
## mirt(data = data, model = 1, SE = TRUE, SE.type = "Richardson")
## 
## Full-information item factor analysis with 1 factor(s).
## Converged within 1e-04 tolerance after 28 EM iterations.
## mirt version: 1.43 
## M-step optimizer: BFGS 
## EM acceleration: Ramsay 
## Number of rectangular quadrature: 61
## Latent density type: Gaussian 
## 
## Information matrix estimated with method: Richardson
## Second-order test: model is a possible local maximum
## Condition number of information matrix =  30.23102
## 
## Log-likelihood = -2658.805
## Estimated parameters: 10 
## AIC = 5337.61
## BIC = 5386.688; SABIC = 5354.927
## G2 (21) = 31.7, p = 0.0628
## RMSEA = 0.023, CFI = NaN, TLI = NaN
residuals(mod1)
## LD matrix (lower triangle) and standardized residual correlations (upper triangle)
## 
## Upper triangle summary:
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  -0.037  -0.020  -0.007   0.001   0.024   0.051 
## 
##        Item.1 Item.2 Item.3 Item.4 Item.5
## Item.1        -0.021 -0.029  0.051  0.049
## Item.2  0.453         0.033 -0.016 -0.037
## Item.3  0.854  1.060        -0.012 -0.002
## Item.4  2.572  0.267  0.153         0.000
## Item.5  2.389  1.384  0.003  0.000
plot(mod1) #test score function
plot(mod1, type = 'trace') #trace lines
plot(mod2, type = 'info') #test information
plot(mod2, MI=200) #expected total score with 95% confidence intervals
# estimate a 3PL model for item 5 only
(mod1.3PL <- mirt(data, 1, itemtype = c('2PL', '2PL', '2PL', '2PL', '3PL')))
## 
## Call:
## mirt(data = data, model = 1, itemtype = c("2PL", "2PL", "2PL", 
##     "2PL", "3PL"))
## 
## Full-information item factor analysis with 1 factor(s).
## Converged within 1e-04 tolerance after 43 EM iterations.
## mirt version: 1.43 
## M-step optimizer: BFGS 
## EM acceleration: Ramsay 
## Number of rectangular quadrature: 61
## Latent density type: Gaussian 
## 
## Log-likelihood = -2658.794
## Estimated parameters: 11 
## AIC = 5339.587
## BIC = 5393.573; SABIC = 5358.636
## G2 (20) = 31.68, p = 0.0469
## RMSEA = 0.024, CFI = NaN, TLI = NaN
coef(mod1.3PL)
## $Item.1
##        a1     d g u
## par 0.987 1.855 0 1
## 
## $Item.2
##        a1     d g u
## par 1.082 0.808 0 1
## 
## $Item.3
##        a1     d g u
## par 1.706 1.805 0 1
## 
## $Item.4
##        a1     d g u
## par 0.764 0.486 0 1
## 
## $Item.5
##        a1     d     g u
## par 0.778 1.643 0.161 1
## 
## $GroupPars
##     MEAN_1 COV_11
## par      0      1
# internally the g and u pars are stored as logits, so it is usually a good idea
#  to include a normal prior to help stabilize the parameters. For a g value
#  around .182 use a prior mean of -1.5 (since 1 / (1 + exp(-(-1.5))) == .182)
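As a sanity check on that conversion, the inverse logit (`plogis` in base R) maps the prior mean back to the implied guessing probability, and `qlogis` gives the reverse mapping:

```r
# inverse logit: prob = 1 / (1 + exp(-x)); plogis() computes the same
logit_mean <- -1.5
p <- 1 / (1 + exp(-logit_mean))
round(p, 3)                    # approximately 0.182
round(plogis(logit_mean), 3)   # same value via the built-in
# reverse mapping: probability -> logit
round(qlogis(0.182), 2)        # approximately -1.5
```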
model <- 'F = 1-5
         PRIOR = (5, g, norm, -1.5, 3)'
mod1.3PL.norm <- mirt(data, model, itemtype = c('2PL', '2PL', '2PL', '2PL', '3PL'))
coef(mod1.3PL.norm)
## $Item.1
##        a1     d g u
## par 0.987 1.855 0 1
## 
## $Item.2
##        a1     d g u
## par 1.083 0.808 0 1
## 
## $Item.3
##        a1     d g u
## par 1.706 1.804 0 1
## 
## $Item.4
##        a1     d g u
## par 0.764 0.486 0 1
## 
## $Item.5
##        a1   d    g u
## par 0.788 1.6 0.19 1
## 
## $GroupPars
##     MEAN_1 COV_11
## par      0      1
#limited-information fit statistics
M2(mod1.3PL.norm)
##             M2 df          p      RMSEA RMSEA_5   RMSEA_95      SRMSR       TLI
## stats 8.800082  4 0.06629543 0.03465864       0 0.06610847 0.03207363 0.9454563
##             CFI
## stats 0.9781825
# unidimensional ideal point model
idealpt <- mirt(data, 1, itemtype = 'ideal')
plot(idealpt, type = 'trace', facet_items = TRUE)
plot(idealpt, type = 'trace', facet_items = FALSE)
# two factors (exploratory)
mod2 <- mirt(data, 2)
coef(mod2)
## $Item.1
##         a1   a2     d g u
## par -2.007 0.87 2.648 0 1
## 
## $Item.2
##         a1     a2     d g u
## par -0.849 -0.522 0.788 0 1
## 
## $Item.3
##         a1     a2     d g u
## par -2.153 -1.837 2.483 0 1
## 
## $Item.4
##         a1     a2     d g u
## par -0.756 -0.028 0.485 0 1
## 
## $Item.5
##         a1 a2     d g u
## par -0.757  0 1.864 0 1
## 
## $GroupPars
##     MEAN_1 MEAN_2 COV_11 COV_21 COV_22
## par      0      0      1      0      1
summary(mod2, rotate = 'oblimin') #oblimin rotation
## 
## Rotation:  oblimin 
## 
## Rotated factor loadings: 
## 
##             F1      F2    h2
## Item.1  0.7944 -0.0111 0.623
## Item.2  0.0804  0.4630 0.255
## Item.3 -0.0129  0.8628 0.734
## Item.4  0.2794  0.1925 0.165
## Item.5  0.2929  0.1772 0.165
## 
## Rotated SS loadings:  0.802 1.027 
## 
## Factor correlations: 
## 
##       F1 F2
## F1 1.000   
## F2 0.463  1
residuals(mod2)
## LD matrix (lower triangle) and standardized residual correlations (upper triangle)
## 
## Upper triangle summary:
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  -0.018  -0.001   0.000   0.000   0.002   0.011 
## 
##        Item.1 Item.2 Item.3 Item.4 Item.5
## Item.1        -0.001  0.001  0.002  0.003
## Item.2  0.001         0.000  0.011 -0.018
## Item.3  0.001  0.000        -0.002  0.006
## Item.4  0.002  0.111  0.004        -0.001
## Item.5  0.008  0.325  0.041  0.001
plot(mod2)
plot(mod2, rotate = 'oblimin')
anova(mod1, mod2) #compare the two models
##           AIC    SABIC       HQ      BIC    logLik     X2 df     p
## mod1 5337.610 5354.927 5356.263 5386.688 -2658.805                
## mod2 5335.039 5359.283 5361.153 5403.748 -2653.520 10.571  4 0.032
scoresfull <- fscores(mod2) #factor scores for each response pattern
head(scoresfull)
##             F1        F2
## [1,] -1.700518 -1.711769
## [2,] -1.700518 -1.711769
## [3,] -1.700518 -1.711769
## [4,] -1.700518 -1.711769
## [5,] -1.700518 -1.711769
## [6,] -1.700518 -1.711769
scorestable <- fscores(mod2, full.scores = FALSE) #save factor score table
## 
## Method:  EAP
## Rotate:  oblimin
## 
## Empirical Reliability:
## 
##     F1     F2 
## 0.2717 0.3565
head(scorestable)
##      Item.1 Item.2 Item.3 Item.4 Item.5        F1         F2     SE_F1
## [1,]      0      0      0      0      0 -1.700518 -1.7117695 0.8233469
## [2,]      0      0      0      0      1 -1.442222 -1.5315376 0.8291567
## [3,]      0      0      0      1      0 -1.449006 -1.5246585 0.8289641
## [4,]      0      0      0      1      1 -1.186299 -1.3432791 0.8376106
## [5,]      0      0      1      0      0 -1.369488 -0.7080810 0.8344641
## [6,]      0      0      1      0      1 -1.099377 -0.5102857 0.8455261
##          SE_F2
## [1,] 0.7705757
## [2,] 0.7691490
## [3,] 0.7691109
## [4,] 0.7711287
## [5,] 0.7962932
## [6,] 0.8101314
# confirmatory (as an example only; the model is not identified since you need at least 3 items per factor)
# Two ways to define a confirmatory model: with mirt.model, or with a string
# these model definitions are equivalent
cmodel <- mirt.model('
   F1 = 1,4,5
   F2 = 2,3')
cmodel2 <- 'F1 = 1,4,5
            F2 = 2,3'
cmod <- mirt(data, cmodel)
# cmod <- mirt(data, cmodel2) # same as above
coef(cmod)
## $Item.1
##        a1 a2     d g u
## par 1.792  0 2.358 0 1
## 
## $Item.2
##     a1    a2   d g u
## par  0 1.427 0.9 0 1
## 
## $Item.3
##     a1    a2     d g u
## par  0 1.559 1.725 0 1
## 
## $Item.4
##        a1 a2     d g u
## par 0.743  0 0.483 0 1
## 
## $Item.5
##        a1 a2     d g u
## par 0.763  0 1.867 0 1
## 
## $GroupPars
##     MEAN_1 MEAN_2 COV_11 COV_21 COV_22
## par      0      0      1      0      1
anova(cmod, mod2)
##           AIC    SABIC       HQ      BIC    logLik     X2 df p
## cmod 5392.596 5409.913 5411.249 5441.674 -2686.298            
## mod2 5335.039 5359.283 5361.153 5403.748 -2653.520 65.557  4 0
# check whether the model is identified by computing the information matrix
(cmod <- mirt(data, cmodel, SE = TRUE))
## Warning: Could not invert information matrix; model may not be (empirically)
## identified.
## 
## Call:
## mirt(data = data, model = cmodel, SE = TRUE)
## 
## Full-information item factor analysis with 2 factor(s).
## Converged within 1e-04 tolerance after 125 EM iterations.
## mirt version: 1.43 
## M-step optimizer: BFGS 
## EM acceleration: Ramsay 
## Number of rectangular quadrature: 31
## Latent density type: Gaussian 
## 
## Information matrix estimated with method: Oakes
## Second-order test: model is not a maximum or the information matrix is too inaccurate
## 
## Log-likelihood = -2686.298
## Estimated parameters: 10 
## AIC = 5392.596
## BIC = 5441.674; SABIC = 5409.913
## G2 (21) = 86.69, p = 0
## RMSEA = 0.056, CFI = NaN, TLI = NaN
###########
# data from the 'ltm' package in numeric format
itemstats(Science)
## $overall
##    N mean_total.score sd_total.score ave.r  sd.r alpha SEM.alpha
##  392           11.668          2.003 0.275 0.098 0.598      1.27
## 
## $itemstats
##           N  mean    sd total.r total.r_if_rm alpha_if_rm
## Comfort 392 3.120 0.588   0.596         0.352       0.552
## Work    392 2.722 0.807   0.666         0.332       0.567
## Future  392 2.990 0.757   0.748         0.488       0.437
## Benefit 392 2.837 0.802   0.684         0.363       0.541
## 
## $proportions
##             1     2     3     4
## Comfort 0.013 0.082 0.679 0.227
## Work    0.084 0.250 0.526 0.140
## Future  0.036 0.184 0.536 0.245
## Benefit 0.054 0.255 0.492 0.199
pmod1 <- mirt(Science, 1)
plot(pmod1)
plot(pmod1, type = 'trace')
plot(pmod1, type = 'itemscore')
summary(pmod1)
##            F1    h2
## Comfort 0.522 0.273
## Work    0.584 0.342
## Future  0.803 0.645
## Benefit 0.541 0.293
## 
## SS loadings:  1.552 
## Proportion Var:  0.388 
## 
## Factor correlations: 
## 
##    F1
## F1  1
# Constrain all slopes to be equal with the constrain = list() input or mirt.model() syntax
# first obtain parameter index
values <- mirt(Science,1, pars = 'values')
values #note that slopes are numbered 1,5,9,13, or index with values$parnum[values$name == 'a1']
##    group    item     class   name parnum  value lbound ubound   est const
## 1    all Comfort    graded     a1      1  0.851   -Inf    Inf  TRUE  none
## 2    all Comfort    graded     d1      2  4.390   -Inf    Inf  TRUE  none
## 3    all Comfort    graded     d2      3  2.583   -Inf    Inf  TRUE  none
## 4    all Comfort    graded     d3      4 -1.471   -Inf    Inf  TRUE  none
## 5    all    Work    graded     a1      5  0.851   -Inf    Inf  TRUE  none
## 6    all    Work    graded     d1      6  2.707   -Inf    Inf  TRUE  none
## 7    all    Work    graded     d2      7  0.842   -Inf    Inf  TRUE  none
## 8    all    Work    graded     d3      8 -2.120   -Inf    Inf  TRUE  none
## 9    all  Future    graded     a1      9  0.851   -Inf    Inf  TRUE  none
## 10   all  Future    graded     d1     10  3.543   -Inf    Inf  TRUE  none
## 11   all  Future    graded     d2     11  1.522   -Inf    Inf  TRUE  none
## 12   all  Future    graded     d3     12 -1.357   -Inf    Inf  TRUE  none
## 13   all Benefit    graded     a1     13  0.851   -Inf    Inf  TRUE  none
## 14   all Benefit    graded     d1     14  3.166   -Inf    Inf  TRUE  none
## 15   all Benefit    graded     d2     15  0.982   -Inf    Inf  TRUE  none
## 16   all Benefit    graded     d3     16 -1.661   -Inf    Inf  TRUE  none
## 17   all   GROUP GroupPars MEAN_1     17  0.000   -Inf    Inf FALSE  none
## 18   all   GROUP GroupPars COV_11     18  1.000      0    Inf FALSE  none
##    nconst prior.type prior_1 prior_2
## 1    none       none     NaN     NaN
## 2    none       none     NaN     NaN
## 3    none       none     NaN     NaN
## 4    none       none     NaN     NaN
## 5    none       none     NaN     NaN
## 6    none       none     NaN     NaN
## 7    none       none     NaN     NaN
## 8    none       none     NaN     NaN
## 9    none       none     NaN     NaN
## 10   none       none     NaN     NaN
## 11   none       none     NaN     NaN
## 12   none       none     NaN     NaN
## 13   none       none     NaN     NaN
## 14   none       none     NaN     NaN
## 15   none       none     NaN     NaN
## 16   none       none     NaN     NaN
## 17   none       none     NaN     NaN
## 18   none       none     NaN     NaN
(pmod1_equalslopes <- mirt(Science, 1, constrain = list(c(1,5,9,13))))
## 
## Call:
## mirt(data = Science, model = 1, constrain = list(c(1, 5, 9, 13)))
## 
## Full-information item factor analysis with 1 factor(s).
## Converged within 1e-04 tolerance after 15 EM iterations.
## mirt version: 1.43 
## M-step optimizer: BFGS 
## EM acceleration: Ramsay 
## Number of rectangular quadrature: 61
## Latent density type: Gaussian 
## 
## Log-likelihood = -1613.899
## Estimated parameters: 16 
## AIC = 3253.798
## BIC = 3305.425; SABIC = 3264.176
## G2 (242) = 223.62, p = 0.7959
## RMSEA = 0, CFI = NaN, TLI = NaN
coef(pmod1_equalslopes)
## $Comfort
##        a1    d1    d2     d3
## par 1.321 5.165 2.844 -1.587
## 
## $Work
##        a1    d1    d2     d3
## par 1.321 2.992 0.934 -2.319
## 
## $Future
##        a1    d1    d2     d3
## par 1.321 4.067 1.662 -1.488
## 
## $Benefit
##        a1   d1    d2     d3
## par 1.321 3.55 1.057 -1.806
## 
## $GroupPars
##     MEAN_1 COV_11
## par      0      1
# using mirt.model syntax, constrain all item slopes to be equal
model <- 'F = 1-4
          CONSTRAIN = (1-4, a1)'
(pmod1_equalslopes <- mirt(Science, model))
## 
## Call:
## mirt(data = Science, model = model)
## 
## Full-information item factor analysis with 1 factor(s).
## Converged within 1e-04 tolerance after 15 EM iterations.
## mirt version: 1.43 
## M-step optimizer: BFGS 
## EM acceleration: Ramsay 
## Number of rectangular quadrature: 61
## Latent density type: Gaussian 
## 
## Log-likelihood = -1613.899
## Estimated parameters: 16 
## AIC = 3253.798
## BIC = 3305.425; SABIC = 3264.176
## G2 (242) = 223.62, p = 0.7959
## RMSEA = 0, CFI = NaN, TLI = NaN
coef(pmod1_equalslopes)
## $Comfort
##        a1    d1    d2     d3
## par 1.321 5.165 2.844 -1.587
## 
## $Work
##        a1    d1    d2     d3
## par 1.321 2.992 0.934 -2.319
## 
## $Future
##        a1    d1    d2     d3
## par 1.321 4.067 1.662 -1.488
## 
## $Benefit
##        a1   d1    d2     d3
## par 1.321 3.55 1.057 -1.806
## 
## $GroupPars
##     MEAN_1 COV_11
## par      0      1
anova(pmod1_equalslopes, pmod1) #equal slopes fit significantly worse; AIC and SABIC agree, though BIC prefers the simpler model
##                        AIC    SABIC       HQ      BIC    logLik     X2 df     p
## pmod1_equalslopes 3253.798 3264.176 3274.259 3305.425 -1613.899                
## pmod1             3249.739 3262.512 3274.922 3313.279 -1608.870 10.059  3 0.018
pmod2 <- mirt(Science, 2)
summary(pmod2)
## 
## Rotation:  oblimin 
## 
## Rotated factor loadings: 
## 
##              F1      F2    h2
## Comfort  0.6016  0.0312 0.382
## Work    -0.0573  0.7971 0.592
## Future   0.3302  0.5153 0.548
## Benefit  0.7231 -0.0239 0.506
## 
## Rotated SS loadings:  0.997 0.902 
## 
## Factor correlations: 
## 
##       F1 F2
## F1 1.000   
## F2 0.511  1
plot(pmod2, rotate = 'oblimin')
itemplot(pmod2, 1, rotate = 'oblimin')
anova(pmod1, pmod2)
##            AIC    SABIC       HQ      BIC    logLik     X2 df     p
## pmod1 3249.739 3262.512 3274.922 3313.279 -1608.870                
## pmod2 3241.938 3257.106 3271.843 3317.392 -1601.969 13.801  3 0.003
# unidimensional fits with a generalized partial credit and a nominal model
(gpcmod <- mirt(Science, 1, 'gpcm'))
## 
## Call:
## mirt(data = Science, model = 1, itemtype = "gpcm")
## 
## Full-information item factor analysis with 1 factor(s).
## Converged within 1e-04 tolerance after 50 EM iterations.
## mirt version: 1.43 
## M-step optimizer: BFGS 
## EM acceleration: Ramsay 
## Number of rectangular quadrature: 61
## Latent density type: Gaussian 
## 
## Log-likelihood = -1612.683
## Estimated parameters: 16 
## AIC = 3257.366
## BIC = 3320.906; SABIC = 3270.139
## G2 (239) = 221.19, p = 0.7896
## RMSEA = 0, CFI = NaN, TLI = NaN
coef(gpcmod)
## $Comfort
##        a1 ak0 ak1 ak2 ak3 d0    d1    d2    d3
## par 0.865   0   1   2   3  0 2.831 5.324 3.998
## 
## $Work
##        a1 ak0 ak1 ak2 ak3 d0    d1    d2    d3
## par 0.841   0   1   2   3  0 1.711 2.578 0.848
## 
## $Future
##        a1 ak0 ak1 ak2 ak3 d0    d1    d2    d3
## par 2.204   0   1   2   3  0 4.601 6.759 4.918
## 
## $Benefit
##        a1 ak0 ak1 ak2 ak3 d0    d1    d2    d3
## par 0.724   0   1   2   3  0 2.099 2.899 1.721
## 
## $GroupPars
##     MEAN_1 COV_11
## par      0      1
# for the nominal model the lowest and highest categories are assumed to be the
#  theoretically lowest and highest categories that relate to the latent trait(s)
(nomod <- mirt(Science, 1, 'nominal'))
## 
## Call:
## mirt(data = Science, model = 1, itemtype = "nominal")
## 
## Full-information item factor analysis with 1 factor(s).
## Converged within 1e-04 tolerance after 71 EM iterations.
## mirt version: 1.43 
## M-step optimizer: BFGS 
## EM acceleration: Ramsay 
## Number of rectangular quadrature: 61
## Latent density type: Gaussian 
## 
## Log-likelihood = -1608.455
## Estimated parameters: 24 
## AIC = 3264.91
## BIC = 3360.22; SABIC = 3284.069
## G2 (231) = 212.73, p = 0.8002
## RMSEA = 0, CFI = NaN, TLI = NaN
coef(nomod) #ordering of ak values suggests that the items are indeed ordinal
## $Comfort
##        a1 ak0   ak1   ak2 ak3 d0    d1    d2    d3
## par 1.008   0 1.541 1.999   3  0 3.639 5.905 4.533
## 
## $Work
##        a1 ak0   ak1 ak2 ak3 d0    d1    d2    d3
## par 0.841   0 0.689 1.5   3  0 1.464 2.326 0.325
## 
## $Future
##        a1 ak0   ak1   ak2 ak3 d0    d1    d2    d3
## par 2.041   0 0.762 1.861   3  0 3.668 5.867 3.949
## 
## $Benefit
##        a1 ak0   ak1   ak2 ak3 d0    d1    d2    d3
## par 0.779   0 1.036 1.742   3  0 2.144 2.911 1.621
## 
## $GroupPars
##     MEAN_1 COV_11
## par      0      1
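As a quick check of that ordinality claim, the estimated ak scoring coefficients should be strictly increasing for each item. A base-R sketch using the estimates printed above (in practice, extract them with `coef(nomod, simplify = TRUE)`):

```r
# ak estimates copied from the printed output above
ak <- list(Comfort = c(0, 1.541, 1.999, 3),
           Work    = c(0, 0.689, 1.5,   3),
           Future  = c(0, 0.762, 1.861, 3),
           Benefit = c(0, 1.036, 1.742, 3))
# strictly increasing ak values indicate ordered categories
sapply(ak, function(x) all(diff(x) > 0))
```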
anova(gpcmod, nomod)
##             AIC    SABIC       HQ      BIC    logLik    X2 df    p
## gpcmod 3257.366 3270.139 3282.549 3320.906 -1612.683              
## nomod  3264.910 3284.069 3302.684 3360.220 -1608.455 8.456  8 0.39
itemplot(nomod, 3)
# generalized graded unfolding model
(ggum <- mirt(Science, 1, 'ggum'))
## Warning: EM cycles terminated after 500 iterations.
## 
## Call:
## mirt(data = Science, model = 1, itemtype = "ggum")
## 
## Full-information item factor analysis with 1 factor(s).
## FAILED TO CONVERGE within 1e-04 tolerance after 500 EM iterations.
## mirt version: 1.43 
## M-step optimizer: nlminb 
## EM acceleration: Ramsay 
## Number of rectangular quadrature: 61
## Latent density type: Gaussian 
## 
## Log-likelihood = -1624.053
## Estimated parameters: 20 
## AIC = 3288.107
## BIC = 3367.532; SABIC = 3304.072
## G2 (235) = 243.93, p = 0.3309
## RMSEA = 0.01, CFI = NaN, TLI = NaN
coef(ggum, simplify=TRUE)
## $items
##            a1     b1    t1    t2     t3
## Comfort 1.488 -0.485 3.191 2.635 -0.167
## Work    1.190  0.041 2.171 1.427 -0.720
## Future  4.165 -0.042 2.167 1.346  0.261
## Benefit 1.227 -0.476 2.776 1.497 -0.274
## 
## $means
## F1 
##  0 
## 
## $cov
##    F1
## F1  1
plot(ggum)
plot(ggum, type = 'trace')
plot(ggum, type = 'itemscore')
# monotonic polynomial models
(monopoly <- mirt(Science, 1, 'monopoly'))
## 
## Call:
## mirt(data = Science, model = 1, itemtype = "monopoly")
## 
## Full-information item factor analysis with 1 factor(s).
## Converged within 1e-04 tolerance after 50 EM iterations.
## mirt version: 1.43 
## M-step optimizer: BFGS 
## EM acceleration: Ramsay 
## Number of rectangular quadrature: 61
## Latent density type: Gaussian 
## 
## Log-likelihood = -1601.179
## Estimated parameters: 24 
## AIC = 3250.358
## BIC = 3345.668; SABIC = 3269.517
## G2 (231) = 198.18, p = 0.9423
## RMSEA = 0, CFI = NaN, TLI = NaN
coef(monopoly, simplify=TRUE)
## $items
##          omega   xi1   xi2    xi3 alpha1   tau2
## Comfort -1.469 2.929 2.218 -1.469 -0.968  0.775
## Work    -0.405 1.378 0.699 -2.152 -0.494 -1.179
## Future   0.828 4.939 2.246 -1.909  0.015 -8.461
## Benefit -1.747 1.887 0.618 -1.388 -1.467  0.746
## 
## $means
## F1 
##  0 
## 
## $cov
##    F1
## F1  1
plot(monopoly)
plot(monopoly, type = 'trace')
plot(monopoly, type = 'itemscore')
# unipolar IRT model
unimod <- mirt(Science, itemtype = 'ULL')
coef(unimod, simplify=TRUE)
## $items
##          eta1 log_lambda1 log_lambda2 log_lambda3
## Comfort 1.175       4.775       2.299      -1.709
## Work    1.618       2.533       0.554      -2.737
## Future  2.799       4.029       1.524      -2.593
## Benefit 1.319       3.020       0.681      -1.995
## 
## $GroupPars
##     meanlog sdlog
## par       0     1
plot(unimod)
plot(unimod, type = 'trace')
itemplot(unimod, 1)
# the following use the correct log-normal density for the latent trait
itemfit(unimod)
##      item   S_X2 df.S_X2 RMSEA.S_X2 p.S_X2
## 1 Comfort  5.662       6      0.000  0.462
## 2    Work 10.135       8      0.026  0.256
## 3  Future 19.460       8      0.061  0.013
## 4 Benefit 12.104      11      0.016  0.356
M2(unimod, type = 'C2')
##             M2 df            p     RMSEA    RMSEA_5  RMSEA_95      SRMSR
## stats 18.69399  2 8.722705e-05 0.1461089 0.09025438 0.2096054 0.07864884
##             TLI       CFI
## stats 0.7377197 0.9125732
fs <- fscores(unimod)
hist(fs, 20)
fscores(unimod, method = 'EAPsum', full.scores = FALSE)
##       df     X2 p.X2 SEM.alpha rxx.alpha rxx_F1
## stats 10 33.917    0     1.305     0.658  0.531
##    Sum.Scores    F1 SE_F1 observed expected std.res
## 4           4 0.138 0.153        2    0.127   5.251
## 5           5 0.304 0.088        1    0.766   0.268
## 6           6 0.328 0.084        2    4.337   1.122
## 7           7 0.352 0.126        1   13.904   3.461
## 8           8 0.407 0.199       11   27.736   3.178
## 9           9 0.530 0.305       32   40.626   1.353
## 10         10 0.748 0.440       58   52.276   0.792
## 11         11 1.053 0.605       70   63.511   0.814
## 12         12 1.478 0.845       91   68.881   2.665
## 13         13 2.164 1.282       56   54.419   0.214
## 14         14 3.299 2.001       36   36.185   0.031
## 15         15 5.109 3.236       20   20.819   0.179
## 16         16 8.222 5.298       12    8.414   1.236
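The std.res column appears to be the absolute standardized residual, |observed - expected| / sqrt(expected); recomputing the first three rows of the table above as a check:

```r
# observed and expected counts copied from the printed table
observed <- c(2, 1, 2)
expected <- c(0.127, 0.766, 4.337)
std_res <- abs(observed - expected) / sqrt(expected)
round(std_res, 3)
# close to the reported 5.251, 0.268, 1.122; small differences reflect
# rounding of the printed expected counts
```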
# example applying survey weights
# weight the first half of the cases to be more representative of the population
survey.weights <- c(rep(2, nrow(Science)/2), rep(1, nrow(Science)/2))
survey.weights <- survey.weights/sum(survey.weights) * nrow(Science)
unweighted <- mirt(Science, 1)
weighted <- mirt(Science, 1, survey.weights=survey.weights)
###########
# empirical dimensionality testing that includes 'guessing'
data(SAT12)
data <- key2binary(SAT12,
  key = c(1,4,5,2,3,1,2,1,3,1,2,4,2,1,5,3,4,4,1,4,3,3,4,1,3,5,1,3,1,5,4,5))
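key2binary scores each response column against its keyed alternative while preserving missing values; a minimal base-R sketch of the same idea on a hypothetical toy matrix:

```r
# toy responses: 3 persons x 2 items, keyed answers 1 and 4
resp <- matrix(c(1, 3, NA,
                 4, 4, 2), nrow = 3)
key  <- c(1, 4)
# compare each column to its key; TRUE/FALSE -> 1/0, NA stays NA
scored <- 1L * sweep(resp, 2, key, "==")
scored
```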
itemstats(data)
## $overall
##    N mean_total.score sd_total.score ave.r  sd.r alpha SEM.alpha
##  600           18.202          5.054 0.108 0.075 0.798     2.272
## 
## $itemstats
##           N  mean    sd total.r total.r_if_rm alpha_if_rm
## Item.1  600 0.283 0.451   0.380         0.300       0.793
## Item.2  600 0.568 0.496   0.539         0.464       0.785
## Item.3  600 0.280 0.449   0.446         0.371       0.789
## Item.4  600 0.378 0.485   0.325         0.235       0.796
## Item.5  600 0.620 0.486   0.424         0.340       0.791
## Item.6  600 0.160 0.367   0.414         0.351       0.791
## Item.7  600 0.760 0.427   0.366         0.289       0.793
## Item.8  600 0.202 0.402   0.307         0.233       0.795
## Item.9  600 0.885 0.319   0.189         0.127       0.798
## Item.10 600 0.422 0.494   0.465         0.383       0.789
## Item.11 600 0.983 0.128   0.181         0.156       0.797
## Item.12 600 0.415 0.493   0.173         0.076       0.803
## Item.13 600 0.662 0.474   0.438         0.358       0.790
## Item.14 600 0.723 0.448   0.411         0.333       0.791
## Item.15 600 0.817 0.387   0.393         0.325       0.792
## Item.16 600 0.413 0.493   0.367         0.278       0.794
## Item.17 600 0.963 0.188   0.238         0.202       0.796
## Item.18 600 0.352 0.478   0.576         0.508       0.783
## Item.19 600 0.548 0.498   0.401         0.314       0.792
## Item.20 600 0.873 0.333   0.376         0.318       0.792
## Item.21 600 0.915 0.279   0.190         0.136       0.798
## Item.22 600 0.935 0.247   0.284         0.238       0.795
## Item.23 600 0.313 0.464   0.338         0.253       0.795
## Item.24 600 0.728 0.445   0.422         0.346       0.791
## Item.25 600 0.375 0.485   0.383         0.297       0.793
## Item.26 600 0.460 0.499   0.562         0.489       0.783
## Item.27 600 0.862 0.346   0.425         0.367       0.791
## Item.28 600 0.530 0.500   0.465         0.383       0.789
## Item.29 600 0.340 0.474   0.407         0.324       0.791
## Item.30 600 0.440 0.497   0.255         0.159       0.799
## Item.31 600 0.833 0.373   0.479         0.419       0.788
## Item.32 600 0.162 0.368   0.110         0.037       0.802
## 
## $proportions
##             0     1
## Item.1  0.717 0.283
## Item.2  0.432 0.568
## Item.3  0.720 0.280
## Item.4  0.622 0.378
## Item.5  0.380 0.620
## Item.6  0.840 0.160
## Item.7  0.240 0.760
## Item.8  0.798 0.202
## Item.9  0.115 0.885
## Item.10 0.578 0.422
## Item.11 0.017 0.983
## Item.12 0.585 0.415
## Item.13 0.338 0.662
## Item.14 0.277 0.723
## Item.15 0.183 0.817
## Item.16 0.587 0.413
## Item.17 0.037 0.963
## Item.18 0.648 0.352
## Item.19 0.452 0.548
## Item.20 0.127 0.873
## Item.21 0.085 0.915
## Item.22 0.065 0.935
## Item.23 0.687 0.313
## Item.24 0.272 0.728
## Item.25 0.625 0.375
## Item.26 0.540 0.460
## Item.27 0.138 0.862
## Item.28 0.470 0.530
## Item.29 0.660 0.340
## Item.30 0.560 0.440
## Item.31 0.167 0.833
## Item.32 0.838 0.162
mod1 <- mirt(data, 1)
extract.mirt(mod1, 'time') #time elapsed for each estimation component
## TOTAL:   Data  Estep  Mstep     SE   Post 
##  0.389  0.068  0.123  0.183  0.000  0.001
# optionally use Newton-Raphson for (generally) faster convergence in the M-step
mod1 <- mirt(data, 1, optimizer = 'NR')
extract.mirt(mod1, 'time')
## TOTAL:   Data  Estep  Mstep     SE   Post 
##  0.239  0.052  0.083  0.088  0.000  0.000
mod2 <- mirt(data, 2, optimizer = 'NR')
## Warning: EM cycles terminated after 500 iterations.
# convergence is difficult with the reduced default quadpts, so relax TOL
mod3 <- mirt(data, 3, TOL = .001, optimizer = 'NR')
anova(mod1,mod2)
##           AIC    SABIC       HQ      BIC    logLik     X2 df p
## mod1 19105.91 19184.13 19215.46 19387.31 -9488.955            
## mod2 19073.92 19190.03 19236.53 19491.63 -9441.963 93.985 31 0
anova(mod2, mod3) #AIC increases for mod3 despite the significant X2; 2 factors probably best
##           AIC    SABIC       HQ      BIC    logLik     X2 df     p
## mod2 19073.92 19190.03 19236.53 19491.63 -9441.963                
## mod3 19080.18 19232.96 19294.13 19629.80 -9415.090 53.744 30 0.005
# same as above, but using the QMCEM method for generally better accuracy in mod3
mod3 <- mirt(data, 3, method = 'QMCEM', TOL = .001, optimizer = 'NR')
anova(mod2, mod3)
##           AIC    SABIC       HQ      BIC    logLik     X2 df     p
## mod2 19073.92 19190.03 19236.53 19491.63 -9441.963                
## mod3 19081.58 19234.36 19295.54 19631.20 -9415.792 52.342 30 0.007
# with fixed guessing parameters
mod1g <- mirt(data, 1, guess = .1)
coef(mod1g)
## $Item.1
##        a1      d   g u
## par 1.211 -1.737 0.1 1
## 
## $Item.2
##       a1     d   g u
## par 1.78 0.147 0.1 1
## 
## $Item.3
##       a1    d   g u
## par 1.91 -2.2 0.1 1
## 
## $Item.4
##        a1      d   g u
## par 0.833 -0.944 0.1 1
## 
## $Item.5
##        a1     d   g u
## par 1.089 0.399 0.1 1
## 
## $Item.6
##        a1      d   g u
## par 3.265 -5.212 0.1 1
## 
## $Item.7
##       a1     d   g u
## par 1.02 1.224 0.1 1
## 
## $Item.8
##        a1      d   g u
## par 1.639 -2.977 0.1 1
## 
## $Item.9
##       a1     d   g u
## par 0.49 2.007 0.1 1
## 
## $Item.10
##        a1      d   g u
## par 1.257 -0.756 0.1 1
## 
## $Item.11
##       a1    d   g u
## par 1.68 5.18 0.1 1
## 
## $Item.12
##        a1      d   g u
## par 0.191 -0.625 0.1 1
## 
## $Item.13
##        a1     d   g u
## par 1.147 0.654 0.1 1
## 
## $Item.14
##        a1     d   g u
## par 1.099 1.008 0.1 1
## 
## $Item.15
##        a1    d   g u
## par 1.337 1.79 0.1 1
## 
## $Item.16
##        a1      d   g u
## par 0.923 -0.744 0.1 1
## 
## $Item.17
##        a1     d   g u
## par 1.519 4.077 0.1 1
## 
## $Item.18
##        a1      d   g u
## par 2.585 -1.749 0.1 1
## 
## $Item.19
##       a1      d   g u
## par 0.91 -0.002 0.1 1
## 
## $Item.20
##        a1     d   g u
## par 1.485 2.438 0.1 1
## 
## $Item.21
##        a1     d   g u
## par 0.616 2.407 0.1 1
## 
## $Item.22
##        a1     d   g u
## par 1.429 3.291 0.1 1
## 
## $Item.23
##       a1      d   g u
## par 0.96 -1.393 0.1 1
## 
## $Item.24
##        a1     d   g u
## par 1.282 1.099 0.1 1
## 
## $Item.25
##        a1  d   g u
## par 1.028 -1 0.1 1
## 
## $Item.26
##        a1      d   g u
## par 2.059 -0.658 0.1 1
## 
## $Item.27
##        a1     d   g u
## par 1.839 2.564 0.1 1
## 
## $Item.28
##        a1      d   g u
## par 1.222 -0.095 0.1 1
## 
## $Item.29
##        a1      d   g u
## par 1.281 -1.357 0.1 1
## 
## $Item.30
##        a1      d   g u
## par 0.444 -0.521 0.1 1
## 
## $Item.31
##        a1     d   g u
## par 2.476 2.697 0.1 1
## 
## $Item.32
##        a1      d   g u
## par 0.461 -2.742 0.1 1
## 
## $GroupPars
##     MEAN_1 COV_11
## par      0      1
###########
# graded rating scale example
# make some data
set.seed(1234)
a <- matrix(rep(1, 10))
d <- matrix(c(1,0.5,-.5,-1), 10, 4, byrow = TRUE)
c <- seq(-1, 1, length.out=10)
data <- simdata(a, d + c, 2000, itemtype = rep('graded',10))
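A note on the `d + c` construction above: R recycles the length-10 vector column-wise across the 10 x 4 intercept matrix, so every intercept in row (item) i is shifted by the same constant c[i], which is precisely the structure the graded rating scale model assumes. A minimal sketch (c renamed to avoid masking `base::c`):

```r
# common category intercepts for 10 items, plus an item-specific shift
d <- matrix(c(1, 0.5, -0.5, -1), 10, 4, byrow = TRUE)
c_shift <- seq(-1, 1, length.out = 10)
shifted <- d + c_shift   # recycled down columns: row i shifted by c_shift[i]
shifted[1, ]    # the common intercepts minus 1
shifted[10, ]   # the common intercepts plus 1
```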
itemstats(data)
## $overall
##     N mean_total.score sd_total.score ave.r  sd.r alpha SEM.alpha
##  2000           20.196           8.33 0.203 0.027 0.719     4.419
## 
## $itemstats
##            N  mean    sd total.r total.r_if_rm alpha_if_rm
## Item_1  2000 1.284 1.510   0.512         0.359       0.700
## Item_2  2000 1.427 1.544   0.529         0.375       0.697
## Item_3  2000 1.592 1.584   0.545         0.389       0.695
## Item_4  2000 1.774 1.586   0.538         0.381       0.696
## Item_5  2000 1.910 1.607   0.539         0.380       0.696
## Item_6  2000 2.124 1.606   0.533         0.373       0.697
## Item_7  2000 2.284 1.598   0.520         0.359       0.700
## Item_8  2000 2.420 1.583   0.578         0.430       0.688
## Item_9  2000 2.606 1.543   0.530         0.377       0.697
## Item_10 2000 2.776 1.491   0.495         0.342       0.702
## 
## $proportions
##             0     1     2     3     4
## Item_1  0.500 0.096 0.182 0.065 0.158
## Item_2  0.450 0.108 0.197 0.059 0.187
## Item_3  0.407 0.108 0.182 0.092 0.212
## Item_4  0.346 0.111 0.212 0.085 0.246
## Item_5  0.319 0.102 0.211 0.086 0.281
## Item_6  0.269 0.097 0.205 0.099 0.330
## Item_7  0.244 0.073 0.211 0.101 0.372
## Item_8  0.216 0.074 0.195 0.106 0.410
## Item_9  0.175 0.072 0.196 0.083 0.473
## Item_10 0.150 0.059 0.174 0.102 0.516
mod1 <- mirt(data, 1)
mod2 <- mirt(data, 1, itemtype = 'grsm')
coef(mod2)
## $Item_1
##        a1    b1     b2     b3     b4 c
## par 0.959 0.001 -0.507 -1.541 -2.032 0
## 
## $Item_2
##        a1    b1     b2     b3     b4     c
## par 0.987 0.001 -0.507 -1.541 -2.032 0.235
## 
## $Item_3
##        a1    b1     b2     b3     b4     c
## par 0.994 0.001 -0.507 -1.541 -2.032 0.457
## 
## $Item_4
##        a1    b1     b2     b3     b4     c
## par 1.027 0.001 -0.507 -1.541 -2.032 0.728
## 
## $Item_5
##        a1    b1     b2     b3     b4     c
## par 0.995 0.001 -0.507 -1.541 -2.032 0.895
## 
## $Item_6
##        a1    b1     b2     b3     b4     c
## par 0.987 0.001 -0.507 -1.541 -2.032 1.179
## 
## $Item_7
##        a1    b1     b2     b3     b4     c
## par 0.957 0.001 -0.507 -1.541 -2.032 1.404
## 
## $Item_8
##       a1    b1     b2     b3     b4     c
## par 1.04 0.001 -0.507 -1.541 -2.032 1.578
## 
## $Item_9
##        a1    b1     b2     b3     b4     c
## par 0.964 0.001 -0.507 -1.541 -2.032 1.878
## 
## $Item_10
##        a1    b1     b2     b3     b4     c
## par 0.947 0.001 -0.507 -1.541 -2.032 2.136
## 
## $GroupPars
##     MEAN_1 COV_11
## par      0      1
anova(mod2, mod1) #X2 only marginally significant and all information criteria favor mod2, so mod2 should be preferred
##           AIC    SABIC       HQ      BIC    logLik     X2 df     p
## mod2 55239.72 55295.47 55287.03 55368.55 -27596.86                
## mod1 55252.05 55373.25 55354.88 55532.10 -27576.03 41.671 27 0.035
itemplot(mod2, 1)
itemplot(mod2, 5)
itemplot(mod2, 10)
###########
# 2PL nominal response model example (Suh and Bolt, 2010)
data(SAT12)
SAT12[SAT12 == 8] <- NA #set 8 as a missing value
head(SAT12)
##   Item.1 Item.2 Item.3 Item.4 Item.5 Item.6 Item.7 Item.8 Item.9 Item.10
## 1      1      4      5      2      3      1      2      1      3       1
## 2      3      4      2     NA      3      3      2     NA      3       1
## 3      1      4      5      4      3      2      2      3      3       2
## 4      2      4      4      2      3      3      2      4      3       2
## 5      2      4      5      2      3      2      2      1      1       2
## 6      1      4      3      1      3      2      2      3      3       1
##   Item.11 Item.12 Item.13 Item.14 Item.15 Item.16 Item.17 Item.18 Item.19
## 1       2       4       2       1       5       3       4       4       1
## 2       2      NA       2       1       5       2       4       1       1
## 3       2       1       3       1       5       5       4       1       3
## 4       2       4       2       1       5       2       4       1       3
## 5       2       4       2       1       5       4       4       5       1
## 6       2       3       2       1       5       5       4       4       1
##   Item.20 Item.21 Item.22 Item.23 Item.24 Item.25 Item.26 Item.27 Item.28
## 1       4       3       3       4       1       3       5       1       3
## 2       4       3       3      NA       1      NA       4       1       4
## 3       4       3       3       1       1       3       4       1       3
## 4       4       3       1       5       2       5       4       1       3
## 5       4       3       3       3       1       1       5       1       3
## 6       4       3       3       4       1       1       4       1       4
##   Item.29 Item.30 Item.31 Item.32
## 1       1       5       4       5
## 2       5      NA       4      NA
## 3       4       4       4       1
## 4       4       2       4       2
## 5       1       2       4       1
## 6       2       3       4       3
# correct answer key
key <- c(1,4,5,2,3,1,2,1,3,1,2,4,2,1,5,3,4,4,1,4,3,3,4,1,3,5,1,3,1,5,4,5)
scoredSAT12 <- key2binary(SAT12, key)
mod0 <- mirt(scoredSAT12, 1)
# for the first 5 items use the nominal and 2PLNRM models
scoredSAT12[,1:5] <- as.matrix(SAT12[,1:5])
mod1 <- mirt(scoredSAT12, 1, c(rep('nominal',5),rep('2PL', 27)))
mod2 <- mirt(scoredSAT12, 1, c(rep('2PLNRM',5),rep('2PL', 27)), key=key)
coef(mod0)$Item.1
##            a1         d g u
## par 0.8107167 -1.042366 0 1
coef(mod1)$Item.1
##             a1 ak0       ak1      ak2      ak3 ak4 d0         d1         d2
## par -0.8772035   0 0.5286601 1.116593 1.129494   4  0 -0.1909232 0.01878857
##             d3       d4
## par -0.1258261 -5.65218
coef(mod2)$Item.1
##            a1        d g u ak0        ak1        ak2       ak3 d0        d1
## par 0.8102548 -1.04233 0 1   0 -0.5653287 -0.5712706 -3.025613  0 0.2117761
##             d2        d3
## par 0.06919723 -5.309272
itemplot(mod0, 1)
itemplot(mod1, 1)
itemplot(mod2, 1)
# compare the added information from modeling the distractors
Theta <- matrix(seq(-4,4,.01))
par(mfrow = c(2,3))
for(i in 1:5){
    info <- iteminfo(extract.item(mod0,i), Theta)
    info2 <- iteminfo(extract.item(mod2,i), Theta)
    plot(Theta, info2, type = 'l', main = paste('Information for item', i), ylab = 'Information')
    lines(Theta, info, col = 'red')
}
par(mfrow = c(1,1))
# test information
plot(Theta, testinfo(mod2, Theta), type = 'l', main = 'Test information', ylab = 'Information')
lines(Theta, testinfo(mod0, Theta), col = 'red')
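# For the dichotomous 2PL items, the information being plotted has a closed
# form, I(theta) = a^2 * P(theta) * (1 - P(theta)); a quick base-R check with
# a hypothetical slope and intercept (not values from the models above):

```r
a <- 1.2; d <- -0.5                       # hypothetical item parameters
theta <- seq(-4, 4, 0.01)
P <- plogis(a * theta + d)                # 2PL in mirt's slope-intercept form
info <- a^2 * P * (1 - P)
theta[which.max(info)]                    # information peaks near theta = -d/a
max(info)                                 # and never exceeds a^2 / 4
```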
###########
# using the MH-RM algorithm
data(LSAT7)
fulldata <- expand.table(LSAT7)
(mod1 <- mirt(fulldata, 1, method = 'MHRM'))
## 
## Call:
## mirt(data = fulldata, model = 1, method = "MHRM")
## 
## Full-information item factor analysis with 1 factor(s).
## Converged within 0.001 tolerance after 73 MHRM iterations.
## mirt version: 1.43 
## M-step optimizer: NR1 
## Latent density type: Gaussian 
## Average MH acceptance ratio(s): 0.4 
## 
## Log-likelihood = -2659.439, SE = 0.018
## Estimated parameters: 10 
## AIC = 5338.878
## BIC = 5387.955; SABIC = 5356.195
## G2 (21) = 32.84, p = 0.048
## RMSEA = 0.024, CFI = NaN, TLI = NaN
# Confirmatory models
# simulate data
a <- matrix(c(
1.5,NA,
0.5,NA,
1.0,NA,
1.0,0.5,
 NA,1.5,
 NA,0.5,
 NA,1.0,
 NA,1.0),ncol=2,byrow=TRUE)
d <- matrix(c(
-1.0,NA,NA,
-1.5,NA,NA,
 1.5,NA,NA,
 0.0,NA,NA,
3.0,2.0,-0.5,
2.5,1.0,-1,
2.0,0.0,NA,
1.0,NA,NA),ncol=3,byrow=TRUE)
sigma <- diag(2)
sigma[1,2] <- sigma[2,1] <- .4
items <- c(rep('2PL',4), rep('graded',3), '2PL')
dataset <- simdata(a,d,2000,items,sigma)
# analyses
# CIFA for 2 factor crossed structure
model.1 <- '
  F1 = 1-4
  F2 = 4-8
  COV = F1*F2'
# estimate the model, using parallel computation of the log-likelihood
if(interactive()) mirtCluster()
mod1 <- mirt(dataset, model.1, method = 'MHRM')
coef(mod1)
## $Item_1
##        a1 a2      d g u
## par 1.506  0 -1.084 0 1
## 
## $Item_2
##        a1 a2      d g u
## par 0.528  0 -1.414 0 1
## 
## $Item_3
##        a1 a2     d g u
## par 0.958  0 1.437 0 1
## 
## $Item_4
##       a1   a2     d g u
## par 1.02 0.47 0.008 0 1
## 
## $Item_5
##     a1    a2    d1    d2     d3
## par  0 1.622 3.121 2.139 -0.487
## 
## $Item_6
##     a1    a2    d1   d2     d3
## par  0 0.523 2.507 0.94 -1.017
## 
## $Item_7
##     a1    a2    d1    d2
## par  0 0.907 1.972 0.051
## 
## $Item_8
##     a1    a2     d g u
## par  0 1.056 1.059 0 1
## 
## $GroupPars
##     MEAN_1 MEAN_2 COV_11 COV_21 COV_22
## par      0      0      1   0.27      1
summary(mod1)
##           F1    F2     h2
## Item_1 0.663 0.000 0.4390
## Item_2 0.296 0.000 0.0877
## Item_3 0.491 0.000 0.2407
## Item_4 0.500 0.230 0.3033
## Item_5 0.000 0.690 0.4760
## Item_6 0.000 0.294 0.0862
## Item_7 0.000 0.470 0.2212
## Item_8 0.000 0.527 0.2780
## 
## SS loadings:  1.018 1.115 
## Proportion Var:  0.127 0.139 
## 
## Factor correlations: 
## 
##      F1 F2
## F1 1.00   
## F2 0.27  1
residuals(mod1)
## LD matrix (lower triangle) and standardized residual correlations (upper triangle)
## 
## Upper triangle summary:
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  -0.063  -0.022  -0.007  -0.003   0.018   0.048 
## 
##        Item_1 Item_2 Item_3 Item_4 Item_5 Item_6 Item_7 Item_8
## Item_1         0.008 -0.007  0.005 -0.015 -0.024 -0.027 -0.033
## Item_2  0.124        -0.004 -0.003  0.045 -0.063  0.018 -0.025
## Item_3  0.085  0.029        -0.007  0.036  0.048 -0.011  0.022
## Item_4  0.041  0.022  0.111         0.018 -0.028  0.024 -0.009
## Item_5  0.423  3.987  2.587  0.676        -0.031 -0.016 -0.013
## Item_6  1.175  7.823  4.692  1.576  5.859        -0.021  0.019
## Item_7  1.495  0.672  0.261  1.125  1.057  1.776         0.025
## Item_8  2.184  1.288  0.933  0.147  0.359  0.698  1.217
#####
# bifactor
model.3 <- '
  G = 1-8
  F1 = 1-4
  F2 = 5-8'
mod3 <- mirt(dataset,model.3, method = 'MHRM')
coef(mod3)
## $Item_1
##       a1    a2 a3      d g u
## par 0.52 1.814  0 -1.219 0 1
## 
## $Item_2
##        a1    a2 a3      d g u
## par 0.194 0.503  0 -1.417 0 1
## 
## $Item_3
##        a1    a2 a3     d g u
## par 0.501 0.718  0 1.403 0 1
## 
## $Item_4
##        a1    a2 a3     d g u
## par 1.132 0.799  0 0.014 0 1
## 
## $Item_5
##        a1 a2    a3    d1    d2     d3
## par 1.166  0 0.931 3.008 2.059 -0.455
## 
## $Item_6
##        a1 a2    a3    d1    d2     d3
## par 0.316  0 0.445 2.519 0.948 -1.017
## 
## $Item_7
##        a1 a2    a3    d1    d2
## par 0.596  0 0.823 2.037 0.059
## 
## $Item_8
##        a1 a2    a3     d g u
## par 0.694  0 0.893 1.089 0 1
## 
## $GroupPars
##     MEAN_1 MEAN_2 MEAN_3 COV_11 COV_21 COV_31 COV_22 COV_32 COV_33
## par      0      0      0      1      0      0      1      0      1
summary(mod3)
##            G    F1    F2     h2
## Item_1 0.205 0.714 0.000 0.5515
## Item_2 0.109 0.281 0.000 0.0910
## Item_3 0.262 0.375 0.000 0.2093
## Item_4 0.516 0.364 0.000 0.3987
## Item_5 0.515 0.000 0.411 0.4347
## Item_6 0.177 0.000 0.249 0.0934
## Item_7 0.301 0.000 0.415 0.2629
## Item_8 0.340 0.000 0.437 0.3062
## 
## SS loadings:  0.891 0.862 0.595 
## Proportion Var:  0.111 0.108 0.074 
## 
## Factor correlations: 
## 
##    G F1 F2
## G  1      
## F1 0  1   
## F2 0  0  1
residuals(mod3)
## LD matrix (lower triangle) and standardized residual correlations (upper triangle)
## 
## Upper triangle summary:
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  -0.060  -0.021  -0.007  -0.002   0.013   0.047 
## 
##        Item_1 Item_2 Item_3 Item_4 Item_5 Item_6 Item_7 Item_8
## Item_1        -0.009 -0.005 -0.006  0.016  0.025 -0.022 -0.019
## Item_2  0.154         0.004  0.005  0.045 -0.060  0.020 -0.023
## Item_3  0.052  0.034        -0.010 -0.028  0.047 -0.022  0.011
## Item_4  0.068  0.059  0.184        -0.021 -0.023  0.027  0.005
## Item_5  0.525  4.056  1.552  0.898         0.032 -0.018 -0.008
## Item_6  1.254  7.291  4.429  1.041  6.095        -0.027  0.011
## Item_7  0.955  0.790  0.932  1.481  1.259  2.873        -0.012
## Item_8  0.699  1.030  0.261  0.055  0.128  0.227  0.268
anova(mod1,mod3)
##           AIC    SABIC       HQ      BIC    logLik     X2 df   p
## mod1 24928.40 24984.15 24975.70 25057.22 -12441.20              
## mod3 24942.09 25012.38 25001.73 25104.52 -12442.05 -1.689  6 NaN
#####
# polynomial and product-term (combination) models
data(SAT12)
data <- key2binary(SAT12,
                  key = c(1,4,5,2,3,1,2,1,3,1,2,4,2,1,5,3,4,4,1,4,3,3,4,1,3,5,1,3,1,5,4,5))
model.quad <- '
       F1 = 1-32
  (F1*F1) = 1-32'
model.combo <- '
       F1 = 1-16
       F2 = 17-32
  (F1*F2) = 1-8'
(mod.quad <- mirt(data, model.quad))
## Warning: EM cycles terminated after 500 iterations.
## 
## Call:
## mirt(data = data, model = model.quad)
## 
## Full-information item factor analysis with 1 factor(s).
## FAILED TO CONVERGE within 1e-04 tolerance after 500 EM iterations.
## mirt version: 1.43 
## M-step optimizer: BFGS 
## EM acceleration: Ramsay 
## Number of rectangular quadrature: 61
## Latent density type: Gaussian 
## 
## Log-likelihood = -9464.029
## Estimated parameters: 96 
## AIC = 19120.06
## BIC = 19542.16; SABIC = 19237.39
## G2 (4294967199) = 11258.34, p = 1
## RMSEA = 0, CFI = NaN, TLI = NaN
summary(mod.quad)
##             F1 (F1*F1)     h2
## Item.1  0.1909  0.3417 0.1532
## Item.2  0.5154  0.5475 0.5654
## Item.3  0.3784  0.3811 0.2884
## Item.4  0.1330  0.3300 0.1266
## Item.5  0.3933  0.3950 0.3107
## Item.6  0.3102  0.3921 0.2500
## Item.7  0.7031  0.3334 0.6055
## Item.8  0.1213  0.3248 0.1202
## Item.9  0.3072  0.1632 0.1210
## Item.10 0.4055  0.3388 0.2792
## Item.11 0.7867  0.5868 0.9632
## Item.12 0.0160  0.1053 0.0113
## Item.13 0.5065  0.3648 0.3896
## Item.14 0.3389  0.5612 0.4298
## Item.15 0.8126  0.4134 0.8312
## Item.16 0.1443  0.3799 0.1651
## Item.17 0.8581  0.4511 0.9398
## Item.18 0.5088  0.5139 0.5230
## Item.19 0.3068  0.3475 0.2149
## Item.20 0.5999  0.6378 0.7667
## Item.21 0.4551  0.2015 0.2477
## Item.22 0.7444  0.5370 0.8426
## Item.23 0.1293  0.3150 0.1159
## Item.24 0.6664  0.4234 0.6234
## Item.25 0.0474  0.4498 0.2046
## Item.26 0.3901  0.6232 0.5405
## Item.27 0.7282  0.5660 0.8507
## Item.28 0.3625  0.4055 0.2959
## Item.29 0.1756  0.4144 0.2026
## Item.30 0.2559  0.0881 0.0733
## Item.31 0.7106  0.6578 0.9377
## Item.32 0.1359  0.0643 0.0226
## 
## SS loadings:  7.261 5.752 
## Proportion Var:  0.227 0.18 
## 
## Factor correlations: 
## 
##    F1
## F1  1
(mod.combo <- mirt(data, model.combo))
## 
## Call:
## mirt(data = data, model = model.combo)
## 
## Full-information item factor analysis with 2 factor(s).
## Converged within 1e-04 tolerance after 20 EM iterations.
## mirt version: 1.43 
## M-step optimizer: BFGS 
## EM acceleration: Ramsay 
## Number of rectangular quadrature: 31
## Latent density type: Gaussian 
## 
## Log-likelihood = -9655.028
## Estimated parameters: 72 
## AIC = 19454.06
## BIC = 19770.63; SABIC = 19542.05
## G2 (4294967223) = 11640.33, p = 1
## RMSEA = 0, CFI = NaN, TLI = NaN
anova(mod.combo, mod.quad)
##                AIC    SABIC       HQ      BIC    logLik      X2 df p
## mod.combo 19454.06 19542.05 19577.29 19770.63 -9655.028             
## mod.quad  19120.06 19237.39 19284.38 19542.16 -9464.029 381.996 24 0
# non-linear item and test plots
plot(mod.quad)
plot(mod.combo, type = 'SE')
itemplot(mod.quad, 1, type = 'score')
itemplot(mod.combo, 2, type = 'score')
itemplot(mod.combo, 2, type = 'infocontour')
## empirical histogram examples (normal, skewed, and bimodal latent distributions)
# make some data
set.seed(1234)
a <- matrix(rlnorm(50, .2, .2))
d <- matrix(rnorm(50))
ThetaNormal <- matrix(rnorm(2000))
ThetaBimodal <- scale(matrix(c(rnorm(1000, -2), rnorm(1000,2)))) #bimodal
ThetaSkew <- scale(matrix(rchisq(2000, 3))) #positive skew
datNormal <- simdata(a, d, 2000, itemtype = '2PL', Theta=ThetaNormal)
datBimodal <- simdata(a, d, 2000, itemtype = '2PL', Theta=ThetaBimodal)
datSkew <- simdata(a, d, 2000, itemtype = '2PL', Theta=ThetaSkew)
normal <- mirt(datNormal, 1, dentype = "empiricalhist")
plot(normal, type = 'empiricalhist')
histogram(ThetaNormal, breaks=30)
bimodal <- mirt(datBimodal, 1, dentype = "empiricalhist")
plot(bimodal, type = 'empiricalhist')
histogram(ThetaBimodal, breaks=30)
skew <- mirt(datSkew, 1, dentype = "empiricalhist")
plot(skew, type = 'empiricalhist')
histogram(ThetaSkew, breaks=30)
#####
# non-linear parameter constraints with Rsolnp package (nloptr supported as well):
# fit a Rasch model subject to the constraint that the intercepts sum to 0
dat <- expand.table(LSAT6)
itemstats(dat)
## $overall
##     N mean_total.score sd_total.score ave.r sd.r alpha SEM.alpha
##  1000            3.819          1.035 0.077 0.03 0.295     0.869
## 
## $itemstats
##           N  mean    sd total.r total.r_if_rm alpha_if_rm
## Item_1 1000 0.924 0.265   0.362         0.113       0.275
## Item_2 1000 0.709 0.454   0.567         0.153       0.238
## Item_3 1000 0.553 0.497   0.618         0.173       0.217
## Item_4 1000 0.763 0.425   0.534         0.144       0.246
## Item_5 1000 0.870 0.336   0.435         0.122       0.266
## 
## $proportions
##            0     1
## Item_1 0.076 0.924
## Item_2 0.291 0.709
## Item_3 0.447 0.553
## Item_4 0.237 0.763
## Item_5 0.130 0.870
# free latent mean and variance terms
model <- 'Theta = 1-5
          MEAN = Theta
          COV = Theta*Theta'
# view how the vector of parameters is organized internally
sv <- mirt(dat, model, itemtype = 'Rasch', pars = 'values')
sv[sv$est, ]
##    group   item     class   name parnum value lbound ubound  est const nconst
## 2    all Item_1      dich      d      2 2.815   -Inf    Inf TRUE  none   none
## 6    all Item_2      dich      d      6 1.082   -Inf    Inf TRUE  none   none
## 10   all Item_3      dich      d     10 0.262   -Inf    Inf TRUE  none   none
## 14   all Item_4      dich      d     14 1.407   -Inf    Inf TRUE  none   none
## 18   all Item_5      dich      d     18 2.214   -Inf    Inf TRUE  none   none
## 21   all  GROUP GroupPars MEAN_1     21 0.000   -Inf    Inf TRUE  none   none
## 22   all  GROUP GroupPars COV_11     22 1.000      0    Inf TRUE  none   none
##    prior.type prior_1 prior_2
## 2        none     NaN     NaN
## 6        none     NaN     NaN
## 10       none     NaN     NaN
## 14       none     NaN     NaN
## 18       none     NaN     NaN
## 21       none     NaN     NaN
## 22       none     NaN     NaN
# constraint: create a function for solnp that computes the constrained quantity,
# and declare its target value in eqB
eqfun <- function(p, optim_args) sum(p[1:5]) #could use browser() here, if it helps
LB <- c(rep(-15, 6), 1e-4) # more reasonable lower bound for variance term
mod <- mirt(dat, model, sv=sv, itemtype = 'Rasch', optimizer = 'solnp',
   solnp_args=list(eqfun=eqfun, eqB=0, LB=LB))
print(mod)
## 
## Call:
## mirt(data = dat, model = model, itemtype = "Rasch", optimizer = "solnp", 
##     solnp_args = list(eqfun = eqfun, eqB = 0, LB = LB), sv = sv)
## 
## Full-information item factor analysis with 1 factor(s).
## Converged within 1e-04 tolerance after 34 EM iterations.
## mirt version: 1.43 
## M-step optimizer: solnp 
## EM acceleration: Ramsay 
## Number of rectangular quadrature: 61
## Latent density type: Gaussian 
## 
## Log-likelihood = -2466.943
## Estimated parameters: 7 
## AIC = 4947.887
## BIC = 4982.241; SABIC = 4960.009
## G2 (25) = 21.81, p = 0.6467
## RMSEA = 0, CFI = NaN, TLI = NaN
coef(mod)
## $Item_1
##     a1     d g u
## par  1 1.253 0 1
## 
## $Item_2
##     a1      d g u
## par  1 -0.475 0 1
## 
## $Item_3
##     a1      d g u
## par  1 -1.233 0 1
## 
## $Item_4
##     a1      d g u
## par  1 -0.168 0 1
## 
## $Item_5
##     a1     d g u
## par  1 0.623 0 1
## 
## $GroupPars
##     MEAN_1 COV_11
## par  1.472  0.559
(ds <- sapply(coef(mod)[1:5], function(x) x[,'d']))
##     Item_1     Item_2     Item_3     Item_4     Item_5 
##  1.2529541 -0.4754463 -1.2327297 -0.1681700  0.6233919
sum(ds)
## [1] 4.607426e-15
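# The constraint is really a reparameterization: centering unconstrained Rasch
# intercepts and absorbing the shift into the freed latent mean leaves the
# likelihood unchanged, which is why MEAN_1 above is estimated at roughly the
# mean of the unconstrained intercepts rather than 0. A small sketch using the
# start values printed earlier by pars = 'values' (start values, not MLEs, so
# the shift only approximates the estimated MEAN_1 of 1.472):

```r
d_uncon <- c(2.815, 1.082, 0.262, 1.407, 2.214)  # intercept start values (sv)
shift   <- mean(d_uncon)                          # absorbed by the latent mean
d_cent  <- d_uncon - shift
sum(d_cent)                                       # 0 up to floating point
```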
# same likelihood location as: mirt(dat, 1, itemtype = 'Rasch')
#######
# latent regression Rasch model
# simulate data
set.seed(1234)
N <- 1000
# covariates
X1 <- rnorm(N); X2 <- rnorm(N)
covdata <- data.frame(X1, X2, X3 = rnorm(N))
Theta <- matrix(0.5 * X1 + -1 * X2 + rnorm(N, sd = 0.5))
# items and response data
a <- matrix(1, 20); d <- matrix(rnorm(20))
dat <- simdata(a, d, N, itemtype = '2PL', Theta=Theta)
# unconditional Rasch model
mod0 <- mirt(dat, 1, 'Rasch', SE=TRUE)
coef(mod0, printSE=TRUE)
## $Item_1
##     a1      d logit(g) logit(u)
## par  1 -0.998     -999      999
## SE  NA  0.085       NA       NA
## 
## $Item_2
##     a1      d logit(g) logit(u)
## par  1 -0.917     -999      999
## SE  NA  0.085       NA       NA
## 
## $Item_3
##     a1      d logit(g) logit(u)
## par  1 -0.100     -999      999
## SE  NA  0.081       NA       NA
## 
## $Item_4
##     a1     d logit(g) logit(u)
## par  1 1.893     -999      999
## SE  NA 0.099       NA       NA
## 
## $Item_5
##     a1     d logit(g) logit(u)
## par  1 0.609     -999      999
## SE  NA 0.082       NA       NA
## 
## $Item_6
##     a1     d logit(g) logit(u)
## par  1 1.071     -999      999
## SE  NA 0.086       NA       NA
## 
## $Item_7
##     a1      d logit(g) logit(u)
## par  1 -0.074     -999      999
## SE  NA  0.081       NA       NA
## 
## $Item_8
##     a1      d logit(g) logit(u)
## par  1 -1.405     -999      999
## SE  NA  0.090       NA       NA
## 
## $Item_9
##     a1     d logit(g) logit(u)
## par  1 0.707     -999      999
## SE  NA 0.083       NA       NA
## 
## $Item_10
##     a1      d logit(g) logit(u)
## par  1 -0.258     -999      999
## SE  NA  0.081       NA       NA
## 
## $Item_11
##     a1     d logit(g) logit(u)
## par  1 0.336     -999      999
## SE  NA 0.081       NA       NA
## 
## $Item_12
##     a1     d logit(g) logit(u)
## par  1 0.891     -999      999
## SE  NA 0.084       NA       NA
## 
## $Item_13
##     a1     d logit(g) logit(u)
## par  1 0.653     -999      999
## SE  NA 0.083       NA       NA
## 
## $Item_14
##     a1      d logit(g) logit(u)
## par  1 -1.942     -999      999
## SE  NA  0.099       NA       NA
## 
## $Item_15
##     a1      d logit(g) logit(u)
## par  1 -2.143     -999      999
## SE  NA  0.104       NA       NA
## 
## $Item_16
##     a1     d logit(g) logit(u)
## par  1 1.758     -999      999
## SE  NA 0.096       NA       NA
## 
## $Item_17
##     a1      d logit(g) logit(u)
## par  1 -1.015     -999      999
## SE  NA  0.085       NA       NA
## 
## $Item_18
##     a1      d logit(g) logit(u)
## par  1 -1.009     -999      999
## SE  NA  0.085       NA       NA
## 
## $Item_19
##     a1      d logit(g) logit(u)
## par  1 -1.251     -999      999
## SE  NA  0.088       NA       NA
## 
## $Item_20
##     a1      d logit(g) logit(u)
## par  1 -0.620     -999      999
## SE  NA  0.082       NA       NA
## 
## $GroupPars
##     MEAN_1 COV_11
## par      0  1.393
## SE      NA  0.085
# conditional model using X1, X2, and X3 as predictors of Theta
# (X3 is an unrelated null predictor)
mod1 <- mirt(dat, 1, 'Rasch', covdata=covdata, formula = ~ X1 + X2 + X3, SE=TRUE)
coef(mod1, printSE=TRUE)
## $Item_1
##     a1      d logit(g) logit(u)
## par  1 -0.967     -999      999
## SE  NA  0.078       NA       NA
## 
## $Item_2
##     a1      d logit(g) logit(u)
## par  1 -0.887     -999      999
## SE  NA  0.077       NA       NA
## 
## $Item_3
##     a1      d logit(g) logit(u)
## par  1 -0.068     -999      999
## SE  NA  0.073       NA       NA
## 
## $Item_4
##     a1     d logit(g) logit(u)
## par  1 1.920     -999      999
## SE  NA 0.092       NA       NA
## 
## $Item_5
##     a1     d logit(g) logit(u)
## par  1 0.640     -999      999
## SE  NA 0.075       NA       NA
## 
## $Item_6
##     a1     d logit(g) logit(u)
## par  1 1.100     -999      999
## SE  NA 0.079       NA       NA
## 
## $Item_7
##     a1      d logit(g) logit(u)
## par  1 -0.043     -999      999
## SE  NA  0.073       NA       NA
## 
## $Item_8
##     a1      d logit(g) logit(u)
## par  1 -1.375     -999      999
## SE  NA  0.083       NA       NA
## 
## $Item_9
##     a1     d logit(g) logit(u)
## par  1 0.737     -999      999
## SE  NA 0.076       NA       NA
## 
## $Item_10
##     a1      d logit(g) logit(u)
## par  1 -0.227     -999      999
## SE  NA  0.073       NA       NA
## 
## $Item_11
##     a1     d logit(g) logit(u)
## par  1 0.367     -999      999
## SE  NA 0.074       NA       NA
## 
## $Item_12
##     a1     d logit(g) logit(u)
## par  1 0.921     -999      999
## SE  NA 0.077       NA       NA
## 
## $Item_13
##     a1     d logit(g) logit(u)
## par  1 0.683     -999      999
## SE  NA 0.075       NA       NA
## 
## $Item_14
##     a1      d logit(g) logit(u)
## par  1 -1.913     -999      999
## SE  NA  0.093       NA       NA
## 
## $Item_15
##     a1      d logit(g) logit(u)
## par  1 -2.114     -999      999
## SE  NA  0.098       NA       NA
## 
## $Item_16
##     a1     d logit(g) logit(u)
## par  1 1.786     -999      999
## SE  NA 0.090       NA       NA
## 
## $Item_17
##     a1      d logit(g) logit(u)
## par  1 -0.985     -999      999
## SE  NA  0.078       NA       NA
## 
## $Item_18
##     a1      d logit(g) logit(u)
## par  1 -0.979     -999      999
## SE  NA  0.078       NA       NA
## 
## $Item_19
##     a1      d logit(g) logit(u)
## par  1 -1.221     -999      999
## SE  NA  0.081       NA       NA
## 
## $Item_20
##     a1      d logit(g) logit(u)
## par  1 -0.589     -999      999
## SE  NA  0.075       NA       NA
## 
## $GroupPars
##     MEAN_1 COV_11
## par      0  0.210
## SE      NA  0.011
## 
## $lr.betas
## $lr.betas$betas
##                 F1
## (Intercept)  0.000
## X1           0.513
## X2          -1.003
## X3          -0.003
## 
## $lr.betas$SE
##                F1
## (Intercept)    NA
## X1          0.015
## X2          0.015
## X3          0.014
coef(mod1, simplify=TRUE)
## $items
##         a1      d g u
## Item_1   1 -0.967 0 1
## Item_2   1 -0.887 0 1
## Item_3   1 -0.068 0 1
## Item_4   1  1.920 0 1
## Item_5   1  0.640 0 1
## Item_6   1  1.100 0 1
## Item_7   1 -0.043 0 1
## Item_8   1 -1.375 0 1
## Item_9   1  0.737 0 1
## Item_10  1 -0.227 0 1
## Item_11  1  0.367 0 1
## Item_12  1  0.921 0 1
## Item_13  1  0.683 0 1
## Item_14  1 -1.913 0 1
## Item_15  1 -2.114 0 1
## Item_16  1  1.786 0 1
## Item_17  1 -0.985 0 1
## Item_18  1 -0.979 0 1
## Item_19  1 -1.221 0 1
## Item_20  1 -0.589 0 1
## 
## $means
## F1 
##  0 
## 
## $cov
##      F1
## F1 0.21
## 
## $lr.betas
## $lr.betas$betas
##                 F1
## (Intercept)  0.000
## X1           0.513
## X2          -1.003
## X3          -0.003
## 
## $lr.betas$CI_2.5
##                 F1
## (Intercept)     NA
## X1           0.485
## X2          -1.032
## X3          -0.031
## 
## $lr.betas$CI_97.5
##                 F1
## (Intercept)     NA
## X1           0.542
## X2          -0.974
## X3           0.025
anova(mod0, mod1)  # jointly significant predictors of theta
##           AIC    SABIC       HQ      BIC    logLik       X2 df p
## mod0 21935.46 21971.83 21974.63 22038.53 -10946.73              
## mod1 20756.61 20798.17 20801.38 20874.40 -10354.31 1184.851  3 0
# large sample z-ratios and p-values (if one cares)
cfs <- coef(mod1, printSE=TRUE)
(z <- cfs$lr.betas[[1]] / cfs$lr.betas[[2]])
##                     F1
## (Intercept)         NA
## X1           35.266840
## X2          -67.584691
## X3           -0.211456
round(pnorm(abs(z[,1]), lower.tail=FALSE)*2, 3)
## (Intercept)          X1          X2          X3 
##          NA       0.000       0.000       0.833
# drop the null predictor X3 for a nested comparison
mod1b <- mirt(dat, 1, 'Rasch', covdata=covdata, formula = ~ X1 + X2)
anova(mod1b, mod1)
##            AIC    SABIC       HQ      BIC    logLik    X2 df     p
## mod1b 20754.63 20794.46 20797.53 20867.51 -10354.32               
## mod1  20756.61 20798.17 20801.38 20874.40 -10354.31 0.018  1 0.893
# compare to mixedmirt() version of the same model
mod1.mixed <- mixedmirt(dat, 1, itemtype='Rasch',
                        covdata=covdata, lr.fixed = ~ X1 + X2 + X3, SE=TRUE)
coef(mod1.mixed)
## $Item_1
##         (Intercept) a1  d  g  u
## par          -0.131  1  0  0  1
## CI_2.5       -0.166 NA NA NA NA
## CI_97.5      -0.097 NA NA NA NA
## 
## $Item_2
##         (Intercept) a1  d  g  u
## par          -0.131  1  0  0  1
## CI_2.5       -0.166 NA NA NA NA
## CI_97.5      -0.097 NA NA NA NA
## 
## $Item_3
##         (Intercept) a1  d  g  u
## par          -0.131  1  0  0  1
## CI_2.5       -0.166 NA NA NA NA
## CI_97.5      -0.097 NA NA NA NA
## 
## $Item_4
##         (Intercept) a1  d  g  u
## par          -0.131  1  0  0  1
## CI_2.5       -0.166 NA NA NA NA
## CI_97.5      -0.097 NA NA NA NA
## 
## $Item_5
##         (Intercept) a1  d  g  u
## par          -0.131  1  0  0  1
## CI_2.5       -0.166 NA NA NA NA
## CI_97.5      -0.097 NA NA NA NA
## 
## $Item_6
##         (Intercept) a1  d  g  u
## par          -0.131  1  0  0  1
## CI_2.5       -0.166 NA NA NA NA
## CI_97.5      -0.097 NA NA NA NA
## 
## $Item_7
##         (Intercept) a1  d  g  u
## par          -0.131  1  0  0  1
## CI_2.5       -0.166 NA NA NA NA
## CI_97.5      -0.097 NA NA NA NA
## 
## $Item_8
##         (Intercept) a1  d  g  u
## par          -0.131  1  0  0  1
## CI_2.5       -0.166 NA NA NA NA
## CI_97.5      -0.097 NA NA NA NA
## 
## $Item_9
##         (Intercept) a1  d  g  u
## par          -0.131  1  0  0  1
## CI_2.5       -0.166 NA NA NA NA
## CI_97.5      -0.097 NA NA NA NA
## 
## $Item_10
##         (Intercept) a1  d  g  u
## par          -0.131  1  0  0  1
## CI_2.5       -0.166 NA NA NA NA
## CI_97.5      -0.097 NA NA NA NA
## 
## $Item_11
##         (Intercept) a1  d  g  u
## par          -0.131  1  0  0  1
## CI_2.5       -0.166 NA NA NA NA
## CI_97.5      -0.097 NA NA NA NA
## 
## $Item_12
##         (Intercept) a1  d  g  u
## par          -0.131  1  0  0  1
## CI_2.5       -0.166 NA NA NA NA
## CI_97.5      -0.097 NA NA NA NA
## 
## $Item_13
##         (Intercept) a1  d  g  u
## par          -0.131  1  0  0  1
## CI_2.5       -0.166 NA NA NA NA
## CI_97.5      -0.097 NA NA NA NA
## 
## $Item_14
##         (Intercept) a1  d  g  u
## par          -0.131  1  0  0  1
## CI_2.5       -0.166 NA NA NA NA
## CI_97.5      -0.097 NA NA NA NA
## 
## $Item_15
##         (Intercept) a1  d  g  u
## par          -0.131  1  0  0  1
## CI_2.5       -0.166 NA NA NA NA
## CI_97.5      -0.097 NA NA NA NA
## 
## $Item_16
##         (Intercept) a1  d  g  u
## par          -0.131  1  0  0  1
## CI_2.5       -0.166 NA NA NA NA
## CI_97.5      -0.097 NA NA NA NA
## 
## $Item_17
##         (Intercept) a1  d  g  u
## par          -0.131  1  0  0  1
## CI_2.5       -0.166 NA NA NA NA
## CI_97.5      -0.097 NA NA NA NA
## 
## $Item_18
##         (Intercept) a1  d  g  u
## par          -0.131  1  0  0  1
## CI_2.5       -0.166 NA NA NA NA
## CI_97.5      -0.097 NA NA NA NA
## 
## $Item_19
##         (Intercept) a1  d  g  u
## par          -0.131  1  0  0  1
## CI_2.5       -0.166 NA NA NA NA
## CI_97.5      -0.097 NA NA NA NA
## 
## $Item_20
##         (Intercept) a1  d  g  u
## par          -0.131  1  0  0  1
## CI_2.5       -0.166 NA NA NA NA
## CI_97.5      -0.097 NA NA NA NA
## 
## $GroupPars
##         MEAN_1 COV_11
## par          0  0.087
## CI_2.5      NA  0.068
## CI_97.5     NA  0.105
## 
## $lr.betas
##         F1_(Intercept) F1_X1  F1_X2  F1_X3
## par                  0 0.409 -0.795 -0.007
## CI_2.5              NA 0.376 -0.837 -0.040
## CI_97.5             NA 0.441 -0.753  0.027
coef(mod1.mixed, printSE=TRUE)
## $Item_1
##     (Intercept) a1  d    g   u
## par      -0.131  1  0 -999 999
## SE        0.018 NA NA   NA  NA
## 
## $Item_2
##     (Intercept) a1  d    g   u
## par      -0.131  1  0 -999 999
## SE        0.018 NA NA   NA  NA
## 
## $Item_3
##     (Intercept) a1  d    g   u
## par      -0.131  1  0 -999 999
## SE        0.018 NA NA   NA  NA
## 
## $Item_4
##     (Intercept) a1  d    g   u
## par      -0.131  1  0 -999 999
## SE        0.018 NA NA   NA  NA
## 
## $Item_5
##     (Intercept) a1  d    g   u
## par      -0.131  1  0 -999 999
## SE        0.018 NA NA   NA  NA
## 
## $Item_6
##     (Intercept) a1  d    g   u
## par      -0.131  1  0 -999 999
## SE        0.018 NA NA   NA  NA
## 
## $Item_7
##     (Intercept) a1  d    g   u
## par      -0.131  1  0 -999 999
## SE        0.018 NA NA   NA  NA
## 
## $Item_8
##     (Intercept) a1  d    g   u
## par      -0.131  1  0 -999 999
## SE        0.018 NA NA   NA  NA
## 
## $Item_9
##     (Intercept) a1  d    g   u
## par      -0.131  1  0 -999 999
## SE        0.018 NA NA   NA  NA
## 
## $Item_10
##     (Intercept) a1  d    g   u
## par      -0.131  1  0 -999 999
## SE        0.018 NA NA   NA  NA
## 
## $Item_11
##     (Intercept) a1  d    g   u
## par      -0.131  1  0 -999 999
## SE        0.018 NA NA   NA  NA
## 
## $Item_12
##     (Intercept) a1  d    g   u
## par      -0.131  1  0 -999 999
## SE        0.018 NA NA   NA  NA
## 
## $Item_13
##     (Intercept) a1  d    g   u
## par      -0.131  1  0 -999 999
## SE        0.018 NA NA   NA  NA
## 
## $Item_14
##     (Intercept) a1  d    g   u
## par      -0.131  1  0 -999 999
## SE        0.018 NA NA   NA  NA
## 
## $Item_15
##     (Intercept) a1  d    g   u
## par      -0.131  1  0 -999 999
## SE        0.018 NA NA   NA  NA
## 
## $Item_16
##     (Intercept) a1  d    g   u
## par      -0.131  1  0 -999 999
## SE        0.018 NA NA   NA  NA
## 
## $Item_17
##     (Intercept) a1  d    g   u
## par      -0.131  1  0 -999 999
## SE        0.018 NA NA   NA  NA
## 
## $Item_18
##     (Intercept) a1  d    g   u
## par      -0.131  1  0 -999 999
## SE        0.018 NA NA   NA  NA
## 
## $Item_19
##     (Intercept) a1  d    g   u
## par      -0.131  1  0 -999 999
## SE        0.018 NA NA   NA  NA
## 
## $Item_20
##     (Intercept) a1  d    g   u
## par      -0.131  1  0 -999 999
## SE        0.018 NA NA   NA  NA
## 
## $GroupPars
##     MEAN_1 COV_11
## par      0  0.087
## SE      NA  0.009
## 
## $lr.betas
##     F1_(Intercept) F1_X1  F1_X2  F1_X3
## par              0 0.409 -0.795 -0.007
## SE              NA 0.016  0.021  0.017
# draw plausible values for secondary analyses
pv <- fscores(mod1, plausible.draws = 10)
pvmods <- lapply(pv, function(x, covdata) lm(x ~ covdata$X1 + covdata$X2),
                 covdata=covdata)
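# Rather than inspecting each draw separately, the draw-specific regressions
# can be combined with Rubin's (1987) pooling rules; a self-contained sketch
# using the X1 estimates and standard errors from the five summaries printed
# below:

```r
est <- c(0.51692, 0.49794, 0.53654, 0.49633, 0.51980)  # X1 slopes across draws
se  <- c(0.01495, 0.01473, 0.01485, 0.01489, 0.01473)  # their standard errors
m   <- length(est)
qbar <- mean(est)                      # pooled point estimate
W    <- mean(se^2)                     # within-imputation variance
B    <- var(est)                       # between-imputation variance
Tvar <- W + (1 + 1/m) * B              # total variance (Rubin, 1987)
c(estimate = qbar, SE = sqrt(Tvar))
```

# the pooled slope (~0.514) agrees with the latent regression estimate (0.513),
# while the pooled SE exceeds any single-draw SE because it also carries the
# between-draw uncertainty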
# population characteristics are recovered well, and the estimates can be
# averaged over the draws
so <- lapply(pvmods, summary)
so
## [[1]]
## 
## Call:
## lm(formula = x ~ covdata$X1 + covdata$X2)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.52300 -0.31212  0.00589  0.32494  1.53638 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  0.01801    0.01489    1.21    0.227    
## covdata$X1   0.51692    0.01495   34.57   <2e-16 ***
## covdata$X2  -0.99543    0.01520  -65.49   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4706 on 997 degrees of freedom
## Multiple R-squared:  0.8403,	Adjusted R-squared:  0.8399 
## F-statistic:  2622 on 2 and 997 DF,  p-value: < 2.2e-16
## 
## 
## [[2]]
## 
## Call:
## lm(formula = x ~ covdata$X1 + covdata$X2)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.36572 -0.30671 -0.00593  0.32325  1.51994 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept) -0.002579   0.014671  -0.176     0.86    
## covdata$X1   0.497944   0.014734  33.796   <2e-16 ***
## covdata$X2  -1.003010   0.014976 -66.973   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4637 on 997 degrees of freedom
## Multiple R-squared:  0.8439,	Adjusted R-squared:  0.8436 
## F-statistic:  2694 on 2 and 997 DF,  p-value: < 2.2e-16
## 
## 
## [[3]]
## 
## Call:
## lm(formula = x ~ covdata$X1 + covdata$X2)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.42282 -0.31209 -0.01899  0.31661  1.50947 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) -0.01004    0.01479  -0.679    0.497    
## covdata$X1   0.53654    0.01485  36.132   <2e-16 ***
## covdata$X2  -1.02125    0.01509 -67.661   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4673 on 997 degrees of freedom
## Multiple R-squared:  0.8494,	Adjusted R-squared:  0.8491 
## F-statistic:  2812 on 2 and 997 DF,  p-value: < 2.2e-16
## 
## 
## [[4]]
## 
## Call:
## lm(formula = x ~ covdata$X1 + covdata$X2)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.46910 -0.32522 -0.01239  0.32554  1.66781 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) -0.01266    0.01482  -0.854    0.393    
## covdata$X1   0.49633    0.01489  33.343   <2e-16 ***
## covdata$X2  -0.97779    0.01513 -64.623   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4685 on 997 degrees of freedom
## Multiple R-squared:  0.8354,	Adjusted R-squared:  0.8351 
## F-statistic:  2530 on 2 and 997 DF,  p-value: < 2.2e-16
## 
## 
## [[5]]
## 
## Call:
## lm(formula = x ~ covdata$X1 + covdata$X2)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.59724 -0.32391  0.00388  0.32678  1.48937 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  0.03690    0.01467   2.515    0.012 *  
## covdata$X1   0.51980    0.01473  35.280   <2e-16 ***
## covdata$X2  -1.00444    0.01498 -67.069   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4637 on 997 degrees of freedom
## Multiple R-squared:  0.8464,	Adjusted R-squared:  0.8461 
## F-statistic:  2746 on 2 and 997 DF,  p-value: < 2.2e-16
## 
## 
## [[6]]
## 
## Call:
## lm(formula = x ~ covdata$X1 + covdata$X2)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.36635 -0.31816  0.01238  0.31203  1.38096 
## 
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)    
## (Intercept) -0.0007106  0.0143664  -0.049    0.961    
## covdata$X1   0.5237845  0.0144280  36.303   <2e-16 ***
## covdata$X2  -0.9872586  0.0146654 -67.319   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4541 on 997 degrees of freedom
## Multiple R-squared:  0.8487,	Adjusted R-squared:  0.8484 
## F-statistic:  2795 on 2 and 997 DF,  p-value: < 2.2e-16
## 
## 
## [[7]]
## 
## Call:
## lm(formula = x ~ covdata$X1 + covdata$X2)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.33076 -0.31723 -0.01176  0.31308  1.26176 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) -0.01469    0.01436  -1.023    0.307    
## covdata$X1   0.52150    0.01442  36.164   <2e-16 ***
## covdata$X2  -1.00755    0.01466 -68.739   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4538 on 997 degrees of freedom
## Multiple R-squared:  0.8527,	Adjusted R-squared:  0.8524 
## F-statistic:  2885 on 2 and 997 DF,  p-value: < 2.2e-16
## 
## 
## [[8]]
## 
## Call:
## lm(formula = x ~ covdata$X1 + covdata$X2)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.22268 -0.32338 -0.01549  0.33582  1.80556 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  0.01243    0.01429   0.869    0.385    
## covdata$X1   0.51347    0.01435  35.773   <2e-16 ***
## covdata$X2  -1.00434    0.01459 -68.839   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4517 on 997 degrees of freedom
## Multiple R-squared:  0.8524,	Adjusted R-squared:  0.8521 
## F-statistic:  2879 on 2 and 997 DF,  p-value: < 2.2e-16
## 
## 
## [[9]]
## 
## Call:
## lm(formula = x ~ covdata$X1 + covdata$X2)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.49680 -0.32330 -0.02865  0.32613  1.73962 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) -0.01151    0.01489  -0.773     0.44    
## covdata$X1   0.53316    0.01496  35.648   <2e-16 ***
## covdata$X2  -0.99371    0.01520 -65.365   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4707 on 997 degrees of freedom
## Multiple R-squared:  0.8416,	Adjusted R-squared:  0.8413 
## F-statistic:  2648 on 2 and 997 DF,  p-value: < 2.2e-16
## 
## 
## [[10]]
## 
## Call:
## lm(formula = x ~ covdata$X1 + covdata$X2)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.31992 -0.30344  0.01249  0.29489  1.55186 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  0.005362   0.014457   0.371    0.711    
## covdata$X1   0.509689   0.014519  35.105   <2e-16 ***
## covdata$X2  -0.999713   0.014758 -67.741   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4569 on 997 degrees of freedom
## Multiple R-squared:  0.8482,	Adjusted R-squared:  0.8479 
## F-statistic:  2785 on 2 and 997 DF,  p-value: < 2.2e-16
# compute Rubin's multiple imputation pooled estimates
par <- lapply(so, function(x) x$coefficients[, 'Estimate'])
SEpar <- lapply(so, function(x) x$coefficients[, 'Std. Error'])
averageMI(par, SEpar)
##                par SEpar       t     df     p
## (Intercept)  0.002 0.023   0.091 26.496 0.232
## covdata$X1   0.517 0.020  25.593 40.507     0
## covdata$X2  -0.999 0.019 -51.409 53.412     0
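The pooling performed by averageMI() follows Rubin's (1987) rules. As a sketch (not part of the original example), the same par/SEpar columns can be reproduced in base R from the `par` and `SEpar` lists computed above:

```r
# Rubin's rules by hand: pool point estimates and standard errors
m <- length(par)
Qbar <- Reduce(`+`, par) / m                                      # pooled estimate
Ubar <- Reduce(`+`, lapply(SEpar, function(s) s^2)) / m           # within-imputation variance
B <- Reduce(`+`, lapply(par, function(q) (q - Qbar)^2)) / (m - 1) # between-imputation variance
cbind(par = Qbar, SEpar = sqrt(Ubar + (1 + 1/m) * B))  # should match averageMI()
```

(averageMI() additionally reports the t statistics and Barnard-Rubin degrees of freedom, which are derived from these same quantities.)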
############
# Example using Gauss-Hermite quadrature with custom input functions
library(fastGHQuad)
## Loading required package: Rcpp
data(SAT12)
data <- key2binary(SAT12,
                   key = c(1,4,5,2,3,1,2,1,3,1,2,4,2,1,5,3,4,4,1,4,3,3,4,1,3,5,1,3,1,5,4,5))
GH <- gaussHermiteData(50)
Theta <- matrix(GH$x)
# This prior works for uni- and multi-dimensional models
prior <- function(Theta, Etable){
    P <- grid <- GH$w / sqrt(pi)
    if(ncol(Theta) > 1)
        for(i in 2:ncol(Theta))
            P <- expand.grid(P, grid)
    if(!is.vector(P)) P <- apply(P, 1, prod)
    P
}
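As a quick sanity check (a sketch, not part of the original example), a function supplied to customPriorFun should return one density weight per quadrature node, and for normalized Gauss-Hermite weights these should sum to approximately 1:

```r
# one weight per row of Theta, summing to ~1 (Etable is unused by this prior)
w <- prior(Theta, Etable = NULL)
stopifnot(length(w) == nrow(Theta), abs(sum(w) - 1) < 1e-8)
```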
GHmod1 <- mirt(data, 1, optimizer = 'NR',
               technical = list(customTheta = Theta, customPriorFun = prior))
coef(GHmod1, simplify=TRUE)
## $items
##            a1      d g u
## Item.1  1.133 -1.045 0 1
## Item.2  2.124  0.438 0 1
## Item.3  1.518 -1.141 0 1
## Item.4  0.826 -0.530 0 1
## Item.5  1.399  0.606 0 1
## Item.6  1.623 -2.049 0 1
## Item.7  1.419  1.383 0 1
## Item.8  0.979 -1.508 0 1
## Item.9  0.750  2.143 0 1
## Item.10 1.424 -0.360 0 1
## Item.11 2.453  5.252 0 1
## Item.12 0.229 -0.345 0 1
## Item.13 1.559  0.851 0 1
## Item.14 1.465  1.174 0 1
## Item.15 1.829  1.925 0 1
## Item.16 1.027 -0.382 0 1
## Item.17 2.193  4.165 0 1
## Item.18 2.404 -0.851 0 1
## Item.19 1.186  0.237 0 1
## Item.20 2.173  2.610 0 1
## Item.21 0.857  2.518 0 1
## Item.22 2.178  3.479 0 1
## Item.23 0.901 -0.850 0 1
## Item.24 1.705  1.270 0 1
## Item.25 1.091 -0.567 0 1
## Item.26 2.169 -0.171 0 1
## Item.27 2.709  2.770 0 1
## Item.28 1.512  0.174 0 1
## Item.29 1.181 -0.750 0 1
## Item.30 0.546 -0.248 0 1
## Item.31 3.304  2.785 0 1
## Item.32 0.183 -1.652 0 1
## 
## $means
## F1 
##  0 
## 
## $cov
##    F1
## F1  1
Theta2 <- as.matrix(expand.grid(Theta, Theta))
GHmod2 <- mirt(data, 2, optimizer = 'NR', TOL = .0002,
               technical = list(customTheta = Theta2, customPriorFun = prior))
summary(GHmod2, suppress=.2)
## 
## Rotation:  oblimin 
## 
## Rotated factor loadings: 
## 
##             F1     F2     h2
## Item.1          0.545 0.3355
## Item.2   0.349  0.526 0.6130
## Item.3   0.386  0.365 0.4464
## Item.4          0.592 0.2801
## Item.5   0.259  0.454 0.4094
## Item.6          0.588 0.4867
## Item.7   0.868        0.5965
## Item.8          0.384 0.2475
## Item.9   0.639 -0.216 0.2942
## Item.10  0.547        0.4467
## Item.11  0.724        0.6794
## Item.12 -0.233  0.367 0.0894
## Item.13  0.622        0.4972
## Item.14         0.703 0.5177
## Item.15  0.803        0.6245
## Item.16         0.544 0.3088
## Item.17  0.635  0.230 0.6267
## Item.18  0.489  0.429 0.6676
## Item.19  0.259  0.382 0.3279
## Item.20  0.381  0.506 0.6267
## Item.21  0.669 -0.207 0.3287
## Item.22  0.601  0.265 0.6164
## Item.23 -0.210  0.698 0.3604
## Item.24  0.637        0.5324
## Item.25         0.715 0.4106
## Item.26         0.675 0.6471
## Item.27  0.680  0.251 0.7243
## Item.28  0.318  0.428 0.4427
## Item.29         0.601 0.3728
## Item.30  0.282        0.1058
## Item.31  0.428  0.572 0.7958
## Item.32               0.0124
## 
## Rotated SS loadings:  6.485 6.008 
## 
## Factor correlations: 
## 
##       F1 F2
## F1 1.000   
## F2 0.583  1
############
# Davidian curve example
dat <- key2binary(SAT12,
                  key = c(1,4,5,2,3,1,2,1,3,1,2,4,2,1,5,3,4,4,1,4,3,3,4,1,3,5,1,3,1,5,4,5))
dav <- mirt(dat, 1, dentype = 'Davidian-4') # use four smoothing parameters
plot(dav, type = 'Davidian') # shape of latent trait distribution
coef(dav, simplify=TRUE)
## $items
##            a1      d g u
## Item.1  0.773 -1.049 0 1
## Item.2  1.698  0.496 0 1
## Item.3  1.056 -1.131 0 1
## Item.4  0.591 -0.541 0 1
## Item.5  1.054  0.614 0 1
## Item.6  1.041 -2.023 0 1
## Item.7  1.103  1.396 0 1
## Item.8  0.652 -1.517 0 1
## Item.9  0.545  2.130 0 1
## Item.10 1.016 -0.353 0 1
## Item.11 2.106  5.430 0 1
## Item.12 0.165 -0.351 0 1
## Item.13 1.203  0.869 0 1
## Item.14 1.182  1.205 0 1
## Item.15 1.428  1.940 0 1
## Item.16 0.736 -0.388 0 1
## Item.17 1.850  4.266 0 1
## Item.18 1.766 -0.786 0 1
## Item.19 0.880  0.238 0 1
## Item.20 1.859  2.723 0 1
## Item.21 0.653  2.514 0 1
## Item.22 1.867  3.596 0 1
## Item.23 0.592 -0.855 0 1
## Item.24 1.361  1.304 0 1
## Item.25 0.742 -0.569 0 1
## Item.26 1.667 -0.121 0 1
## Item.27 2.331  2.925 0 1
## Item.28 1.091  0.181 0 1
## Item.29 0.811 -0.751 0 1
## Item.30 0.358 -0.257 0 1
## Item.31 2.947  3.057 0 1
## Item.32 0.178 -1.664 0 1
## 
## $means
## F1 
##  0 
## 
## $cov
##    F1
## F1  1
## 
## $Davidian_phis
## [1]  1.292  0.084 -0.448  1.243
fs <- fscores(dav) # assume normal prior
fs2 <- fscores(dav, use_dentype_estimate=TRUE) # use Davidian estimated prior shape
head(cbind(fs, fs2))
##                F1           F1
## [1,]  2.669531829  3.597899315
## [2,] -0.006609705 -0.061280076
## [3,]  0.069729118  0.006895571
## [4,] -0.412042000 -0.423011358
## [5,]  0.669954138  0.560980497
## [6,]  0.448041009  0.349320259
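The two sets of EAP estimates agree closely near the center of the latent distribution and diverge mainly in the tails (compare the first row above); a quick check of their agreement (a sketch) is:

```r
cor(fs, fs2)                 # linear agreement between the two EAP vectors
plot(fs, fs2); abline(0, 1)  # departures from the identity line are largest in the tails
```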
itemfit(dav) # assume normal prior
##       item   S_X2 df.S_X2 RMSEA.S_X2 p.S_X2
## 1   Item.1 12.497      15      0.000  0.641
## 2   Item.2 13.571      11      0.020  0.258
## 3   Item.3 13.530      13      0.008  0.408
## 4   Item.4 28.858      16      0.037  0.025
## 5   Item.5 15.832      14      0.015  0.324
## 6   Item.6 15.468      12      0.022  0.217
## 7   Item.7 11.836      13      0.000  0.541
## 8   Item.8 23.748      14      0.034  0.049
## 9   Item.9 16.351      14      0.017  0.292
## 10 Item.10 15.347      14      0.013  0.355
## 11 Item.11    NaN       0        NaN    NaN
## 12 Item.12 20.253      17      0.018  0.262
## 13 Item.13 22.881      12      0.039  0.029
## 14 Item.14 25.390      13      0.040  0.021
## 15 Item.15 19.336      12      0.032  0.081
## 16 Item.16 30.806      15      0.042  0.009
## 17 Item.17 14.287       7      0.042  0.046
## 18 Item.18 16.175      11      0.028  0.135
## 19 Item.19 20.637      14      0.028  0.111
## 20 Item.20 24.700       9      0.054  0.003
## 21 Item.21 25.583      13      0.040  0.019
## 22 Item.22 25.827       7      0.067  0.001
## 23 Item.23 15.944      15      0.010  0.386
## 24 Item.24 12.546      11      0.015  0.324
## 25 Item.25 42.759      15      0.056  0.000
## 26 Item.26 15.351      12      0.022  0.223
## 27 Item.27  7.887       8      0.000  0.445
## 28 Item.28 20.811      14      0.028  0.107
## 29 Item.29 18.062      15      0.018  0.259
## 30 Item.30 29.596      17      0.035  0.029
## 31 Item.31 13.868       7      0.040  0.054
## 32 Item.32 15.633      16      0.000  0.479
itemfit(dav, use_dentype_estimate=TRUE) # use Davidian estimated prior shape
##       item   S_X2 df.S_X2 RMSEA.S_X2 p.S_X2
## 1   Item.1 12.597      15      0.000  0.633
## 2   Item.2 12.672      11      0.016  0.315
## 3   Item.3 11.641      12      0.000  0.475
## 4   Item.4 30.536      16      0.039  0.015
## 5   Item.5 16.870      14      0.018  0.263
## 6   Item.6 14.090      12      0.017  0.295
## 7   Item.7 12.041      13      0.000  0.524
## 8   Item.8 24.325      14      0.035  0.042
## 9   Item.9 16.274      14      0.016  0.297
## 10 Item.10 14.593      13      0.014  0.333
## 11 Item.11    NaN       0        NaN    NaN
## 12 Item.12 18.270      17      0.011  0.372
## 13 Item.13 22.897      12      0.039  0.029
## 14 Item.14 25.147      13      0.039  0.022
## 15 Item.15 19.873      12      0.033  0.070
## 16 Item.16 30.994      14      0.045  0.006
## 17 Item.17 14.838       7      0.043  0.038
## 18 Item.18 15.618      11      0.026  0.156
## 19 Item.19 20.596      14      0.028  0.112
## 20 Item.20 24.158       9      0.053  0.004
## 21 Item.21 15.163      11      0.025  0.175
## 22 Item.22 28.286       8      0.065  0.000
## 23 Item.23 16.309      15      0.012  0.362
## 24 Item.24 12.499      11      0.015  0.327
## 25 Item.25 44.846      15      0.058  0.000
## 26 Item.26 14.583      11      0.023  0.202
## 27 Item.27  7.679       8      0.000  0.465
## 28 Item.28 22.298      14      0.031  0.073
## 29 Item.29 18.642      14      0.024  0.179
## 30 Item.30 27.672      17      0.032  0.049
## 31 Item.31 13.611       8      0.034  0.092
## 32 Item.32 17.803      16      0.014  0.336
###########
# 5PL and restricted 5PL example
dat <- expand.table(LSAT7)
mod2PL <- mirt(dat)
mod2PL
## 
## Call:
## mirt(data = dat)
## 
## Full-information item factor analysis with 1 factor(s).
## Converged within 1e-04 tolerance after 28 EM iterations.
## mirt version: 1.43 
## M-step optimizer: BFGS 
## EM acceleration: Ramsay 
## Number of rectangular quadrature: 61
## Latent density type: Gaussian 
## 
## Log-likelihood = -2658.805
## Estimated parameters: 10 
## AIC = 5337.61
## BIC = 5386.688; SABIC = 5354.927
## G2 (21) = 31.7, p = 0.0628
## RMSEA = 0.023, CFI = NaN, TLI = NaN
# The following does not converge without including strong priors
# mod5PL <- mirt(dat, itemtype = '5PL')
# mod5PL
# restricted version of 5PL (asymmetric 2PL)
model <- 'Theta = 1-5
          FIXED = (1-5, g), (1-5, u)'
mod2PL_asym <- mirt(dat, model=model, itemtype = '5PL')
mod2PL_asym
## 
## Call:
## mirt(data = dat, model = model, itemtype = "5PL")
## 
## Full-information item factor analysis with 1 factor(s).
## Converged within 1e-04 tolerance after 223 EM iterations.
## mirt version: 1.43 
## M-step optimizer: BFGS 
## EM acceleration: Ramsay 
## Number of rectangular quadrature: 61
## Latent density type: Gaussian 
## 
## Log-likelihood = -2657.877
## Estimated parameters: 15 
## AIC = 5345.754
## BIC = 5419.37; SABIC = 5371.729
## G2 (16) = 29.84, p = 0.0188
## RMSEA = 0.029, CFI = NaN, TLI = NaN
coef(mod2PL_asym, simplify=TRUE)
## $items
##           a1      d g u   logS
## Item.1 0.923  2.961 0 1  1.038
## Item.2 2.276 -1.748 0 1 -1.539
## Item.3 1.595  2.029 0 1  0.230
## Item.4 0.609  2.324 0 1  1.614
## Item.5 0.743  2.027 0 1  0.152
## 
## $means
## Theta 
##     0 
## 
## $cov
##       Theta
## Theta     1
coef(mod2PL_asym, simplify=TRUE, IRTpars=TRUE)
## $items
##            a      b g u     S
## Item.1 0.923 -3.208 0 1 2.824
## Item.2 2.276  0.768 0 1 0.215
## Item.3 1.595 -1.272 0 1 1.259
## Item.4 0.609 -3.817 0 1 5.020
## Item.5 0.743 -2.728 0 1 1.164
## 
## $means
## Theta 
##     0 
## 
## $cov
##       Theta
## Theta     1
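The IRTpars = TRUE output is a reparameterization of the slope-intercept values above: the difficulty is b = -d/a, and the asymmetry parameter is S = exp(logS). This can be verified directly (a sketch using the rounded values printed above, so small rounding discrepancies are expected):

```r
# slope-intercept values copied from the coef() output above
a    <- c(0.923, 2.276, 1.595, 0.609, 0.743)
d    <- c(2.961, -1.748, 2.029, 2.324, 2.027)
logS <- c(1.038, -1.539, 0.230, 1.614, 0.152)
cbind(b = -d / a, S = exp(logS))  # reproduces the IRTpars coef() table
```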
# no big difference statistically or visually
anova(mod2PL, mod2PL_asym)
##                  AIC    SABIC       HQ      BIC    logLik    X2 df     p
## mod2PL      5337.610 5354.927 5356.263 5386.688 -2658.805               
## mod2PL_asym 5345.754 5371.729 5373.733 5419.370 -2657.877 1.857  5 0.869
plot(mod2PL, type = 'trace')
plot(mod2PL_asym, type = 'trace')
###################
# LLTM example
a <- matrix(rep(1,30))
d <- rep(c(1,0, -1),each = 10)  # first easy, then medium, last difficult
dat <- simdata(a, d, 1000, itemtype = '2PL')
# unconditional model for intercept comparisons
mod <- mirt(dat, itemtype = 'Rasch')
coef(mod, simplify=TRUE)
## $items
##         a1      d g u
## Item_1   1  1.062 0 1
## Item_2   1  1.033 0 1
## Item_3   1  0.949 0 1
## Item_4   1  0.994 0 1
## Item_5   1  1.091 0 1
## Item_6   1  1.144 0 1
## Item_7   1  0.988 0 1
## Item_8   1  1.097 0 1
## Item_9   1  0.965 0 1
## Item_10  1  1.210 0 1
## Item_11  1  0.150 0 1
## Item_12  1  0.032 0 1
## Item_13  1 -0.046 0 1
## Item_14  1 -0.012 0 1
## Item_15  1  0.120 0 1
## Item_16  1 -0.115 0 1
## Item_17  1 -0.002 0 1
## Item_18  1 -0.090 0 1
## Item_19  1 -0.046 0 1
## Item_20  1  0.032 0 1
## Item_21  1 -0.912 0 1
## Item_22  1 -1.002 0 1
## Item_23  1 -0.957 0 1
## Item_24  1 -1.053 0 1
## Item_25  1 -0.929 0 1
## Item_26  1 -0.820 0 1
## Item_27  1 -0.951 0 1
## Item_28  1 -0.912 0 1
## Item_29  1 -0.996 0 1
## Item_30  1 -0.957 0 1
## 
## $means
## F1 
##  0 
## 
## $cov
##       F1
## F1 1.065
# Suppose the first 10 items are suspected to be easy, the next 10 of medium
# difficulty, and the last 10 difficult, and we wish to test this item
# structure hypothesis (more elaborate intercept designs are possible by
# including additional columns).
itemdesign <- data.frame(difficulty =
   factor(c(rep('easy', 10), rep('medium', 10), rep('hard', 10))))
rownames(itemdesign) <- colnames(dat)
itemdesign
##         difficulty
## Item_1        easy
## Item_2        easy
## Item_3        easy
## Item_4        easy
## Item_5        easy
## Item_6        easy
## Item_7        easy
## Item_8        easy
## Item_9        easy
## Item_10       easy
## Item_11     medium
## Item_12     medium
## Item_13     medium
## Item_14     medium
## Item_15     medium
## Item_16     medium
## Item_17     medium
## Item_18     medium
## Item_19     medium
## Item_20     medium
## Item_21       hard
## Item_22       hard
## Item_23       hard
## Item_24       hard
## Item_25       hard
## Item_26       hard
## Item_27       hard
## Item_28       hard
## Item_29       hard
## Item_30       hard
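Internally, item.formula expands this factor into an item-level design matrix for the intercepts, analogous to what model.matrix() produces in ordinary regression; a sketch of the implied structure:

```r
# 30 x 3 indicator matrix: one dummy column per difficulty level
head(model.matrix(~ 0 + difficulty, data = itemdesign), 3)  # easy rows
tail(model.matrix(~ 0 + difficulty, data = itemdesign), 3)  # hard rows
```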
# LLTM with mirt()
lltm <- mirt(dat, itemtype = 'Rasch', SE=TRUE,
   item.formula = ~ 0 + difficulty, itemdesign=itemdesign)
coef(lltm, simplify=TRUE)
## $items
##         difficultyeasy difficultyhard difficultymedium a1 d g u
## Item_1           1.051          0.000            0.000  1 0 0 1
## Item_2           1.051          0.000            0.000  1 0 0 1
## Item_3           1.051          0.000            0.000  1 0 0 1
## Item_4           1.051          0.000            0.000  1 0 0 1
## Item_5           1.051          0.000            0.000  1 0 0 1
## Item_6           1.051          0.000            0.000  1 0 0 1
## Item_7           1.051          0.000            0.000  1 0 0 1
## Item_8           1.051          0.000            0.000  1 0 0 1
## Item_9           1.051          0.000            0.000  1 0 0 1
## Item_10          1.051          0.000            0.000  1 0 0 1
## Item_11          0.000          0.000            0.002  1 0 0 1
## Item_12          0.000          0.000            0.002  1 0 0 1
## Item_13          0.000          0.000            0.002  1 0 0 1
## Item_14          0.000          0.000            0.002  1 0 0 1
## Item_15          0.000          0.000            0.002  1 0 0 1
## Item_16          0.000          0.000            0.002  1 0 0 1
## Item_17          0.000          0.000            0.002  1 0 0 1
## Item_18          0.000          0.000            0.002  1 0 0 1
## Item_19          0.000          0.000            0.002  1 0 0 1
## Item_20          0.000          0.000            0.002  1 0 0 1
## Item_21          0.000         -0.949            0.000  1 0 0 1
## Item_22          0.000         -0.949            0.000  1 0 0 1
## Item_23          0.000         -0.949            0.000  1 0 0 1
## Item_24          0.000         -0.949            0.000  1 0 0 1
## Item_25          0.000         -0.949            0.000  1 0 0 1
## Item_26          0.000         -0.949            0.000  1 0 0 1
## Item_27          0.000         -0.949            0.000  1 0 0 1
## Item_28          0.000         -0.949            0.000  1 0 0 1
## Item_29          0.000         -0.949            0.000  1 0 0 1
## Item_30          0.000         -0.949            0.000  1 0 0 1
## 
## $means
## F1 
##  0 
## 
## $cov
##       F1
## F1 1.063
coef(lltm, printSE=TRUE)
## $Item_1
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_2
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_3
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_4
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_5
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_6
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_7
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_8
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_9
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_10
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_11
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_12
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_13
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_14
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_15
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_16
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_17
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_18
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_19
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_20
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_21
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_22
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_23
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_24
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_25
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_26
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_27
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_28
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_29
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $Item_30
##     difficultyeasy difficultyhard difficultymedium a1  d logit(g) logit(u)
## par          1.051         -0.949            0.002  1  0     -999      999
## SE           0.041          0.041            0.040 NA NA       NA       NA
## 
## $GroupPars
##     MEAN_1 COV_11
## par      0  1.063
## SE      NA  0.060
anova(lltm, mod)  # models fit effectively the same; hence, the intercept variability is well captured
##           AIC    SABIC       HQ      BIC    logLik    X2 df     p
## lltm 34990.23 34997.16 34997.69 35009.86 -17491.12               
## mod  35013.75 35067.43 35071.58 35165.89 -17475.88 30.48 27 0.293
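The X2 column in the anova() table is the likelihood-ratio statistic, twice the difference in log-likelihoods, with degrees of freedom equal to the difference in the number of estimated parameters (30 free intercepts in mod versus 3 difficulty effects in lltm); by hand, using the values printed above:

```r
X2 <- 2 * (-17475.88 - (-17491.12))          # 30.48
pchisq(X2, df = 30 - 3, lower.tail = FALSE)  # ~0.29, matching the anova() table
```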
# additional information for LLTM
plot(lltm)
plot(lltm, type = 'trace')
itemplot(lltm, item=1)
itemfit(lltm)
##       item   S_X2 df.S_X2 RMSEA.S_X2 p.S_X2
## 1   Item_1 16.232      21      0.000  0.756
## 2   Item_2 19.208      21      0.000  0.572
## 3   Item_3 21.446      21      0.005  0.432
## 4   Item_4 18.446      21      0.000  0.621
## 5   Item_5 23.195      21      0.010  0.334
## 6   Item_6 35.889      21      0.027  0.023
## 7   Item_7 28.230      21      0.019  0.134
## 8   Item_8 16.699      21      0.000  0.729
## 9   Item_9 27.285      21      0.017  0.162
## 10 Item_10 22.152      21      0.007  0.391
## 11 Item_11 23.111      21      0.010  0.338
## 12 Item_12 18.297      21      0.000  0.630
## 13 Item_13 28.850      21      0.019  0.118
## 14 Item_14 24.890      21      0.014  0.252
## 15 Item_15 33.610      21      0.025  0.040
## 16 Item_16 24.309      21      0.013  0.278
## 17 Item_17 29.462      21      0.020  0.103
## 18 Item_18 31.419      21      0.022  0.067
## 19 Item_19 24.929      21      0.014  0.250
## 20 Item_20 18.417      21      0.000  0.622
## 21 Item_21 22.546      21      0.009  0.369
## 22 Item_22 18.590      21      0.000  0.611
## 23 Item_23 30.114      21      0.021  0.090
## 24 Item_24 27.321      21      0.017  0.160
## 25 Item_25 21.957      21      0.007  0.402
## 26 Item_26 20.724      21      0.000  0.476
## 27 Item_27 21.339      21      0.004  0.438
## 28 Item_28 21.991      21      0.007  0.400
## 29 Item_29 25.717      21      0.015  0.217
## 30 Item_30 25.537      21      0.015  0.225
head(fscores(lltm))  # EAP estimates
##               F1
## [1,]  0.82758668
## [2,] -0.02811346
## [3,] -0.44589228
## [4,]  0.38939619
## [5,] -0.58867715
## [6,] -2.76922832
fscores(lltm, method='EAPsum', full.scores=FALSE)
##       df     X2  p.X2 SEM.alpha rxx.alpha rxx_F1
## stats 30 16.743 0.976     2.371     0.856  0.852
##    Sum.Scores     F1 SE_F1 observed expected std.res
## 0           0 -2.769 0.575        3    1.238   1.584
## 1           1 -2.463 0.533        3    3.859   0.437
## 2           2 -2.198 0.499        8    7.585   0.151
## 3           3 -1.962 0.472       11   12.079   0.310
## 4           4 -1.750 0.451       15   17.045   0.495
## 5           5 -1.555 0.433       25   22.243   0.585
## 6           6 -1.373 0.419       27   27.478   0.091
## 7           7 -1.202 0.408       29   32.587   0.628
## 8           8 -1.040 0.398       41   37.433   0.583
## 9           9 -0.885 0.391       42   41.899   0.016
## 10         10 -0.735 0.385       50   45.883   0.608
## 11         11 -0.589 0.380       56   49.299   0.954
## 12         12 -0.446 0.376       48   52.079   0.565
## 13         13 -0.305 0.374       49   54.164   0.702
## 14         14 -0.166 0.372       44   55.515   1.545
## 15         15 -0.028 0.372       57   56.104   0.120
## 16         16  0.110 0.372       54   55.922   0.257
## 17         17  0.249 0.374       52   54.970   0.401
## 18         18  0.389 0.376       61   53.265   1.060
## 19         19  0.532 0.379       58   50.837   1.005
## 20         20  0.678 0.384       45   47.731   0.395
## 21         21  0.828 0.390       45   44.002   0.150
## 22         22  0.983 0.398       42   39.721   0.362
## 23         23  1.145 0.407       29   34.974   1.010
## 24         24  1.315 0.419       31   29.862   0.208
## 25         25  1.497 0.433       30   24.510   1.109
## 26         26  1.692 0.451       19   19.074   0.017
## 27         27  1.906 0.473       14   13.752   0.067
## 28         28  2.142 0.500       10    8.805   0.403
## 29         29  2.409 0.535        1    4.580   1.673
## 30         30  2.718 0.578        1    1.507   0.413
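Point estimates other than EAP are also available through the `method` argument of `fscores()`; a brief sketch for comparison with the EAP values above (estimator names as documented for `fscores()`):

```r
# MAP and weighted likelihood (WLE) latent trait estimates
head(fscores(lltm, method = 'MAP'))
head(fscores(lltm, method = 'WLE'))
```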
M2(lltm) # goodness of fit
##             M2  df         p RMSEA RMSEA_5    RMSEA_95      SRMSR      TLI CFI
## stats 442.3807 461 0.7256192     0       0 0.008538101 0.02845793 1.001469   1
head(personfit(lltm))
##       outfit    z.outfit     infit    z.infit         Zh
## 1 0.86087481 -0.51228154 0.8803875 -0.6776280  0.6605340
## 2 0.95817835 -0.22106722 0.9626179 -0.2181773  0.2642170
## 3 1.36270332  1.88828730 1.3130648  1.9422412 -2.0699187
## 4 0.99884980  0.05384139 0.9422129 -0.3513991  0.2594942
## 5 0.94138194 -0.24074171 0.9189909 -0.4804261  0.4446903
## 6 0.08885079 -1.82362372 0.1309391 -2.5450323  1.5286782
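The person-fit statistics can be used to screen for aberrant response patterns; for example (a sketch; the Zh cutoff of -2 is a common rule of thumb, not a fixed standard):

```r
# flag respondents whose standardized person-fit statistic suggests misfit
pf <- personfit(lltm)
which(pf$Zh < -2)
```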
residuals(lltm)
## LD matrix (lower triangle) and standardized residual correlations (upper triangle)
## 
## Upper triangle summary:
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  -0.122  -0.042   0.019   0.004   0.050   0.095 
## 
##         Item_1 Item_2 Item_3 Item_4 Item_5 Item_6 Item_7 Item_8 Item_9 Item_10
## Item_1          0.024 -0.041  0.024  0.029 -0.035 -0.026  0.044  0.038   0.077
## Item_2   0.587        -0.041 -0.022  0.018 -0.040 -0.028 -0.046 -0.036  -0.068
## Item_3   1.696  1.681        -0.045 -0.067 -0.063  0.052 -0.067 -0.057   0.079
## Item_4   0.557  0.505  1.983        -0.029  0.045  0.032 -0.035 -0.037   0.069
## Item_5   0.860  0.325  4.510  0.866        -0.037  0.031  0.022  0.040  -0.059
## Item_6   1.217  1.591  3.978  2.011  1.359        -0.054  0.039 -0.063   0.066
## Item_7   0.681  0.762  2.675  1.003  0.981  2.919         0.036 -0.038  -0.068
## Item_8   1.955  2.147  4.452  1.202  0.502  1.505  1.274        -0.041   0.060
## Item_9   1.475  1.312  3.199  1.394  1.571  3.949  1.466  1.660          0.079
## Item_10  5.866  4.603  6.221  4.734  3.511  4.369  4.684  3.582  6.250        
## Item_11  4.017  4.120  6.339  4.899  3.781  7.926  7.986  3.896  5.707   6.259
## Item_12  5.263  0.997  3.383  0.864  0.450  1.977  2.417  0.785  2.180   4.275
## Item_13  2.553  0.790  1.881  1.155  1.631  2.062  2.918  3.580  1.686   4.321
## Item_14  0.154  0.939  1.734  0.879  0.443  1.363  5.022  0.815  1.344   3.966
## Item_15  2.523  2.973  4.801  4.120  2.489  3.481  3.529  2.517  4.489   6.168
## Item_16  2.333  2.441  3.258  2.435  2.932  4.446  7.758  2.833  3.458   9.102
## Item_17  0.089  0.367  1.991  2.118  1.030  2.870  0.718  0.369  1.207   3.802
## Item_18  1.672  1.696  2.715  6.273  3.989  3.103  1.692  4.455  3.106   5.906
## Item_19  5.144  0.369  1.727  0.762  0.852  1.970  2.759  2.333  3.101   4.343
## Item_20  0.480  1.262  2.131  0.789  1.140  1.267  1.287  3.994  1.484   6.050
## Item_21  2.499  1.474  2.273  0.836  5.644  4.447  1.309  3.556  1.800   6.848
## Item_22  0.492  1.139  2.342  1.071  0.757  3.096  0.996  0.827  1.363   5.094
## Item_23  2.904  0.207  8.288  1.376  0.248  1.372  0.603  2.956  1.277   5.173
## Item_24  5.126  3.520  2.685  1.856  2.336  3.888  3.839  3.370  3.066   5.802
## Item_25  1.218  0.888  2.092  3.101  6.056  1.442  2.614  0.586  1.355   4.005
## Item_26  2.800  4.902  7.169  4.544  2.818  3.494  4.007  3.521 10.231   5.753
## Item_27  0.538  2.124  3.151  0.562  0.583  2.273  0.980  0.704  1.618   5.511
## Item_28  1.620  0.490  2.262  1.243  1.708  7.484  2.396  0.538  1.519   3.597
## Item_29  4.048  0.999  2.021  1.772  1.443  2.242  2.454  6.980  3.030   4.247
## Item_30  3.486 14.764  3.224  1.544  0.339  1.372  3.553  0.439  1.113   3.786
##         Item_11 Item_12 Item_13 Item_14 Item_15 Item_16 Item_17 Item_18 Item_19
## Item_1   -0.063   0.073   0.051   0.012   0.050   0.048   0.009  -0.041   0.072
## Item_2    0.064  -0.032  -0.028  -0.031  -0.055  -0.049   0.019  -0.041   0.019
## Item_3   -0.080   0.058   0.043   0.042   0.069  -0.057   0.045   0.052  -0.042
## Item_4    0.070   0.029  -0.034   0.030   0.064   0.049   0.046  -0.079   0.028
## Item_5   -0.061  -0.021  -0.040  -0.021   0.050   0.054  -0.032  -0.063  -0.029
## Item_6   -0.089   0.044   0.045   0.037   0.059   0.067   0.054  -0.056  -0.044
## Item_7    0.089  -0.049  -0.054   0.071  -0.059  -0.088   0.027   0.041   0.053
## Item_8    0.062   0.028   0.060   0.029  -0.050   0.053  -0.019   0.067   0.048
## Item_9    0.076  -0.047  -0.041   0.037  -0.067   0.059   0.035  -0.056   0.056
## Item_10  -0.079   0.065   0.066   0.063   0.079   0.095   0.062   0.077  -0.066
## Item_11           0.062   0.072   0.086   0.075   0.085   0.066   0.080   0.079
## Item_12   3.806          -0.036  -0.019   0.050   0.052  -0.042   0.075   0.025
## Item_13   5.210   1.295          -0.040  -0.084   0.048  -0.019   0.042  -0.030
## Item_14   7.344   0.359   1.634           0.053   0.047  -0.005  -0.056   0.021
## Item_15   5.623   2.494   7.094   2.789           0.075   0.052   0.069  -0.058
## Item_16   7.214   2.720   2.301   2.222   5.639           0.056  -0.061   0.049
## Item_17   4.344   1.776   0.369   0.024   2.692   3.104           0.038   0.049
## Item_18   6.411   5.582   1.793   3.107   4.752   3.773   1.434          -0.043
## Item_19   6.228   0.646   0.900   0.433   3.306   2.414   2.434   1.881        
## Item_20   5.875   1.277   0.818   0.610   2.628   4.784   1.468   1.999   1.175
## Item_21   3.776   0.596   0.790   0.305   2.490   3.125   0.478   2.510   1.044
## Item_22   4.820   1.751   6.685   1.269   5.146   6.358   2.092   1.529   1.758
## Item_23   4.959   0.514   2.378   0.166   2.548   3.781   2.330   1.949   0.506
## Item_24   6.480   1.919   1.666   5.012   4.821   3.608   1.574   2.542   3.074
## Item_25   4.671   0.232   1.461   1.636   2.565   2.558   0.079   1.715   0.567
## Item_26   5.509   3.844   3.833   2.903   4.706   8.295   3.505   5.440   3.849
## Item_27   5.156   0.293   0.372   0.023   4.061   2.252   0.594   1.389   2.026
## Item_28   3.967   0.456   0.703   0.348   2.494   2.850   0.822   1.987   0.722
## Item_29   6.378   0.741   0.584   5.103   3.230   2.274   1.239   2.059   1.867
## Item_30   3.952   0.272   0.550   0.656   2.664   3.816   1.228   1.408   0.365
##         Item_20 Item_21 Item_22 Item_23 Item_24 Item_25 Item_26 Item_27 Item_28
## Item_1    0.022   0.050   0.022   0.054   0.072   0.035   0.053  -0.023   0.040
## Item_2   -0.036  -0.038  -0.034  -0.014   0.059  -0.030  -0.070   0.046  -0.022
## Item_3   -0.046   0.048  -0.048  -0.091   0.052   0.046  -0.085  -0.056  -0.048
## Item_4    0.028   0.029   0.033  -0.037  -0.043  -0.056  -0.067   0.024  -0.035
## Item_5   -0.034   0.075   0.028   0.016  -0.048  -0.078   0.053  -0.024  -0.041
## Item_6   -0.036   0.067  -0.056   0.037   0.062  -0.038   0.059   0.048  -0.087
## Item_7    0.036  -0.036  -0.032   0.025   0.062  -0.051   0.063   0.031   0.049
## Item_8    0.063   0.060   0.029   0.054   0.058   0.024  -0.059  -0.027   0.023
## Item_9    0.039   0.042  -0.037  -0.036  -0.055   0.037  -0.101  -0.040  -0.039
## Item_10   0.078   0.083   0.071  -0.072   0.076   0.063  -0.076   0.074  -0.060
## Item_11  -0.077   0.061  -0.069  -0.070   0.080   0.068  -0.074   0.072   0.063
## Item_12   0.036  -0.024  -0.042  -0.023  -0.044  -0.015   0.062   0.017   0.021
## Item_13  -0.029   0.028  -0.082  -0.049   0.041  -0.038  -0.062  -0.019  -0.027
## Item_14  -0.025   0.017  -0.036   0.013   0.071   0.040  -0.054   0.005  -0.019
## Item_15  -0.051   0.050  -0.072  -0.050   0.069   0.051   0.069   0.064  -0.050
## Item_16  -0.069  -0.056  -0.080  -0.061   0.060   0.051  -0.091   0.047  -0.053
## Item_17  -0.038   0.022  -0.046  -0.048  -0.040   0.009  -0.059   0.024  -0.029
## Item_18  -0.045   0.050  -0.039  -0.044   0.050   0.041  -0.074   0.037   0.045
## Item_19   0.034   0.032  -0.042   0.022   0.055   0.024   0.062   0.045   0.027
## Item_20           0.019  -0.027   0.030  -0.060  -0.050   0.053   0.017  -0.022
## Item_21   0.347          -0.056   0.036  -0.059   0.020   0.057   0.016  -0.032
## Item_22   0.704   3.134          -0.062  -0.046   0.025  -0.072  -0.022  -0.056
## Item_23   0.891   1.322   3.797           0.041   0.011   0.057   0.031   0.019
## Item_24   3.617   3.480   2.128   1.648           0.051  -0.071   0.046  -0.053
## Item_25   2.530   0.380   0.628   0.123   2.630           0.053   0.016   0.026
## Item_26   2.829   3.232   5.171   3.298   5.050   2.761           0.063  -0.052
## Item_27   0.293   0.247   0.494   0.955   2.075   0.256   3.937          -0.040
## Item_28   0.475   1.056   3.134   0.350   2.861   0.659   2.708   1.593        
## Item_29   0.588   1.579   2.006   0.977   1.711   1.150   6.102   3.981   0.642
## Item_30   3.512   2.839   0.692   0.023   2.373   0.364   4.108   1.071   2.210
##         Item_29 Item_30
## Item_1    0.064   0.059
## Item_2   -0.032  -0.122
## Item_3   -0.045  -0.057
## Item_4   -0.042   0.039
## Item_5    0.038  -0.018
## Item_6    0.047   0.037
## Item_7    0.050   0.060
## Item_8    0.084  -0.021
## Item_9   -0.055  -0.033
## Item_10   0.065  -0.062
## Item_11   0.080  -0.063
## Item_12   0.027   0.016
## Item_13  -0.024  -0.023
## Item_14   0.071   0.026
## Item_15   0.057  -0.052
## Item_16  -0.048   0.062
## Item_17  -0.035  -0.035
## Item_18  -0.045  -0.038
## Item_19  -0.043   0.019
## Item_20   0.024  -0.059
## Item_21   0.040   0.053
## Item_22  -0.045  -0.026
## Item_23   0.031  -0.005
## Item_24   0.041   0.049
## Item_25   0.034  -0.019
## Item_26   0.078  -0.064
## Item_27   0.063  -0.033
## Item_28   0.025  -0.047
## Item_29           0.021
## Item_30   0.454
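Yen's Q3 statistic provides an alternative local dependence summary to the LD matrix above (a sketch; `type = 'Q3'` as supported by the `residuals()` method):

```r
# pairwise Q3 residual correlations; large absolute values suggest local dependence
residuals(lltm, type = 'Q3')
```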
# a common intercept across items is also possible by removing the ~ 0 portion;
# the fit is identical, only the interpretation of the coefficients differs
lltm.int <- mirt(dat, itemtype = 'Rasch',
   item.formula = ~ difficulty, itemdesign=itemdesign)
anova(lltm, lltm.int) # identical fit
##               AIC    SABIC       HQ      BIC    logLik X2 df   p
## lltm     34990.23 34997.16 34997.69 35009.86 -17491.12          
## lltm.int 34990.23 34997.16 34997.69 35009.86 -17491.12  0  0 NaN
coef(lltm.int, simplify=TRUE)
## $items
##         (Intercept) difficultyhard difficultymedium a1 d g u
## Item_1        1.051              0             0.00  1 0 0 1
## Item_2        1.051              0             0.00  1 0 0 1
## Item_3        1.051              0             0.00  1 0 0 1
## Item_4        1.051              0             0.00  1 0 0 1
## Item_5        1.051              0             0.00  1 0 0 1
## Item_6        1.051              0             0.00  1 0 0 1
## Item_7        1.051              0             0.00  1 0 0 1
## Item_8        1.051              0             0.00  1 0 0 1
## Item_9        1.051              0             0.00  1 0 0 1
## Item_10       1.051              0             0.00  1 0 0 1
## Item_11       1.051              0            -1.05  1 0 0 1
## Item_12       1.051              0            -1.05  1 0 0 1
## Item_13       1.051              0            -1.05  1 0 0 1
## Item_14       1.051              0            -1.05  1 0 0 1
## Item_15       1.051              0            -1.05  1 0 0 1
## Item_16       1.051              0            -1.05  1 0 0 1
## Item_17       1.051              0            -1.05  1 0 0 1
## Item_18       1.051              0            -1.05  1 0 0 1
## Item_19       1.051              0            -1.05  1 0 0 1
## Item_20       1.051              0            -1.05  1 0 0 1
## Item_21       1.051             -2             0.00  1 0 0 1
## Item_22       1.051             -2             0.00  1 0 0 1
## Item_23       1.051             -2             0.00  1 0 0 1
## Item_24       1.051             -2             0.00  1 0 0 1
## Item_25       1.051             -2             0.00  1 0 0 1
## Item_26       1.051             -2             0.00  1 0 0 1
## Item_27       1.051             -2             0.00  1 0 0 1
## Item_28       1.051             -2             0.00  1 0 0 1
## Item_29       1.051             -2             0.00  1 0 0 1
## Item_30       1.051             -2             0.00  1 0 0 1
## 
## $means
## F1 
##  0 
## 
## $cov
##       F1
## F1 1.063
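The intercept-coded solution maps back to the ~ 0 parameterization; for example (a sketch using the coefficient names shown in the output above):

```r
# reconstruct the easy/medium/hard intercepts from the dummy coding
cf <- coef(lltm.int, simplify = TRUE)$items[1, ]
c(easy   = unname(cf['(Intercept)']),
  medium = unname(cf['(Intercept)'] + cf['difficultymedium']),
  hard   = unname(cf['(Intercept)'] + cf['difficultyhard']))
```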
# freely estimate (unconditional) intercepts for the first four items
itemdesign.sub <- itemdesign[5:nrow(itemdesign), , drop=FALSE]
itemdesign.sub    # note that rownames are required in this case
##         difficulty
## Item_5        easy
## Item_6        easy
## Item_7        easy
## Item_8        easy
## Item_9        easy
## Item_10       easy
## Item_11     medium
## Item_12     medium
## Item_13     medium
## Item_14     medium
## Item_15     medium
## Item_16     medium
## Item_17     medium
## Item_18     medium
## Item_19     medium
## Item_20     medium
## Item_21       hard
## Item_22       hard
## Item_23       hard
## Item_24       hard
## Item_25       hard
## Item_26       hard
## Item_27       hard
## Item_28       hard
## Item_29       hard
## Item_30       hard
lltm.4 <- mirt(dat, itemtype = 'Rasch',
   item.formula = ~ 0 + difficulty, itemdesign=itemdesign.sub)
coef(lltm.4, simplify=TRUE) # first four items are the standard Rasch
## $items
##         difficultyeasy difficultyhard difficultymedium a1     d g u
## Item_1           0.000          0.000            0.000  1 1.062 0 1
## Item_2           0.000          0.000            0.000  1 1.033 0 1
## Item_3           0.000          0.000            0.000  1 0.948 0 1
## Item_4           0.000          0.000            0.000  1 0.993 0 1
## Item_5           1.081          0.000            0.000  1 0.000 0 1
## Item_6           1.081          0.000            0.000  1 0.000 0 1
## Item_7           1.081          0.000            0.000  1 0.000 0 1
## Item_8           1.081          0.000            0.000  1 0.000 0 1
## Item_9           1.081          0.000            0.000  1 0.000 0 1
## Item_10          1.081          0.000            0.000  1 0.000 0 1
## Item_11          0.000          0.000            0.002  1 0.000 0 1
## Item_12          0.000          0.000            0.002  1 0.000 0 1
## Item_13          0.000          0.000            0.002  1 0.000 0 1
## Item_14          0.000          0.000            0.002  1 0.000 0 1
## Item_15          0.000          0.000            0.002  1 0.000 0 1
## Item_16          0.000          0.000            0.002  1 0.000 0 1
## Item_17          0.000          0.000            0.002  1 0.000 0 1
## Item_18          0.000          0.000            0.002  1 0.000 0 1
## Item_19          0.000          0.000            0.002  1 0.000 0 1
## Item_20          0.000          0.000            0.002  1 0.000 0 1
## Item_21          0.000         -0.948            0.000  1 0.000 0 1
## Item_22          0.000         -0.948            0.000  1 0.000 0 1
## Item_23          0.000         -0.948            0.000  1 0.000 0 1
## Item_24          0.000         -0.948            0.000  1 0.000 0 1
## Item_25          0.000         -0.948            0.000  1 0.000 0 1
## Item_26          0.000         -0.948            0.000  1 0.000 0 1
## Item_27          0.000         -0.948            0.000  1 0.000 0 1
## Item_28          0.000         -0.948            0.000  1 0.000 0 1
## Item_29          0.000         -0.948            0.000  1 0.000 0 1
## Item_30          0.000         -0.948            0.000  1 0.000 0 1
## 
## $means
## F1 
##  0 
## 
## $cov
##       F1
## F1 1.063
anova(lltm, lltm.4) # similar fit, hence the more constrained model is preferred
##             AIC    SABIC       HQ      BIC    logLik   X2 df     p
## lltm   34990.23 34997.16 34997.69 35009.86 -17491.12              
## lltm.4 34994.76 35008.62 35009.68 35034.02 -17489.38 3.47  4 0.482
# LLTM with mixedmirt() (more flexible in general, but slower)
LLTM <- mixedmirt(dat, model=1, fixed = ~ 0 + difficulty,
                  itemdesign=itemdesign, SE=FALSE)
summary(LLTM)
## 
## Call:
## mixedmirt(data = dat, model = 1, fixed = ~0 + difficulty, itemdesign = itemdesign, 
##     SE = FALSE)
## 
## --------------
## FIXED EFFECTS:
##                  Estimate Std.Error z.value
## difficultyeasy      1.054        NA      NA
## difficultyhard     -0.944        NA      NA
## difficultymedium    0.006        NA      NA
## 
## --------------
## RANDOM EFFECT COVARIANCE(S):
## Correlations on upper diagonal
## 
## $Theta
##      F1
## F1 1.06
coef(LLTM)
## $Item_1
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_2
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_3
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_4
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_5
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_6
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_7
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_8
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_9
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_10
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_11
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_12
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_13
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_14
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_15
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_16
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_17
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_18
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_19
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_20
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_21
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_22
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_23
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_24
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_25
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_26
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_27
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_28
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_29
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $Item_30
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.054         -0.944            0.006  1 0 0 1
## 
## $GroupPars
##     MEAN_1 COV_11
## par      0  1.055
# LLTM with a random item-error term (not supported by mirt())
LLTM.e <- mixedmirt(dat, model=1, fixed = ~ 0 + difficulty,
                  random = ~ 1|items, itemdesign=itemdesign, SE=FALSE)
coef(LLTM.e)
## $Item_1
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_2
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_3
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_4
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_5
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_6
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_7
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_8
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_9
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_10
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_11
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_12
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_13
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_14
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_15
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_16
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_17
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_18
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_19
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_20
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_21
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_22
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_23
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_24
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_25
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_26
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_27
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_28
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_29
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $Item_30
##     difficultyeasy difficultyhard difficultymedium a1 d g u
## par          1.059         -0.979           -0.022  1 0 0 1
## 
## $GroupPars
##     MEAN_1 COV_11
## par      0  1.052
## 
## $items
##     COV_items_items
## par           0.006
###################
# General MLTM example (Embretson, 1984)
set.seed(42)
as <- matrix(rep(1,60), ncol=2)
as[11:18,1] <- as[1:9,2] <- 0
d1 <- rep(c(3,1), each = 6)  # first six items easy, last six difficult for first trait
d2 <- rep(c(0,1,2),times = 4)    # difficult to easy
d <- rnorm(18)
ds <- rbind(cbind(d1=NA, d2=d), cbind(d1, d2))
(pars <- data.frame(a=as, d=ds))
##    a.1 a.2 d.d1        d.d2
## 1    1   0   NA  1.37095845
## 2    1   0   NA -0.56469817
## 3    1   0   NA  0.36312841
## 4    1   0   NA  0.63286260
## 5    1   0   NA  0.40426832
## 6    1   0   NA -0.10612452
## 7    1   0   NA  1.51152200
## 8    1   0   NA -0.09465904
## 9    1   0   NA  2.01842371
## 10   1   1   NA -0.06271410
## 11   0   1   NA  1.30486965
## 12   0   1   NA  2.28664539
## 13   0   1   NA -1.38886070
## 14   0   1   NA -0.27878877
## 15   0   1   NA -0.13332134
## 16   0   1   NA  0.63595040
## 17   0   1   NA -0.28425292
## 18   0   1   NA -2.65645542
## 19   1   1    3  0.00000000
## 20   1   1    3  1.00000000
## 21   1   1    3  2.00000000
## 22   1   1    3  0.00000000
## 23   1   1    3  1.00000000
## 24   1   1    3  2.00000000
## 25   1   1    1  0.00000000
## 26   1   1    1  1.00000000
## 27   1   1    1  2.00000000
## 28   1   1    1  0.00000000
## 29   1   1    1  1.00000000
## 30   1   1    1  2.00000000
dat <- simdata(as, ds, 2500,
  itemtype = c(rep('dich', 18), rep('partcomp', 12)))
itemstats(dat)
## $overall
##     N mean_total.score sd_total.score ave.r  sd.r alpha SEM.alpha
##  2500           16.494           4.83 0.088 0.059 0.747     2.428
## 
## $itemstats
##            N  mean    sd total.r total.r_if_rm alpha_if_rm
## Item_1  2500 0.752 0.432   0.265         0.180       0.745
## Item_2  2500 0.384 0.486   0.328         0.234       0.742
## Item_3  2500 0.563 0.496   0.319         0.222       0.743
## Item_4  2500 0.635 0.481   0.318         0.224       0.743
## Item_5  2500 0.582 0.493   0.320         0.224       0.743
## Item_6  2500 0.478 0.500   0.329         0.233       0.742
## Item_7  2500 0.767 0.423   0.274         0.191       0.744
## Item_8  2500 0.469 0.499   0.315         0.218       0.743
## Item_9  2500 0.849 0.358   0.233         0.161       0.745
## Item_10 2500 0.471 0.499   0.557         0.480       0.727
## Item_11 2500 0.736 0.441   0.352         0.268       0.740
## Item_12 2500 0.882 0.323   0.246         0.182       0.745
## Item_13 2500 0.232 0.422   0.302         0.220       0.743
## Item_14 2500 0.460 0.499   0.319         0.222       0.743
## Item_15 2500 0.480 0.500   0.387         0.294       0.739
## Item_16 2500 0.627 0.484   0.352         0.260       0.741
## Item_17 2500 0.441 0.497   0.318         0.222       0.743
## Item_18 2500 0.097 0.296   0.209         0.149       0.746
## Item_19 2500 0.466 0.499   0.381         0.287       0.739
## Item_20 2500 0.643 0.479   0.360         0.269       0.740
## Item_21 2500 0.788 0.409   0.335         0.257       0.741
## Item_22 2500 0.456 0.498   0.406         0.315       0.737
## Item_23 2500 0.646 0.478   0.403         0.315       0.737
## Item_24 2500 0.769 0.422   0.364         0.284       0.740
## Item_25 2500 0.349 0.477   0.408         0.321       0.737
## Item_26 2500 0.492 0.500   0.414         0.323       0.737
## Item_27 2500 0.586 0.493   0.381         0.289       0.739
## Item_28 2500 0.330 0.470   0.388         0.300       0.738
## Item_29 2500 0.477 0.500   0.361         0.266       0.740
## Item_30 2500 0.587 0.492   0.371         0.278       0.740
## 
## $proportions
##             0     1
## Item_1  0.248 0.752
## Item_2  0.616 0.384
## Item_3  0.437 0.563
## Item_4  0.365 0.635
## Item_5  0.418 0.582
## Item_6  0.522 0.478
## Item_7  0.233 0.767
## Item_8  0.531 0.469
## Item_9  0.151 0.849
## Item_10 0.529 0.471
## Item_11 0.264 0.736
## Item_12 0.118 0.882
## Item_13 0.768 0.232
## Item_14 0.540 0.460
## Item_15 0.520 0.480
## Item_16 0.373 0.627
## Item_17 0.559 0.441
## Item_18 0.903 0.097
## Item_19 0.534 0.466
## Item_20 0.357 0.643
## Item_21 0.212 0.788
## Item_22 0.544 0.456
## Item_23 0.354 0.646
## Item_24 0.231 0.769
## Item_25 0.651 0.349
## Item_26 0.508 0.492
## Item_27 0.414 0.586
## Item_28 0.670 0.330
## Item_29 0.523 0.477
## Item_30 0.413 0.587
# unconditional model
syntax <- "theta1 = 1-9, 19-30
           theta2 = 10-30
           COV = theta1*theta2"
itemtype <- c(rep('Rasch', 18), rep('PC1PL', 12))
mod <- mirt(dat, syntax, itemtype=itemtype)
coef(mod, simplify=TRUE)
## $items
##         a1 a2      d g u    d1     d2
## Item_1   1  0  1.313 0 1    NA     NA
## Item_2   1  0 -0.563 0 1    NA     NA
## Item_3   1  0  0.303 0 1    NA     NA
## Item_4   1  0  0.660 0 1    NA     NA
## Item_5   1  0  0.393 0 1    NA     NA
## Item_6   1  0 -0.105 0 1    NA     NA
## Item_7   1  0  1.404 0 1    NA     NA
## Item_8   1  0 -0.147 0 1    NA     NA
## Item_9   1  0  2.013 0 1    NA     NA
## Item_10  0  1 -0.141 0 1    NA     NA
## Item_11  0  1  1.227 0 1    NA     NA
## Item_12  0  1  2.350 0 1    NA     NA
## Item_13  0  1 -1.429 0 1    NA     NA
## Item_14  0  1 -0.193 0 1    NA     NA
## Item_15  0  1 -0.098 0 1    NA     NA
## Item_16  0  1  0.623 0 1    NA     NA
## Item_17  0  1 -0.286 0 1    NA     NA
## Item_18  0  1 -2.592 0 1    NA     NA
## Item_19  1  1     NA 0 1 2.869  0.013
## Item_20  1  1     NA 0 1 3.716  0.832
## Item_21  1  1     NA 0 1 3.238  1.900
## Item_22  1  1     NA 0 1 4.408 -0.175
## Item_23  1  1     NA 0 1 3.538  0.866
## Item_24  1  1     NA 0 1 2.850  1.890
## Item_25  1  1     NA 0 1 1.197 -0.137
## Item_26  1  1     NA 0 1 1.038  0.975
## Item_27  1  1     NA 0 1 1.063  1.818
## Item_28  1  1     NA 0 1 0.970 -0.138
## Item_29  1  1     NA 0 1 0.902  1.010
## Item_30  1  1     NA 0 1 1.015  1.914
## 
## $means
## theta1 theta2 
##      0      0 
## 
## $cov
##        theta1 theta2
## theta1  0.917  0.081
## theta2  0.081  0.984
data.frame(est=coef(mod, simplify=TRUE)$items, pop=data.frame(a=as, d=ds))
##         est.a1 est.a2       est.d est.g est.u    est.d1      est.d2 pop.a.1
## Item_1       1      0  1.31304391     0     1        NA          NA       1
## Item_2       1      0 -0.56333612     0     1        NA          NA       1
## Item_3       1      0  0.30308480     0     1        NA          NA       1
## Item_4       1      0  0.66014200     0     1        NA          NA       1
## Item_5       1      0  0.39264419     0     1        NA          NA       1
## Item_6       1      0 -0.10532027     0     1        NA          NA       1
## Item_7       1      0  1.40425986     0     1        NA          NA       1
## Item_8       1      0 -0.14745351     0     1        NA          NA       1
## Item_9       1      0  2.01307023     0     1        NA          NA       1
## Item_10      0      1 -0.14050780     0     1        NA          NA       1
## Item_11      0      1  1.22727239     0     1        NA          NA       0
## Item_12      0      1  2.35002537     0     1        NA          NA       0
## Item_13      0      1 -1.42945629     0     1        NA          NA       0
## Item_14      0      1 -0.19283960     0     1        NA          NA       0
## Item_15      0      1 -0.09795273     0     1        NA          NA       0
## Item_16      0      1  0.62283078     0     1        NA          NA       0
## Item_17      0      1 -0.28629164     0     1        NA          NA       0
## Item_18      0      1 -2.59215041     0     1        NA          NA       0
## Item_19      1      1          NA     0     1 2.8693781  0.01320265       1
## Item_20      1      1          NA     0     1 3.7155502  0.83156683       1
## Item_21      1      1          NA     0     1 3.2381301  1.90023689       1
## Item_22      1      1          NA     0     1 4.4079442 -0.17451472       1
## Item_23      1      1          NA     0     1 3.5380699  0.86643925       1
## Item_24      1      1          NA     0     1 2.8503541  1.88964128       1
## Item_25      1      1          NA     0     1 1.1970301 -0.13667664       1
## Item_26      1      1          NA     0     1 1.0376816  0.97507994       1
## Item_27      1      1          NA     0     1 1.0632825  1.81842973       1
## Item_28      1      1          NA     0     1 0.9702647 -0.13759053       1
## Item_29      1      1          NA     0     1 0.9016247  1.00974832       1
## Item_30      1      1          NA     0     1 1.0148298  1.91403020       1
##         pop.a.2 pop.d.d1    pop.d.d2
## Item_1        0       NA  1.37095845
## Item_2        0       NA -0.56469817
## Item_3        0       NA  0.36312841
## Item_4        0       NA  0.63286260
## Item_5        0       NA  0.40426832
## Item_6        0       NA -0.10612452
## Item_7        0       NA  1.51152200
## Item_8        0       NA -0.09465904
## Item_9        0       NA  2.01842371
## Item_10       1       NA -0.06271410
## Item_11       1       NA  1.30486965
## Item_12       1       NA  2.28664539
## Item_13       1       NA -1.38886070
## Item_14       1       NA -0.27878877
## Item_15       1       NA -0.13332134
## Item_16       1       NA  0.63595040
## Item_17       1       NA -0.28425292
## Item_18       1       NA -2.65645542
## Item_19       1        3  0.00000000
## Item_20       1        3  1.00000000
## Item_21       1        3  2.00000000
## Item_22       1        3  0.00000000
## Item_23       1        3  1.00000000
## Item_24       1        3  2.00000000
## Item_25       1        1  0.00000000
## Item_26       1        1  1.00000000
## Item_27       1        1  2.00000000
## Item_28       1        1  0.00000000
## Item_29       1        1  1.00000000
## Item_30       1        1  2.00000000
itemplot(mod, 1)
itemplot(mod, 30)
# MLTM item design, applicable only to the 'PC1PL' items (19-30)
itemdesign <- data.frame(t1_difficulty=factor(d1, labels=c('medium', 'easy')),
                         t2_difficulty=factor(d2, labels=c('hard', 'medium', 'easy')))
rownames(itemdesign) <- colnames(dat)[19:30]
itemdesign
##         t1_difficulty t2_difficulty
## Item_19          easy          hard
## Item_20          easy        medium
## Item_21          easy          easy
## Item_22          easy          hard
## Item_23          easy        medium
## Item_24          easy          easy
## Item_25        medium          hard
## Item_26        medium        medium
## Item_27        medium          easy
## Item_28        medium          hard
## Item_29        medium        medium
## Item_30        medium          easy
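# Optional sanity check (not in the original example): itemdesign rows
# are matched to items by name, so verify the alignment before fitting
stopifnot(identical(rownames(itemdesign), colnames(dat)[19:30]))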
# fit the MLTM design, leaving the first 18 items as 'Rasch' type
mltm <- mirt(dat, syntax, itemtype=itemtype, itemdesign=itemdesign,
             item.formula = list(theta1 ~ 0 + t1_difficulty,
                                 theta2 ~ 0 + t2_difficulty), SE=TRUE)
coef(mltm, simplify=TRUE)
## $items
##         theta1.t1_difficultyeasy theta1.t1_difficultymedium
## Item_1                      0.00                      0.000
## Item_2                      0.00                      0.000
## Item_3                      0.00                      0.000
## Item_4                      0.00                      0.000
## Item_5                      0.00                      0.000
## Item_6                      0.00                      0.000
## Item_7                      0.00                      0.000
## Item_8                      0.00                      0.000
## Item_9                      0.00                      0.000
## Item_10                     0.00                      0.000
## Item_11                     0.00                      0.000
## Item_12                     0.00                      0.000
## Item_13                     0.00                      0.000
## Item_14                     0.00                      0.000
## Item_15                     0.00                      0.000
## Item_16                     0.00                      0.000
## Item_17                     0.00                      0.000
## Item_18                     0.00                      0.000
## Item_19                     3.19                      0.000
## Item_20                     3.19                      0.000
## Item_21                     3.19                      0.000
## Item_22                     3.19                      0.000
## Item_23                     3.19                      0.000
## Item_24                     3.19                      0.000
## Item_25                     0.00                      1.031
## Item_26                     0.00                      1.031
## Item_27                     0.00                      1.031
## Item_28                     0.00                      1.031
## Item_29                     0.00                      1.031
## Item_30                     0.00                      1.031
##         theta2.t2_difficultyeasy theta2.t2_difficultyhard
## Item_1                     0.000                    0.000
## Item_2                     0.000                    0.000
## Item_3                     0.000                    0.000
## Item_4                     0.000                    0.000
## Item_5                     0.000                    0.000
## Item_6                     0.000                    0.000
## Item_7                     0.000                    0.000
## Item_8                     0.000                    0.000
## Item_9                     0.000                    0.000
## Item_10                    0.000                    0.000
## Item_11                    0.000                    0.000
## Item_12                    0.000                    0.000
## Item_13                    0.000                    0.000
## Item_14                    0.000                    0.000
## Item_15                    0.000                    0.000
## Item_16                    0.000                    0.000
## Item_17                    0.000                    0.000
## Item_18                    0.000                    0.000
## Item_19                    0.000                   -0.078
## Item_20                    0.000                    0.000
## Item_21                    1.857                    0.000
## Item_22                    0.000                   -0.078
## Item_23                    0.000                    0.000
## Item_24                    1.857                    0.000
## Item_25                    0.000                   -0.078
## Item_26                    0.000                    0.000
## Item_27                    1.857                    0.000
## Item_28                    0.000                   -0.078
## Item_29                    0.000                    0.000
## Item_30                    1.857                    0.000
##         theta2.t2_difficultymedium a1 a2      d g u d1 d2
## Item_1                       0.000  1  0  1.314 0 1 NA NA
## Item_2                       0.000  1  0 -0.563 0 1 NA NA
## Item_3                       0.000  1  0  0.303 0 1 NA NA
## Item_4                       0.000  1  0  0.661 0 1 NA NA
## Item_5                       0.000  1  0  0.393 0 1 NA NA
## Item_6                       0.000  1  0 -0.105 0 1 NA NA
## Item_7                       0.000  1  0  1.405 0 1 NA NA
## Item_8                       0.000  1  0 -0.147 0 1 NA NA
## Item_9                       0.000  1  0  2.014 0 1 NA NA
## Item_10                      0.000  0  1 -0.140 0 1 NA NA
## Item_11                      0.000  0  1  1.228 0 1 NA NA
## Item_12                      0.000  0  1  2.351 0 1 NA NA
## Item_13                      0.000  0  1 -1.430 0 1 NA NA
## Item_14                      0.000  0  1 -0.193 0 1 NA NA
## Item_15                      0.000  0  1 -0.098 0 1 NA NA
## Item_16                      0.000  0  1  0.623 0 1 NA NA
## Item_17                      0.000  0  1 -0.286 0 1 NA NA
## Item_18                      0.000  0  1 -2.594 0 1 NA NA
## Item_19                      0.000  1  1     NA 0 1  0  0
## Item_20                      0.924  1  1     NA 0 1  0  0
## Item_21                      0.000  1  1     NA 0 1  0  0
## Item_22                      0.000  1  1     NA 0 1  0  0
## Item_23                      0.924  1  1     NA 0 1  0  0
## Item_24                      0.000  1  1     NA 0 1  0  0
## Item_25                      0.000  1  1     NA 0 1  0  0
## Item_26                      0.924  1  1     NA 0 1  0  0
## Item_27                      0.000  1  1     NA 0 1  0  0
## Item_28                      0.000  1  1     NA 0 1  0  0
## Item_29                      0.924  1  1     NA 0 1  0  0
## Item_30                      0.000  1  1     NA 0 1  0  0
## 
## $means
## theta1 theta2 
##      0      0 
## 
## $cov
##        theta1 theta2
## theta1  0.919  0.074
## theta2  0.074  0.988
coef(mltm, printSE=TRUE)
## $Item_1
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                        0                          0
## SE                        NA                         NA
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                        0                        0
## SE                        NA                       NA
##     theta2.t2_difficultymedium a1 a2     d logit(g) logit(u)
## par                          0  1  0 1.314     -999      999
## SE                          NA NA NA 0.054       NA       NA
## 
## $Item_2
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                        0                          0
## SE                        NA                         NA
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                        0                        0
## SE                        NA                       NA
##     theta2.t2_difficultymedium a1 a2      d logit(g) logit(u)
## par                          0  1  0 -0.563     -999      999
## SE                          NA NA NA  0.049       NA       NA
## 
## $Item_3
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                        0                          0
## SE                        NA                         NA
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                        0                        0
## SE                        NA                       NA
##     theta2.t2_difficultymedium a1 a2     d logit(g) logit(u)
## par                          0  1  0 0.303     -999      999
## SE                          NA NA NA 0.048       NA       NA
## 
## $Item_4
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                        0                          0
## SE                        NA                         NA
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                        0                        0
## SE                        NA                       NA
##     theta2.t2_difficultymedium a1 a2     d logit(g) logit(u)
## par                          0  1  0 0.661     -999      999
## SE                          NA NA NA 0.049       NA       NA
## 
## $Item_5
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                        0                          0
## SE                        NA                         NA
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                        0                        0
## SE                        NA                       NA
##     theta2.t2_difficultymedium a1 a2     d logit(g) logit(u)
## par                          0  1  0 0.393     -999      999
## SE                          NA NA NA 0.048       NA       NA
## 
## $Item_6
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                        0                          0
## SE                        NA                         NA
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                        0                        0
## SE                        NA                       NA
##     theta2.t2_difficultymedium a1 a2      d logit(g) logit(u)
## par                          0  1  0 -0.105     -999      999
## SE                          NA NA NA  0.048       NA       NA
## 
## $Item_7
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                        0                          0
## SE                        NA                         NA
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                        0                        0
## SE                        NA                       NA
##     theta2.t2_difficultymedium a1 a2     d logit(g) logit(u)
## par                          0  1  0 1.405     -999      999
## SE                          NA NA NA 0.055       NA       NA
## 
## $Item_8
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                        0                          0
## SE                        NA                         NA
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                        0                        0
## SE                        NA                       NA
##     theta2.t2_difficultymedium a1 a2      d logit(g) logit(u)
## par                          0  1  0 -0.147     -999      999
## SE                          NA NA NA  0.048       NA       NA
## 
## $Item_9
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                        0                          0
## SE                        NA                         NA
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                        0                        0
## SE                        NA                       NA
##     theta2.t2_difficultymedium a1 a2     d logit(g) logit(u)
## par                          0  1  0 2.014     -999      999
## SE                          NA NA NA 0.063       NA       NA
## 
## $Item_10
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                        0                          0
## SE                        NA                         NA
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                        0                        0
## SE                        NA                       NA
##     theta2.t2_difficultymedium a1 a2      d logit(g) logit(u)
## par                          0  0  1 -0.140     -999      999
## SE                          NA NA NA  0.048       NA       NA
## 
## $Item_11
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                        0                          0
## SE                        NA                         NA
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                        0                        0
## SE                        NA                       NA
##     theta2.t2_difficultymedium a1 a2     d logit(g) logit(u)
## par                          0  0  1 1.228     -999      999
## SE                          NA NA NA 0.053       NA       NA
## 
## $Item_12
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                        0                          0
## SE                        NA                         NA
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                        0                        0
## SE                        NA                       NA
##     theta2.t2_difficultymedium a1 a2     d logit(g) logit(u)
## par                          0  0  1 2.351     -999      999
## SE                          NA NA NA 0.069       NA       NA
## 
## $Item_13
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                        0                          0
## SE                        NA                         NA
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                        0                        0
## SE                        NA                       NA
##     theta2.t2_difficultymedium a1 a2      d logit(g) logit(u)
## par                          0  0  1 -1.430     -999      999
## SE                          NA NA NA  0.055       NA       NA
## 
## $Item_14
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                        0                          0
## SE                        NA                         NA
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                        0                        0
## SE                        NA                       NA
##     theta2.t2_difficultymedium a1 a2      d logit(g) logit(u)
## par                          0  0  1 -0.193     -999      999
## SE                          NA NA NA  0.048       NA       NA
## 
## $Item_15
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                        0                          0
## SE                        NA                         NA
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                        0                        0
## SE                        NA                       NA
##     theta2.t2_difficultymedium a1 a2      d logit(g) logit(u)
## par                          0  0  1 -0.098     -999      999
## SE                          NA NA NA  0.048       NA       NA
## 
## $Item_16
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                        0                          0
## SE                        NA                         NA
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                        0                        0
## SE                        NA                       NA
##     theta2.t2_difficultymedium a1 a2     d logit(g) logit(u)
## par                          0  0  1 0.623     -999      999
## SE                          NA NA NA 0.050       NA       NA
## 
## $Item_17
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                        0                          0
## SE                        NA                         NA
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                        0                        0
## SE                        NA                       NA
##     theta2.t2_difficultymedium a1 a2      d logit(g) logit(u)
## par                          0  0  1 -0.286     -999      999
## SE                          NA NA NA  0.049       NA       NA
## 
## $Item_18
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                        0                          0
## SE                        NA                         NA
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                        0                        0
## SE                        NA                       NA
##     theta2.t2_difficultymedium a1 a2      d logit(g) logit(u)
## par                          0  0  1 -2.594     -999      999
## SE                          NA NA NA  0.074       NA       NA
## 
## $Item_19
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                    3.190                      1.031
## SE                     0.145                      0.048
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                    1.857                   -0.078
## SE                     0.065                    0.037
##     theta2.t2_difficultymedium a1 a2 d1 d2 logit(g) logit(u)
## par                      0.924  1  1  0  0     -999      999
## SE                       0.046 NA NA NA NA       NA       NA
## 
## $Item_20
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                    3.190                      1.031
## SE                     0.145                      0.048
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                    1.857                   -0.078
## SE                     0.065                    0.037
##     theta2.t2_difficultymedium a1 a2 d1 d2 logit(g) logit(u)
## par                      0.924  1  1  0  0     -999      999
## SE                       0.046 NA NA NA NA       NA       NA
## 
## $Item_21
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                    3.190                      1.031
## SE                     0.145                      0.048
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                    1.857                   -0.078
## SE                     0.065                    0.037
##     theta2.t2_difficultymedium a1 a2 d1 d2 logit(g) logit(u)
## par                      0.924  1  1  0  0     -999      999
## SE                       0.046 NA NA NA NA       NA       NA
## 
## $Item_22
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                    3.190                      1.031
## SE                     0.145                      0.048
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                    1.857                   -0.078
## SE                     0.065                    0.037
##     theta2.t2_difficultymedium a1 a2 d1 d2 logit(g) logit(u)
## par                      0.924  1  1  0  0     -999      999
## SE                       0.046 NA NA NA NA       NA       NA
## 
## $Item_23
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                    3.190                      1.031
## SE                     0.145                      0.048
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                    1.857                   -0.078
## SE                     0.065                    0.037
##     theta2.t2_difficultymedium a1 a2 d1 d2 logit(g) logit(u)
## par                      0.924  1  1  0  0     -999      999
## SE                       0.046 NA NA NA NA       NA       NA
## 
## $Item_24
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                    3.190                      1.031
## SE                     0.145                      0.048
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                    1.857                   -0.078
## SE                     0.065                    0.037
##     theta2.t2_difficultymedium a1 a2 d1 d2 logit(g) logit(u)
## par                      0.924  1  1  0  0     -999      999
## SE                       0.046 NA NA NA NA       NA       NA
## 
## $Item_25
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                    3.190                      1.031
## SE                     0.145                      0.048
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                    1.857                   -0.078
## SE                     0.065                    0.037
##     theta2.t2_difficultymedium a1 a2 d1 d2 logit(g) logit(u)
## par                      0.924  1  1  0  0     -999      999
## SE                       0.046 NA NA NA NA       NA       NA
## 
## $Item_26
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                    3.190                      1.031
## SE                     0.145                      0.048
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                    1.857                   -0.078
## SE                     0.065                    0.037
##     theta2.t2_difficultymedium a1 a2 d1 d2 logit(g) logit(u)
## par                      0.924  1  1  0  0     -999      999
## SE                       0.046 NA NA NA NA       NA       NA
## 
## $Item_27
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                    3.190                      1.031
## SE                     0.145                      0.048
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                    1.857                   -0.078
## SE                     0.065                    0.037
##     theta2.t2_difficultymedium a1 a2 d1 d2 logit(g) logit(u)
## par                      0.924  1  1  0  0     -999      999
## SE                       0.046 NA NA NA NA       NA       NA
## 
## $Item_28
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                    3.190                      1.031
## SE                     0.145                      0.048
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                    1.857                   -0.078
## SE                     0.065                    0.037
##     theta2.t2_difficultymedium a1 a2 d1 d2 logit(g) logit(u)
## par                      0.924  1  1  0  0     -999      999
## SE                       0.046 NA NA NA NA       NA       NA
## 
## $Item_29
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                    3.190                      1.031
## SE                     0.145                      0.048
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                    1.857                   -0.078
## SE                     0.065                    0.037
##     theta2.t2_difficultymedium a1 a2 d1 d2 logit(g) logit(u)
## par                      0.924  1  1  0  0     -999      999
## SE                       0.046 NA NA NA NA       NA       NA
## 
## $Item_30
##     theta1.t1_difficultyeasy theta1.t1_difficultymedium
## par                    3.190                      1.031
## SE                     0.145                      0.048
##     theta2.t2_difficultyeasy theta2.t2_difficultyhard
## par                    1.857                   -0.078
## SE                     0.065                    0.037
##     theta2.t2_difficultymedium a1 a2 d1 d2 logit(g) logit(u)
## par                      0.924  1  1  0  0     -999      999
## SE                       0.046 NA NA NA NA       NA       NA
## 
## $GroupPars
##     MEAN_1 MEAN_2 COV_11 COV_21 COV_22
## par      0      0  0.919  0.074  0.988
## SE      NA     NA  0.047  0.029  0.045
anova(mltm, mod) # similar fit; hence the more constrained MLTM version is preferred
##           AIC    SABIC       HQ      BIC    logLik     X2 df     p
## mltm 87789.31 87858.12 87844.28 87940.73 -43868.65                
## mod  87810.34 87929.44 87905.48 88072.42 -43860.17 16.972 19 0.592
M2(mltm) # limited-information goodness-of-fit statistics
##             M2  df            p      RMSEA    RMSEA_5   RMSEA_95      SRMSR
## stats 724.3172 439 2.220446e-16 0.01612682 0.01400711 0.01819185 0.03054497
##             TLI       CFI
## stats 0.9760504 0.9758302
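# Per-item fit statistics complement the global M2 result above
# (S_X2 is the default statistic)
itemfit(mltm)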
head(personfit(mltm))
##      outfit   z.outfit     infit    z.infit         Zh
## 1 0.4099099 -2.3020946 0.5059333 -2.9261987  2.2862271
## 2 1.9123381  2.2520122 1.2180485  1.0684326 -1.5047068
## 3 0.6286134 -0.8426796 0.7783420 -0.9888075  1.0049593
## 4 0.7758581 -0.9554195 0.8563429 -0.8899482  0.9411026
## 5 0.7022092 -0.8066743 0.8190887 -0.9254924  0.9442140
## 6 0.4515079 -1.3679129 0.5692678 -1.6827417  1.4501640
residuals(mltm)
## LD matrix (lower triangle) and standardized residual correlations (upper triangle)
## 
## Upper triangle summary:
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  -0.064  -0.024  -0.007   0.000   0.019   0.174 
## 
##         Item_1 Item_2 Item_3 Item_4 Item_5 Item_6 Item_7 Item_8 Item_9 Item_10
## Item_1         -0.010 -0.027 -0.040 -0.031  0.029  0.000  0.003 -0.012   0.098
## Item_2   0.229         0.008  0.013  0.041  0.049  0.015  0.003 -0.004   0.142
## Item_3   1.788  0.156        -0.025 -0.007 -0.012  0.012 -0.014 -0.015   0.174
## Item_4   3.964  0.437  1.577        -0.006  0.014 -0.014  0.008  0.018   0.134
## Item_5   2.393  4.302  0.134  0.085         0.021  0.010 -0.015 -0.005   0.154
## Item_6   2.036  5.900  0.344  0.505  1.137         0.005 -0.012 -0.003   0.127
## Item_7   0.000  0.581  0.363  0.498  0.261  0.064         0.005  0.006   0.134
## Item_8   0.029  0.019  0.490  0.155  0.560  0.341  0.075        -0.011   0.166
## Item_9   0.389  0.035  0.534  0.827  0.072  0.029  0.097  0.280          0.093
## Item_10 23.809 50.189 75.667 44.656 59.645 40.070 45.056 68.776 21.841        
## Item_11  0.009  0.716  0.194  0.626  0.034  0.175  0.028  0.385  0.126   1.843
## Item_12  0.753  0.236  3.918  0.095  1.109  0.582  0.818  1.181  5.494   0.011
## Item_13  1.267  4.570  0.152  0.017  0.396  2.507  3.411  0.105  2.038   3.111
## Item_14  7.361  0.214  0.172  0.934  1.965  4.761  4.570  0.319  3.784   1.714
## Item_15  0.081  0.605  1.939  0.315  0.397  0.037  0.590  0.408  0.605   0.871
## Item_16  1.749  0.256  2.779  0.965  1.745  0.134  2.493  0.001  3.477   0.281
## Item_17  3.492  0.001  0.804  0.708  0.094  2.145  3.168  6.440  0.076   6.330
## Item_18  1.296  0.205  0.068  0.911  0.003  0.029  0.301  0.309  0.158   0.447
## Item_19  1.950  1.095  0.940  1.227  3.901  2.358  0.879  2.427  1.078   0.912
## Item_20  0.821  1.556  0.836  0.291  0.071  0.862  1.370  0.069  6.617   0.072
## Item_21  3.887  1.725  3.272  1.120  1.616  0.749  1.266  0.857  0.744   4.508
## Item_22  2.670  0.247  0.016  4.552  0.345  0.736  0.650  0.010  0.030   1.904
## Item_23  1.723  2.043  0.011  0.415  0.958  0.032  0.670  0.042  0.005   3.678
## Item_24  6.039  2.418  3.100  6.560  2.357  5.733  2.198  2.200  4.635   7.501
## Item_25  0.562  0.795  0.373  4.838  1.043  3.118  0.468  1.310  0.631  11.477
## Item_26  4.256  1.169  0.889  0.700  1.184  0.633  3.494  1.488  3.177  20.873
## Item_27  0.025  0.170  0.067  0.374  2.965  0.050  0.179  0.187  2.212  30.192
## Item_28  2.586  7.102  3.183  2.042  2.136  2.200  2.730  3.809  2.120  13.578
## Item_29  1.392  0.557  1.223  0.458  0.710  1.588  2.200  0.723  3.080  11.310
## Item_30  0.831  1.556  1.214  0.302  0.109  0.449  0.573  0.340  0.424  20.892
##         Item_11 Item_12 Item_13 Item_14 Item_15 Item_16 Item_17 Item_18 Item_19
## Item_1    0.002  -0.017  -0.023  -0.054  -0.006  -0.026  -0.037  -0.023  -0.028
## Item_2   -0.017  -0.010  -0.043  -0.009  -0.016  -0.010   0.000  -0.009   0.021
## Item_3    0.009  -0.040   0.008  -0.008   0.028  -0.033  -0.018  -0.005  -0.019
## Item_4   -0.016  -0.006  -0.003  -0.019  -0.011  -0.020  -0.017   0.019   0.022
## Item_5   -0.004  -0.021  -0.013  -0.028  -0.013  -0.026   0.006  -0.001   0.040
## Item_6    0.008  -0.015  -0.032  -0.044   0.004   0.007  -0.029  -0.003   0.031
## Item_7    0.003   0.018  -0.037  -0.043  -0.015  -0.032  -0.036   0.011   0.019
## Item_8    0.012  -0.022   0.006  -0.011   0.013   0.001  -0.051  -0.011   0.031
## Item_9    0.007  -0.047  -0.029  -0.039  -0.016  -0.037   0.006   0.008   0.021
## Item_10   0.027   0.002   0.035  -0.026   0.019   0.011  -0.050   0.013  -0.019
## Item_11          -0.022   0.022  -0.006   0.009  -0.009  -0.027   0.006  -0.019
## Item_12   1.214          -0.008  -0.027   0.011  -0.003  -0.014  -0.028  -0.020
## Item_13   1.222   0.141          -0.013   0.035   0.017  -0.008  -0.027  -0.023
## Item_14   0.096   1.802   0.404          -0.014   0.018  -0.013  -0.038  -0.042
## Item_15   0.208   0.293   3.129   0.460           0.021  -0.011  -0.026   0.029
## Item_16   0.208   0.018   0.759   0.815   1.060          -0.007  -0.003  -0.019
## Item_17   1.773   0.461   0.180   0.392   0.290   0.140           0.005  -0.023
## Item_18   0.100   1.944   1.860   3.653   1.691   0.020   0.064          -0.033
## Item_19   0.913   1.015   1.284   4.468   2.033   0.859   1.304   2.695        
## Item_20   1.446   0.039   0.045   2.772   4.411   0.072   0.042   3.259   0.949
## Item_21   0.747   0.875   1.773   1.730   0.759   1.420   0.979   1.137   5.069
## Item_22   0.717   0.201   0.020   0.388   5.866   0.030   0.071   1.091   1.009
## Item_23   8.564   1.754   4.656   0.443   1.035   0.005   0.173   1.152   0.997
## Item_24   3.006   2.617   2.265   2.175   2.826   2.329   2.261   4.236   6.280
## Item_25   0.480   0.827   0.946   0.373   0.576   1.593   2.431   1.596   1.146
## Item_26   1.356   0.825   1.879   2.200   0.954   1.011   5.500   4.288   1.332
## Item_27   0.144   1.132   0.112   0.783   0.837   1.936   4.920   3.868   1.314
## Item_28   2.097   4.981   2.166   2.329   2.583   3.338   2.228   2.049   3.566
## Item_29   0.736   0.533   1.869   2.544   2.516   1.394  10.093   0.484   5.562
## Item_30   0.221   0.086   5.871   2.007   0.460   0.201   5.445   2.127   0.862
##         Item_20 Item_21 Item_22 Item_23 Item_24 Item_25 Item_26 Item_27 Item_28
## Item_1   -0.018   0.039  -0.033  -0.026   0.049   0.015   0.041   0.003  -0.032
## Item_2   -0.025  -0.026   0.010   0.029   0.031   0.018   0.022  -0.008   0.053
## Item_3   -0.018  -0.036   0.003   0.002   0.035   0.012   0.019   0.005  -0.036
## Item_4   -0.011   0.021   0.043   0.013   0.051  -0.044   0.017   0.012  -0.029
## Item_5   -0.005  -0.025   0.012  -0.020   0.031  -0.020   0.022  -0.034   0.029
## Item_6    0.019  -0.017  -0.017   0.004   0.048   0.035  -0.016  -0.004   0.030
## Item_7   -0.023   0.023  -0.016  -0.016   0.030   0.014  -0.037  -0.008   0.033
## Item_8    0.005  -0.019   0.002   0.004   0.030  -0.023  -0.024   0.009  -0.039
## Item_9   -0.051  -0.017  -0.003  -0.001   0.043  -0.016  -0.036   0.030   0.029
## Item_10  -0.005   0.042   0.028   0.038   0.055   0.068   0.091   0.110   0.074
## Item_11   0.024  -0.017  -0.017   0.059   0.035  -0.014   0.023   0.008   0.029
## Item_12  -0.004  -0.019   0.009   0.026  -0.032   0.018   0.018   0.021  -0.045
## Item_13  -0.004  -0.027  -0.003   0.043  -0.030  -0.019  -0.027   0.007  -0.029
## Item_14  -0.033  -0.026  -0.012  -0.013   0.029  -0.012  -0.030  -0.018  -0.031
## Item_15   0.042  -0.017   0.048   0.020  -0.034   0.015  -0.020   0.018  -0.032
## Item_16   0.005  -0.024   0.003   0.001   0.031   0.025   0.020  -0.028  -0.037
## Item_17  -0.004   0.020   0.005  -0.008  -0.030  -0.031  -0.047  -0.044  -0.030
## Item_18  -0.036   0.021   0.021  -0.021   0.041  -0.025  -0.041  -0.039  -0.029
## Item_19   0.019  -0.045   0.020   0.020  -0.050   0.021  -0.023  -0.023  -0.038
## Item_20           0.021   0.029   0.011  -0.045  -0.015  -0.019  -0.026  -0.036
## Item_21   1.097          -0.018   0.022   0.041   0.031  -0.023   0.027   0.041
## Item_22   2.114   0.778           0.023   0.046   0.035   0.016   0.005  -0.033
## Item_23   0.304   1.176   1.318           0.036  -0.019   0.022  -0.003   0.037
## Item_24   5.068   4.269   5.374   3.166           0.034   0.050  -0.030  -0.047
## Item_25   0.569   2.467   3.060   0.871   2.822           0.019   0.020  -0.039
## Item_26   0.944   1.268   0.663   1.248   6.316   0.862           0.032   0.036
## Item_27   1.693   1.795   0.068   0.021   2.264   0.969   2.522          -0.033
## Item_28   3.193   4.110   2.716   3.385   5.434   3.775   3.228   2.749        
## Item_29   0.841   1.320   2.280   2.795   4.367   1.522   2.260   5.717   4.057
## Item_30   6.126   0.904   0.633   3.421   3.815   0.664   1.371   0.140   3.142
##         Item_29 Item_30
## Item_1   -0.024  -0.018
## Item_2   -0.015   0.025
## Item_3    0.022   0.022
## Item_4    0.014  -0.011
## Item_5    0.017   0.007
## Item_6   -0.025  -0.013
## Item_7   -0.030  -0.015
## Item_8   -0.017  -0.012
## Item_9   -0.035  -0.013
## Item_10   0.067   0.091
## Item_11  -0.017  -0.009
## Item_12  -0.015  -0.006
## Item_13  -0.027  -0.048
## Item_14  -0.032  -0.028
## Item_15  -0.032   0.014
## Item_16  -0.024  -0.009
## Item_17  -0.064  -0.047
## Item_18  -0.014  -0.029
## Item_19  -0.047  -0.019
## Item_20  -0.018  -0.050
## Item_21  -0.023  -0.019
## Item_22  -0.030   0.016
## Item_23  -0.033   0.037
## Item_24  -0.042  -0.039
## Item_25  -0.025   0.016
## Item_26  -0.030   0.023
## Item_27  -0.048   0.007
## Item_28  -0.040  -0.035
## Item_29          -0.029
## Item_30   2.120
# EAP estimates
fscores(mltm) |> head()
##           theta1     theta2
## [1,] -2.00196417 -0.4814745
## [2,]  1.13845080  0.3697976
## [3,]  0.14931598  1.7928620
## [4,] -1.30231550 -0.2349520
## [5,] -0.09143038  1.4551914
## [6,]  1.47204804  1.1222929
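# A possible extension of the example above (not run in the original help
# page): fscores() can also return each respondent's posterior standard
# error via the full.scores.SE argument, and supports alternative
# estimators through method = "MAP" or method = "ML" in place of the
# default EAP
fscores(mltm, method = "EAP", full.scores.SE = TRUE) |> head()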
## End(No test)