Exercise 1

This exercise involves setting up mirt() to estimate IRT models with the data provided, and applying some useful generic functions to the estimated objects. The data are available in the Exercise_01.Rdata file.

Load the dataset into R using the load() function and inspect the objects it defines. The object named data represents an unscored multiple-choice test, where each number indicates which category was selected, while the key object provides the scoring key (i.e., indicates which category is correct).

  1. Use key2binary(), along with the key object, to convert data into a new object that indicates correct responses (call it scored.data). Find some general statistical information about this test (means, sds, total scores, etc.) to get a feel for the scored.data object (I recommend using psych::describe()).
library(mirt)
## Loading required package: stats4
## Loading required package: lattice
load('Exercise_01.Rdata')
head(data)
##      Item_1 Item_2 Item_3 Item_4 Item_5 Item_6 Item_7 Item_8 Item_9
## [1,]      2      2      3      4      1      5      2      1      2
## [2,]      3      3      4      1      5      5      5      5      3
## [3,]      3      2      3      5      5      4      1      5      5
## [4,]      2      5      2      3      5      1      4      2      2
## [5,]      3      4      4      5      5      3      3      1      5
## [6,]      4      2      1      5      5      4      5      2      4
##      Item_10 Item_11 Item_12 Item_13 Item_14 Item_15 Item_16 Item_17
## [1,]       2       3       1       5       3       2       4       5
## [2,]       4       5       4       4       3       5       3       5
## [3,]       1       5       5       1       3       2       1       4
## [4,]       4       1       1       5       3       1       1       5
## [5,]       3       5       3       1       5       5       5       3
## [6,]       1       5       3       1       3       5       5       3
##      Item_18 Item_19 Item_20 Item_21 Item_22 Item_23 Item_24 Item_25
## [1,]       5       5       1       4       5       4       1       1
## [2,]       1       4       4       3       2       2       5       1
## [3,]       1       4       4       5       1       3       3       3
## [4,]       4       5       3       5       3       5       4       2
## [5,]       1       1       2       1       2       3       2       3
## [6,]       1       4       2       1       2       1       2       4
##      Item_26 Item_27 Item_28 Item_29 Item_30 Item_31 Item_32 Item_33
## [1,]       2       1       5       4       1       2       1       2
## [2,]       5       4       3       3       4       4       2       4
## [3,]       3       4       1       2       3       1       5       4
## [4,]       4       1       4       1       4       4       4       1
## [5,]       3       1       1       1       3       1       5       1
## [6,]       5       2       1       2       3       1       4       4
##      Item_34 Item_35 Item_36 Item_37 Item_38 Item_39 Item_40 Item_41
## [1,]       3       3       1       3       2       3       3       2
## [2,]       1       1       5       5       1       2       4       5
## [3,]       1       1       5       1       3       4       5       5
## [4,]       5       3       1       2       1       4       1       4
## [5,]       1       1       5       1       3       2       1       5
## [6,]       1       1       2       4       3       4       3       5
##      Item_42 Item_43 Item_44 Item_45 Item_46 Item_47 Item_48 Item_49
## [1,]       3       1       4       1       1       3       1       5
## [2,]       3       4       5       5       2       5       2       5
## [3,]       2       4       4       1       3       2       3       2
## [4,]       1       1       3       2       5       3       4       1
## [5,]       3       4       4       5       3       2       3       2
## [6,]       4       4       4       5       1       2       4       2
##      Item_50
## [1,]       2
## [2,]       2
## [3,]       2
## [4,]       3
## [5,]       2
## [6,]       2
print(key)
##  [1] 3 2 4 5 5 5 5 5 1 1 5 3 1 3 5 5 4 1 4 4 4 2 3 3 3 3 4 1 2 3 1 4 4 1 1
## [36] 5 1 3 4 5 5 1 4 4 5 1 5 3 2 2
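# optional sanity check (not part of the original exercise): the key should
# contain exactly one entry per item
stopifnot(length(key) == ncol(data))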
scored.data <- key2binary(data, key)
head(scored.data)
##      Item_1 Item_2 Item_3 Item_4 Item_5 Item_6 Item_7 Item_8 Item_9
## [1,]      0      1      0      0      0      1      0      0      0
## [2,]      1      0      1      0      1      1      1      1      0
## [3,]      1      1      0      1      1      0      0      1      0
## [4,]      0      0      0      0      1      0      0      0      0
## [5,]      1      0      1      1      1      0      0      0      0
## [6,]      0      1      0      1      1      0      1      0      0
##      Item_10 Item_11 Item_12 Item_13 Item_14 Item_15 Item_16 Item_17
## [1,]       0       0       0       0       1       0       0       0
## [2,]       0       1       0       0       1       1       0       0
## [3,]       1       1       0       1       1       0       0       1
## [4,]       0       0       0       0       1       0       0       0
## [5,]       0       1       1       1       0       1       1       0
## [6,]       1       1       1       1       1       1       1       0
##      Item_18 Item_19 Item_20 Item_21 Item_22 Item_23 Item_24 Item_25
## [1,]       0       0       0       1       0       0       0       0
## [2,]       1       1       1       0       1       0       0       0
## [3,]       1       1       1       0       0       1       1       1
## [4,]       0       0       0       0       0       0       0       0
## [5,]       1       0       0       0       1       1       0       1
## [6,]       1       1       0       0       1       0       0       0
##      Item_26 Item_27 Item_28 Item_29 Item_30 Item_31 Item_32 Item_33
## [1,]       0       0       0       0       0       0       0       0
## [2,]       0       1       0       0       0       0       0       1
## [3,]       1       1       1       1       1       1       0       1
## [4,]       0       0       0       0       0       0       1       0
## [5,]       1       0       1       0       1       1       0       0
## [6,]       0       0       1       1       1       1       1       1
##      Item_34 Item_35 Item_36 Item_37 Item_38 Item_39 Item_40 Item_41
## [1,]       0       0       0       0       0       0       0       0
## [2,]       1       1       1       0       0       0       0       1
## [3,]       1       1       1       1       1       1       1       1
## [4,]       0       0       0       0       0       1       0       0
## [5,]       1       1       1       1       1       0       0       1
## [6,]       1       1       0       0       1       1       0       1
##      Item_42 Item_43 Item_44 Item_45 Item_46 Item_47 Item_48 Item_49
## [1,]       0       0       1       0       1       0       0       0
## [2,]       0       1       0       1       0       1       0       0
## [3,]       0       1       1       0       0       0       1       1
## [4,]       1       0       0       0       0       0       0       0
## [5,]       0       1       1       1       0       0       1       1
## [6,]       0       1       1       1       1       0       0       1
##      Item_50
## [1,]       1
## [2,]       1
## [3,]       1
## [4,]       0
## [5,]       1
## [6,]       1
psych::describe(scored.data)
##         vars    n mean   sd median trimmed  mad min max range  skew
## Item_1     1 2500 0.39 0.49    0.0    0.36 0.00   0   1     1  0.47
## Item_2     2 2500 0.43 0.50    0.0    0.42 0.00   0   1     1  0.27
## Item_3     3 2500 0.63 0.48    1.0    0.66 0.00   0   1     1 -0.53
## Item_4     4 2500 0.47 0.50    0.0    0.47 0.00   0   1     1  0.10
## Item_5     5 2500 0.56 0.50    1.0    0.58 0.00   0   1     1 -0.25
## Item_6     6 2500 0.48 0.50    0.0    0.47 0.00   0   1     1  0.08
## Item_7     7 2500 0.58 0.49    1.0    0.60 0.00   0   1     1 -0.34
## Item_8     8 2500 0.34 0.47    0.0    0.30 0.00   0   1     1  0.67
## Item_9     9 2500 0.46 0.50    0.0    0.44 0.00   0   1     1  0.18
## Item_10   10 2500 0.50 0.50    0.5    0.50 0.74   0   1     1  0.00
## Item_11   11 2500 0.54 0.50    1.0    0.55 0.00   0   1     1 -0.16
## Item_12   12 2500 0.49 0.50    0.0    0.48 0.00   0   1     1  0.06
## Item_13   13 2500 0.48 0.50    0.0    0.47 0.00   0   1     1  0.09
## Item_14   14 2500 0.63 0.48    1.0    0.66 0.00   0   1     1 -0.54
## Item_15   15 2500 0.56 0.50    1.0    0.57 0.00   0   1     1 -0.23
## Item_16   16 2500 0.46 0.50    0.0    0.44 0.00   0   1     1  0.18
## Item_17   17 2500 0.50 0.50    0.5    0.50 0.74   0   1     1  0.00
## Item_18   18 2500 0.56 0.50    1.0    0.57 0.00   0   1     1 -0.24
## Item_19   19 2500 0.34 0.47    0.0    0.30 0.00   0   1     1  0.66
## Item_20   20 2500 0.58 0.49    1.0    0.61 0.00   0   1     1 -0.34
## Item_21   21 2500 0.65 0.48    1.0    0.68 0.00   0   1     1 -0.62
## Item_22   22 2500 0.42 0.49    0.0    0.40 0.00   0   1     1  0.31
## Item_23   23 2500 0.49 0.50    0.0    0.49 0.00   0   1     1  0.02
## Item_24   24 2500 0.34 0.47    0.0    0.30 0.00   0   1     1  0.69
## Item_25   25 2500 0.48 0.50    0.0    0.47 0.00   0   1     1  0.10
## Item_26   26 2500 0.51 0.50    1.0    0.51 0.00   0   1     1 -0.03
## Item_27   27 2500 0.42 0.49    0.0    0.40 0.00   0   1     1  0.31
## Item_28   28 2500 0.50 0.50    0.0    0.50 0.00   0   1     1  0.01
## Item_29   29 2500 0.42 0.49    0.0    0.40 0.00   0   1     1  0.32
## Item_30   30 2500 0.49 0.50    0.0    0.49 0.00   0   1     1  0.03
## Item_31   31 2500 0.48 0.50    0.0    0.47 0.00   0   1     1  0.09
## Item_32   32 2500 0.56 0.50    1.0    0.57 0.00   0   1     1 -0.24
## Item_33   33 2500 0.56 0.50    1.0    0.57 0.00   0   1     1 -0.24
## Item_34   34 2500 0.64 0.48    1.0    0.68 0.00   0   1     1 -0.59
## Item_35   35 2500 0.59 0.49    1.0    0.61 0.00   0   1     1 -0.36
## Item_36   36 2500 0.46 0.50    0.0    0.45 0.00   0   1     1  0.15
## Item_37   37 2500 0.49 0.50    0.0    0.49 0.00   0   1     1  0.04
## Item_38   38 2500 0.67 0.47    1.0    0.72 0.00   0   1     1 -0.74
## Item_39   39 2500 0.39 0.49    0.0    0.36 0.00   0   1     1  0.47
## Item_40   40 2500 0.50 0.50    0.0    0.50 0.00   0   1     1  0.01
## Item_41   41 2500 0.68 0.47    1.0    0.73 0.00   0   1     1 -0.79
## Item_42   42 2500 0.17 0.37    0.0    0.09 0.00   0   1     1  1.77
## Item_43   43 2500 0.41 0.49    0.0    0.38 0.00   0   1     1  0.38
## Item_44   44 2500 0.51 0.50    1.0    0.51 0.00   0   1     1 -0.04
## Item_45   45 2500 0.40 0.49    0.0    0.38 0.00   0   1     1  0.40
## Item_46   46 2500 0.37 0.48    0.0    0.34 0.00   0   1     1  0.53
## Item_47   47 2500 0.48 0.50    0.0    0.48 0.00   0   1     1  0.07
## Item_48   48 2500 0.38 0.49    0.0    0.35 0.00   0   1     1  0.50
## Item_49   49 2500 0.45 0.50    0.0    0.44 0.00   0   1     1  0.18
## Item_50   50 2500 0.64 0.48    1.0    0.67 0.00   0   1     1 -0.58
##         kurtosis   se
## Item_1     -1.78 0.01
## Item_2     -1.93 0.01
## Item_3     -1.72 0.01
## Item_4     -1.99 0.01
## Item_5     -1.94 0.01
## Item_6     -1.99 0.01
## Item_7     -1.89 0.01
## Item_8     -1.55 0.01
## Item_9     -1.97 0.01
## Item_10    -2.00 0.01
## Item_11    -1.98 0.01
## Item_12    -2.00 0.01
## Item_13    -1.99 0.01
## Item_14    -1.71 0.01
## Item_15    -1.95 0.01
## Item_16    -1.97 0.01
## Item_17    -2.00 0.01
## Item_18    -1.94 0.01
## Item_19    -1.56 0.01
## Item_20    -1.88 0.01
## Item_21    -1.62 0.01
## Item_22    -1.90 0.01
## Item_23    -2.00 0.01
## Item_24    -1.53 0.01
## Item_25    -1.99 0.01
## Item_26    -2.00 0.01
## Item_27    -1.90 0.01
## Item_28    -2.00 0.01
## Item_29    -1.90 0.01
## Item_30    -2.00 0.01
## Item_31    -1.99 0.01
## Item_32    -1.94 0.01
## Item_33    -1.95 0.01
## Item_34    -1.65 0.01
## Item_35    -1.87 0.01
## Item_36    -1.98 0.01
## Item_37    -2.00 0.01
## Item_38    -1.46 0.01
## Item_39    -1.78 0.01
## Item_40    -2.00 0.01
## Item_41    -1.38 0.01
## Item_42     1.14 0.01
## Item_43    -1.86 0.01
## Item_44    -2.00 0.01
## Item_45    -1.84 0.01
## Item_46    -1.71 0.01
## Item_47    -2.00 0.01
## Item_48    -1.75 0.01
## Item_49    -1.97 0.01
## Item_50    -1.66 0.01
total <- rowSums(scored.data)
histogram(~total, breaks=50)
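# a quick numeric summary of the total scores complements the histogram
# (optional sketch; output omitted)
psych::describe(total)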

  2. Fit one- and two-factor models to the data, using the 2PL model for each item, and compare them using anova(). Which model fits better, and why (keep in mind that the models are not nested)?
unidim <- mirt(scored.data, 1)
## 
Iteration: 1, Log-Lik: -74587.878, Max-Change: 1.76300
Iteration: 2, Log-Lik: -73150.518, Max-Change: 0.26340
Iteration: 3, Log-Lik: -73098.024, Max-Change: 0.06889
Iteration: 4, Log-Lik: -73076.919, Max-Change: 0.04955
Iteration: 5, Log-Lik: -73064.632, Max-Change: 0.03444
Iteration: 6, Log-Lik: -73057.278, Max-Change: 0.02889
Iteration: 7, Log-Lik: -73052.735, Max-Change: 0.02178
Iteration: 8, Log-Lik: -73049.904, Max-Change: 0.01771
Iteration: 9, Log-Lik: -73048.083, Max-Change: 0.01523
Iteration: 10, Log-Lik: -73044.881, Max-Change: 0.00629
Iteration: 11, Log-Lik: -73044.801, Max-Change: 0.00319
Iteration: 12, Log-Lik: -73044.755, Max-Change: 0.00245
Iteration: 13, Log-Lik: -73044.659, Max-Change: 0.00081
Iteration: 14, Log-Lik: -73044.657, Max-Change: 0.00028
Iteration: 15, Log-Lik: -73044.656, Max-Change: 0.00025
Iteration: 16, Log-Lik: -73044.654, Max-Change: 0.00019
Iteration: 17, Log-Lik: -73044.654, Max-Change: 0.00018
Iteration: 18, Log-Lik: -73044.654, Max-Change: 0.00017
Iteration: 19, Log-Lik: -73044.652, Max-Change: 0.00009
multidim <- mirt(scored.data, 2)
## 
Iteration: 1, Log-Lik: -79400.026, Max-Change: 0.63155
Iteration: 2, Log-Lik: -74867.274, Max-Change: 0.43555
Iteration: 3, Log-Lik: -73970.906, Max-Change: 0.33273
Iteration: 4, Log-Lik: -73471.076, Max-Change: 0.15765
Iteration: 5, Log-Lik: -73247.934, Max-Change: 0.11765
Iteration: 6, Log-Lik: -73145.346, Max-Change: 0.09598
Iteration: 7, Log-Lik: -73088.094, Max-Change: 0.06489
Iteration: 8, Log-Lik: -73056.760, Max-Change: 0.05566
Iteration: 9, Log-Lik: -73038.203, Max-Change: 0.06045
Iteration: 10, Log-Lik: -73026.077, Max-Change: 0.03752
Iteration: 11, Log-Lik: -73018.693, Max-Change: 0.02546
Iteration: 12, Log-Lik: -73014.002, Max-Change: 0.02436
Iteration: 13, Log-Lik: -73010.877, Max-Change: 0.02058
Iteration: 14, Log-Lik: -73008.798, Max-Change: 0.01677
Iteration: 15, Log-Lik: -73007.350, Max-Change: 0.01382
Iteration: 16, Log-Lik: -73004.519, Max-Change: 0.00692
Iteration: 17, Log-Lik: -73004.311, Max-Change: 0.00662
Iteration: 18, Log-Lik: -73004.146, Max-Change: 0.00636
Iteration: 19, Log-Lik: -73003.418, Max-Change: 0.00502
Iteration: 20, Log-Lik: -73003.343, Max-Change: 0.00472
Iteration: 21, Log-Lik: -73003.276, Max-Change: 0.00458
Iteration: 22, Log-Lik: -73002.940, Max-Change: 0.00391
Iteration: 23, Log-Lik: -73002.901, Max-Change: 0.00369
Iteration: 24, Log-Lik: -73002.865, Max-Change: 0.00351
Iteration: 25, Log-Lik: -73002.681, Max-Change: 0.00244
Iteration: 26, Log-Lik: -73002.663, Max-Change: 0.00233
Iteration: 27, Log-Lik: -73002.645, Max-Change: 0.00222
Iteration: 28, Log-Lik: -73002.553, Max-Change: 0.00172
Iteration: 29, Log-Lik: -73002.542, Max-Change: 0.00164
Iteration: 30, Log-Lik: -73002.531, Max-Change: 0.00157
Iteration: 31, Log-Lik: -73002.476, Max-Change: 0.00120
Iteration: 32, Log-Lik: -73002.469, Max-Change: 0.00117
Iteration: 33, Log-Lik: -73002.463, Max-Change: 0.00114
Iteration: 34, Log-Lik: -73002.429, Max-Change: 0.00099
Iteration: 35, Log-Lik: -73002.424, Max-Change: 0.00097
Iteration: 36, Log-Lik: -73002.420, Max-Change: 0.00095
Iteration: 37, Log-Lik: -73002.397, Max-Change: 0.00083
Iteration: 38, Log-Lik: -73002.394, Max-Change: 0.00081
Iteration: 39, Log-Lik: -73002.391, Max-Change: 0.00080
Iteration: 40, Log-Lik: -73002.375, Max-Change: 0.00070
Iteration: 41, Log-Lik: -73002.373, Max-Change: 0.00069
Iteration: 42, Log-Lik: -73002.371, Max-Change: 0.00048
Iteration: 43, Log-Lik: -73002.367, Max-Change: 0.00048
Iteration: 44, Log-Lik: -73002.365, Max-Change: 0.00048
Iteration: 45, Log-Lik: -73002.364, Max-Change: 0.00047
Iteration: 46, Log-Lik: -73002.356, Max-Change: 0.00044
Iteration: 47, Log-Lik: -73002.354, Max-Change: 0.00044
Iteration: 48, Log-Lik: -73002.353, Max-Change: 0.00043
Iteration: 49, Log-Lik: -73002.347, Max-Change: 0.00040
Iteration: 50, Log-Lik: -73002.346, Max-Change: 0.00040
Iteration: 51, Log-Lik: -73002.345, Max-Change: 0.00040
Iteration: 52, Log-Lik: -73002.339, Max-Change: 0.00037
Iteration: 53, Log-Lik: -73002.338, Max-Change: 0.00037
Iteration: 54, Log-Lik: -73002.337, Max-Change: 0.00036
Iteration: 55, Log-Lik: -73002.333, Max-Change: 0.00034
Iteration: 56, Log-Lik: -73002.332, Max-Change: 0.00034
Iteration: 57, Log-Lik: -73002.332, Max-Change: 0.00033
Iteration: 58, Log-Lik: -73002.328, Max-Change: 0.00031
Iteration: 59, Log-Lik: -73002.327, Max-Change: 0.00031
Iteration: 60, Log-Lik: -73002.327, Max-Change: 0.00031
Iteration: 61, Log-Lik: -73002.323, Max-Change: 0.00029
Iteration: 62, Log-Lik: -73002.323, Max-Change: 0.00028
Iteration: 63, Log-Lik: -73002.322, Max-Change: 0.00028
Iteration: 64, Log-Lik: -73002.320, Max-Change: 0.00026
Iteration: 65, Log-Lik: -73002.319, Max-Change: 0.00026
Iteration: 66, Log-Lik: -73002.319, Max-Change: 0.00026
Iteration: 67, Log-Lik: -73002.317, Max-Change: 0.00024
Iteration: 68, Log-Lik: -73002.316, Max-Change: 0.00024
Iteration: 69, Log-Lik: -73002.316, Max-Change: 0.00024
Iteration: 70, Log-Lik: -73002.314, Max-Change: 0.00022
Iteration: 71, Log-Lik: -73002.314, Max-Change: 0.00022
Iteration: 72, Log-Lik: -73002.314, Max-Change: 0.00022
Iteration: 73, Log-Lik: -73002.312, Max-Change: 0.00020
Iteration: 74, Log-Lik: -73002.312, Max-Change: 0.00020
Iteration: 75, Log-Lik: -73002.311, Max-Change: 0.00020
Iteration: 76, Log-Lik: -73002.310, Max-Change: 0.00019
Iteration: 77, Log-Lik: -73002.310, Max-Change: 0.00019
Iteration: 78, Log-Lik: -73002.310, Max-Change: 0.00018
Iteration: 79, Log-Lik: -73002.309, Max-Change: 0.00017
Iteration: 80, Log-Lik: -73002.308, Max-Change: 0.00017
Iteration: 81, Log-Lik: -73002.308, Max-Change: 0.00017
Iteration: 82, Log-Lik: -73002.307, Max-Change: 0.00016
Iteration: 83, Log-Lik: -73002.307, Max-Change: 0.00016
Iteration: 84, Log-Lik: -73002.307, Max-Change: 0.00016
Iteration: 85, Log-Lik: -73002.306, Max-Change: 0.00015
Iteration: 86, Log-Lik: -73002.306, Max-Change: 0.00014
Iteration: 87, Log-Lik: -73002.306, Max-Change: 0.00014
Iteration: 88, Log-Lik: -73002.305, Max-Change: 0.00013
Iteration: 89, Log-Lik: -73002.305, Max-Change: 0.00013
Iteration: 90, Log-Lik: -73002.305, Max-Change: 0.00013
Iteration: 91, Log-Lik: -73002.304, Max-Change: 0.00012
Iteration: 92, Log-Lik: -73002.304, Max-Change: 0.00012
Iteration: 93, Log-Lik: -73002.304, Max-Change: 0.00012
Iteration: 94, Log-Lik: -73002.304, Max-Change: 0.00011
Iteration: 95, Log-Lik: -73002.304, Max-Change: 0.00011
Iteration: 96, Log-Lik: -73002.304, Max-Change: 0.00011
Iteration: 97, Log-Lik: -73002.303, Max-Change: 0.00011
Iteration: 98, Log-Lik: -73002.303, Max-Change: 0.00010
Iteration: 99, Log-Lik: -73002.303, Max-Change: 0.00010
Iteration: 100, Log-Lik: -73002.303, Max-Change: 0.00010
anova(unidim, multidim)
## 
## Model 1: mirt(data = scored.data, model = 1)
## Model 2: mirt(data = scored.data, model = 2)
##        AIC     AICc  SABIC      BIC    logLik     X2  df      p
## 1 146289.3 146297.7 146554 146871.7 -73044.65    NaN NaN    NaN
## 2 146302.6 146321.6 146697 147170.4 -73002.30 84.699  49 0.0012
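# all information criteria (AIC, SABIC, BIC) favor the unidimensional model;
# the likelihood-ratio X2 is only a rough guide because the models are not nested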
  3. Try inspecting the best-fitting model using coef(), summary(), plot(), and itemplot(). Try to understand the output of each by referring to the help(mirt) and help(itemplot) documentation, as well as the generic help pages (e.g., help("plot-method", package = "mirt")).
coef(unidim, simplify=TRUE)
## $items
##             a1      d g u
## Item_1   0.748 -0.520 0 1
## Item_2   1.523 -0.382 0 1
## Item_3   0.895  0.613 0 1
## Item_4   1.107 -0.126 0 1
## Item_5   0.610  0.271 0 1
## Item_6   0.743 -0.093 0 1
## Item_7   0.969  0.405 0 1
## Item_8   1.316 -0.873 0 1
## Item_9   1.344 -0.234 0 1
## Item_10  1.663  0.004 0 1
## Item_11  1.385  0.219 0 1
## Item_12  1.319 -0.072 0 1
## Item_13  1.012 -0.107 0 1
## Item_14  0.823  0.612 0 1
## Item_15  0.943  0.274 0 1
## Item_16  0.712 -0.198 0 1
## Item_17  1.208  0.002 0 1
## Item_18  1.080  0.294 0 1
## Item_19  1.034 -0.790 0 1
## Item_20  1.381  0.465 0 1
## Item_21  0.854  0.706 0 1
## Item_22  0.608 -0.335 0 1
## Item_23  1.363 -0.025 0 1
## Item_24  0.935 -0.794 0 1
## Item_25  0.961 -0.115 0 1
## Item_26  1.019  0.039 0 1
## Item_27  1.054 -0.380 0 1
## Item_28  1.773 -0.015 0 1
## Item_29  1.684 -0.474 0 1
## Item_30  1.269 -0.035 0 1
## Item_31  1.194 -0.108 0 1
## Item_32  1.029  0.292 0 1
## Item_33  1.650  0.350 0 1
## Item_34  1.350  0.785 0 1
## Item_35  1.427  0.492 0 1
## Item_36  0.715 -0.167 0 1
## Item_37  1.662 -0.048 0 1
## Item_38  1.004  0.871 0 1
## Item_39  1.394 -0.628 0 1
## Item_40  1.161 -0.016 0 1
## Item_41  1.827  1.192 0 1
## Item_42 -1.407 -2.129 0 1
## Item_43  1.759 -0.565 0 1
## Item_44  1.210  0.054 0 1
## Item_45  1.151 -0.497 0 1
## Item_46  1.308 -0.697 0 1
## Item_47  1.501 -0.100 0 1
## Item_48  1.186 -0.626 0 1
## Item_49  0.800 -0.209 0 1
## Item_50  0.838  0.658 0 1
## 
## $means
## F1 
##  0 
## 
## $cov
##    F1
## F1  1
summary(unidim)
##             F1    h2
## Item_1   0.402 0.162
## Item_2   0.667 0.445
## Item_3   0.465 0.217
## Item_4   0.545 0.297
## Item_5   0.338 0.114
## Item_6   0.400 0.160
## Item_7   0.495 0.245
## Item_8   0.612 0.374
## Item_9   0.620 0.384
## Item_10  0.699 0.488
## Item_11  0.631 0.398
## Item_12  0.612 0.375
## Item_13  0.511 0.261
## Item_14  0.435 0.189
## Item_15  0.485 0.235
## Item_16  0.386 0.149
## Item_17  0.579 0.335
## Item_18  0.536 0.287
## Item_19  0.519 0.270
## Item_20  0.630 0.397
## Item_21  0.448 0.201
## Item_22  0.337 0.113
## Item_23  0.625 0.391
## Item_24  0.482 0.232
## Item_25  0.491 0.242
## Item_26  0.514 0.264
## Item_27  0.527 0.277
## Item_28  0.721 0.520
## Item_29  0.703 0.495
## Item_30  0.598 0.357
## Item_31  0.574 0.330
## Item_32  0.517 0.268
## Item_33  0.696 0.484
## Item_34  0.621 0.386
## Item_35  0.642 0.413
## Item_36  0.387 0.150
## Item_37  0.699 0.488
## Item_38  0.508 0.258
## Item_39  0.634 0.402
## Item_40  0.563 0.317
## Item_41  0.732 0.535
## Item_42 -0.637 0.406
## Item_43  0.719 0.517
## Item_44  0.579 0.336
## Item_45  0.560 0.314
## Item_46  0.609 0.371
## Item_47  0.661 0.437
## Item_48  0.572 0.327
## Item_49  0.425 0.181
## Item_50  0.442 0.195
## 
## SS loadings:  15.991 
## Proportion Var:  0.32 
## 
## Factor correlations: 
## 
##    F1
## F1  1
plot(unidim)

plot(unidim, type = 'trace', auto.key = FALSE)

# item 42 has a backwards (negative) discrimination compared to the other items
itemplot(unidim, 42)
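# to compare item 42 against a well-behaved item, the trace-lines can be drawn
# side by side (optional sketch; which.items selects which items to plot)
plot(unidim, type = 'trace', which.items = c(1, 42), auto.key = FALSE)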

  4. One item appears to stand out from the rest (detectable with trace-line plots); it is possible that the key supplied for that item is incorrect. Replace the scored responses for that item with the original responses from data, and modify the default itemtype argument to fit a nominal response model for this item only. What do you notice about the probability trace-lines for this item?
newdata <- scored.data
newdata[,42] <- data[,42]

itemtype <- rep('2PL', 50)
itemtype[42] <- 'nominal'
newmod <- mirt(newdata, 1, itemtype=itemtype)
## 
Iteration: 1, Log-Lik: -78758.849, Max-Change: 3.20823
Iteration: 2, Log-Lik: -75291.610, Max-Change: 5.22696
Iteration: 3, Log-Lik: -75097.653, Max-Change: 1.40115
Iteration: 4, Log-Lik: -75061.883, Max-Change: 0.26791
Iteration: 5, Log-Lik: -75055.502, Max-Change: 0.22179
Iteration: 6, Log-Lik: -75051.832, Max-Change: 0.08779
Iteration: 7, Log-Lik: -75050.804, Max-Change: 0.01685
Iteration: 8, Log-Lik: -75048.932, Max-Change: 0.01277
Iteration: 9, Log-Lik: -75047.652, Max-Change: 0.01060
Iteration: 10, Log-Lik: -75044.606, Max-Change: 0.00524
Iteration: 11, Log-Lik: -75044.529, Max-Change: 0.00533
Iteration: 12, Log-Lik: -75044.434, Max-Change: 0.00239
Iteration: 13, Log-Lik: -75044.358, Max-Change: 0.00418
Iteration: 14, Log-Lik: -75044.293, Max-Change: 0.00185
Iteration: 15, Log-Lik: -75044.254, Max-Change: 0.00387
Iteration: 16, Log-Lik: -75044.131, Max-Change: 0.00338
Iteration: 17, Log-Lik: -75044.071, Max-Change: 0.00327
Iteration: 18, Log-Lik: -75044.039, Max-Change: 0.00161
Iteration: 19, Log-Lik: -75044.023, Max-Change: 0.00218
Iteration: 20, Log-Lik: -75044.009, Max-Change: 0.00113
Iteration: 21, Log-Lik: -75043.991, Max-Change: 0.00186
Iteration: 22, Log-Lik: -75043.979, Max-Change: 0.00101
Iteration: 23, Log-Lik: -75043.965, Max-Change: 0.00103
Iteration: 24, Log-Lik: -75043.953, Max-Change: 0.00096
Iteration: 25, Log-Lik: -75043.971, Max-Change: 0.00245
Iteration: 26, Log-Lik: -75043.916, Max-Change: 0.00028
Iteration: 27, Log-Lik: -75043.915, Max-Change: 0.00041
Iteration: 28, Log-Lik: -75043.914, Max-Change: 0.00228
Iteration: 29, Log-Lik: -75043.906, Max-Change: 0.00026
Iteration: 30, Log-Lik: -75043.905, Max-Change: 0.00013
Iteration: 31, Log-Lik: -75043.905, Max-Change: 0.00052
Iteration: 32, Log-Lik: -75043.904, Max-Change: 0.00026
Iteration: 33, Log-Lik: -75043.903, Max-Change: 0.00039
Iteration: 34, Log-Lik: -75043.903, Max-Change: 0.00163
Iteration: 35, Log-Lik: -75043.895, Max-Change: 0.00021
Iteration: 36, Log-Lik: -75043.894, Max-Change: 0.00011
Iteration: 37, Log-Lik: -75043.894, Max-Change: 0.00044
Iteration: 38, Log-Lik: -75043.894, Max-Change: 0.00022
Iteration: 39, Log-Lik: -75043.893, Max-Change: 0.00033
Iteration: 40, Log-Lik: -75043.893, Max-Change: 0.00026
Iteration: 41, Log-Lik: -75043.892, Max-Change: 0.00039
Iteration: 42, Log-Lik: -75043.891, Max-Change: 0.00019
Iteration: 43, Log-Lik: -75043.891, Max-Change: 0.00015
Iteration: 44, Log-Lik: -75043.891, Max-Change: 0.00022
Iteration: 45, Log-Lik: -75043.890, Max-Change: 0.00011
Iteration: 46, Log-Lik: -75043.890, Max-Change: 0.00046
Iteration: 47, Log-Lik: -75043.889, Max-Change: 0.00023
Iteration: 48, Log-Lik: -75043.889, Max-Change: 0.00034
Iteration: 49, Log-Lik: -75043.889, Max-Change: 0.00028
Iteration: 50, Log-Lik: -75043.888, Max-Change: 0.00042
Iteration: 51, Log-Lik: -75043.887, Max-Change: 0.00021
Iteration: 52, Log-Lik: -75043.887, Max-Change: 0.00016
Iteration: 53, Log-Lik: -75043.887, Max-Change: 0.00024
Iteration: 54, Log-Lik: -75043.886, Max-Change: 0.00012
Iteration: 55, Log-Lik: -75043.886, Max-Change: 0.00010
key[42] # category 1 is currently scored as correct
## [1] 1
# looks as if the 2nd category is empirically highest (largest a1*ak)
coef(newmod)[[42]]
##        a1 ak0    ak1   ak2  ak3 ak4 d0    d1    d2   d3    d4
## par 0.245   0 10.412 3.652 2.58   4  0 1.497 0.196 0.13 0.112
itemplot(newmod, 42) # 2nd category appears to be the correct trace-line
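# the category probabilities can also be checked numerically at a few theta
# values (optional sketch using mirt's extract.item() and probtrace())
Theta <- matrix(c(-2, 0, 2))
probtrace(extract.item(newmod, 42), Theta)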

  5. Fix the key according to your observations, and create a new dataset called correct.scored.data. Then re-estimate the unidimensional 2PL model using the new dataset. How do the item trace and information curves look now?
key[42] <- 2
correct.scored.data <- key2binary(data, key)
mod <- mirt(correct.scored.data, 1)
## 
Iteration: 1, Log-Lik: -74242.935, Max-Change: 0.54758
Iteration: 2, Log-Lik: -73495.595, Max-Change: 0.15211
Iteration: 3, Log-Lik: -73436.752, Max-Change: 0.07950
Iteration: 4, Log-Lik: -73409.284, Max-Change: 0.05223
Iteration: 5, Log-Lik: -73392.683, Max-Change: 0.04629
Iteration: 6, Log-Lik: -73382.277, Max-Change: 0.03674
Iteration: 7, Log-Lik: -73375.580, Max-Change: 0.02781
Iteration: 8, Log-Lik: -73371.386, Max-Change: 0.02276
Iteration: 9, Log-Lik: -73368.694, Max-Change: 0.01852
Iteration: 10, Log-Lik: -73363.813, Max-Change: 0.00327
Iteration: 11, Log-Lik: -73363.749, Max-Change: 0.00293
Iteration: 12, Log-Lik: -73363.703, Max-Change: 0.00250
Iteration: 13, Log-Lik: -73363.606, Max-Change: 0.00022
Iteration: 14, Log-Lik: -73363.605, Max-Change: 0.00015
Iteration: 15, Log-Lik: -73363.605, Max-Change: 0.00014
Iteration: 16, Log-Lik: -73363.603, Max-Change: 0.00010
Iteration: 17, Log-Lik: -73363.603, Max-Change: 0.00009
itemplot(mod, 42)
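# comparing item 42's parameters before and after the key fix (optional
# sketch); the slope should now be positive
coef(unidim, simplify = TRUE)$items[42, ]
coef(mod, simplify = TRUE)$items[42, ]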

  6. How well does this model fit the data, and do the items appear to behave well given the selected itemtypes? Use M2() and itemfit() to determine whether the model is behaving well.
M2(mod)
##             M2   df         p RMSEA RMSEA_5    RMSEA_95      SRMSR
## stats 1163.227 1175 0.5909203     0       0 0.004932178 0.01607839
##            TLI CFI
## stats 1.000092   1
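# M2 is non-significant (p = .59), RMSEA is essentially 0, and SRMSR = .016,
# indicating very good global model fit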
(ifit <- itemfit(mod))
##       item    S_X2 df.S_X2 p.S_X2
## 1   Item_1 41.8258      44 0.5652
## 2   Item_2 35.1482      40 0.6882
## 3   Item_3 40.8099      44 0.6091
## 4   Item_4 56.3040      43 0.0840
## 5   Item_5 39.6887      45 0.6958
## 6   Item_6 36.7975      44 0.7710
## 7   Item_7 42.7775      43 0.4809
## 8   Item_8 37.0681      41 0.6460
## 9   Item_9 49.7014      42 0.1934
## 10 Item_10 35.3960      40 0.6774
## 11 Item_11 26.8300      41 0.9570
## 12 Item_12 46.4222      42 0.2950
## 13 Item_13 48.4885      43 0.2612
## 14 Item_14 43.2800      44 0.5024
## 15 Item_15 44.8204      43 0.3954
## 16 Item_16 58.8137      44 0.0669
## 17 Item_17 50.5260      43 0.2006
## 18 Item_18 33.5587      43 0.8486
## 19 Item_19 50.4032      43 0.2039
## 20 Item_20 48.3224      42 0.2327
## 21 Item_21 36.0525      44 0.7973
## 22 Item_22 30.9366      44 0.9316
## 23 Item_23 36.1887      41 0.6841
## 24 Item_24 37.2564      43 0.7179
## 25 Item_25 55.0615      43 0.1027
## 26 Item_26 38.6811      43 0.6590
## 27 Item_27 38.7568      42 0.6141
## 28 Item_28 64.1369      39 0.0068
## 29 Item_29 42.5944      40 0.3601
## 30 Item_30 46.7880      42 0.2823
## 31 Item_31 47.1560      42 0.2699
## 32 Item_32 32.2692      43 0.8843
## 33 Item_33 33.0880      40 0.7724
## 34 Item_34 50.8493      41 0.1393
## 35 Item_35 57.5097      42 0.0558
## 36 Item_36 44.1210      44 0.4665
## 37 Item_37 28.6631      40 0.9091
## 38 Item_38 43.6513      43 0.4436
## 39 Item_39 39.2546      41 0.5484
## 40 Item_40 29.8187      43 0.9365
## 41 Item_41 33.9149      37 0.6145
## 42 Item_42 38.1778      39 0.5072
## 43 Item_43 41.7222      39 0.3533
## 44 Item_44 28.2145      43 0.9601
## 45 Item_45 68.6238      42 0.0059
## 46 Item_46 39.2842      41 0.5471
## 47 Item_47 31.6745      41 0.8520
## 48 Item_48 47.8476      42 0.2474
## 49 Item_49 46.5601      44 0.3675
## 50 Item_50 43.6771      44 0.4854
p.adjust(ifit$p.S_X2, 'fdr')
##  [1] 0.9155263 0.9155263 0.9155263 0.8400000 0.9155263 0.9419512 0.9155263
##  [8] 0.9155263 0.9155263 0.9155263 0.9601000 0.9155263 0.9155263 0.9155263
## [15] 0.9155263 0.8362500 0.9155263 0.9601000 0.9155263 0.9155263 0.9491667
## [22] 0.9601000 0.9155263 0.9203846 0.8558333 0.9155263 0.9155263 0.1700000
## [29] 0.9155263 0.9155263 0.9155263 0.9601000 0.9419512 0.9155263 0.8362500
## [36] 0.9155263 0.9601000 0.9155263 0.9155263 0.9601000 0.9155263 0.9155263
## [43] 0.9155263 0.9601000 0.1700000 0.9155263 0.9601000 0.9155263 0.9155263
## [50] 0.9155263
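# after FDR adjustment no S_X2 statistic remains significant (smallest
# adjusted p = .17), so no item is flagged for misfit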
  7. (EXTRA) We happened to discover that the peculiar item has two distinct slopes in its probability trace-lines. Large slopes on distractor options provide extra information about lower-ability individuals: if an examinee picks this particular distractor, we learn more about their \(\theta\) location than simply that ‘they didn’t know the answer’.
itemtype[42] <- '2PLNRM'
# nested logit models require a scoring key
nestmod <- mirt(newdata, 1, itemtype=itemtype, key=key)
## 
Iteration: 1, Log-Lik: -75933.833, Max-Change: 0.54410
Iteration: 2, Log-Lik: -75178.639, Max-Change: 0.17800
Iteration: 3, Log-Lik: -75114.974, Max-Change: 0.07804
Iteration: 4, Log-Lik: -75087.076, Max-Change: 0.07195
Iteration: 5, Log-Lik: -75071.257, Max-Change: 0.04483
Iteration: 6, Log-Lik: -75061.132, Max-Change: 0.03824
Iteration: 7, Log-Lik: -75054.711, Max-Change: 0.02451
Iteration: 8, Log-Lik: -75051.025, Max-Change: 0.02418
Iteration: 9, Log-Lik: -75048.677, Max-Change: 0.01852
Iteration: 10, Log-Lik: -75045.506, Max-Change: 0.00978
Iteration: 11, Log-Lik: -75045.054, Max-Change: 0.00784
Iteration: 12, Log-Lik: -75044.758, Max-Change: 0.00618
Iteration: 13, Log-Lik: -75044.241, Max-Change: 0.00253
Iteration: 14, Log-Lik: -75044.215, Max-Change: 0.00133
Iteration: 15, Log-Lik: -75044.199, Max-Change: 0.00138
Iteration: 16, Log-Lik: -75044.170, Max-Change: 0.00087
Iteration: 17, Log-Lik: -75044.166, Max-Change: 0.00024
Iteration: 18, Log-Lik: -75044.166, Max-Change: 0.00023
Iteration: 19, Log-Lik: -75044.164, Max-Change: 0.00094
Iteration: 20, Log-Lik: -75044.163, Max-Change: 0.00013
Iteration: 21, Log-Lik: -75044.162, Max-Change: 0.00056
Iteration: 22, Log-Lik: -75044.162, Max-Change: 0.00006
itemplot(nestmod, 42)

itemplot(mod, 42, type = 'info', ylim = c(-.1, 1.2))

# the information curve is higher at the low end of theta for the nested logit
# model, indicating more information about less able subjects
itemplot(nestmod, 42, type = 'info', ylim = c(-.1, 1.2))
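
# since newmod and nestmod were fitted to the same dataset, their relative fit
# can also be compared directly (optional sketch; output omitted)
anova(newmod, nestmod)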