Purpose: This study aims to reveal the effects of content balancing and item selection method on ability estimation in computerized adaptive tests (CAT) by comparing Fisher’s maximum information (FMI) and likelihood weighted information (LWI) methods.
Research Methods: Four groups of examinees (250, 500, 750, and 1000) and a bank of 500 items spanning 10 content domains were generated through Monte Carlo simulation. Examinee ability was estimated with all settings held fixed except the item selection method. True and estimated ability (θ) values were compared by dividing examinees into six subgroups, and the average number of items administered was also compared.
Findings: With LWI, correlations decreased steadily as examinee θ level increased in all examinee groups. FMI showed the same trend for the 250- and 500-examinee groups. For 750 examinees, correlations under FMI also decreased as θ level increased, but remained relatively steady. For 1000 examinees, FMI did not estimate examinee θ accurately beyond θ subgroup 4. Moreover, θ estimates had less error under FMI than under LWI. The figures for the average number of items administered indicated that LWI used fewer items in subgroups 1, 2, and 3, whereas FMI used fewer items in subgroups 4, 5, and 6.
Implications for Research and Practice: The findings indicated that when content balancing is applied, LWI is more suitable for estimating examinee θ between -3 and 0, whereas FMI is more stable when examinee θ is above 0. An item selection algorithm combining these two methods is recommended.
Keywords: Likelihood weighted information, Fisher’s maximum information, estimation accuracy