Projects

This page lists the projects currently included in the Embracing Heterogeneity Project.

1. Response Time in ILSA

One consequence of the computer-based assessment (CBA) platform adopted by PISA is that process data (e.g., response times and keystrokes) are easily harvested as part of the data collection process. To date, however, operational procedures in PISA use response time data only in limited ways, primarily as a means of exploratory analysis (OECD, 2017). Relatively recent methodological innovations, by contrast, make it possible to model response times explicitly (van der Linden, 2007; van der Linden, Klein Entink, & Fox, 2010). In this project, we investigate whether including timing data in models for item parameter estimation offers any advantage in accuracy over currently used methods. Further, we consider whether including timing data in the latent regression improves the precision of estimated achievement distributions. We rely on a simulation study to answer these research questions. We first simulate data according to operationally observed conditions in PISA using the R (R Core Team, 2018) package lsasim (Matta, Rutkowski, Rutkowski, Liaw, & Mughogho, 2017). We then estimate item parameters using cirt, which allows for the inclusion of response times. Finally, we estimate achievement distributions in TAM (Robitzsch, Kiefer, & Wu, 2017). Item parameter and achievement estimates are compared to known population values. The simulation is supplemented with an empirical example from PISA 2015.
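For intuition, the sketch below generates data of the kind described above: scored responses from a 2PL model together with lognormal response times, in the spirit of the hierarchical speed-accuracy framework (van der Linden, 2007). The sample sizes and parameter values are illustrative assumptions rather than the operationally observed PISA conditions, and the sketch uses base R plus MASS rather than the lsasim/cirt/TAM pipeline described above.

```r
## A minimal sketch of a joint data-generating model: a 2PL response model
## paired with lognormal response times (van der Linden, 2007).
## All sample sizes and parameter values are illustrative assumptions.

set.seed(123)
n_persons <- 1000
n_items   <- 20

## Person parameters: ability (theta) and speed (tau), allowed to correlate
person <- MASS::mvrnorm(n_persons, mu = c(0, 0),
                        Sigma = matrix(c(1.00, 0.30,
                                         0.30, 0.25), nrow = 2))
theta <- person[, 1]
tau   <- person[, 2]

## Item parameters: discrimination (a), difficulty (b),
## time intensity (beta), and time discrimination (alpha)
a     <- runif(n_items, 0.8, 1.6)
b     <- rnorm(n_items, 0, 1)
beta  <- rnorm(n_items, 4, 0.3)
alpha <- runif(n_items, 1.5, 2.5)

## Scored responses from a 2PL model: P(X_ij = 1) = logistic(a_j * (theta_i - b_j))
p    <- plogis(sweep(outer(theta, b, "-"), 2, a, "*"))
resp <- matrix(rbinom(length(p), 1, p), n_persons, n_items)

## Response times from a lognormal model: log T_ij = beta_j - tau_i + e_ij,
## with e_ij ~ N(0, 1 / alpha_j^2)
err <- sweep(matrix(rnorm(n_persons * n_items), n_persons, n_items), 2, 1 / alpha, "*")
rt  <- exp(outer(-tau, beta, "+") + err)

## Item and person parameters could then be re-estimated from `resp` (and, in a
## joint model, `rt`) and compared with the generating values; for example,
## TAM::tam.mml.2pl(resp = as.data.frame(resp)) fits the 2PL ignoring the times.
```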

Accurate achievement estimates have clear policy and interpretation implications. Thus, this research has the potential to improve overall and subpopulation achievement estimates in international assessments such as PISA and related studies. Further, this study offers one possibility for making use of these rich process data.

 

2. Multistage Testing

A central challenge in international large-scale assessments is adequately measuring dozens of highly heterogeneous populations, many of which are low performing. To that end, multistage adaptive testing offers one possibility for better measurement across the achievement continuum. This project examines how several multistage test design and implementation choices can affect measurement performance in this setting. To address gaps in the knowledge base, we evaluate these design and implementation choices in terms of item and person parameter recovery. Preliminary findings indicate that multistage testing shows promise for extending the scope of international assessments.
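As a simplified illustration of the design choices involved, the sketch below simulates a two-stage design in which a routing module determines whether a respondent receives an easier or a harder second-stage module; the resulting missing-by-design response matrix can then be used to study item and person parameter recovery. Module lengths, difficulties, and the cut score are illustrative assumptions, not the designs studied in the project.

```r
## A minimal sketch (base R) of a two-stage multistage design: a routing module
## followed by an easier or harder second-stage module chosen from the routing
## score. Module lengths, Rasch difficulties, and the cut score are illustrative.

set.seed(456)
n_persons <- 2000
theta     <- rnorm(n_persons)

## Rasch difficulties for the routing module and the two second-stage modules
b_route <- seq(-1.0, 1.0, length.out = 10)
b_easy  <- seq(-2.5, 0.0, length.out = 10)
b_hard  <- seq( 0.0, 2.5, length.out = 10)

## Simulate scored Rasch responses for a set of persons and item difficulties
sim_resp <- function(theta, b) {
  p <- plogis(outer(theta, b, "-"))
  matrix(rbinom(length(p), 1, p), nrow = length(theta))
}

## Stage 1: every respondent takes the routing module
resp_route  <- sim_resp(theta, b_route)
route_score <- rowSums(resp_route)

## Stage 2: route to the harder module at an illustrative cut score of 5/10
to_hard <- route_score >= 5

## Assemble the incomplete (missing-by-design) response matrix: columns 1-10
## are the routing items, 11-20 the easy module, 21-30 the hard module
resp <- matrix(NA, n_persons, 30)
resp[, 1:10]          <- resp_route
resp[!to_hard, 11:20] <- sim_resp(theta[!to_hard], b_easy)
resp[to_hard,  21:30] <- sim_resp(theta[to_hard],  b_hard)

## Item and person parameter recovery could then be checked with any IRT
## routine that accepts missing-by-design data, e.g.
## TAM::tam.mml(resp = as.data.frame(resp)), and compared with theta and b_*.
```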