Impact of Methodological Choices on the Evaluation of Student Models

Authors

EFFENBERGER Tomáš, PELÁNEK Radek

Year of publication 2020
Type Article in Proceedings
Conference Artificial Intelligence in Education. AIED 2020. Lecture Notes in Computer Science, vol 12163.
MU Faculty or unit

Faculty of Informatics

Citation
Web https://doi.org/10.1007/978-3-030-52237-7_13
DOI http://dx.doi.org/10.1007/978-3-030-52237-7_13
Keywords adaptive learning; student modeling; intelligent tutoring systems; introductory programming
Description The evaluation of student models involves many methodological decisions, e.g., the choice of performance metric, data filtering, and cross-validation setting. Such issues may seem like technical details, and they receive little attention in published research. Nevertheless, their impact on experiments can be significant. We report experiments with six models for predicting problem-solving times in four introductory programming exercises. Our focus is not on these models per se but rather on the methodological choices necessary for performing these experiments. The results show, in particular, the importance of the choice of performance metric, including details of its computation and presentation.
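The description notes that the choice of performance metric can materially affect evaluation outcomes. A minimal illustrative sketch of this point (with hypothetical data and toy "models", not the paper's actual experiments) shows how RMSE and MAE can rank two predictors of problem-solving times differently, because RMSE penalizes a single large error much more heavily:

```python
import math

# Hypothetical observed solving times (seconds) and predictions from
# two toy models; these numbers are illustrative, not from the paper.
observed = [30, 45, 60, 90, 300]
model_a = [31, 44, 61, 91, 240]    # accurate except on one hard outlier
model_b = [45, 60, 75, 105, 315]   # uniformly off by 15 s

def rmse(y, p):
    """Root mean squared error: sensitive to large individual errors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, p)) / len(y))

def mae(y, p):
    """Mean absolute error: weights all errors linearly."""
    return sum(abs(a - b) for a, b in zip(y, p)) / len(y)

# Model A wins under MAE (12.8 vs 15.0), but its single 60 s error
# makes it lose under RMSE (~26.8 vs 15.0): the metric flips the ranking.
print("MAE:  A =", mae(observed, model_a), " B =", mae(observed, model_b))
print("RMSE: A =", round(rmse(observed, model_a), 1),
      " B =", round(rmse(observed, model_b), 1))
```

Skewed problem-solving-time distributions make such flips plausible in practice, which is one reason the metric choice (and details such as computing errors on a log-time scale) deserves explicit reporting.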