Towards Criteria for Valuable Automatic Feedback in Large Programming Classes

Lohr D, Berges MP (2021)


Publication Language: English

Publication Type: Conference Contribution

Publication year: 2021

Pages Range: 181-186

Event location: Dortmund, DE

Abstract

Automatically generated feedback systems and assessments are popular tools for dealing with the rapidly increasing number of students, especially in STEM-related subjects such as Computer Science. However, the development of such systems often focuses solely on compensating for resource shortages, and the quality of the systems’ feedback is not considered. To determine what kind of feedback participants of large introductory programming courses (CS1) use and what expectations they have regarding effective feedback on programming difficulties, a survey was conducted in a large introductory programming class (300 students). The results show that during the online semester, students made less use of the opportunity to ask tutors questions in lab sessions. This is in line with statements of experienced tutors who were interviewed beforehand. Moreover, the results show that it is not sufficient to merely point to errors in program code; students need more detailed information about the underlying cause of the error, especially in automated systems.

How to cite

APA:

Lohr, D., & Berges, M.-P. (2021). Towards Criteria for Valuable Automatic Feedback in Large Programming Classes. In Desel, Jörg; Opel, Simone (Eds.), Proceedings of the Hochschuldidaktik Informatik (HDI) 2021 (pp. 181-186). Dortmund, DE.

MLA:

Lohr, Dominic, and Marc-Pascal Berges. "Towards Criteria for Valuable Automatic Feedback in Large Programming Classes." Proceedings of the Hochschuldidaktik Informatik (HDI) 2021, Dortmund. Ed. Desel, Jörg; Opel, Simone, 2021. 181-186.
