The ForDigitStress Dataset: A Multi-Modal Dataset for Automatic Stress Recognition

Heimerl A, Prajod P, Mertes S, Baur T, Kraus M, Liu A, Risack H, Rohleder N, André E, Becker L (2024)


Publication Type: Journal article

Publication year: 2024

Journal: IEEE Transactions on Affective Computing

DOI: 10.1109/TAFFC.2024.3501400

Abstract

We present a multi-modal stress dataset that uses digital job interviews to induce stress. The dataset provides multi-modal data from 40 participants, including audio, video (motion capture, facial landmarks, eye tracking), and physiological signals (photoplethysmography, electrodermal activity). In addition, the dataset contains time-continuous annotations for stress and for emotions that occurred during the interviews (e.g., shame, anger, anxiety, and surprise). To establish a baseline, five machine learning classifiers (Support Vector Machine, K-Nearest Neighbors, Random Forest, Feed-forward Neural Network, and Long Short-Term Memory network) were trained and evaluated on the presented dataset for a binary stress classification task. The best-performing classifier was the Long Short-Term Memory network, which achieved an accuracy of 91.7% and an F1-score of 90.2%. The ForDigitStress dataset is freely available to other researchers.
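The baseline results are reported as accuracy and F1-score on a binary (stress vs. no-stress) classification task. As a quick reference, the following minimal, stdlib-only sketch shows how these two metrics are computed from binary labels and predictions; the toy label sequences are illustrative and are not taken from the dataset.

```python
# Minimal sketch of the evaluation metrics used for the baseline:
# accuracy and F1-score for binary (stress vs. no-stress) classification.
# The example labels below are illustrative toy data, not from the dataset.

def accuracy_and_f1(y_true, y_pred):
    """Compute accuracy and F1-score for binary labels (1 = stressed)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, f1

# Toy example: 1 = stressed, 0 = not stressed
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
acc, f1 = accuracy_and_f1(y_true, y_pred)  # acc = 0.75, f1 = 0.75
```

Note that the F1-score balances precision and recall, which matters for stress recognition because the two classes are rarely perfectly balanced across interview segments.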


How to cite

APA:

Heimerl, A., Prajod, P., Mertes, S., Baur, T., Kraus, M., Liu, A., ... Becker, L. (2024). The ForDigitStress Dataset: A Multi-Modal Dataset for Automatic Stress Recognition. IEEE Transactions on Affective Computing. https://doi.org/10.1109/TAFFC.2024.3501400

MLA:

Heimerl, Alexander, et al. "The ForDigitStress Dataset: A Multi-Modal Dataset for Automatic Stress Recognition." IEEE Transactions on Affective Computing (2024).
