VitaStress – multimodal vital signs for stress detection
Publication date
2025-05-30
Document type
Conference paper
Author
Organisational unit
Conference
27th International Conference on Human-Computer Interaction (HCII) 2025 ; Gothenburg, Sweden ; June 22–27, 2025
Publisher
Springer Nature
Series or journal
Communications in Computer and Information Science
Periodical volume
2526
Book title
HCI International 2025 Posters
Volume (part of multivolume book)
Part V
First page
205
Last page
215
Peer-reviewed
✅
Part of the university bibliography
✅
Language
English
Abstract
Human-Computer Interaction (HCI) research increasingly focuses on developing systems that can recognize and respond to human stress, a key factor in preventing the negative health effects of prolonged stress exposure. Automated stress recognition based on multimodal data shows clear potential, but progress is hindered by the scarcity of available datasets and the lack of standardized protocols for data collection. Our research aims to help fill this gap by employing a framework for conducting experiments and data collection in the affective computing domain, supporting improved reuse and reproducibility of results. In our analysis, we apply a multimodal approach that integrates physiological signals to conduct and evaluate automated stress recognition. Using standard classifiers, we achieved notable results: in a ternary classification setting (distinguishing baseline, physical stress, and overall stress), we attained an accuracy of 79%, while binary classification (baseline vs. stress) reached up to 89% accuracy. These findings replicate existing research in the stress detection domain, demonstrate the advantage of using multimodal data, and establish a benchmark for future studies.
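The sketch below illustrates the kind of "standard classifier" pipeline the abstract refers to: window-level features fused from several physiological modalities are fed to a conventional classifier and evaluated with cross-validation in both ternary and binary label settings. The feature layout, the synthetic data, and the choice of a random forest are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a standard-classifier pipeline for multimodal stress
# recognition. Feature names, synthetic data, and the RandomForest choice
# are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical window-level features fused from multiple physiological
# modalities (e.g., heart-rate, EDA, and respiration statistics per window).
n_windows, n_features = 600, 12
X = rng.normal(size=(n_windows, n_features))

# Ternary labels: 0 = baseline, 1 = physical stress, 2 = overall stress.
y_ternary = rng.integers(0, 3, size=n_windows)
# Binary labels: baseline vs. any stress condition.
y_binary = (y_ternary > 0).astype(int)

clf = make_pipeline(StandardScaler(), RandomForestClassifier(random_state=0))

# Cross-validated accuracy for both label settings. With random data the
# scores hover around chance; real recordings would be needed to approach
# the reported 79% (ternary) and 89% (binary) accuracies.
print("ternary accuracy:", cross_val_score(clf, X, y_ternary, cv=5).mean())
print("binary accuracy: ", cross_val_score(clf, X, y_binary, cv=5).mean())
```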
Version
Published version
Access right on openHSU
