VitaStress – multimodal vital signs for stress detection

Publication date
2025-05-30
Document type
Conference paper
Author
Schreiber, Paul Vinzenz 
Cinar, Beyza 
Mackert, Lennart
Maleshkova, Maria 
Organisational unit
Data Engineering 
DOI
10.1007/978-3-031-94162-7_20
URI
https://openhsu.ub.hsu-hh.de/handle/10.24405/20290
Conference
27th International Conference on Human-Computer Interaction (HCII) 2025; Gothenburg, Sweden; June 22–27, 2025
Publisher
Springer Nature
Series or journal
Communications in Computer and Information Science
Periodical volume
2526
Book title
HCI International 2025 Posters
Volume (part of multivolume book)
Part V
ISBN
978-3-031-94162-7
First page
205
Last page
215
Peer-reviewed
✅
Part of the university bibliography
✅
Additional Information
Language
English
Abstract
Human-Computer Interaction (HCI) research increasingly focuses on developing systems that can recognize and respond to human stress, a key factor in preventing the negative health effects of prolonged stress exposure. Progress in automated stress recognition based on multimodal data shows clear potential but is hindered above all by the lack of available datasets and standardized data-collection protocols. Our research aims to help fill this gap by employing a framework for conducting experiments and collecting data in the affective-computing domain, supporting improved reuse and reproducibility of results. In our analysis, we apply a multimodal approach that integrates physiological signals to perform and evaluate automated stress recognition. Using standard classifiers, our study achieved notable results: in a ternary classification setting (distinguishing baseline, physical, and overall stress) we attained 79% accuracy, while a binary classification (baseline vs. stress) reached up to 89% accuracy. These findings not only replicate existing research in the stress-detection domain but also clearly demonstrate the advantage of multimodal data and establish a benchmark for future studies.
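
The abstract describes applying standard classifiers to multimodal physiological features in both a ternary (baseline / physical stress / overall stress) and a binary (baseline vs. stress) setting. Since this record carries no full text, the following is a minimal, hypothetical sketch of such a pipeline using scikit-learn's RandomForestClassifier on synthetic windowed features; the signal set, windowing, and label construction are illustrative assumptions, not the authors' actual implementation.

# Hypothetical baseline for multimodal stress classification as summarized
# in the abstract; feature names and label coupling are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for windowed multimodal vital-sign features
# (e.g. heart rate, heart-rate variability, electrodermal activity,
# respiration rate) -- one row per time window.
n_windows = 600
X = rng.normal(size=(n_windows, 4))

# Ternary labels as in the abstract: 0 = baseline, 1 = physical stress,
# 2 = overall stress (synthetic here, loosely coupled to the features).
y = (X[:, 0] + X[:, 1] > 0.5).astype(int) + (X[:, 2] > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# "Standard classifier" for the ternary setting.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("ternary accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Collapsing labels to the binary setting (baseline vs. any stress).
y_train_bin = (y_train > 0).astype(int)
y_test_bin = (y_test > 0).astype(int)
clf_bin = RandomForestClassifier(n_estimators=200, random_state=0)
clf_bin.fit(X_train, y_train_bin)
print("binary accuracy:", accuracy_score(y_test_bin, clf_bin.predict(X_test)))

Reproducing the reported 79% (ternary) and 89% (binary) accuracies would of course require the actual dataset, sensor preprocessing, and feature extraction, none of which are detailed in this record.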
Version
Published version
Access right on openHSU
