  • Publication
    Metadata only
    VitaStress – multimodal vital signs for stress detection
    (Springer Nature, 2025-05-30)
    Mackert, Lennart
    Human-Computer Interaction (HCI) research increasingly focuses on developing systems that can recognize and respond to human stress, a key factor in preventing the negative health effects of prolonged stress exposure. Automated stress recognition based on multimodal data shows clear potential, but progress is hindered by the lack of available datasets and of standardized protocols for data collection. Our research aims to help fill this gap by employing a framework for conducting experiments and collecting data in the affective computing domain, supporting improved reuse and reproducibility of results. In our analysis, we apply a multimodal approach that integrates physiological signals to conduct and evaluate automated stress recognition. Using standard classifiers, our study achieved notable results: in a ternary classification setting (distinguishing baseline, physical, and overall stress), we attained an accuracy of 79%, while a binary classification (baseline vs. stress) reached up to 89% accuracy. These findings not only replicate existing results in the stress-detection domain but also clearly show the advantage of multimodal data and establish a benchmark for future studies.
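    The abstract does not state which classifiers or feature sets were used. As an illustration of the kind of multimodal baseline-vs-stress pipeline it describes, here is a minimal sketch assuming scikit-learn, synthetic data, and hypothetical heart-rate and electrodermal-activity window features; it is not the paper's actual method:

    ```python
    # Illustrative sketch only: the paper's actual features, classifiers,
    # and dataset are not specified here. Assumes NumPy and scikit-learn.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Hypothetical windowed features from two physiological modalities,
    # e.g. heart-rate statistics and electrodermal-activity statistics.
    hr_features = rng.normal(size=(200, 4))     # per-window HR stats
    eda_features = rng.normal(size=(200, 3))    # per-window EDA stats
    X = np.hstack([hr_features, eda_features])  # multimodal fusion by concatenation
    y = rng.integers(0, 2, size=200)            # 0 = baseline, 1 = stress

    # A "standard classifier" in the sense of the abstract: a random
    # forest with feature scaling, evaluated by 5-fold cross-validation.
    clf = make_pipeline(StandardScaler(), RandomForestClassifier(random_state=0))
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"mean CV accuracy: {scores.mean():.2f}")
    ```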
  • Publication
    Metadata only
    TRRRACED – Towards Reproducible, Replicable and Reusable Affective Computing Experiments and Data
    We present TRRRACED – a framework that aims to support Reproducible, Replicable and Reusable Affective Computing (AC) Experiments and Data. To the best of our knowledge, TRRRACED is the first framework aiming to provide standardised guidelines that facilitate structural unity of experiments and data in AC research. To this end, state-of-the-art studies relating physiological signals to affective states were examined with regard to (i) the stimuli used, (ii) the experimental setup, and (iii) self-reports. Based on this analysis, we employ reproducibility, replicability, and reusability as criteria to develop our framework. We propose a protocol for conducting experiments that enhances replicability, guidelines for annotating data, and a sample dataset.
  • Publication
    Open Access
    Hey CLIP, can you capture semantics in brand names?
    (Tilburg University, 2023)
    Cassani, Giovanni; Tilburg University; Garrido Alhama, Raquel
    Congruence between brand names and other flagships of a brand is an important tool for communicating effectively with consumers and setting the right expectations. In this study, we explore congruence of brand features as a problem that could potentially be evaluated by computational models trained on cross-modal stimuli, such as CLIP. As a by-product, we explore to what extent these models capture sound-symbolic associations in our stimulus set, and analyse the relationship between (sub-)lexical information and shape dimensions in brand logos. Instead of human participants, we employ CLIP- and BERT-based computational models to make decisions about similarity between a multitude of brand names, brand descriptions, and brand logos. We show that our results support proposals of statistical co-occurrence as an underlying mechanism of sound-symbolic associations, and argue that CLIP can be used, with care, as a tool for navigating naming and logo-design decisions.
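    The abstract does not detail how name-logo similarity is computed. As an illustrative sketch only, the following shows one common way to score name-logo congruence with the open-source CLIP model via Hugging Face transformers; the checkpoint, the file path, and the bouba/kiki-style example names are assumptions, not the study's materials:

    ```python
    # Illustrative sketch only: the study's exact models and stimuli are
    # not given here. Assumes the Hugging Face transformers CLIP classes.
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # Hypothetical stimuli: two candidate brand names and one logo image.
    brand_names = ["Bouba", "Kiki"]
    logo = Image.open("logo.png")  # placeholder path

    inputs = processor(text=brand_names, images=logo,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)

    # Similarity logits between the logo and each candidate name; a higher
    # score means CLIP judges the name-logo pair as more congruent.
    probs = outputs.logits_per_image.softmax(dim=-1)
    for name, p in zip(brand_names, probs[0].tolist()):
        print(f"{name}: {p:.2f}")
    ```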