  • Publication (Metadata only)
    Imputing missing multi-sensor data in the healthcare domain: a systematic review
    Chronic diseases, especially diabetes, are a burden for patients, since lifelong management is required and comorbidities can occur as a consequence of insufficient prevention. Hypoglycemia, a medical condition encountered by diabetic individuals, can result in severe symptoms if untreated, necessitating prompt preventive action. Continuous health monitoring based on data collected with wearables can enable the early prediction of extreme blood glucose states. However, integrating and using data acquired from various sensors is challenging, especially when it comes to maintaining the quality and quantity of data due to inherent noise and missing values. To this end, this review discusses dataset constraints and highlights the temporal behaviour of prominent features in predicting hypoglycemia. It outlines a framework of preprocessing techniques that could be adopted to mitigate dataset limitations. A thorough analysis of the imputation procedures employed in the reviewed studies is conducted. In addition, machine learning imputation techniques applied in other healthcare domains are studied to investigate whether they could be adopted to close longer gaps of missing values in the datasets involved in the prediction of hypoglycemia. Based on a comprehensive evaluation of imputation techniques, a paradigm, Impute-Paradigm, is proposed and validated through a case study, enabling imputation tailored to time gaps of variable duration. After analysing the reviewed studies, we recommend studying the rate of change of individual features and conclude that gaps of different duration in separate features should be treated with multiple imputation techniques.
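    A minimal sketch of the gap-length-dependent idea described above (not the paper's Impute-Paradigm itself): gaps in a regularly sampled series are grouped by duration and each group is filled with a different technique. The thresholds and the choice of linear versus PCHIP interpolation are illustrative assumptions.

    ```python
    # Sketch: per-gap imputation, with the technique chosen by gap duration.
    # Thresholds (in samples) and method choices are illustrative assumptions.
    import pandas as pd

    def impute_by_gap_length(series: pd.Series, short_max: int = 3, medium_max: int = 12) -> pd.Series:
        is_na = series.isna()
        gap_id = (is_na != is_na.shift()).cumsum()      # label consecutive runs of missing values
        linear = series.interpolate(method="linear")    # candidate fill for short gaps
        pchip = series.interpolate(method="pchip")      # candidate fill for medium gaps (needs SciPy)
        out = series.copy()
        for _, idx in series[is_na].groupby(gap_id[is_na]).groups.items():
            if len(idx) <= short_max:
                out.loc[idx] = linear.loc[idx]
            elif len(idx) <= medium_max:
                out.loc[idx] = pchip.loc[idx]
            # longer gaps are left missing, to be handled by a model-based imputer
        return out
    ```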
  • Publication (Metadata only)
    Breaking free: Decoupling forced systems with Laplace neural networks
    Forecasting the behaviour of industrial robots, power grids or pandemics under changing external inputs requires accurate dynamical models that can adapt to varying signals and capture long-term effects such as delays or memory. While recent neural approaches address some of these challenges individually, their reliance on computationally intensive solvers and their black-box nature limit their practical utility. In this work, we propose Laplace-Net, a decoupled, solver-free neural framework for learning forced and delay-aware dynamical systems. It uses the Laplace transform to (i) bypass computationally intensive solvers, (ii) enable the learning of delays and memory effects and (iii) decompose each system into interpretable control-theoretic components. Laplace-Net also enhances transferability, as its modular structure allows for targeted re-training of individual components for new system setups or environments. Experimental results on eight benchmark datasets, including linear, nonlinear and delayed systems, demonstrate the method’s improved accuracy and robustness compared to state-of-the-art approaches, particularly in handling complex and previously unseen inputs.
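    As background for the control-theoretic decomposition mentioned above, the classical Laplace-domain view of a forced, delayed linear system separates a free response from a forced part governed by a transfer function; this is the textbook relation such a framework builds on, not necessarily Laplace-Net's exact formulation.

    ```latex
    % Forced LTI system with input delay \tau (zero input history assumed for t < 0)
    \dot{x}(t) = A\,x(t) + B\,u(t-\tau), \qquad y(t) = C\,x(t)
    % The Laplace transform decouples the solution without time stepping:
    Y(s) = \underbrace{C\,(sI - A)^{-1}\,x(0)}_{\text{free response}}
         \;+\; \underbrace{C\,(sI - A)^{-1} B\,e^{-s\tau}}_{\text{transfer function } H(s)}\; U(s)
    ```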
  • Publication (Metadata only)
    Presenting DiaData for research on type 1 diabetes
    (arXiv, 2025-08-05) ;
    Type 1 diabetes (T1D) is an autoimmune disorder that leads to the destruction of insulin-producing cells, resulting in insulin deficiency, which is why affected individuals depend on external insulin injections. However, administered insulin can lower blood glucose levels too far and cause hypoglycemia. Hypoglycemia is a severe event of low blood glucose (≤70 mg/dL) with dangerous consequences such as dizziness, coma, or death. Data analysis can significantly enhance diabetes care by identifying personal patterns and trends leading to adverse events. In particular, machine learning (ML) models can predict glucose levels and provide early alarms. However, diabetes and hypoglycemia research is limited by the unavailability of large datasets. Thus, this work systematically integrates 15 datasets to provide a large database of 2510 subjects with glucose measurements recorded every 5 minutes. In total, 149 million measurements are included, of which 4% represent values in the hypoglycemic range. Moreover, two sub-databases are extracted: sub-database I includes demographics, and sub-database II includes heart rate data. The integrated dataset provides an equal distribution of sex and covers different age levels. As a further contribution, data quality is assessed, revealing that data imbalance and missing values present a significant challenge. Moreover, a correlation study on glucose levels and heart rate data is conducted, showing a relation between the two signals from 15 to 55 minutes before hypoglycemia.
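    A minimal sketch of how such an integrated table might be queried, assuming a long-format layout with one glucose reading per subject every 5 minutes; the column names and the summary itself are illustrative assumptions, not DiaData's actual schema.

    ```python
    # Sketch: flag readings in the hypoglycemic range (<= 70 mg/dL, as defined above)
    # and report each subject's share of hypoglycemic readings.
    import pandas as pd

    HYPO_THRESHOLD_MGDL = 70  # threshold used in the abstract

    def hypoglycemia_summary(df: pd.DataFrame) -> pd.DataFrame:
        df = df.sort_values(["subject_id", "timestamp"])       # assumed column names
        df["hypo"] = df["glucose_mgdl"] <= HYPO_THRESHOLD_MGDL
        share = df.groupby("subject_id")["hypo"].mean().rename("hypo_fraction")
        return share.reset_index()
    ```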
  • Publication (Metadata only)
    Evaluating imputation techniques for short-term gaps in heart rate data
    Recent advances in wearable technology have enabled the continuous monitoring of vital physiological signals, essential for predictive modeling and early detection of extreme physiological events. Among these physiological signals, heart rate (HR) plays a central role, as it is widely used in monitoring and managing cardiovascular conditions and detecting extreme physiological events such as hypoglycemia. However, data from wearable devices often suffer from missing values. To address this issue, recent studies have employed various imputation techniques. Traditionally, the effectiveness of these methods has been evaluated using predictive accuracy metrics such as RMSE, MAPE, and MAE, which assess numerical proximity to the original data. While informative, these metrics fail to capture the complex statistical structure inherent in physiological signals. This study bridges this gap by presenting a comprehensive evaluation of four statistical imputation methods for short-term HR data gaps: linear interpolation, K-Nearest Neighbors (KNN), Piecewise Cubic Hermite Interpolating Polynomial (PCHIP), and B-splines. We assess their performance using both predictive accuracy metrics and statistical distance measures, including the Cohen Distance Test (CDT) and Jensen-Shannon Distance (JS Distance), applied to HR data from the D1NAMO dataset and the BIG IDEAs Lab Glycemic Variability and Wearable Device dataset. The analysis reveals limitations in existing imputation approaches and the absence of a robust framework for evaluating imputation quality in physiological signals. Finally, this study proposes a foundational framework for developing a composite evaluation metric to assess imputation performance.
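    A minimal sketch of the two kinds of evaluation contrasted above: a point-wise accuracy metric (RMSE) and a distributional distance (Jensen-Shannon) computed on histograms of the original and imputed series. The simulated gap, bin count and PCHIP imputer are illustrative choices, not the study's exact protocol.

    ```python
    import numpy as np
    from scipy.interpolate import PchipInterpolator
    from scipy.spatial.distance import jensenshannon

    def evaluate_imputation(hr: np.ndarray, gap: slice, bins: int = 20):
        t = np.arange(len(hr))
        observed = np.ones(len(hr), dtype=bool)
        observed[gap] = False                                    # simulate a short gap
        imputed = hr.astype(float).copy()
        imputed[gap] = PchipInterpolator(t[observed], hr[observed])(t[gap])

        rmse = np.sqrt(np.mean((imputed[gap] - hr[gap]) ** 2))  # point-wise accuracy
        rng = (hr.min(), hr.max())
        p, _ = np.histogram(hr, bins=bins, range=rng, density=True)
        q, _ = np.histogram(imputed, bins=bins, range=rng, density=True)
        js = jensenshannon(p, q)                                 # distributional distance
        return rmse, js
    ```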
  • Publication (Metadata only)
    Regression-based approach to anxiety estimation of spider phobics during behavioural avoidance tasks
    (arXiv, 2025-07-18) ; Schmücker, Vanessa ; Hildebrand, Anne Sophie ; Klucken, Tim ;
    Phobias significantly impact the quality of life of affected persons. Two methods of assessing anxiety responses are questionnaires and behavioural avoidance tests (BAT). While these can be used in a clinical environment, they only record momentary insights into anxiety measures. In this study, we estimate the intensity of anxiety during these BATs using physiological data collected from unobtrusive, wrist-worn sensors. Twenty-five participants performed four different BATs in a single session, while periodically being asked how anxious they currently are. Using heart rate, heart rate variability, electrodermal activity, and skin temperature, we trained regression models to predict anxiety ratings from three types of input data: (1) using only physiological signals, (2) adding computed features (e.g., min, max, range, variability), and (3) computed features combined with contextual task information. Adding contextual information increased the effectiveness of the model, leading to a root mean squared error (RMSE) of 0.197 and a mean absolute error (MAE) of 0.041. Overall, this study shows that data obtained from wearables can continuously provide meaningful estimations of anxiety, which can assist in therapy planning and enable more personalised treatment.
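    A minimal sketch of input variant (2) described above: window-level features (min, max, range, variability) computed from the raw physiological signals and fed to a regression model. The signal column names and the choice of a random-forest regressor are illustrative assumptions, not the study's exact pipeline.

    ```python
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor

    SIGNALS = ["heart_rate", "hrv", "eda", "skin_temp"]      # assumed column names

    def window_features(window: pd.DataFrame) -> dict:
        feats = {}
        for col in SIGNALS:
            x = window[col]
            feats[f"{col}_min"] = x.min()
            feats[f"{col}_max"] = x.max()
            feats[f"{col}_range"] = x.max() - x.min()
            feats[f"{col}_std"] = x.std()                    # variability
        return feats

    def fit_anxiety_regressor(windows: list, ratings: list) -> RandomForestRegressor:
        X = pd.DataFrame([window_features(w) for w in windows])
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X, ratings)                                # ratings: periodic anxiety self-reports
        return model
    ```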
  • Publication (Open Access)
    DiaData: an integrated large dataset for type 1 diabetes and hypoglycemia research
    (Universitätsbibliothek der HSU/UniBw H, 2025-06-03) ;
    DiaData integrates 13 different datasets and presents a large continuous glucose monitoring (CGM) dataset comprising data from individuals with Type 1 Diabetes (T1D) across various age groups. The main database contains CGM measurements for all 1720 subjects. From this, two subsets are extracted: Sub-database I includes CGM data and demographics (age and sex) for 1306 subjects, while Sub-database II includes CGM and heart rate data for a subset of 51 subjects.
  • Publication (Metadata only)
    VitaStress – multimodal vital signs for stress detection
    (Springer Nature, 2025-05-30) ; ; Mackert, Lennart ;
    Human-Computer Interaction (HCI) research increasingly focuses on developing systems that can recognize and respond to human stress, a key factor in preventing the negative health effects of prolonged stress exposure. Currently, progress in the domain of automated stress recognition based on multi-modal data shows clear potential but is especially hindered by the lack of available datasets and standardized protocols for data collection. Our research aims to contribute towards filling this gap by employing a framework for conducting experiments and data collection in the affective computing domain, supporting improved reuse and reproducibility of results. Specifically, in our analysis we apply a multi-modal approach integrating physiological signals to conduct and evaluate automated stress recognition. By employing standard classifiers, our study achieved notable results: in a ternary classification setting (distinguishing baseline, physical, and overall stress), we attained an accuracy of 79%, while a binary classification (baseline vs. stress) reached up to 89% accuracy. These findings not only replicate existing research in the stress detection domain but also clearly show the advantage of using multi-modal data and establish a benchmark for future analysis studies.
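    A minimal sketch of the comparison implied above: a standard classifier evaluated on each modality's features alone and on a simple feature-level fusion of all modalities. The SVM pipeline and fusion by concatenation are illustrative assumptions, not the study's exact setup.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def modality_comparison(features_by_modality: dict, y: np.ndarray) -> dict:
        clf = make_pipeline(StandardScaler(), SVC())
        scores = {}
        for name, X in features_by_modality.items():               # e.g. "eda", "ecg", "temp"
            scores[name] = cross_val_score(clf, X, y, cv=5).mean()
        X_fused = np.hstack(list(features_by_modality.values()))    # multi-modal: concatenate features
        scores["multi-modal"] = cross_val_score(clf, X_fused, y, cv=5).mean()
        return scores
    ```

    The same function works for the binary (baseline vs. stress) and ternary label sets; only the label vector y changes.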
  • Publication (Metadata only)
    Deep learning-based hypoglycemia classification across multiple prediction horizons
    (2025-03-25) ; Daniel Onwuchekwa, Jennifer ;
    Type 1 diabetes (T1D) management can be significantly enhanced through the use of predictive machine learning (ML) algorithms, which can mitigate the risk of adverse events like hypoglycemia. Hypoglycemia, characterized by blood glucose levels below 70 mg/dL, is a life-threatening condition typically caused by excessive insulin administration, missed meals, or physical activity. Its asymptomatic nature impedes timely intervention, making ML models crucial for early detection. This study integrates short- (up to 2h) and long-term (up to 24h) prediction horizons (PHs) within a single classification model to enhance decision support. The predicted time intervals are 5-15 min, 15-30 min, 30 min-1h, 1-2h, 2-4h, 4-8h, 8-12h, and 12-24h before hypoglycemia. In addition, a simplified model classifying up to 4h before hypoglycemia is compared. We trained ResNet and LSTM models on glucose levels, insulin doses, and acceleration data. The results demonstrate the superiority of the LSTM models when classifying nine classes. In particular, subject-specific models yielded better performance but achieved high recall only for classes 0, 1, and 2, with 98%, 72%, and 50%, respectively. A population-based six-class model improved the results, with at least 60% of events detected. In contrast, longer PHs remain challenging with the current approach and may need to be addressed with different models.
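    A minimal sketch of an LSTM classifier over multivariate input windows (glucose, insulin, acceleration) that maps each window to one of nine prediction-horizon classes. The window length, hidden size and single-layer architecture are illustrative assumptions, not the paper's model.

    ```python
    import torch
    import torch.nn as nn

    class HypoLSTM(nn.Module):
        def __init__(self, n_features: int = 3, hidden: int = 64, n_classes: int = 9):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, time steps, channels) -> logits over the nine classes
            _, (h_n, _) = self.lstm(x)
            return self.head(h_n[-1])

    # Example: 8 windows of 24 five-minute steps (2 h) with 3 channels each
    logits = HypoLSTM()(torch.randn(8, 24, 3))   # logits.shape == (8, 9)
    ```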