  • Publication
    Open Access
    Newsletter hpc.bw 03/2025
    (Universitätsbibliothek der HSU/UniBw H, 2025-10-17)
    Newcome, Samuel; Lesquoy, Nicolas; Zigan, Lars; Bause, Markus; Breuer, Michael; Kramer, Denis; Neumann, Philipp; Rathmann, Marie
  • Publication
    Open Access
    Toward realistic multiscale simulations of nanoparticle injection devices used for single particle diffractive imaging
    (Universitätsbibliothek der HSU/UniBw H, 2025-07-31)
    Helmut-Schmidt-Universität/Universität der Bundeswehr Hamburg; Küpper, Jochen
    Single-particle diffractive imaging (SPI) is a powerful technique used in structural biology and nanoscience to determine the three-dimensional structure of individual nanoparticles, biomolecules, and viruses without the need for crystallization. By exposing freely flowing particles to ultrafast X-ray free-electron laser (XFEL) pulses, SPI captures diffraction patterns that can be reconstructed into high-resolution images. Efficient and accurate modeling and simulation of nanoparticle injection systems are essential for designing and optimizing injectors that deliver high-density, well-collimated particle streams – an important requirement for maximizing hit rates and image quality in SPI experiments. This thesis addresses these challenges by developing and optimizing multiscale simulation methodologies for nanoparticle injection devices, with a particular focus on aerodynamic lens systems (ALS) and their combination with cryogenically cooled buffer-gas cells (BGC). A hybrid molecular-continuum simulation framework, integrating classical Computational Fluid Dynamics (CFD) based on the continuum assumption and the Direct Simulation Monte Carlo (DSMC) method based on the kinetic theory of gases, is employed to accurately capture the carrier gas flow and nanoparticle trajectories across diverse flow regimes. The approach improves computational efficiency by selectively applying DSMC in regions where molecular-scale effects dominate, while using CFD for low Knudsen number regions. Comprehensive evaluations of drag force models from the literature, including molecular drag formulations, are conducted, along with the introduction of a relaxation-based correction for highly rarefied, low-speed flows, to enhance particle trajectory predictions, particularly in transitional and rarefied regimes. The framework’s scalability and computational performance are assessed through detailed benchmarking, while sensitivity analyses on DSMC parameters such as particle number, grid size, and time step size further guide efficient model implementation. Key benchmark cases, including gas dynamic nozzles and re-entry vehicles, demonstrate the framework’s versatility in simulating internal and external flows. The ALS configuration highlights the framework’s applicability to injector modeling, where the hybrid DSMC/CFD approach combined with improved drag models achieves excellent agreement with experimental data, outperforming conventional CFD. Further validation against measured beam widths and focus positions is carried out for BGC and combined BGC-ALS setups across different particle sizes and inlet pressures. This validated setup is then used to assess injector performance, with emphasis on protein-sized nanoparticles, enabling an insightful evaluation of focusing efficiency and beam quality under realistic SPI conditions. Notably, the BGC-ALS configuration, through cryogenic cooling, enhances the focusing of smaller particles by reducing thermal velocities and suppressing Brownian motion, thereby improving beam collimation – ideal for SPI experiments. By bridging gaps in current methodologies, validating simulation results against experimental data, and advancing drag force modeling techniques, this thesis establishes a robust foundation for optimizing SPI injector systems and paves the way for future innovations in nanoparticle injection technologies.
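    The region-selection and drag-correction ideas summarized above can be illustrated with a small sketch. The Knudsen-number thresholds, the Cunningham-style slip correction, and all parameter values below are generic textbook choices for illustration only; they are not the specific models or coupling criteria developed in the thesis.

```python
import numpy as np

# Illustrative regime thresholds; actual values depend on the flow and coupling scheme.
KN_CONTINUUM_LIMIT = 0.01   # below: continuum CFD is commonly assumed valid
KN_DSMC_LIMIT = 0.1         # above: a kinetic treatment (DSMC) is preferred

def local_knudsen(mean_free_path, characteristic_length):
    """Kn = lambda / L for a cell or region."""
    return mean_free_path / characteristic_length

def select_solver(kn):
    """Pick a solver per region based on the local Knudsen number."""
    if kn < KN_CONTINUUM_LIMIT:
        return "CFD"
    if kn > KN_DSMC_LIMIT:
        return "DSMC"
    return "transition"  # buffer zone handled by the coupling scheme

def cunningham_correction(kn_particle):
    """Classical Cunningham slip correction for small spheres in a rarefied gas."""
    return 1.0 + kn_particle * (1.257 + 0.4 * np.exp(-1.1 / kn_particle))

def stokes_drag_slip(mu, diameter, rel_velocity, mean_free_path):
    """Stokes drag on a sphere, corrected for slip: F = 3*pi*mu*d*v / Cc."""
    kn_p = 2.0 * mean_free_path / diameter
    return 3.0 * np.pi * mu * diameter * rel_velocity / cunningham_correction(kn_p)

if __name__ == "__main__":
    kn = local_knudsen(mean_free_path=65e-9, characteristic_length=1e-4)
    print(select_solver(kn))
    print(stokes_drag_slip(mu=1.8e-5, diameter=50e-9,
                           rel_velocity=10.0, mean_free_path=65e-9))
```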
  • Publication
    Open Access
    Efficient algorithms for improved information retention in integration of incomplete omics datasets
    (Universitätsbibliothek der HSU/UniBw H, 2025-07-24)
    Helmut-Schmidt-Universität/Universität der Bundeswehr Hamburg; Neumann, Julia E.
    The acquisition of high-quality data in the biomedical field, particularly in omics studies such as proteomics or transcriptomics, poses a significant challenge due to incomplete measurements during data acquisition or simply small sample sizes. This issue results in datasets with low statistical power that are, in addition, often compromised by missing values, which impede downstream analysis and the accurate interpretation of biological phenomena. A common approach to mitigate such limitations is data integration, which combines multiple datasets to increase cohort sizes by incorporating data from different studies or laboratories. However, this approach introduces new challenges, notably the so-called batch effect, which introduces internal biases and obscures biological meaning. Moreover, infrequently measured features (e.g., proteins or genes) create additional gaps in the data during integration tasks. As the volume of available biological data continues to expand, there is an increasing need for computational methods capable of efficiently processing and analyzing these growing datasets. Expected future advances in data acquisition throughput necessitate the development of computationally efficient and robust algorithms. In addition, to ensure accessibility and broad adoption, it is crucial that bioinformatics tools be user-friendly, allowing researchers with varying levels of technical expertise to utilize them effectively. To this end, an integration and batch effect reduction tool, the HarmonizR algorithm, has been developed. This work describes the functionality that has been built to tackle the aforementioned issues. Dataset integration aims to increase cohort sizes and sample numbers, which is facilitated by the inclusion of a new unique removal approach. It overcomes prior limitations regarding data retention, greatly increasing HarmonizR's benefits as a pipeline tool used prior to data analysis by significantly expanding the number of features and data points that can be considered in any given study. This may be paired with the added functionality of accounting for user-defined experimental information such as treatment groups (i.e., covariate information) during adjustment, leading to more robust and higher-quality results. Regarding computational efficiency, a novel blocking approach exploits the given data structure to prepare the algorithm for current and future big-data challenges without negatively impacting adjustment quality. Furthermore, the algorithm's batch effect adjustment capabilities are proven effective on various omics types, with a notable extension to single-cell count datasets through additional adjustment methodology, as well as on non-biological data in the form of an attention-deficit/hyperactivity disorder study. To address remaining challenges, the newly developed BERT algorithm introduces a novel architectural approach, offering improvements in information retention and computational efficiency. A comparative analysis of BERT and HarmonizR explores the advantages of BERT in terms of feature and overall data retention as well as reduced runtimes, providing a valuable complement to the existing framework.
    Lastly, to enhance accessibility and ease of use, plugins for the popular Perseus software have been created and are described, enabling seamless integration of both algorithms into established bioinformatics workflows and specifically aiding researchers less familiar with the technical aspects of the presented algorithms and with bioinformatics in general. This work advances the field by enabling, for the first time, the adjustment of omics data with missing values without substantial information loss. As a result, researchers can now confidently merge datasets at a scale that was previously infeasible, unlocking new possibilities for large-scale, multi-cohort studies. These novel capabilities enable more comprehensive and statistically powerful biomedical analyses.
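    As a minimal illustration of the batch-adjustment problem the two tools address, the sketch below performs per-batch mean-centering while tolerating missing values. It is only a toy example of the general idea; it is not the HarmonizR or BERT algorithm, which rely on sub-matrix dissection and more sophisticated adjustment methodology.

```python
import numpy as np
import pandas as pd

def center_batches(data: pd.DataFrame, batch: pd.Series) -> pd.DataFrame:
    """Per-feature, per-batch mean-centering that ignores missing values.

    data  : features x samples matrix, may contain NaN
    batch : batch label per sample (indexed by the columns of `data`)
    """
    adjusted = data.copy()
    for b in batch.unique():
        cols = batch[batch == b].index
        # subtract each feature's batch-wise mean, computed over observed values only
        batch_means = data[cols].mean(axis=1, skipna=True)
        adjusted[cols] = data[cols].sub(batch_means, axis=0)
    # restore the overall per-feature mean so values stay on the original scale
    return adjusted.add(data.mean(axis=1, skipna=True), axis=0)

if __name__ == "__main__":
    data = pd.DataFrame(
        [[1.0, 1.2, np.nan, 3.1], [0.4, np.nan, 2.0, 2.2]],
        columns=["s1", "s2", "s3", "s4"],
    )
    batch = pd.Series(["A", "A", "B", "B"], index=data.columns)
    print(center_batches(data, batch))
```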
  • Publication
    Open Access
    Newsletter hpc.bw 02/2025
    (Universitätsbibliothek der HSU/UniBw H, 2025-07-22)
    Homes, Simon F.; Polgar, Bertalan; Mayr, Matthias
  • Publication
    Open Access
    Balancing energy and performance: efficient allocation of solver jobs on high-performance computing systems
    (Universitätsbibliothek der HSU/UniBw H, 2025-07-08)
    Many combinatorial optimization methods and related optimization software, particularly those for mixed-integer programming, exhibit limited scalability when utilizing parallel computing resources, whether across multiple cores or multiple nodes. Nevertheless, high-performance computing (HPC) systems continue to grow in size, with increasing core counts, memory capacity, and power consumption. Rather than dedicating all available resources to a single problem instance, HPC systems can be leveraged to solve multiple optimization instances concurrently – a common requirement in applications such as stochastic optimization, policy design for sequential decision making, parameter tuning, and optimization-as-a-service. In this work, we study strategies for efficiently allocating solver jobs across compute nodes, exploring how to schedule multiple optimization jobs across a given number of cores or nodes. Using metrics from performance monitoring and benchmarking tools as well as metered power distribution units (PDUs), we analyze trade-offs between energy consumption and runtime, providing insights into how to balance computational efficiency and sustainability in large-scale optimization workflows.
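    The job-allocation question discussed above can be made concrete with a simple scheduling sketch. The greedy longest-processing-time heuristic and the crude two-level power model below are generic illustrations under assumed runtimes and wattages; they are not the allocation strategies or the measurement setup studied in this work.

```python
import heapq

def lpt_schedule(job_runtimes, n_nodes):
    """Greedy longest-processing-time-first assignment of jobs to nodes.

    Returns (per_node_load, assignment), where assignment maps node -> job ids.
    """
    heap = [(0.0, i) for i in range(n_nodes)]          # (accumulated runtime, node id)
    heapq.heapify(heap)
    assignment = {i: [] for i in range(n_nodes)}
    # place the longest jobs first, always on the currently least-loaded node
    for job_id, runtime in sorted(enumerate(job_runtimes),
                                  key=lambda x: x[1], reverse=True):
        load, node = heapq.heappop(heap)
        assignment[node].append(job_id)
        heapq.heappush(heap, (load + runtime, node))
    per_node_load = {node: load for load, node in heap}
    return per_node_load, assignment

def estimated_energy(per_node_load, idle_watts=150.0, busy_watts=400.0):
    """Crude energy estimate: busy power while a node computes, idle power
    while it waits for the slowest node to finish (illustrative wattages)."""
    makespan = max(per_node_load.values())
    return sum(busy_watts * t + idle_watts * (makespan - t)
               for t in per_node_load.values())

if __name__ == "__main__":
    runtimes = [120.0, 300.0, 45.0, 200.0, 90.0, 60.0]   # assumed solver runtimes [s]
    loads, plan = lpt_schedule(runtimes, n_nodes=2)
    print(plan, loads, estimated_energy(loads))
```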
  • Publication
    Metadata only
    Node-level performance of adaptive resolution in ls1 mardyn
    (Springer, 2025-07-05)
    Hocks, Alex
    In this work we present a node-level performance analysis of an adaptive resolution scheme (AdResS) implemented in ls1 mardyn. This is relevant in simulations involving a very large number of particles or long timescales, because it lowers the computational effort required to calculate short-range interactions in molecular dynamics. An introduction to AdResS is given, together with an explanation of the coarsening technique used to obtain an effective potential for the coarse molecular model, i.e., the Iterative Boltzmann Inversion (IBI). This is accompanied by details of the implementation in our software package, as well as an algorithmic description of the IBI method and the simulation workflow used to generate results, which will be of interest to practitioners. Results are provided for a pure Lennard-Jones tetrahedral molecule coarsened to a single site, validated by verifying the correct reproduction of structural correlation functions, e.g., the radial distribution function. The performance analysis builds upon a literature-driven methodology, which provides a theoretical estimate for the speedup based on a reference simulation and the size of the full-particle region. Additionally, a strong scaling study was performed at node level, for which several configurations with vertical interfaces between the resolution regions are tested and different resolution widths are benchmarked. A comparison between several linked-cell traversal routines provided in ls1 mardyn was performed to showcase the effect of algorithmic aspects on the adaptive resolution simulation and on the estimated performance.
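    The IBI coarsening mentioned above follows the standard update rule V_{i+1}(r) = V_i(r) + k_B T ln(g_i(r)/g_target(r)). The sketch below writes this rule down directly; the damping factor, convergence tolerance, and handling of vanishing RDF values are illustrative assumptions rather than the settings used in ls1 mardyn.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def ibi_update(potential, rdf_current, rdf_target, temperature, alpha=0.2):
    """One Iterative Boltzmann Inversion step on a tabulated potential:
        V_{i+1}(r) = V_i(r) + alpha * k_B * T * ln(g_i(r) / g_target(r))
    alpha is a damping/mixing factor (illustrative choice).
    """
    eps = 1e-12  # avoid log of zero where the RDF vanishes
    correction = K_B * temperature * np.log((rdf_current + eps) / (rdf_target + eps))
    return potential + alpha * correction

def converged(rdf_current, rdf_target, tol=1e-3):
    """Simple convergence check on the pointwise RDF mismatch."""
    return np.max(np.abs(rdf_current - rdf_target)) < tol
```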
  • Publication
    Metadata only
    Static load balancing for molecular-continuum flow simulations with heterogeneous particle systems and on heterogeneous hardware
    Load balancing in particle simulations is a well-researched field, but its effect on coupled molecular-continuum simulations has been explored comparatively little. In this work, we implement static load balancing into the macro-micro-coupling tool (MaMiCo), a software for molecular-continuum coupling, and demonstrate its effectiveness in two classes of experiments by coupling with the particle simulation software ls1 mardyn. The first class comprises a liquid-vapour multiphase scenario, modelling evaporation of a liquid into vacuum and requiring load balancing due to spatially heterogeneous particle distributions. The second class considers the execution of molecular-continuum simulations on heterogeneous hardware whose components run at very different efficiencies. After a series of experiments with balanced and unbalanced setups, we find that our balanced configurations reduce runtime by 44% and 55%, respectively.
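    A static, weight-driven domain split of the kind described above can be sketched in a few lines: subdomains are sized once, before the run, so that each rank's share of the estimated work matches its relative speed. The simple cost model, the 1D decomposition, and all numbers below are illustrative assumptions; this is not the MaMiCo implementation.

```python
import numpy as np

def static_partition(cell_costs, rank_speeds):
    """Split a 1D row of cells into contiguous chunks so that each rank's
    share of the total cost is roughly proportional to its relative speed.

    cell_costs  : per-cell work estimate (e.g., expected particle count)
    rank_speeds : relative throughput of each rank/node
    Returns a list of (start, end) cell index ranges, one per rank.
    """
    total_cost = float(np.sum(cell_costs))
    # cumulative cost targets at which rank boundaries should fall
    targets = np.cumsum(rank_speeds) / np.sum(rank_speeds) * total_cost
    cumulative = np.cumsum(cell_costs)
    bounds, start = [], 0
    for t in targets[:-1]:
        end = int(np.searchsorted(cumulative, t, side="right"))
        bounds.append((start, end))
        start = end
    bounds.append((start, len(cell_costs)))
    return bounds

if __name__ == "__main__":
    # denser liquid region on the left, vapour on the right; second node twice as fast
    costs = [50, 48, 45, 20, 5, 4, 3, 2]
    print(static_partition(costs, rank_speeds=[1.0, 2.0]))
```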