Qualitative monitoring of the consequences of AI solutions in safety-critical systems

Publication date
2023
Document type
Conference paper
Author
Tappe, Mark 
Kelm, Benjamin
Niggemann, Oliver 
Myschik, Stephan
Organisational unit
Informatik im Maschinenbau 
DTEC.bw 
URL
https://staff.fnwi.uva.nl/b.bredeweg/QR2023/pdf/12Tappe.pdf
URI
https://openhsu.ub.hsu-hh.de/handle/10.24405/20526
Conference
36th International Workshop on Qualitative Reasoning (QR 2023) at the European Conference on Artificial Intelligence (ECAI 2023), Kraków, Poland, October 1, 2023
Project
Künstliche Intelligenz für die Diagnose der Internationalen Raumstation ISS (Artificial Intelligence for Diagnosis of the International Space Station ISS) 
Publisher
Association for the Advancement of Artificial Intelligence
Peer-reviewed
✅
Part of the university bibliography
✅
Language
English
Keyword
dtec.bw
Abstract
Today, Cyber-Physical Systems (CPS) are often deployed in safety-critical settings. Increasingly, Artificial Intelligence (AI), and especially data-based methods such as Machine Learning (ML), is used to increase the adaptability of these systems. This immediately introduces a safety risk, since data-based methods usually learn a black-box model (e.g. a neural network or a reinforcement-learning policy). To still use these AI methods for tasks in safety-critical systems, such as anomaly detection, optimisation, or reconfiguration, a supervision tool is needed.
To enable safe operation of data-based ML algorithms and to make statements about the stability of the system, we present an implementation of qualitative monitoring of the system behaviour in the context of reconfiguration. This raises the next problem: qualitative state prediction tends to branch infinitely for complex systems. Our approach therefore limits the prediction to the states with immediate impact. To achieve this, and to visualise the effects for a supervision task, a virtual structure similar to a decision tree is built to give an overview of the upcoming predicted system states. In addition, the behaviour of the system variables is extracted from the qualitative states in order to determine the risk of each predicted state.
In summary, the algorithm acts as an independent supervision agent for various AI/ML algorithms and raises an alert when risks are detected during operation. We show that different reconfiguration options for a CPS with abnormal behaviour can be evaluated successfully in order to transfer the CPS as safely as possible to a new state.
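The abstract's core idea — expanding only the immediately reachable qualitative states, arranging them in a decision-tree-like structure, and rating each predicted state's risk from its variable trends — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all class names, the three-step magnitude scale, and the risk rule are assumptions for demonstration.

```python
from dataclasses import dataclass, field

@dataclass
class QualState:
    """One qualitative state: per-variable (magnitude, trend) pairs,
    e.g. {"pressure": ("high", "rising")}. Successors are kept as
    children, forming the decision-tree-like overview structure."""
    variables: dict
    children: list = field(default_factory=list)

def risk(state: QualState) -> str:
    """Toy risk rule (assumed): a variable at an extreme magnitude that
    is still moving away from nominal marks the state as critical."""
    for magnitude, trend in state.variables.values():
        if magnitude == "high" and trend == "rising":
            return "critical"
        if magnitude == "low" and trend == "falling":
            return "critical"
    return "safe"

def predict_immediate(state: QualState) -> list:
    """Expand only the immediate successors instead of the full
    (potentially infinite) branching: each variable with a non-steady
    trend may move one step along an assumed low/nominal/high scale."""
    order = ["low", "nominal", "high"]
    successors = []
    for name, (magnitude, trend) in state.variables.items():
        idx = order.index(magnitude)
        if trend == "rising" and idx < len(order) - 1:
            nxt = order[idx + 1]
        elif trend == "falling" and idx > 0:
            nxt = order[idx - 1]
        else:
            continue  # steady or already at the scale's end
        succ_vars = dict(state.variables)
        succ_vars[name] = (nxt, trend)
        successors.append(QualState(succ_vars))
    state.children = successors
    return successors

def evaluate(option: QualState) -> str:
    """Rate a reconfiguration option by the worst risk among the
    states it can reach immediately."""
    risks = [risk(s) for s in predict_immediate(option)] or [risk(option)]
    return "critical" if "critical" in risks else "safe"
```

A supervision agent along these lines would call `evaluate` on each candidate reconfiguration and prefer the option whose immediately reachable states carry the lowest risk, alerting when only critical options remain.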
Version
Published version
Access right on openHSU
Metadata only access
