Now showing 1 - 10 of 22
  • Publication
    Metadata only
    The Semantic Asset Administration Shell
    (Springer, 2019-11)
    Bader, Sebastian R.
    The disruptive potential of the upcoming digital transformations for the industrial manufacturing domain has led to several reference frameworks and numerous standardization approaches. On the other hand, the Semantic Web community has made significant contributions in the field, for instance on data and service description, integration of heterogeneous sources and devices, and AI techniques in distributed systems. These two streams of work are, however, mostly unrelated and only superficially regard each other's requirements, practices and terminology. We address this gap by providing the Semantic Asset Administration Shell, an RDF-based representation of the Industrie 4.0 Component. We provide an ontology for the latest data model specification, create an RML mapping, supply resources to validate the RDF entities, and introduce basic reasoning on the Asset Administration Shell data model. Furthermore, we discuss the differing assumptions and presentation patterns, and analyze the implications of a semantic representation for the original data. We evaluate the resulting overhead and conclude that the semantic lifting is manageable even for restricted or embedded devices, and therefore meets the conditions of Industrie 4.0 scenarios.
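    To make the idea of an RDF-based Asset Administration Shell more concrete, the sketch below builds a tiny AAS-like description with rdflib. It is only an illustration: the namespace, class and property names (aas:AssetAdministrationShell, aas:hasSubmodel, aas:hasProperty) are invented placeholders, not the paper's actual ontology or RML mapping.

```python
# Minimal sketch of a "semantically lifted" Asset Administration Shell in RDF,
# using rdflib. All namespaces and terms below are illustrative assumptions.
from rdflib import Graph, Literal, Namespace, RDF

AAS = Namespace("https://example.org/aas#")    # hypothetical AAS ontology namespace
EX = Namespace("https://example.org/plant#")   # hypothetical asset namespace

g = Graph()
g.bind("aas", AAS)
g.bind("ex", EX)

# One shell describing one asset, with a single nameplate submodel property.
g.add((EX.shell1, RDF.type, AAS.AssetAdministrationShell))
g.add((EX.shell1, AAS.describes, EX.motor42))
g.add((EX.shell1, AAS.hasSubmodel, EX.nameplate))
g.add((EX.nameplate, RDF.type, AAS.Submodel))
g.add((EX.nameplate, AAS.hasProperty, EX.serialNumber))
g.add((EX.serialNumber, AAS.value, Literal("SN-2019-0042")))

print(g.serialize(format="turtle"))
```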
  • Publication
    Metadata only
    Structuring reference architectures for the industrial Internet of Things
    (Basel, 2019-07-01)
    Bader, Sebastian R.; Lohmann, Steffen
    The ongoing digital transformation has the potential to revolutionize nearly all industrial manufacturing processes. However, its concrete requirements and implications have not yet been sufficiently investigated. In order to establish a common understanding, a multitude of initiatives have published guidelines, reference frameworks and specifications, each intending to promote its particular interpretation of the Industrial Internet of Things (IIoT). The inconsistent use of terminology, heterogeneous structures and diverging proposed processes have created an opaque landscape, with the consequence that both newcomers and experienced experts can hardly keep an overview of the amount of information and publications, or decide what is best to use and to adopt. This work contributes to the state of the art by providing a structured analysis of existing reference frameworks, their classifications and the concerns they target. We supply alignments of shared concepts, identify gaps, and give a structured mapping of the concerns addressed in each part of the respective reference architectures. Furthermore, linking relevant industry standards and technologies to the architectures allows a more effective search for specifications and guidelines and supports direct technology adoption.
  • Publication
    Metadata only
    DLUBM: A benchmark for distributed linked data knowledge base systems
    (Springer, 2017-10-21)
    Keppmann, Felix Leif; Harth, Andreas
    Linked Data is becoming a stable technology alternative and is no longer only an innovation trend. More and more companies are looking into adopting Linked Data as part of the new data economy. Driven by the growing availability of data sources, solutions are constantly being developed or improved in order to support the need for data exchange in both web and enterprise settings. Unfortunately, the choice of whether to use Linked Data is currently more an educated guess than a fact-based decision. Therefore, the provisioning of open benchmarking tools and reports, which allow developers to assess the fitness of existing solutions, is key for pushing the development of better Linked Data-based approaches and solutions. To this end, we introduce a novel Linked Data benchmark, Distributed LUBM (DLUBM), which enables the reproducible creation and deployment of distributed, interlinked LUBM datasets. We provide a system architecture for distributed Linked Data benchmark environments, accompanied by guiding design requirements. We instantiate the architecture with the actual DLUBM implementation and evaluate a Linked Data query engine via DLUBM.
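    As a loose illustration of what a distributed, interlinked benchmark environment lets one measure, the sketch below times a simple link traversal over interlinked RDF documents with rdflib. The seed URI, the interlink predicate and the traversal limit are invented assumptions, not the actual DLUBM deployment or its query engine setup.

```python
# Rough sketch of the kind of measurement a DLUBM-style setup enables:
# timing breadth-first link traversal over interlinked RDF documents.
import time
from collections import deque
from rdflib import Graph, URIRef

SEED = URIRef("http://example.org/dlubm/university0")  # hypothetical entry point
FOLLOW = URIRef("http://example.org/dlubm/linkedTo")   # hypothetical interlink predicate

def traverse(seed, max_docs=10):
    """Fetch interlinked documents breadth-first and return the merged graph."""
    merged, queue, seen = Graph(), deque([seed]), set()
    while queue and len(seen) < max_docs:
        uri = queue.popleft()
        if uri in seen:
            continue
        seen.add(uri)
        doc = Graph().parse(str(uri))          # dereference the document
        merged += doc                          # add its triples to the result
        for _, _, target in doc.triples((None, FOLLOW, None)):
            queue.append(target)               # follow outgoing interlinks
    return merged

start = time.perf_counter()
g = traverse(SEED)
print(f"{len(g)} triples after {time.perf_counter() - start:.2f}s of traversal")
```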
  • Publication
    Metadata only
    On Automating Decentralized Multi-Step Service Combination
    (IEEE, 2017-09-07)
    Philipp, Patrick; Rettinger, Achim
    Information on the Web is heterogeneous and available in constantly increasing quantities. Consequently, there are numerous, partly redundant data analytics services, each optimized for data with certain characteristics. Often, analytics tasks require multiple services to be pipelined to find a solution, where combinations of exchangeable services for single steps might outperform single-service predictions. This work proposes a Multi-Agent System (MAS) view of this setting, in which decentralized agents manage services and have to coordinate their decisions to reach a consensus. We first propose a supervised method for service accuracy estimation that exploits locality-sensitive features of the training data. Given a committee of services managed by agents, we then develop coordination strategies to handle conflicting confidences and reduce erroneous predictions due to service correlation. We evaluate our approach with Named Entity Recognition (NER) and Named Entity Disambiguation (NED) services on text corpora with heterogeneous characteristics (news articles and tweets). Our empirical results improve on the out-of-the-box performance of the original services.
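    One possible reading of the coordination step described here, reduced to a sketch: each agent reports its service's prediction together with an estimated confidence for the given input, and a simple consensus rule picks the label with the highest summed confidence. The services, labels and confidence values are invented for illustration and do not reproduce the paper's accuracy estimation or correlation handling.

```python
# Confidence-weighted consensus over predictions from multiple services (sketch).
from collections import defaultdict

def combine(predictions):
    """predictions: list of (label, confidence) pairs from different services."""
    scores = defaultdict(float)
    for label, confidence in predictions:
        scores[label] += confidence
    return max(scores, key=scores.get)

# Three hypothetical NER services disagreeing on the type of the mention "Paris".
votes = [("LOCATION", 0.9), ("PERSON", 0.4), ("LOCATION", 0.7)]
print(combine(votes))  # -> LOCATION
```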
  • Publication
    Metadata only
    Annotating sBPMN elements with their likelihood of occurrence
    (CEUR, 2017-09-01)
    Weller, Tobias
    Process Mining is a research discipline that aims to analyze business processes based on event logs. The event logs are used, among other things, to create models for predicting the next activity of a given process instance. Existing models use Bayesian Networks or Markov Chains to predict the next activity in a workflow. These models require knowledge about the occurrence of activities in the business process, which is usually based on expert knowledge or on previous workflows recorded in event logs. Based on previous work, we i) represent a business process in sBPMN, ii) extend our annotation tool to compute the likelihood of occurrence of activities in a business process and check for stochastic dependencies in the process, and iii) use the generated knowledge to annotate the business process.
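    The likelihood-of-occurrence computation described here can be pictured in a few lines of code: count in how many traces of an event log each activity appears, and estimate first-order transition probabilities between activities. The toy traces below are invented, and this is not the authors' annotation tool.

```python
# Estimating activity occurrence likelihoods and transition probabilities
# from a toy event log (each trace is one process instance).
from collections import Counter, defaultdict

traces = [
    ["register", "check", "approve", "archive"],
    ["register", "check", "reject"],
    ["register", "check", "approve", "archive"],
]

# Fraction of process instances in which each activity occurs at least once.
occurrence = Counter(a for trace in traces for a in set(trace))
likelihood = {a: n / len(traces) for a, n in occurrence.items()}

# First-order (Markov) transition counts between consecutive activities.
transitions = defaultdict(Counter)
for trace in traces:
    for current, nxt in zip(trace, trace[1:]):
        transitions[current][nxt] += 1

print(likelihood["approve"])                    # 2/3 of instances contain "approve"
total = sum(transitions["check"].values())
print(transitions["check"]["approve"] / total)  # P(next = approve | current = check)
```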
  • Publication
    Metadata only
    Supporting the dispatching process for maintenance technicians in Industrie 4.0
    (Karlsruher Institut für Technologie, 2017-01-01)
    Bader, Sebastian; Vössing, Michael; Wolff, Clemens; Walk, Jannis
    The ongoing digitalization of industrial manufacturing has opened up new possibilities for data analysis and the development of new business processes, especially in industrial maintenance. So far, these opportunities have not resulted in tangible business value. One of the main challenges is that downstream processes have not yet been able to manage the inherent complexity of real-world maintenance delivery. In this paper, we present a research agenda towards new business processes, based on integration procedures for heterogeneous data and analytic components using "Industrie 4.0"-compliant interfaces. We further outline a decision support system for knowledge workers that is able to substantiate tacit knowledge by evaluating how different planning strategies and external effects influence a modelled field service network.
  • Publication
    Metadata only
    Toward cognitive pipelines of medical assistance algorithms
    (Springer, 2016-09-01)
    Philipp, Patrick; Katic, Darko; Weber, Christian; Götz, Michael; Rettinger, Achim; Speidel, Stefanie; Kämpgen, Benedikt; Nolden, Marco; Wekerle, Anna Laura; Dillmann, Rüdiger; Kenngott, Hannes; Müller, Beat; Studer, Rudi
    Purpose: Assistance algorithms for medical tasks have great potential to support physicians in their daily work. However, medicine is also one of the most demanding domains for computer-based support systems, since medical assistance tasks are complex and the practical experience of the physician is crucial. Recent developments in the area of cognitive computing appear to be well suited to tackle medicine as an application domain. Methods: We propose a system based on the idea of cognitive computing, consisting of auto-configurable medical assistance algorithms and their self-adapting combination. The system enables the automatic execution of new algorithms, provided they are made available as Medical Cognitive Apps and are registered in a central semantic repository. Learning components can be added to the system to optimize the results in cases where numerous Medical Cognitive Apps are available for the same task. Our prototypical implementation is applied to the areas of surgical phase recognition based on sensor data, and image processing for tumor progression mappings. Results: Our results suggest that such assistance algorithms can be automatically configured into execution pipelines, candidate results can be automatically scored and combined, and the system can learn from experience. Furthermore, our evaluation shows that the Medical Cognitive Apps provide the same correct results as in local execution and run in a reasonable amount of time. Conclusion: The proposed solution is applicable to a variety of medical use cases and effectively supports the automated and self-adaptive configuration of cognitive pipelines based on medical interpretation algorithms.
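    The pipeline idea of registering candidate algorithms for a task, executing all of them and keeping the best-scored result can be sketched as follows. The registry, the task name and the scores are invented placeholders and do not reflect the actual Medical Cognitive App interface or the semantic repository used in the paper.

```python
# Sketch: register candidate algorithms per task, run all, keep the best-scored result.
from typing import Callable, Dict, List, Tuple

REGISTRY: Dict[str, List[Callable]] = {}

def register(task: str):
    """Decorator that registers an algorithm as a candidate for a task."""
    def wrap(fn):
        REGISTRY.setdefault(task, []).append(fn)
        return fn
    return wrap

@register("phase-recognition")
def sensor_based(data) -> Tuple[str, float]:
    return "suturing", 0.8          # (predicted phase, self-reported score)

@register("phase-recognition")
def video_based(data) -> Tuple[str, float]:
    return "knot-tying", 0.6

def run(task: str, data):
    """Execute all registered candidates and keep the best-scored result."""
    results = [fn(data) for fn in REGISTRY.get(task, [])]
    return max(results, key=lambda r: r[1])

print(run("phase-recognition", data=None))  # -> ('suturing', 0.8)
```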
  • Publication
    Metadata only
    Semantic Technologies for Realising Decentralised Applications for the Web of Things
    (IEEE, 2016-07-02)
    Keppmann, Felix Leif; Harth, Andreas
    The vision of the Internet of Things (IoT) promises the capability of connecting billions of devices, resources and things. In realising this vision, interoperability between devices is currently being neglected: the heterogeneous landscape of things leads to the proliferation of isolated islands of custom IoT solutions. A first step towards enabling some interoperability is to connect things to the Web and to use the Web stack, thereby conceiving the so-called Web of Things (WoT). However, even when homogeneous access is achieved through Web protocols, a common understanding is still missing. In addition, decentralised applications, as advocated by the IoT vision, and a priori unknown requirements of specific integration scenarios demand new concepts for the adaptation of things at runtime. Our work focuses on two main aspects: overcoming not only data but also device and interface heterogeneity, and enabling adaptable and scalable decentralised WoT applications. To this end, we present an approach for realising decentralised WoT applications based on three main building blocks: 1) semantics of the devices' capabilities and interfaces, 2) rules that enable embedding controller logic within devices' interfaces to support decentralised applications, and 3) support for reconfiguring the controller logic at runtime to customise and adapt the application. We show how our approach can be applied by introducing a reference architecture, and provide a thorough evaluation in terms of a proof-of-concept implementation of an example use case and performance tests.
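    The second and third building blocks (controller logic embedded as rules, reconfigurable at runtime) can be caricatured in a few lines: a device evaluates a replaceable set of rules against its current state. The state keys, actions and rule format are invented for illustration; the paper realises this with semantic descriptions and Web technologies rather than Python callables.

```python
# Sketch: controller logic as a replaceable rule set evaluated against device state.
from typing import Callable, Dict, List, Optional

Rule = Callable[[Dict[str, float]], Optional[str]]

class Device:
    def __init__(self, rules: List[Rule]):
        self.rules = rules

    def reconfigure(self, rules: List[Rule]):
        """Replace the embedded controller logic at runtime."""
        self.rules = rules

    def step(self, state: Dict[str, float]) -> List[str]:
        """Evaluate every rule against the state and collect triggered actions."""
        return [a for a in (rule(state) for rule in self.rules) if a]

cooling_rule: Rule = lambda s: "fan_on" if s.get("temp", 0) > 30 else None

dev = Device([cooling_rule])
print(dev.step({"temp": 35}))                                     # ['fan_on']
dev.reconfigure([lambda s: "alert" if s.get("temp", 0) > 40 else None])
print(dev.step({"temp": 35}))                                     # []
```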
  • Publication
    Metadata only
    Teaching linked open data using open educational resources
    (Springer, 2016-03-19)
    Mikroyannidis, Alexander; Domingue, John; Norton, Barry; Simperl, Elena
    Recent trends in online education have seen the emergence of Open Educational Resources (OERs) and Massive Open Online Courses (MOOCs) as an answer to the needs of learners and educators for open and reusable educational material, freely available on the web. At the same time, Big Data and the new analytics and business intelligence opportunities that they offer are creating a growing demand for data scientists possessing skills and detailed knowledge in this area. This chapter presents a methodology for the design and implementation of an educational curriculum about Linked Open Data, supported by multimodal OERs. These OERs have been implemented as a combination of living learning materials and activities (eBook, online courses, webinars, face-to-face training), produced via a rigorous process and validated by the data science community through continuous feedback.
  • Publication
    Metadata only
    Cognitive tools pipeline for assistance of mitral valve surgery
    (Society of Photo-optical Instrumentation Engineers (SPIE), 2016)
    Schoch, Nicolai; Philipp, Patrick; Weller, Tobias; Engelhardt, Sandy; Volovyk, Mykola; Fetzer, Andreas; Nolden, Marco; De Simone, Raffaele; Wolf, Ivo; Rettinger, Achim; Studer, Rudi; Heuveline, Vincent
    For cardiac surgeons, mitral valve reconstruction (MVR) surgery is a highly demanding procedure, in which an artificial annuloplasty ring is implanted onto the mitral valve annulus to re-enable the valve's proper closing functionality. For a successful operation, the surgeon has to keep track of a variety of relevant impact factors, such as patient-individual medical history records, valve geometries, or tissue properties of the surgical target, and, based thereon, deduce the type and size of the best-suited ring prosthesis according to practical surgical experience. With this work, we aim at supporting the surgeon in selecting this ring prosthesis by means of a comprehensive information processing pipeline. It gathers all available patient-individual information and mines this data according to 'surgical rules', which represent published MVR expert knowledge and recommended best practices, in order to suggest a set of potentially suitable annuloplasty rings. Subsequently, these rings are employed in biomechanical MVR simulation scenarios, which simulate the behavior of the patient-specific mitral valve subjected to the respective virtual ring implantation. We present the implementation of our deductive system for MVR ring selection and show how it is integrated into a cognitive data processing pipeline architecture, which is built following Linked Data principles in order to facilitate holistic information processing of heterogeneous medical data. Using MVR surgery as an example, we demonstrate the ease of use and applicability of our development. We expect this holistic information processing approach to substantially support patient-specific decision making in MVR surgery. © 2016 SPIE.
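    The rule-based pre-selection step described here can be pictured as predicates over patient data that rule candidate prostheses in or out before simulation. The patient fields, ring names and thresholds below are invented placeholders, not clinical guidance and not the paper's actual 'surgical rules'.

```python
# Sketch: applying simple rule predicates to filter candidate annuloplasty rings
# before handing them to a (here omitted) biomechanical simulation step.
patient = {"annulus_diameter_mm": 34, "etiology": "degenerative"}

candidate_rings = [
    {"name": "ring_A", "size_mm": 32, "indicated_for": {"degenerative"}},
    {"name": "ring_B", "size_mm": 36, "indicated_for": {"degenerative", "ischemic"}},
    {"name": "ring_C", "size_mm": 28, "indicated_for": {"ischemic"}},
]

rules = [
    lambda p, r: abs(r["size_mm"] - p["annulus_diameter_mm"]) <= 2,  # size match
    lambda p, r: p["etiology"] in r["indicated_for"],                # indication match
]

suitable = [r["name"] for r in candidate_rings if all(rule(patient, r) for rule in rules)]
print(suitable)  # -> ['ring_A', 'ring_B'], the set passed on to simulation
```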