Browsing by Organization "Department Elektrotechnik - Informatik"
Now showing 1 - 10 of 54
Publication Open Access
Adaptive time-triggered network-on-chip-based multi-core architecture: enhancing safety and energy efficiency
Real-time computing systems are designed to meet strict timing constraints and respond to events or inputs within specified deadlines. These systems are commonly used in safety-critical applications such as spacecraft, medical devices, industrial control, and automotive systems. Engineers rely on various scheduling techniques to ensure that timing constraints are met. One such technique is static resource allocation in time-triggered systems. Static resource allocation offers valuable advantages in terms of system dependability by minimizing message congestion and contention, enabling efficient resource usage in network-on-chip (NoC) architectures. This is achieved through the pre-allocation of resources and scheduling of tasks, resulting in improved system throughput and reduced jitter. The time-triggered concept in NoC architectures provides precise knowledge about the permitted points in time for message exchanges between cores, serving as a fundamental building block for fault containment, real-time support, and enhanced system performance. While static resource allocation excels in minimizing congestion and contention and contributes to system dependability, it may pose challenges in accommodating dynamic workloads and evolving requirements. Additionally, it can limit the achievement of fault tolerance, a crucial aspect of ensuring safety in safety-critical systems. To address these limitations, this thesis focuses on developing fault tolerance and energy-saving techniques tailored specifically for NoC-based multi-core architectures to enhance their safety and energy efficiency. The main goal is to incorporate fault tolerance mechanisms, such as adaptation and redundancy, into time-triggered systems without compromising the benefits of static resource allocation.
The adaptation technique within the NoC is designed to support multiple schedules, allowing the NoC to switch schedules during run-time in response to context events, such as permanent faults in NoC resources (e.g., routers, links, network interfaces, and cores). By dynamically reconfiguring the schedule upon the occurrence of a permanent fault, the faulty component is effectively isolated, and tasks or messages are redistributed to other available resources. This ensures the system’s operational continuity despite faults that could lead to message corruption, delays, or losses within NoC resources. This adaptation technique improves the system’s safety by providing flexibility in resource allocation without sacrificing the benefits of static resource allocation. Furthermore, this thesis incorporates seamless redundancy techniques to enhance the system’s safety, especially in scenarios involving transient and permanent faults. This technique selectively applies message replication and fusion to safety-critical messages at the network interface, minimizing overhead in non-critical parts of the system. It safeguards critical data from potential failures caused by message corruption, delays, and losses in routers or links during message exchanges. The thesis also focuses on improving energy efficiency in multi-core chips by providing low-power services. By incorporating time-triggered communication into NoC-based multi-core architectures, deterministic communication is achieved by scheduling the message’s injection time and specifying the frequency to be used by each router at different points in time. This predetermined frequency in the schedule allows routers to adjust their frequencies accordingly during their active time and to clock gate the idle routers, enhancing energy efficiency and preserving the deterministic behaviour of the NoC communication. 
Moreover, the adaptation techniques in the NoC are used to reconfigure the operating frequency of the NoC based on workload or power requirement variations by switching between schedules, further optimizing energy consumption. Integrating features such as time-triggered capability, adaptation, time-triggered frequency scaling, and seamless redundancy mechanisms into NoC-based multi-core architectures represents a significant advancement over the current state of the art. The results of this work have significant implications for applications relying on high-performance, safe, and energy-efficient multi-core systems in various domains, such as healthcare and transportation.
Source Type: Doctoral Thesis
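The schedule-switching adaptation described in this abstract can be sketched as follows. This is an illustrative outline only, not the thesis's implementation: the schedule names, message IDs, router IDs, and data layout are all invented for the example. Several schedules are precomputed offline, and a context event (here a permanent router fault) selects a stored schedule whose routes avoid the faulty resource.

```python
# Each stored schedule maps a message to (injection_time, route as router list).
# Names and values are hypothetical.
SCHEDULES = {
    "nominal":  {"msg_a": (0, ["R0", "R1", "R2"]),
                 "msg_b": (4, ["R0", "R3", "R2"])},
    "avoid_R1": {"msg_a": (0, ["R0", "R3", "R2"]),
                 "msg_b": (6, ["R0", "R3", "R2"])},
}

def select_schedule(faulty_routers):
    """Pick the first stored schedule whose routes avoid all faulty routers."""
    for name, schedule in SCHEDULES.items():
        if all(not (set(route) & set(faulty_routers))
               for _, route in schedule.values()):
            return name
    raise RuntimeError("no stored schedule tolerates this fault set")

active = select_schedule(faulty_routers=[])           # nominal operation
after_fault = select_schedule(faulty_routers=["R1"])  # permanent fault in R1
```

Because every schedule is computed offline, the switch preserves the determinism of static resource allocation: the system only ever executes pre-verified configurations.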
Publication Open Access
Anomaly detection and event recognition in cars based on multimodal sensor data interpretation (2023)
The benefits of analyzing driving behavior extend across various sectors, including insurance, transportation planning, and autonomous vehicle development. Insurance companies can customize policies based on individual risk profiles, promoting safer driving habits. In fleet management, the analysis assists in risk control, regulatory compliance, and enhancing customer satisfaction. Additionally, it plays a pivotal role in detecting and preventing accidents, contributing to safer road environments. Over the past decades, significant advancements in Machine Learning (ML) techniques, particularly in learning relevant features for abstracted data representation, have been observed. The emergence of deep learning, utilizing Deep Neural Networks (DNNs), has further accelerated this trend, showcasing remarkable performance with ample data. This intersection has unlocked new possibilities for studying driving behavior, with ML playing a crucial role in extracting valuable insights from extensive driving data sets. However, applying ML in this domain presents challenges, which can be attributed to a variety of influencing factors. Driving involves a complex blend of cognitive, psychomotor, and perceptual activities that are hard to quantify and model accurately. Therefore, in this work, in-car sensor data is employed, as it is cost-efficient, widely accessible, and provides access to comprehensive vehicle parameters (e.g., speed and acceleration) with real-time data precision. Driving behavior is inherently subjective, exhibiting significant variability among individuals and even within the same individual under different circumstances, which makes the labelling of data difficult and unreliable.
To address this problem, this study leverages both supervised and unsupervised machine learning approaches and DNNs to detect all possible abnormal driving patterns in naturalistic driving data. Capturing comprehensive real-world driving data that can reflect the full range of these variables is a massive, if not impossible, task. Detailed recording of individual driving behaviors can raise significant privacy concerns, and a true ground truth of dangerous driving behavior raises ethical considerations. This work presents a naturalistic driving data set (performed by drivers spanning basic to professional skill levels) carefully assembled and supervised by experienced driving instructors. This data set encompasses annotated hazardous driving patterns derived from in-car sensors, which mitigates privacy concerns inherent in radar- or vision-based data modalities. Imbalanced data sets, a lack of positive (anomalous) samples, and interpretability issues in complex ML models further complicate the landscape. Therefore, comprehensive feature extraction methods using ML and DNNs are employed in this work to detect accidents within a naturalistic data set. This work aims to address these challenges and gaps in the literature, focusing on anomaly detection and event detection in driving behavior analysis. The investigation revolves around two categories of questions:
1. Utilizing primary in-car sensors with ML approaches:
- Is it feasible to use primary in-car sensors with ML approaches to detect abnormal driving patterns?
- Is there a benefit in employing unsupervised deep learning models for anomaly detection compared to traditional ML models?
- Can the proposed solution be applied to a benchmark driving data set effectively, considering the lack of labeled driving patterns?
2. Detecting real-world accidents based on primary in-car sensor data:
- Is it possible to detect real-world accidents using primary in-car sensor data?
- What is the best feature extraction method for accident detection, and which features contribute significantly to the classification result?
Each of these questions is examined in detail and has led to new insights, using advanced machine learning techniques to manage the complexity of detecting abnormal driving behaviour, including accidents. Chapter 2 unfolds into three significant sections, each offering valuable insights into anomaly detection in driving patterns. The initial segment introduces a foundational PRC framework for anomaly detection, achieving outstanding performance with supervised k-Nearest-Neighbors (kNN) and impressive results with an unsupervised Gaussian Mixture Model (GMM). The second section delves into unsupervised anomaly detection using the proposed PRC framework on unlabeled Controller Area Network Bus (CAN-Bus) signal values, emphasizing the effectiveness of Autoencoders (AEs), particularly the noteworthy Long Short-Term Memory Autoencoder (LSTM-AE), in detecting anomalous driving patterns based on speed and brake signals. The final section presents a benchmarking data set of naturalistic hazardous driving behavior, yielding remarkable results with Handcrafted Features (HC) classified by a Support Vector Machine (SVM). In chapter 3, a framework for accident detection using in-car sensors from a naturalistic data set is presented, employing diverse ML approaches. Notably, the combination of Convolutional Neural Network (CNN) features and an SVM classifier stands out, achieving impressive accuracy and showcasing promising performance given the reliance on naturalistic accidents and the limited samples of severe accidents recognized by only four basic in-car sensors. Interpretability studies reveal the complementary nature of traditional feature engineering and DNNs in extracting optimal features from different sensor channels, enhancing overall effectiveness.
Despite challenges in data acquisition and dealing with imbalanced data, this work significantly advances the exploration and benchmarking of anomaly and accident detection in naturalistic driving data sets. The outstanding results indicate strong potential for driving behavior analysis using ML and DNNs with in-car sensor data.
Source Type: Doctoral Thesis
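The distance-based side of the anomaly detection described in this abstract can be illustrated with a minimal sketch. This is not the thesis's code: the signal values, window size, and threshold are invented, and a simple nearest-neighbor distance stands in for the full kNN/GMM/autoencoder pipeline. Each fixed-length window of an in-car speed trace becomes a feature vector, and its anomaly score is the distance to the closest window observed during normal driving.

```python
import math

def windows(signal, size):
    """Slice a 1-D signal into overlapping fixed-length windows."""
    return [tuple(signal[i:i + size]) for i in range(len(signal) - size + 1)]

def nearest_distance(w, train):
    """Anomaly score: Euclidean distance to the closest training window."""
    return min(math.dist(w, t) for t in train)

# Hypothetical "normal driving" speed trace (km/h) used as training data.
normal_speed = [50, 51, 50, 49, 50, 51, 50, 50, 51, 50]
train = windows(normal_speed, size=3)

smooth = (50, 50, 51)       # resembles the training data
harsh_brake = (51, 20, 5)   # abrupt deceleration, unseen in training

score_smooth = nearest_distance(smooth, train)
score_brake = nearest_distance(harsh_brake, train)
is_anomaly = score_brake > 10 * max(score_smooth, 1e-9)
```

The same score-and-threshold pattern underlies autoencoder approaches as well, with reconstruction error replacing the nearest-neighbor distance.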
Publication Open Access
Bridging the Gap: Assessing the Realism of Simulators in Human-Computer Interaction (2025-11-07)
The "Automation Paradox" in safety-critical domains shifts human operators into supervisory roles, degrading their skills while still requiring them to act as the ultimate cognitive failsafe during critical events. This necessitates new methods to monitor the operator's cognitive state, but research is hampered by a "fidelity gap". High-fidelity simulators are often inaccessible, while accessible low-fidelity tools may not produce generalizable results, creating a need for a new approach to simulator-based research. This dissertation addresses this gap by proposing and validating the concept of psychophysiological realism, a criterion for simulator validity based on whether an environment feels real to an operator's cognitive and autonomic nervous systems. The core methodological contribution is a framework for instrumenting accessible simulation platforms with multimodal sensor suites, turning them into valid scientific instruments for assessing operator state and behavior. The research progressed through four empirical studies. First, the approach's feasibility was established by instrumenting a web-based emergency dispatch simulator (LstSim), showing it could capture psychophysiological responses to stress. This method was then validated by correlating objective respiration data, which increased by 13.08% under high load, with subjective NASA-TLX workload scores, confirming the simulator could reliably measure cognitive load. The framework's scalability was subsequently demonstrated by successfully generalizing the methodology to an Air Traffic Control (ATC) simulation with an expanded, integrated sensor suite. Finally, to assess behavioral fidelity, a study comparing cyclist movement in a virtual reality (VR) simulator to the real world empirically quantified a "fidelity gap".
It revealed that while core biomechanics transferred well, significant divergences emerged in higher-order control strategies and emotional responses. Collectively, this research delivers an integrated and validated methodology that connects the objective measurement of an operator's cognitive state within a simulator to the quantitative assessment of that environment's behavioral fidelity. The framework provides the sensory input for developing intelligent adaptive interfaces, enables more rigorous, state-aware training, and supports an evidence-based approach to designing cognitively ergonomic systems in safety-critical domains.Source Type: Doctoral Thesis
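The kind of validation analysis described in this abstract, relating a physiological signal to subjective workload ratings, can be sketched in a few lines. The per-participant numbers below are invented for illustration (the baseline/high-load pair is chosen so the percent change reproduces the 13.08% figure quoted above); only the two formulas, percent change and the Pearson correlation coefficient, are standard.

```python
def pct_change(baseline, high_load):
    """Relative increase of a signal between two load conditions, in percent."""
    return (high_load - baseline) / baseline * 100.0

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-participant respiration rates (breaths/min) and NASA-TLX scores.
resp = [14.0, 15.5, 13.2, 16.1, 14.8]
tlx  = [40,   62,   35,   70,   55]

r = pearson(resp, tlx)                          # strength of the association
change = pct_change(baseline=13.0, high_load=14.7)  # load-induced increase, %
```

A strongly positive r is what licenses the claim that the objective signal tracks the subjective workload scale.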
Publication Open Access
Diagnostic classifiers based on fuzzy Bayesian belief networks and deep neural networks for demand-controlled ventilation and heating systems (2021)
Behravan, Ali
The building sector and its embedded control systems, especially Heating, Ventilation, and Air-Conditioning (HVAC) systems, consume a considerable part of the global energy and produce gaseous emissions such as CO2. On the other hand, air exchange based on natural ventilation is a cost-efficient method to improve indoor air quality, dilute indoor CO2 concentration and odors, or remove pollutants or airborne virus particles (e.g., Covid-19) from building zones. This air exchange during the cold seasons imposes a heating load on the heating system that increases energy consumption. Therefore, optimization of HVAC systems to decrease harmful emissions while considering potential energy savings is vital. Moreover, if the CO2 generated by human metabolism is not properly controlled within limits, it can degrade indoor air quality, reduce the occupants' efficiency, lead to severe mental problems, or considerably impair the ability to think. Thus, implementing a robust ventilation control system for buildings, particularly crowded office buildings, is essential. Demand-Controlled Ventilation (DCV) systems are promising solutions that control and optimize ventilation rates based on thermal comfort and indoor air quality demands, with a high potential for energy saving. Many researchers in the literature study DCV systems or adaptive thermal control separately, while a comprehensive model containing both DCV and thermal control strategies is missing.
Therefore, this thesis combines DCV and heating systems with embedded sensors and actuators and fault injection capabilities in a simulation framework. Because such a complex system has numerous functions, inputs, and outputs, this enables an in-depth assessment of the involved components' functionality and effective parameters, especially in case of component failures. Indoor air quality and comfort parameters in an office building can be monitored and controlled in real time for various architectures based on a high-level specification of the building characteristics. The developed model is scalable based on a modular composability scheme. The user can generate different types of buildings with various architectures and many rooms and floors; the system model, fault injection capabilities, and diagnostic modules are automatically extended. The high complexity of DCV and heating systems with their many components makes them error-prone, more susceptible to faults, and more fragile. Faults in system components such as sensors and actuators can result in different types of failures with severe implications for efficiency: discomfort and performance degradation of occupants, energy waste, shortened component lifetime, and increased maintenance costs. Failure detection identifies that a fault in a system component has manifested itself, while fault diagnosis determines the type, severity, time of occurrence, and location of faults. The state of the art of fault diagnosis methods for building energy systems, e.g., HVAC systems, comprises data-driven and knowledge-driven diagnostic methods with corresponding strengths and shortcomings. The knowledge-driven methods are mainly based on expert knowledge and simulate the diagnostic reasoning of domain experts, with argumentation about uncertainties, diagnosis of different fault severities, and good understandability.
However, they require a greater, time-consuming effort to deeply understand the causal relationships among system inputs, faults, and symptoms. Moreover, knowledge-based methods still lack automatic strategies to improve efficiency, and they are less accurate than data-driven methods. The data-driven methods, on the other hand, depend on similarities and patterns, with high sensitivity to any change of pattern and higher accuracy than the knowledge-driven methods. However, the data-driven methods require a huge amount of data to train the neural network for fault classification, and they cannot provide the reason behind their results. In addition, data-driven strategies are black boxes with low understandability. The research gap filled by this thesis is therefore the combination of knowledge-driven and data-driven fault diagnosis in DCV and heating systems to gain the advantages of both categories. The diagnostic method presented in this thesis involves an automatic strategy with low expert effort that does not require an in-depth understanding of the causal relationships, unlike existing knowledge-driven methods, while offering high understandability and high accuracy compared to existing data-driven methods. The fault diagnosis method in this thesis combines a data-driven classifier with knowledge-driven inference, e.g., fuzzy logic and a Bayesian Belief Network (BBN), to provide an automatic diagnostic classifier that can diagnose any stuck-at or constant-valued faults in sensors and actuators. The combination of BBN and fuzzy logic analyzes the dependencies of the system signals based on mutual information theory. In offline mode, a Relation-Direction Probability (RDP) table for each fault class is computed and stored in an offline fault library. The online mode determines the similarities between the real-case RDP at runtime and the offline library's RDPs.
On the other hand, a data-driven strategy based on deep neural networks is established specifically to compare against and evaluate the performance of the presented composed diagnostic classifier. The data-driven classifier uses observed signals from faulty and healthy operation of the system to train and evaluate the designed neural network model. The diagnostic technique in this thesis is independent of historical data, independent of expert knowledge, and computing-resource efficient. For the evaluation, four types of stuck-at faults at different components, such as the temperature sensor, CO2 sensor, heater actuator, and damper actuator, with various fault values at different instants of time were investigated. A fault injection framework artificially injects the faults to support the diagnostic classifiers, e.g., for model training and evaluation. Results show that the combined classifier introduced in this thesis has performance comparable to the data-driven method while retaining the strengths of knowledge-driven methods.
Source Type: Doctoral Thesis
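The mutual-information dependency test mentioned in this abstract can be sketched as follows. This is an illustrative example, not the thesis's implementation: the discretized signal levels and their values are invented, and the standard discrete mutual-information formula is computed from joint occurrence counts. A clearly positive value suggests a dependency worth an edge in the Bayesian belief network.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Discrete mutual information (in bits) between two aligned label sequences."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# Hypothetical fuzzified (discretized) subdomains of two system signals.
temp_level = ["low", "low", "high", "high", "low", "high"]
heater_cmd = ["on",  "on",  "off",  "off",  "on",  "off"]

dependent = mutual_information(temp_level, heater_cmd)    # perfectly coupled
independent = mutual_information(temp_level, ["on"] * 6)  # constant signal
```

Here the coupled pair yields exactly 1 bit (each signal is a binary function of the other), while a constant signal carries no information about the temperature level.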
Publication Open Access
Entwurf, Modellbildung und Realisierung einer Asynchronmaschine mit Zahnspulenwicklung im Stator und Rotor (2023)
The general objective of the present thesis is to take a detailed look at an induction machine with tooth-coil windings in the stator and rotor. Special emphasis is placed on a multiphase, tooth-wound rotor design and its associated advantages. The starting point for the design and further consideration is the classical analytical treatment of the machine connected to the power system. With the help of the resulting air-gap values of the magnetomotive force and magnetic flux density, as well as the analytical equivalent circuit which uses the electrical terminals of the machine as a reference, the machine behavior, in particular the resulting torque, is modeled. The length of the active part, which can be significantly greater for a single-tooth-winding induction machine if the overall length of the machine is kept constant, is identified as an important design criterion. In addition, for the connection of the rotor winding, which has a major influence on the resulting torque curve of the machine, a distinction is made between an m-phase star connection and short-circuited tooth coils, and their advantages and disadvantages are worked out. Based on the identified analytical machine design, some promising machine designs are calculated in more detail. These machine designs are examined in detail for their torque behavior over speed and their torque ripple with the aid of the numerical finite element method. After these detailed calculations, a promising machine design was selected, and the design, calculation, and realization of a prototype machine are documented in detail. The realized prototype machine is measured at important operating points, and the measurement results as well as their deviations from the calculated and simulated values are discussed.
In addition, the prototype machine is compared theoretically with two reference machines which have a classic distributed winding design with a squirrel-cage rotor.
Source Type: Doctoral Thesis
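The kind of torque estimate obtainable from a per-phase equivalent circuit, as referenced in this abstract, can be sketched with the textbook simplified model (stator impedance and magnetizing branch neglected). This is not the thesis's model, and all parameter values are invented for illustration.

```python
import math

def airgap_torque(U_ph, f, p, R2, X2, s):
    """Three-phase air-gap torque of a p-pole-pair induction machine at slip s.

    Simplified equivalent circuit: rotor branch (R2/s + jX2) driven directly
    by the phase voltage U_ph; stator impedance and magnetizing branch neglected.
    """
    omega_s = 2 * math.pi * f / p                  # synchronous mechanical speed, rad/s
    I2_sq = U_ph ** 2 / ((R2 / s) ** 2 + X2 ** 2)  # squared rotor current
    return 3 * I2_sq * (R2 / s) / omega_s          # air-gap power / sync. speed

# Hypothetical 2-pole-pair, 50 Hz machine, 230 V phase voltage.
t_low = airgap_torque(U_ph=230, f=50, p=2, R2=0.5, X2=1.5, s=0.02)  # near rated slip
t_high = airgap_torque(U_ph=230, f=50, p=2, R2=0.5, X2=1.5, s=0.3)  # near pull-out slip
```

With these parameters the pull-out slip is R2/X2 = 1/3, so torque rises from the low-slip operating point toward s = 0.3, matching the familiar torque-slip curve shape discussed over speed in the thesis.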
Publication Open Access
Evolutionary algorithm for scheduling real-time applications in system of systems (2022)
In recent years, systems engineering and management have evolved from developing distributed systems to the integration of complex adaptive systems and the advent of Systems-of-Systems (SoS). SoS emerge from the collaboration of multiple systems with operational and managerial independence in order to accomplish a higher goal. SoS have been successfully deployed in different domains such as enterprise systems and smart cities. However, there is a critical challenge that must be tackled in order to adopt SoS in safety-relevant embedded applications: reliability and real-time capability are not addressed in SoS today. An open research challenge is the development of a distributed embedded system architecture for constantly evolving and dynamic SoS with support for verifiable real-time and reliability properties. The system architecture needs to support reliable closed-loop control with stringent real-time requirements for applications. Most existing scheduling solutions are developed for monolithic systems or complex systems with centralized authorities, which may violate the restrictions of SoS and may not satisfy its requirements. In this thesis, we develop an efficient heuristic approach for scheduling SoS applications with real-time and fault-tolerance requirements. In order to respect the SoS architectural restrictions, we model the scheduling decisions at two levels using a Genetic Algorithm (GA) optimizer as a solver, whose levels iteratively interact to reach a feasible and efficient schedule for the SoS. The computational results show an improvement in the average transmission makespan of SoS applications of up to 31 percent compared to state-of-the-art scheduling solutions in scenarios of different scales.
This work also investigates the capability of our scheduling approach in computing time-triggered schedules for a sequence of incrementally added SoS applications in a real-time SoS network. In this regard, a heuristic approach is developed at both scheduling levels to improve the schedulability of our algorithm by efficiently sparing free time slots on resources for upcoming applications. Testing the schedulability and timeliness of the new incremental scheduler on a set of applications shows improvements in schedulability of up to 50 percent. Furthermore, we design a fault-tolerant scheduling approach for real-time SoS applications to tolerate permanent faults. Accordingly, fault-tolerance techniques such as re-execution and replication are integrated into our two-level GA scheduling algorithm to enhance the reliability of the system while satisfying deadline constraints. The reliability is improved on average by 15 percent compared to the non-fault-tolerant scheduler in different scenarios.
Source Type: Doctoral Thesis
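A minimal genetic-algorithm sketch in the spirit of the scheduler described above (here a single level rather than the thesis's two interacting levels): a chromosome assigns each task/message to a resource, and the fitness is the makespan to be minimized. The problem instance, GA parameters, and operators are all invented for illustration.

```python
import random

DURATIONS = [4, 2, 7, 3, 5, 1]  # hypothetical task transmission times
N_RESOURCES = 3

def makespan(assign):
    """Completion time of the most loaded resource under this assignment."""
    load = [0] * N_RESOURCES
    for task, res in enumerate(assign):
        load[res] += DURATIONS[task]
    return max(load)

def evolve(generations=60, pop_size=20, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(N_RESOURCES) for _ in DURATIONS]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)
        survivors = pop[: pop_size // 2]            # elitist selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(DURATIONS))  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                  # point mutation
                child[rng.randrange(len(child))] = rng.randrange(N_RESOURCES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
```

For this instance the total work is 22 time units on 3 resources, so no schedule can beat a makespan of 8; the GA should land at or very near that bound.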
Publication Open Access
Execution environment for integrated real-time systems based on software-defined networking (2019)
Today there exists a wide range of industrial systems that are based on federated architectures, which means that each computing node in the system is exclusively assigned to one function. Due to the increasing computing capability of a single processor and the increasing number of processors on a single platform, extensive research on integrating multiple functions with different criticality levels on a shared platform has been carried out. For example, in the avionic domain, the development trend has moved from federated to integrated architectures. The ARINC 653 standard was released, which defines the execution environment for hosting several avionic software functions within a single computing node. ARINC 653 was successfully implemented (e.g., Airbus A380) and achieved its primary goals (cost and weight reduction, enabling modular certification). However, the existing execution environments based on an integrated architecture support only static system configurations. In specific domains like the railway industry, dynamic system adaptation is required during runtime, which affects both the application execution environment and the data communication mechanisms. In this dissertation, our focus is on an execution environment based on an integrated architecture, which guarantees the safe integration of mixed-criticality applications and also addresses the system reconfiguration problem. In order to close the research gap, we introduce an execution environment for integrated real-time applications by leveraging the Software-Defined Networking (SDN) paradigm. We extend the temporal and spatial isolation mechanisms from the application layer to the execution environment, so that the integrated applications share the computing node without interference.
For the data communication of the integrated applications, we propose a virtual switch supporting temporal and spatial isolation between data flows and leverage the SDN paradigm to address the reconfiguration requirements of data flows. In addition, we address the controlled import and export of messages between data flows in the proposed virtual switch. For the deterministic communication requirements of hard real-time applications, we propose a virtual switch that is IEEE 802.1Qbv and IEEE 802.1Qci capable according to the Time-Sensitive Networking (TSN) standards, in order to close the research gap of virtual switching that guarantees bounded delay with low jitter in an integrated architecture. In proof-of-concept implementations, we demonstrate the non-interference between applications in the execution environment by fault injection. In our virtual switch demonstrators, we evaluate the fundamental isolation mechanisms and the determinism of message switching, while measuring the overhead caused by message transmission as well as controlled data exchange; the measured overhead in the proposed virtual switch is less than 10 μs.
Source Type: Doctoral Thesis
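The IEEE 802.1Qbv mechanism mentioned in this abstract can be illustrated with a minimal sketch (not the thesis's implementation): a cyclic gate control list opens and closes per-traffic-class gates, so a frame may only be transmitted while its class's gate is open. The cycle length, window boundaries, and class assignments below are invented.

```python
CYCLE_US = 100  # hypothetical gating cycle length in microseconds

# (window start, window end, set of traffic classes whose gates are open)
GATE_CONTROL_LIST = [
    (0, 20, {7}),          # exclusive window for class 7 (hard real-time)
    (20, 100, {0, 1, 2}),  # shared window for best-effort classes
]

def gate_open(traffic_class, t_us):
    """True if the gate for this traffic class is open at absolute time t_us."""
    phase = t_us % CYCLE_US  # position within the repeating cycle
    return any(start <= phase < end and traffic_class in open_classes
               for start, end, open_classes in GATE_CONTROL_LIST)

rt_ok = gate_open(7, 110)      # phase 10: inside the class-7 window
be_blocked = gate_open(0, 110) # best-effort gate closed at the same instant
```

Because the list repeats every cycle, a hard real-time flow gets a contention-free window in every cycle, which is what bounds its delay and jitter.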
Publication Open Access
F-number and focal length of light field systems: a comparative study of field of view, light efficiency, signal to noise ratio, and depth of field (2022)
The paper discusses the light efficiency and signal-to-noise ratio (SNR) of light field imaging systems in comparison to classical 2D imaging, which necessitates the definition of focal length and f-number. A comparison framework between 2D imaging and arbitrary light field imaging systems is developed and exemplified for the kaleidoscopic and the afocal light field imaging architectures. Since the f-number, in addition to the light efficiency of the system, is conceptually linked to the depth of field, an appropriate depth-of-field interpretation for light field systems is discussed as well.
Source Type: Article
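The classical 2D-imaging quantities the article builds its comparison on can be written down directly. This sketch uses only the textbook definitions (f-number N = f/D, light gathering proportional to 1/N², hyperfocal distance from the thin-lens model); the numerical values are illustrative and not taken from the article.

```python
def f_number(focal_length_mm, aperture_diameter_mm):
    """N = f / D."""
    return focal_length_mm / aperture_diameter_mm

def relative_exposure(n1, n2):
    """How much more light an f/n1 system gathers than an f/n2 system (~1/N^2)."""
    return (n2 / n1) ** 2

def hyperfocal_mm(f_mm, N, coc_mm=0.03):
    """Hyperfocal distance H = f^2 / (N * c) + f for circle of confusion c."""
    return f_mm ** 2 / (N * coc_mm) + f_mm

N = f_number(50, 25)            # 50 mm lens with a 25 mm aperture -> f/2
gain = relative_exposure(2, 4)  # f/2 gathers 4x the light of f/4
H = hyperfocal_mm(50, N)        # depth-of-field proxy for that lens
```

These are exactly the quantities that become ambiguous for light field systems, which is why the article has to redefine focal length and f-number before the comparison can be made.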
Publication Open Access
Fault diagnosis services and realistic fault models for HVAC systems (2023)
Heating, Ventilation, and Air-Conditioning (HVAC) systems are large-scale distributed systems comprising distributed components, including controllers, sensors, and actuators, that must be coordinated to establish the intended behavior. Therefore, HVAC systems are subject to single and multiple faults affecting the electronics, potentially causing high energy consumption, occupant discomfort, degraded indoor air quality and thermal conditions, and risk to critical infrastructures. In addition, in large-scale critical infrastructures, HVAC systems serve an essential role in emergencies. Emergency reactions demand real-time response, consistency, and fault tolerance. Fault tolerance is essential for both operational faults and design faults. In the development phase of fault-tolerant systems, simulation is a common technique to obtain insights into system functionality, performance, and dependability. It saves time, reduces cost, and avoids the risks of carrying out tests in the presence of faults in real-world systems. As a result, fault injection in simulation environments is an effective experimental method to validate and evaluate the dependability of HVAC systems. Fault injection in a simulation offers high controllability and observability. It is thus ideal for an early dependability analysis and fault-tolerance evaluation. HVAC systems in critical infrastructures are safety-relevant systems that should guarantee adequate ventilation and air conditions for occupants. Accordingly, in this thesis, a simulation-based fault injection framework combining two techniques, simulator commands and simulation code modification with realistic fault patterns, is proposed and introduced as a generic and extendable framework. The fault injection framework is integrated and connected to simulation models of other electronic components via the connection of ports.
The fault injection framework is developed in a component-based structure, implemented and simulated in MATLAB/Simulink using Stateflow diagrams with healthy and faulty system states. To determine the fault attributes and the fault location, an automated fault injection algorithm is proposed and integrated with a system-model generation algorithm. The system structure is adaptable, and its parameters, such as the number of floors and the number of rooms on each floor, are defined based on the system requirements. An automated single/multiple fault injection algorithm triggers faults and supports a comprehensive range of faults with corresponding fault attributes, including the fault type, time, location, persistence, duration, interarrival time, and occurrence incidence. To validate the fault injection framework, a scenario-based approach is used to study the system impact and the quality of the services. Each scenario consists of multiple events and subevents that result in multiple fault injections. The fault injection framework considers a realistic fault model, adding white noise with a Gaussian distribution as signal uncertainty, and it supports reproducibility both for a set of specific fault scenarios and for random fault injection scenarios. The framework incorporates a multi-dimensional fault model and provides compatibility with a wide range of other simulation components. The experimental results of the single and multiple fault injection components demonstrate the correctness of the framework and capture the system behavior, accuracy, and other system parameters, such as the heater energy consumption and heater duty cycle, in the presence of different fault cases. The experimental results serve as a quantitative evaluation of key performance indicators (KPIs) such as energy efficiency, air quality, and thermal comfort. For example, combining a CO2 sensor fault with a heater actuator fault impacts energy consumption significantly, by more than 70%.
Furthermore, this thesis integrates a novel and generic fault diagnostic technique based on Fuzzy Bayesian Belief Network (FBBN) construction with a simulated system model as a monitoring approach that determines the causes of faulty operation from system observations and measurements. A data-driven classifier algorithm is also proposed and combined with knowledge-driven methods, including fuzzy theory and Bayesian belief networks, enabling accurate fault diagnosis in HVAC systems. In this thesis, the data-driven approach reduces time consumption through automation and through classification based on automated ranking methods. Fuzzy theory supports reasoning about uncertainties and divides the system attributes into several subdomains, which facilitates the probability calculations for continuous system attributes via appropriate likelihood membership functions based on the system specifications. These probabilities are used to construct the Bayesian belief network from the correlations of the fuzzified system attributes using mutual information theory: the mutual information of every pair of fuzzified subdomains is calculated, and a positive value indicates a strong dependency between two subdomains. Finally, fault injection supports the fault diagnosis technique by defining different fault cases and producing the faulty output data as a time series that includes all healthy and faulty system measurements. The FBBN algorithm specifies the relation, direction, and probability features of all fuzzified subdomains using the time series produced by injecting the different fault cases. The hybrid fault diagnostic technique uses a data-driven classifier in combination with fuzzy logic theory and a Bayesian belief network in offline and online modes. The offline mode trains an offline library based on the relation-direction-probability relationships of the subdomains.
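The mutual-information step above can be sketched in a few lines. This is a generic frequency-based estimator over discrete labels, not code from the thesis, and the attribute labels below are invented for illustration; a dependent pair of fuzzified subdomains yields positive mutual information, while an independent pair yields a value near zero.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information (in bits) between two sequences of fuzzified
    subdomain labels, estimated from joint and marginal frequencies."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# Perfectly dependent attributes -> 1 bit of mutual information
temp = ["low", "low", "high", "high"]
heater = ["on", "on", "off", "off"]
print(round(mutual_information(temp, heater), 3))  # 1.0
```

A positive value would add a directed edge between the corresponding subdomain nodes when constructing the Bayesian belief network.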
The online mode determines the faults in the offline library that are most similar to actual fault cases, based on the correlation of system attributes and the ranking method. The results show high accuracy in diagnosing permanent stuck-at faults in different HVAC system components.
Source Type: Doctoral Thesis
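The online similarity ranking described in this abstract can be sketched as follows. Correlation-based ranking against stored attribute signatures is one plausible reading of the method; the library structure, signature vectors, and all function names here are hypothetical illustrations, not the thesis's implementation.

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length attribute signatures."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def rank_faults(library, observed):
    """Rank offline-library fault cases by similarity of their attribute
    signatures to the observed signature (most similar first)."""
    return sorted(library, key=lambda name: pearson(library[name], observed),
                  reverse=True)

# Hypothetical offline library: fault case -> attribute signature
library = {
    "co2_sensor_stuck": [1.0, 0.9, 0.1],
    "heater_stuck_off": [0.1, 0.2, 1.0],
}
observed = [0.95, 0.85, 0.15]
print(rank_faults(library, observed))  # most similar case listed first
```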
Publication Open Access
Fault injection framework for time-triggered systems (2020)
This thesis presents a methodology and tool for verifying and validating the integrated system behaviour of time-triggered Ethernet networks. The determinism and sufficient bandwidth provided by time-triggered Ethernet networks make them appealing for building safety-critical systems in domains such as railway, aviation, health, and automotive. Many applications in these domains impose stringent dependability requirements. Therefore, verification and validation are often required at most stages of the development process when designing these systems. Due to the complexity of time-triggered network protocols, design engineers mostly employ formal methods and simulation as verification and validation techniques. However, these methods mainly verify and validate only certain functions of the time-triggered protocol, not the integrated system behaviour. The reasons stem from the downsides of these approaches: formal methods suffer from state-space explosion when modelling complex systems, and simulators do not sufficiently model certain complex functionality. Simulators also require cross-verification against a physical network to gain better confidence. Since evaluating the physical realisation of time-triggered Ethernet networks yields the best confidence levels, this work focuses on the use of fault injection on physical devices for this purpose. This work proposes a novel, topology-independent, cut-through fault injection framework that can be used to evaluate the integrated system behaviour of time-triggered Ethernet networks. It also describes a technique for failure detection in time-triggered networks during synchronisation startup, before the establishment of global time.
It furthermore presents a discussion of the experimental procedures and results that demonstrate the use of the fault injection framework for the evaluation of a selection of different use cases. The experiments carried out herein confirm that the novel fault injection framework surpasses other time-triggered Ethernet frameworks by satisfying a set of collective requirements, which mainly include low intrusiveness, portability, and the abstraction of the fault injection component from the network under test.
Source Type: Doctoral Thesis
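As a rough illustration of what a cut-through fault injector does at the frame level, the sketch below relays a frame between two ports and, when a campaign step requests it, corrupts a single bit in transit. The real framework operates on physical Ethernet hardware; this Python fragment and its function names are purely illustrative assumptions.

```python
def bit_flip(frame: bytes, byte_index: int, bit: int) -> bytes:
    """Corrupt one bit of a frame, emulating a transient link fault
    introduced by a cut-through device sitting between two ports."""
    corrupted = bytearray(frame)
    corrupted[byte_index] ^= 1 << bit
    return bytes(corrupted)

def forward(frame: bytes, inject: bool, byte_index: int = 0, bit: int = 0) -> bytes:
    # Pass-through path: the frame is relayed unchanged unless the current
    # fault-campaign step asks for injection on this frame.
    return bit_flip(frame, byte_index, bit) if inject else frame

frame = bytes([0xAB, 0x01, 0x02])
assert forward(frame, inject=False) == frame                      # transparent relay
assert forward(frame, inject=True, byte_index=1, bit=0) == bytes([0xAB, 0x00, 0x02])
```

The low-intrusiveness requirement corresponds to the transparent pass-through path: the network under test sees unmodified traffic except for the deliberately injected faults.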

