Tools and Methods for Analysis, Debugging, and Performance Improvement of Equation-Based Models

Author: Martin Sjölund
Publisher: Linköping University Electronic Press
Total Pages: 243
Release: 2015-05-11
Genre: Debugging in computer science
ISBN: 9175190710

Equation-based object-oriented (EOO) modeling languages such as Modelica provide a convenient, declarative method for describing models of cyber-physical systems. Because of the ease of use of EOO languages, large and complex models can be built with limited effort. However, current state-of-the-art tools do not provide the user with enough information when errors appear or simulation results are wrong. It is of paramount importance that such tools give the user enough information to correct errors or understand where the problems that lead to wrong simulation results are located. However, understanding the model translation process of an EOO compiler is a daunting task that requires knowledge not only of the numerical algorithms that the tool executes during simulation, but also of the complex symbolic transformations being performed. As part of this work, methods have been developed and explored where the EOO tool, an enhanced Modelica compiler, records the transformations during the translation process in order to provide better diagnostics, explanations, and analysis. This information is used to generate better error messages during translation. It is also used to provide better debugging for a simulation that produces unexpected results or where numerical methods fail. Meeting deadlines is particularly important for real-time applications. It is usually essential to identify possible bottlenecks and either simplify the model or give hints to the compiler that enable it to generate faster code. When profiling and measuring execution times of parts of the model, the recorded information can also be used to find out why a particular system model executes slowly. Combined with debugging information, it is possible to find out why a particular system of equations is slow to solve, which helps in understanding what can be done to simplify the model. A tool with a graphical user interface has been developed to make debugging and performance profiling easier. Both debugging and profiling have been combined into a single view so that performance metrics are mapped to equations, which are in turn mapped to debugging information. The algorithmic part of Modelica was extended with meta-modeling constructs (MetaModelica) for language modeling. In this context, a quite general approach to debugging and compilation from (extended) Modelica to C code was developed. This makes it possible to use the same executable format for simulation executables as for compiler bootstrapping, where the compiler written in MetaModelica compiles itself. Finally, a method and tool prototype suitable for speeding up simulations has been developed. It works by partitioning the model at appropriate places and compiling a simulation executable for a suitable parallel platform.
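
The core idea of recording transformations for traceability can be illustrated with a minimal sketch; the class and rule names below are hypothetical and do not reflect the OpenModelica implementation, only the concept of logging each rewrite so diagnostics can map a generated equation back to its source form.

```python
# Minimal sketch (hypothetical names, not the OpenModelica API): log every
# symbolic rewrite of an equation so that a failing or slow equation in
# generated code can be traced back to the source model.

class TransformationTrace:
    """Log of (rule, before, after) rewrite steps per equation."""

    def __init__(self):
        self.steps = {}  # equation id -> list of (rule, before, after)

    def record(self, eq_id, rule, before, after):
        self.steps.setdefault(eq_id, []).append((rule, before, after))

    def explain(self, eq_id):
        """Replay the rewrite chain for one equation, oldest step first."""
        return "\n".join(
            f"  [{rule}] {before}  ->  {after}"
            for rule, before, after in self.steps.get(eq_id, [])
        )

trace = TransformationTrace()
# The compiler would record steps like these while flattening and causalizing:
trace.record("eq1", "flatten",  "R1.v = R1.R * R1.i", "v1 = R*i")
trace.record("eq1", "solve(i)", "v1 = R*i",           "i = v1 / R")

# If eq1 later fails at runtime (e.g., division by zero because R = 0),
# the trace explains where the division came from:
print("eq1 was derived as follows:")
print(trace.explain("eq1"))
```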

Thermal Issues in Testing of Advanced Systems on Chip

Author: Nima Aghaee Ghaleshahi
Publisher: Linköping University Electronic Press
Total Pages: 219
Release: 2015-09-23
ISBN: 9176859495

Many cutting-edge computer and electronic products are powered by advanced Systems-on-Chip (SoC). Advanced SoCs combine superb performance with a large number of functions. This is achieved by the efficient integration of a huge number of transistors. Such very-large-scale integration is enabled by a core-based design paradigm as well as deep-submicron and 3D-stacked-IC technologies. These technologies are susceptible to reliability and testing complications caused by thermal issues. Three crucial thermal issues, related to temperature variations, temperature gradients, and temperature cycling, are addressed in this thesis. Existing test-scheduling techniques rely on temperature simulations to generate schedules that meet thermal constraints such as overheating prevention. The difference between the simulated temperatures and the actual temperatures is called the temperature error. For past technologies, this error was negligible. However, advanced SoCs experience large errors due to large process variations. Such large errors have costly consequences, such as overheating, and must be addressed. This thesis presents an adaptive approach to generate test schedules that handle such temperature errors. Advanced SoCs manufactured as 3D stacked ICs experience large temperature gradients. Temperature gradients accelerate certain early-life defect mechanisms. These mechanisms can be artificially accelerated using gradient-based, burn-in-like operations so that the defects are detected before shipping. Moreover, temperature gradients exacerbate some delay-related defects. In order to detect such defects, testing must be performed while appropriate temperature gradients are enforced. A schedule-based technique that enforces the temperature gradients for burn-in-like operations is proposed in this thesis. This technique is further developed to support testing for delay-related defects while appropriate gradients are enforced. The last thermal issue addressed by this thesis is related to temperature cycling. Temperature-cycling test procedures are usually applied to safety-critical applications to detect cycling-related early-life failures. Such failures affect advanced SoCs, particularly the through-silicon-via structures in 3D stacked ICs. An efficient schedule-based cycling-test technique that combines cycling acceleration with testing is proposed in this thesis. The proposed technique fits into existing 3D testing procedures and does not require temperature chambers. Therefore, the overall cycling-acceleration and testing cost can be drastically reduced. All the proposed techniques have been implemented and evaluated with extensive experiments based on the ITC’02 benchmarks as well as a number of 3D stacked ICs. Experiments show that the proposed techniques work effectively and reduce costs, in particular the costs related to addressing thermal issues and early-life failures. We have also developed a fast temperature-simulation technique based on a closed-form solution of the temperature equations. Experiments demonstrate that the proposed simulation technique reduces the schedule-generation time by more than half.
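
The appeal of a closed-form temperature solution can be illustrated with a single-node RC thermal model; this is a minimal sketch under that simplifying assumption (the thesis's solution covers the full multi-core case), and all parameter values are invented for illustration.

```python
import math

# Single-node RC thermal model (a simplifying assumption for this sketch).
# The heat equation C_th * dT/dt = P - (T - T_amb) / R_th has the closed form
#   T(t) = T_ss + (T0 - T_ss) * exp(-t / (R_th * C_th)),
#   with steady state T_ss = T_amb + P * R_th.

def temperature(t, t0, p, t_amb=45.0, r_th=2.0, c_th=0.05):
    """Junction temperature (deg C) after t seconds at constant power p (W)."""
    t_ss = t_amb + p * r_th            # steady-state temperature
    tau = r_th * c_th                  # thermal time constant (s)
    return t_ss + (t0 - t_ss) * math.exp(-t / tau)

# A scheduler can evaluate candidate schedules without numerical integration,
# e.g., how long a 25 W test pattern can run from 60 deg C before a core
# crosses a 90 deg C limit:
t = 0.0
while temperature(t, 60.0, 25.0) < 90.0 and t < 10.0:
    t += 0.001
print(f"limit reached after ~{t:.3f} s")   # ~0.195 s for these parameters
```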

Content Ontology Design Patterns: Qualities, Methods, and Tools

Author: Karl Hammar
Publisher: Linköping University Electronic Press
Total Pages: 261
Release: 2017-09-06
ISBN: 917685454X

Ontologies are formal knowledge models that describe concepts and relationships and enable data integration, information search, and reasoning. Ontology Design Patterns (ODPs) are reusable solutions intended to simplify ontology development and support the use of semantic technologies by ontology engineers. ODPs document and package good modelling practices for reuse, ideally enabling inexperienced ontologists to construct high-quality ontologies. Although ODPs are already used for development, challenges remain that have not been addressed in the literature. These research gaps include a lack of knowledge about (1) which ODP features are important for ontology engineering, (2) less experienced developers' preferences and barriers regarding ODP tooling, and (3) the suitability of the eXtreme Design (XD) ODP usage methodology in non-academic contexts. This dissertation aims to close these gaps by combining quantitative and qualitative methods, primarily based on five ontology engineering projects involving inexperienced ontologists. A series of ontology engineering workshops and surveys provided data about developer preferences regarding ODP features, ODP usage methodology, and ODP tooling needs. Other data sources are ontologies and ODPs published on the web, which have been studied in detail. To evaluate tooling improvements, experimental approaches provide data from comparisons of new tools and techniques against established alternatives. The analysis of the gathered data resulted in a set of measurable quality indicators that cover aspects of ODP documentation, formal representation or axiomatisation, and usage by ontologists. These indicators highlight quality trade-offs: for instance, between ODP Learnability and Reusability, or between Functional Suitability and Performance Efficiency. Furthermore, the results demonstrate a need for ODP tools that support three novel property specialisation strategies, and highlight the preference of inexperienced developers for template-based ODP instantiation, neither of which is supported in prior tooling. The studies also resulted in improvements to ODP search engines based on ODP-specific attributes. Finally, the analysis shows that XD should include guidance on developer roles and responsibilities in ontology engineering projects, suggestions on how to reuse existing ontology resources, and approaches for adapting XD to project-specific contexts.
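
To make template-based ODP instantiation concrete, here is a minimal sketch; the pattern, namespace, and class names are hypothetical, and real ODP tooling operates on OWL ontologies rather than plain Turtle strings.

```python
# Minimal sketch of template-based ODP instantiation: the pattern's classes
# and properties are placeholders that are renamed wholesale into the target
# namespace. The AgentRole-style pattern below is invented for illustration.

ODP_TEMPLATE = """\
{ns}:{Agent} a owl:Class .
{ns}:{Role} a owl:Class .
{ns}:performs{Role} a owl:ObjectProperty ;
    rdfs:domain {ns}:{Agent} ;
    rdfs:range  {ns}:{Role} .
"""

def instantiate(template, ns, **bindings):
    """Fill the ODP template with project-specific names."""
    return template.format(ns=ns, **bindings)

# Instantiating the hypothetical pattern for a hospital ontology:
print(instantiate(ODP_TEMPLATE, ns="hosp", Agent="Clinician", Role="Shift"))
```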

Distributed Moving Base Driving Simulators

Author: Anders Andersson
Publisher: Linköping University Electronic Press
Total Pages: 60
Release: 2019-04-30
ISBN: 9176850900

Development of new functionality and smart systems for different types of vehicles is accelerating with the advent of new emerging technologies such as connected and autonomous vehicles. To ensure that these new systems and functions work as intended, flexible and credible evaluation tools are necessary. One example of this type of tool is a driving simulator, which can be used for testing new and existing vehicle concepts and driver support systems. When a driver operates a driving simulator in the same way as they would a real vehicle in actual traffic, a realistic evaluation of the behavior under investigation is obtained. Two advantages of a driving simulator are (1) that the same situation can be repeated several times over a short period of time, and (2) that driver reactions can be studied during dangerous situations that could result in serious injuries if they occurred in the real world. An important component of a driving simulator is the vehicle model, i.e., the model that describes how the vehicle reacts to its surroundings and driver inputs. To increase the simulator realism or the computational performance, it is possible to divide the vehicle model into subsystems that run on different computers connected in a network. A subsystem can also be replaced with hardware using so-called hardware-in-the-loop simulation, and can then be connected to the rest of the vehicle model through a specified interface. The technique of dividing a model into smaller subsystems running on separate nodes that communicate through a network is called distributed simulation. This thesis investigates if and how a distributed simulator design might facilitate the maintenance and new development required for a driving simulator to keep up with the increasing pace of vehicle development. For this purpose, three different distributed simulator solutions have been designed, built, and analyzed with the aim of constructing distributed simulators, including external hardware, in which the simulation achieves the same degree of realism as in a traditional driving simulator. One of these simulator solutions has been used to create a parameterized powertrain model that can be configured to represent any of a number of different vehicles. Furthermore, the driver's driving task is combined with the powertrain model to monitor deviations. After the powertrain model was created, subsystems from a simulator solution and the powertrain model were transferred to a Modelica environment. The goal is to create a framework for requirement testing that guarantees sufficient realism, also for a distributed driving simulation. The results show that the distributed simulators we have developed work well overall, with satisfactory performance. It is important to manage the vehicle model and how it is connected to a distributed system. In the distributed driveline simulator setup, the network delays were so small that they could be ignored, i.e., they did not affect the driving experience. However, if the delays are gradually increased, a driver in the distributed simulator will change their behavior. The impact of communication latency on a distributed simulator also depends on the simulator application, where different usages of the simulator, i.e., different simulator studies, will place different demands. We believe that many simulator studies could be performed using a distributed setup. One issue is how modifications to the system affect the vehicle model and the desired behavior, which leads to the need for a methodology for managing model requirements. In order to detect model deviations in the simulator environment, a monitoring aid has been implemented to help notify test managers when a model behaves strangely or is driven outside of its validated region. Since the availability of distributed laboratory equipment can be limited, the possibility of using Modelica (an equation-based, object-oriented modeling language) for simulating subsystems is also examined. The Modelica implementation of the model has also been extended with requirements management, and a framework is proposed for automatically evaluating the model in a tool.
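
As an illustration of such a monitoring aid, the sketch below flags simulation samples that leave a validated operating region; the signal names and limits are hypothetical, not taken from the thesis.

```python
# Illustrative sketch (hypothetical signals and thresholds) of a monitoring
# aid that warns when the vehicle model is driven outside the region in which
# it has been validated.

VALIDATED_REGION = {
    "vehicle_speed": (0.0, 33.0),       # m/s, validated up to ~120 km/h
    "engine_speed":  (800.0, 6000.0),   # rpm
}

def check_sample(sample, region=VALIDATED_REGION):
    """Return a list of warnings for signals outside their validated range."""
    warnings = []
    for name, (lo, hi) in region.items():
        value = sample[name]
        if not lo <= value <= hi:
            warnings.append(f"{name} = {value} outside validated [{lo}, {hi}]")
    return warnings

# Example: one simulation step where the driver over-revs the powertrain model.
print(check_sample({"vehicle_speed": 25.0, "engine_speed": 6400.0}))
```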

Scalable and Efficient Probabilistic Topic Model Inference for Textual Data

Author: Måns Magnusson
Publisher: Linköping University Electronic Press
Total Pages: 75
Release: 2018-04-27
ISBN: 9176852881

Probabilistic topic models have proven to be an extremely versatile class of mixed-membership models for discovering the thematic structure of text collections. There are many possible applications, covering a broad range of areas of study: technology, natural science, social science, and the humanities. In this thesis, new efficient parallel Markov chain Monte Carlo inference algorithms are proposed for Bayesian inference in large topic models. The proposed methods scale well with the corpus size and can be used for other probabilistic topic models and other natural language processing applications. The proposed methods are fast, efficient, scalable, and converge to the true posterior distribution. In addition, a supervised topic model for high-dimensional text classification is proposed, with emphasis on interpretable document prediction using the horseshoe shrinkage prior in supervised topic models. Finally, we develop a model and inference algorithm that can model the agenda and framing of political speeches over time with a priori defined topics. We apply the approach to analyze the evolution of immigration discourse in the Swedish parliament by combining theory from political science and communication science with a probabilistic topic model.
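
For reference, the kind of sampler that such work parallelizes is the collapsed Gibbs sampler for latent Dirichlet allocation (LDA); below is a minimal serial sketch with a toy corpus and illustrative hyperparameters. This is standard LDA machinery, not the thesis's parallel algorithm.

```python
import numpy as np

# Minimal serial collapsed Gibbs sampler for LDA on a toy corpus.
rng = np.random.default_rng(0)
docs = [[0, 1, 2, 1], [2, 3, 3, 0]]          # word ids per document
V, K, alpha, beta = 4, 2, 0.1, 0.01          # vocab size, topics, priors

# Random initial topic assignments and the count matrices they induce.
z = [[int(rng.integers(K)) for _ in doc] for doc in docs]
ndk = np.zeros((len(docs), K))               # document-topic counts
nkw = np.zeros((K, V))                       # topic-word counts
nk = np.zeros(K)                             # topic totals
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        ndk[d, z[d][i]] += 1; nkw[z[d][i], w] += 1; nk[z[d][i]] += 1

for _ in range(100):                         # Gibbs sweeps
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]                      # remove token's assignment
            ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
            # Full conditional p(z = k | rest), up to a constant:
            p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            k = int(rng.choice(K, p=p / p.sum()))
            z[d][i] = k                      # add back with new assignment
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

print(ndk)                                   # documents' topic counts
```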

Companion Robots for Older Adults

Author: Sofia Thunberg
Publisher: Linköping University Electronic Press
Total Pages: 175
Release: 2024-05-06
ISBN: 9180755747

This thesis explores, through a mixed-methods approach, what happens when companion robots are deployed in care homes for older adults, by looking at the perspectives of key stakeholders. Nine studies are presented, with municipal decision makers, care staff, and older adults as participants. The studies have primarily been carried out in the field, in care homes and activity centres, where both qualitative (e.g., observations and workshops) and quantitative (surveys) data have been collected. The thesis shows that companion robots seem to be here to stay and that they can contribute to a higher quality of life for some older adults. It further identifies challenges, including a certain discrepancy between what decision makers want and what staff are able to facilitate. For future research and use of companion robots, it is key to evaluate each robot model and potential use case separately, to develop clear routines for how they should be used, and, most importantly, to let all stakeholders be part of the process. The knowledge contribution is a holistic view of how different actors affect each other when emerging robot technology is introduced in a care environment.

Studying Simulations with Distributed Cognition

Author: Jonas Rybing
Publisher: Linköping University Electronic Press
Total Pages: 115
Release: 2018-03-20
ISBN: 9176853489

Simulations are frequently used for training, performance assessment, and prediction of future outcomes. In this thesis, the term “human-centered simulation” is used to refer to any simulation in which humans and human cognition are integral to the simulation’s function and purpose (e.g., simulation-based training). A general problem for human-centered simulations is to capture the cognitive processes and activities of the target situation (i.e., the real-world task) and recreate them accurately in the simulation. The prevalent view within the simulation research community is that cognition consists of internal, decontextualized computational processes of individuals. However, contemporary theories of cognition emphasize the importance of the external environment, the use of tools, and social and cultural factors in cognitive practice. Consequently, there is a need for research on how such contemporary perspectives can be used to describe human-centered simulations, re-interpret theoretical constructs of such simulations, and direct how simulations should be modeled, designed, and evaluated. This thesis adopts distributed cognition as a framework for studying human-centered simulations. Training and assessment of emergency medical management in a Swedish context using the Emergo Train System (ETS) simulator was adopted as a case study. ETS simulations were studied and analyzed using the distributed cognition for teamwork (DiCoT) methodology with the goal of understanding, evaluating, and testing the validity of the ETS simulator. Moreover, to explore distributed cognition as a basis for simulator design, a digital re-design of ETS (DIGEMERGO) was developed based on the DiCoT analysis. The aim of the DIGEMERGO system was to retain the core distributed cognitive features of ETS, to increase validity and outcome reliability, and to provide a digital platform for emergency medical studies. DIGEMERGO was evaluated in three separate studies: first, a usefulness, usability, and face-validation study that involved subject-matter experts; second, a comparative validation study using an expert-novice group comparison; and finally, a transfer-of-training study based on self-efficacy and management performance. Overall, the results showed that DIGEMERGO was perceived as a useful, immersive, and promising simulator, with mixed evidence for validity, that demonstrated increased general self-efficacy and management performance following simulation exercises. This thesis demonstrates that distributed cognition, using DiCoT, is a useful framework for understanding, designing, and evaluating simulated environments. In addition, the thesis conceptualizes and re-interprets central constructs of human-centered simulation in terms of distributed cognition. In doing so, the thesis shows how distributed cognitive processes relate to the validity, fidelity, functionality, and usefulness of human-centered simulations. This thesis thus provides a new understanding of human-centered simulations that is grounded in distributed cognition theory.

Parameterized Verification of Synchronized Concurrent Programs

Author: Zeinab Ganjei
Publisher: Linköping University Electronic Press
Total Pages: 192
Release: 2021-03-19
ISBN: 9179296971

There is currently an increasing demand for concurrent programs. Checking the correctness of concurrent programs is a complex task due to the interleavings of processes. Violations of correctness properties in such systems can cause human or resource losses; therefore, it is crucial to check their correctness. Two main approaches to software analysis are testing and formal verification. Testing can help discover many bugs at a low cost. However, it cannot prove the correctness of a program. Formal verification, on the other hand, is the approach for proving program correctness. Model checking is a formal verification technique that is suitable for concurrent programs. It aims to automatically establish the correctness (expressed in terms of temporal properties) of a program through an exhaustive search of the behavior of the system. Model checking was initially introduced for the purpose of verifying finite-state concurrent programs, and extending it to infinite-state systems is an active research area. In this thesis, we focus on the formal verification of parameterized systems, that is, systems in which the number of executing processes is not bounded a priori. We provide fully automatic and parameterized model checking techniques for establishing the correctness of safety properties for certain classes of concurrent programs. We provide an open-source prototype for every technique and present our experimental results on several benchmarks. First, we address the problem of automatically checking safety properties for bounded as well as parameterized phaser programs. Phaser programs are concurrent programs that make use of the complex synchronization construct of Habanero Java phasers. For the bounded case, we establish the decidability of checking the violation of program assertions and the undecidability of checking deadlock-freedom. For the parameterized case, we study different formulations of the verification problem and propose an exact procedure that is guaranteed to terminate for some reachability problems even in the presence of unbounded phases and arbitrarily many spawned processes. Second, we propose an approach for the automatic verification of parameterized concurrent programs in which shared variables are manipulated by atomic transitions to count and synchronize the spawned processes. For this purpose, we introduce counting predicates, which relate counters that track the number of processes satisfying given properties to the variables directly manipulated by the concurrent processes. We then combine existing work on counter abstraction, predicate abstraction, and constrained monotonic abstraction, and build a nested counterexample-based refinement scheme to establish correctness. Third, we introduce Lazy Constrained Monotonic Abstraction for more efficient exploration of well-structured abstractions of infinite-state non-monotonic systems. We propose several heuristics and assess the efficiency of the proposed technique through extensive experiments using our open-source prototype. Lastly, we propose a sound but (in general) incomplete procedure for the automatic verification of safety properties for a class of fault-tolerant distributed protocols described in the Heard-Of (HO) model. The HO model is a popular model for describing distributed protocols. We propose a verification procedure that is guaranteed to terminate even for an unbounded number of processes executing the distributed protocol.
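
The flavor of counter abstraction underlying this line of work can be sketched in a few lines: a global state records only how many processes occupy each local state, together with the shared variables, and safety is checked by exploring these abstract states. The toy mutual-exclusion protocol below is invented for illustration and is not one of the thesis's benchmarks.

```python
from collections import deque

# Counter-abstract model: a state is ((n_idle, n_critical), lock). N processes
# move from 'idle' to 'critical' only when lock == 0, taking the lock; the
# safety property is that at most one process is ever critical.

def successors(state):
    (idle, crit), lock = state
    if idle > 0 and lock == 0:           # acquire: idle -> critical
        yield ((idle - 1, crit + 1), 1)
    if crit > 0:                         # release: critical -> idle
        yield ((idle + 1, crit - 1), 0)

def check_safety(n_processes):
    """BFS over counter-abstract states; return a bad state or None."""
    init = ((n_processes, 0), 0)
    seen, frontier = {init}, deque([init])
    while frontier:
        state = frontier.popleft()
        if state[0][1] > 1:              # more than one critical process
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None

print(check_safety(5))                   # None: mutual exclusion holds
```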

Analysis, Design, and Optimization of Embedded Control Systems

Author: Amir Aminifar
Publisher: Linköping University Electronic Press
Total Pages: 183
Release: 2016-02-18
Genre: Control systems
ISBN: 917685826X

Today, many embedded or cyber-physical systems, e.g., in the automotive domain, comprise several control applications sharing the same platform. It is well known that such resource sharing leads to complex temporal behaviors that degrade the quality of control and, more importantly, may even jeopardize stability in the worst case, if not properly taken into account. In this thesis, we consider embedded control or cyber-physical systems where several control applications share the same processing unit. The focus is on the control-scheduling co-design problem, where the controller and scheduling parameters are jointly optimized. The fundamental difference between control applications and traditional embedded applications motivates the need for novel methodologies for the design and optimization of embedded control systems. This thesis is one more step towards the correct design and optimization of embedded control systems. Both offline and online methodologies for embedded control systems are covered. The importance of considering both the expected control performance and stability is discussed, and a control-scheduling co-design methodology is proposed to optimize control performance while guaranteeing stability. Orthogonal to this, bandwidth-efficient stabilizing control servers are proposed, which support compositionality, isolation, and resource-efficiency in design and co-design. Finally, we extend the scope of the proposed approach to non-periodic control schemes and address the challenges in sharing the platform with self-triggered controllers. In addition to the offline methodologies, a novel online scheduling policy to stabilize control applications is proposed.
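
To see why scheduling can jeopardize stability, consider a minimal sketch: a discrete-time state-feedback controller for a double integrator that is stable at a short sampling period but loses stability when resource contention stretches the period. The gain and periods are illustrative, not from the thesis.

```python
import numpy as np

# Exact zero-order-hold discretization of the double integrator at period h:
#   A = [[1, h], [0, 1]],  B = [[h^2/2], [h]]
# with state feedback u = -K x. The closed loop x+ = (A - B K) x is stable
# iff the spectral radius of (A - B K) is below 1.

K = np.array([[1.0, 1.8]])               # fixed state-feedback gain

def spectral_radius(h):
    """Largest closed-loop eigenvalue magnitude at sampling period h."""
    A = np.array([[1.0, h], [0.0, 1.0]])
    B = np.array([[h * h / 2.0], [h]])
    return max(abs(np.linalg.eigvals(A - B @ K)))

for h in (0.1, 0.5, 1.0, 1.5):           # period grows under resource sharing
    rho = spectral_radius(h)
    print(f"h = {h:.1f} s: spectral radius = {rho:.3f}"
          f" ({'stable' if rho < 1 else 'UNSTABLE'})")
```

For this gain the loop is stable up to roughly one-second periods and unstable at 1.5 s, which is exactly the kind of interplay a control-scheduling co-design has to account for.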

Orchestrating a Resource-aware Edge

Author: Klervie Toczé
Publisher: Linköping University Electronic Press
Total Pages: 122
Release: 2024-09-02
ISBN: 9180757480

More and more services are moving to the cloud, attracted by the promise of unlimited resources that are accessible anytime and are managed by someone else. However, hosting every type of service in large cloud datacenters is not possible or suitable, as some emerging applications have stringent latency or privacy requirements while also handling huge amounts of data. Therefore, in recent years, a new paradigm has been proposed to address the needs of these applications: the edge computing paradigm. Resources provided at the edge (e.g., for computation and communication) are constrained, hence resource management is of crucial importance. The incoming load to the edge infrastructure varies both in time and space. Managing the edge infrastructure so that the appropriate resources are available at the required time and location is called orchestration. This is especially challenging in the case of sudden load spikes and when the impact of the orchestration itself has to be limited. This thesis enables edge computing orchestration with increased resource-awareness by contributing methods, techniques, and concepts for edge resource management. First, it proposes methods to better understand the edge resource demand. Second, it provides solutions on the supply side for orchestrating edge resources with different characteristics in order to serve edge applications with satisfactory quality of service. Finally, the thesis includes a critical perspective on the paradigm by considering sustainability challenges. To understand the demand patterns, the thesis presents a methodology for categorizing the large variety of use cases proposed in the literature as potential applications for edge computing. The thesis also proposes methods for characterizing and modeling applications, as well as for gathering traces from real applications and analyzing them. These different approaches are applied to a prototype from a typical edge application domain: Mixed Reality. The important insight here is that application descriptions or models that are not based on a real application may not give an accurate picture of the load. This can drive incorrect decisions about what should be done on the supply side and thus waste resources. Regarding resource supply, the thesis proposes two orchestration frameworks for managing edge resources and successfully dealing with load spikes while avoiding over-provisioning. The first one utilizes mobile edge devices, while the second leverages the concept of spare devices. Then, focusing on the request-placement part of orchestration, the thesis formalizes placement for applications structured as chains of functions (so-called microservices) as an instance of the Traveling Purchaser Problem and solves it using Integer Linear Programming. Two different energy metrics influencing request-placement decisions are proposed and evaluated. Finally, the thesis explores further resource awareness. Sustainability challenges that should be highlighted more within edge computing are collected. Among those related to resource use, the strategy of sufficiency is promoted as a way forward: it involves using only the resources needed (no more, no less), with the goal of reducing resource usage. Different tools for adopting it are proposed, and their use is demonstrated through a case study.
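
A much-simplified sketch of ILP-based request placement is shown below, using the PuLP library; the function chain, node set, energy costs, and capacities are invented for illustration, and this plain assignment model does not capture the full Traveling Purchaser formulation used in the thesis.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# Place a chain of microservices onto edge/cloud nodes, minimizing an energy
# cost, subject to each function being placed exactly once and per-node
# capacity limits. All data below is illustrative.

functions = ["decode", "detect", "render"]          # the microservice chain
nodes = ["edge1", "edge2", "cloud"]
energy = {                                          # energy cost per placement
    ("decode", "edge1"): 2, ("decode", "edge2"): 3, ("decode", "cloud"): 5,
    ("detect", "edge1"): 6, ("detect", "edge2"): 4, ("detect", "cloud"): 3,
    ("render", "edge1"): 2, ("render", "edge2"): 2, ("render", "cloud"): 4,
}
capacity = {"edge1": 1, "edge2": 2, "cloud": 3}     # max functions per node

prob = LpProblem("request_placement", LpMinimize)
x = {(f, n): LpVariable(f"x_{f}_{n}", cat=LpBinary)
     for f in functions for n in nodes}

prob += lpSum(energy[f, n] * x[f, n] for f in functions for n in nodes)
for f in functions:                                 # each function placed once
    prob += lpSum(x[f, n] for n in nodes) == 1
for n in nodes:                                     # node capacity respected
    prob += lpSum(x[f, n] for f in functions) <= capacity[n]

prob.solve()
for (f, n), var in x.items():
    if var.value() == 1:
        print(f"{f} -> {n}")
```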