High-Performance Real-Time Systems Design from Cloud to Embedded Edge

EasyChair Preprint 8064, 5 pages. Date: May 24, 2022

Abstract

Computer systems are rapidly evolving from fixed-function general-purpose, real-time, and high-performance designs towards increasingly software-defined systems, where predictability under resource sharing becomes a central problem to solve, all the more so when the workloads are mixed-critical in nature. Co-location of multiple workloads on a single computer system can improve the utilisation of system resources, enable resource re-use (e.g. IO devices, hardware accelerators) and improve the efficiency of data sharing across workloads. However, co-location also comes at the cost of potential performance degradation due to interference on shared resources and of increased uncertainty, which, for the mixed-critical workloads seen in the automotive and industrial segments, becomes a major bottleneck to address. The advent of larger integrated platforms that run real-time workloads alongside general-purpose workloads will require the ability to provision resources in a quantifiable and predictable way, and the ability to deliver dynamic workloads in the cloud-native design paradigm will be a key differentiator in next-generation high-performance embedded computing systems. In this work, we cover the impact of shared-resource interference on heterogeneous compute platforms, and we define the terminology and the principles that we envision will enable deterministic and predictable execution of critical and real-time applications on high-performance Arm-based platforms. We also cover the system software architectures being envisioned in initiatives such as SOAFEE (Scalable Open Architecture For Embedded Edge) to address the need to enable mixed-critical workloads and their orchestration from cloud to embedded edge.

Keyphrases: Arm, QoS, SOAFEE, high performance, mixed-criticality, real-time
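
As a minimal illustration (not taken from the paper) of what provisioning resources in a quantifiable and predictable way can mean at the operating-system level, the sketch below pins a latency-critical thread to a dedicated CPU core, locks its memory, and places it in a real-time scheduling class on Linux. The core number and priority are illustrative assumptions, and the example covers only CPU partitioning; interference on shared caches, interconnect and memory bandwidth, which the abstract highlights, typically requires additional hardware QoS mechanisms.

    /* Sketch: isolating a latency-critical task on Linux.
     * Assumptions: core 3 is reserved for critical work (e.g. via isolcpus),
     * and priority 80 is an illustrative SCHED_FIFO priority. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Pin the calling thread to the reserved core. */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(3, &set);
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }

        /* Lock current and future memory so page faults do not add latency. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");
            return 1;
        }

        /* Switch to the SCHED_FIFO real-time class. */
        struct sched_param sp;
        memset(&sp, 0, sizeof(sp));
        sp.sched_priority = 80;
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            perror("sched_setscheduler");
            return 1;
        }

        /* ... time-critical processing loop would run here ... */
        printf("running on an isolated core with SCHED_FIFO priority %d\n",
               sp.sched_priority);
        return 0;
    }

In a cloud-native deployment of the kind the abstract describes, settings like these would not be hard-coded but expressed declaratively and applied by the orchestration layer when the workload is placed on the embedded edge node.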