Design and Implementation of a Light-Weight Container Runtime for Real-Time Applications

From industrial automation to automotive assistance to experiments in space, there are numerous fields where multiple applications with different real-time requirements run in close proximity and/or need to be re-deployable. Containerization can help deploy and run multiple applications on the same hardware, at the same or at different times. While this is an established technique, especially for web applications in data centers, there are significant challenges in applying it to real-time applications on lower-end hardware. We propose a statically configured container runtime to address these challenges.
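To make the idea of a statically configured runtime concrete, a container declaration could, for example, be reduced to a small fixed set of fields that the runtime reads at deployment time. The sketch below is purely illustrative; the struct layout, field names, paths, and values are hypothetical and not part of the proposed design.

```c
/* Illustrative only: one way a static container declaration could look.
 * Field names, paths, and values are hypothetical. */
#include <sched.h>
#include <stdio.h>

struct container_decl {
    const char *name;         /* container identifier */
    const char *rootfs;       /* root file system (shared via hard links) */
    const char *entrypoint;   /* binary started inside the container */
    int         sched_policy; /* e.g. SCHED_FIFO for real-time tasks */
    int         rt_priority;  /* 1..99 under SCHED_FIFO/SCHED_RR */
    int         cpu;          /* CPU the container is pinned to, -1 = any */
};

/* A hypothetical declaration for a control task with tight deadlines. */
static const struct container_decl motor_control = {
    .name         = "motor-control",
    .rootfs       = "/var/lib/containers/motor-control",
    .entrypoint   = "/usr/bin/motor-control",
    .sched_policy = SCHED_FIFO,
    .rt_priority  = 80,
    .cpu          = 2,
};

int main(void)
{
    printf("would start '%s' with SCHED_FIFO priority %d on CPU %d\n",
           motor_control.name, motor_control.rt_priority, motor_control.cpu);
    return 0;
}
```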

The system is composed of standard Linux and other open-source components where possible: e.g. systemd-nspawn for process isolation (possibly with some patches), file hard links to reduce storage overhead, various mount options, and ZeroMQ backed by shared memory for RPC. A PREEMPT_RT-patched Linux kernel is used as the basis for real-time scheduling. Proper configuration of process priorities based on the container declarations is (presumably) important for sufficiently deterministic timing; a sketch of how such priorities might be applied is given below. Dual-kernel or hypervisor-based approaches may later be added to satisfy applications with tighter requirements.

Research question(s): (These will probably need to be based on shortcomings of existing approaches, identified early on.) What are the main (technical) challenges that keep such systems from being widely used yet? What are the easiest (or most standard) approaches to tackle them?
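As a minimal sketch of the priority-configuration point above, and assuming the launcher applies the declared priority itself via standard Linux syscalls (rather than through systemd-nspawn options, which the proposal leaves open): lock memory, switch to SCHED_FIFO, then exec the container's entrypoint, which inherits the scheduling attributes. This requires a PREEMPT_RT kernel for deterministic behaviour and sufficient privileges (e.g. CAP_SYS_NICE); the priority and entrypoint are taken from the command line for illustration.

```c
/* Minimal sketch: apply a declared real-time priority before starting the
 * container's entrypoint. Error handling is reduced to the essentials. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s <rt-priority> <entrypoint> [args...]\n", argv[0]);
        return 1;
    }

    /* Lock all current and future pages to avoid page-fault latency. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        perror("mlockall");

    /* Switch this process to SCHED_FIFO with the declared priority;
     * the exec'd entrypoint inherits the scheduling attributes. */
    struct sched_param param = { .sched_priority = atoi(argv[1]) };
    if (sched_setscheduler(0, SCHED_FIFO, &param) != 0) {
        perror("sched_setscheduler");
        return 1;
    }

    execv(argv[2], &argv[2]);
    perror("execv");   /* reached only if exec fails */
    return 1;
}
```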