Linux driver VM optimizations
This issue will cover the optimizations that can be done to the Linux driver VMs.
Linux driver VMs are VMs created to offer driver support for Redox. This is how they work:
- The Linux VM accesses the device through PCI passthrough (the host system doesn't need a driver, similar to Qubes OS).
- A Redox bridge program runs in Linux userspace; the bridge communicates with a Redox host scheme based on the driver type (`audio:` for sound cards, `video:` for GPUs).
- The Linux VM communicates with the bridge, and the Redox host controls the device without native drivers.
This is guest-to-host communication over VirtIO interfaces; because of the virtualization, some overhead will exist.
Most operating systems speed up their VMs with a type-2 hypervisor running in the kernel (KVM and Hyper-V, for example). We will use Revirt-U to do that, but more optimizations can be done to reduce CPU cycles and memory usage.
- Build a separate bridge for each device type or Linux device subsystem:
  - `drmd` - GPUs.
  - `netd` - network devices (Ethernet).
  - `fsd` - filesystems.
  - `wifid` - Wi-Fi adapters.
  - `audiod` - sound devices.
  - `inputd` - mice/keyboards/gamepads/joysticks.
  - `sensord` - sensor drivers.
The userspace part can be minimal or even empty, depending on whether a kernel module is needed or beneficial for communication with the bridged device. Userspace can be empty besides init, and even init might not have to do anything useful, depending on how much of the bridging is done by a kernel module.
- Use a real-time scheduler.
Linux CFS is designed with general multitasking in mind, but our Linux driver VMs only run the Redox bridge plus a few background kthreads, so we can use a monotasking low-latency scheduling policy for maximum performance.