izgzhen opened this issue 7 years ago
@izgzhen, it might be possible to do something like this with Nix and our own Hydra server (a CI system built on Nix). Example Hydra: http://hydra.nixos.org/

NixOS has a test framework that can be used to spawn networks of virtual machines. We could load the Xen kernel into one of these machines, mount the /nix/store into it, and attempt to run the examples from the store. The problem with this is that once we enter dom0 we lose control of the VM from the NixOS test runner, due to nested virtualization. To get around this limitation, we could add a systemd unit on that VM that runs at boot, calls sudo xl create Hello.config etc., and writes test results to a mounted disk for parsing later. It's a less than satisfying option, but probably the best setup for automated tests in HaLVM 2.x.x under Xen.
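That workaround could be sketched as a NixOS module; the unit name, ordering target, and all paths below are hypothetical placeholders:

```nix
{ config, pkgs, ... }:

{
  systemd.services.halvm-examples = {
    description = "Run HaLVM example unikernels under Xen and record results";
    # Hypothetical ordering: wait until the Xen toolstack is up.
    after = [ "xen-domains.service" ];
    wantedBy = [ "multi-user.target" ];
    serviceConfig.Type = "oneshot";
    script = ''
      mkdir -p /results
      # Boot the example domain; the Hello.config path is a placeholder.
      if ${pkgs.xen}/bin/xl create /examples/Hello.config; then
        echo "Hello: ok" >> /results/examples.log
      else
        echo "Hello: FAILED" >> /results/examples.log
      fi
      # /results would live on the mounted disk that the host parses afterwards.
    '';
  };
}
```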
In HaLVM 3.0, if we target Qemu/KVM with Solo5, we might have better options for automated testing with NixOS, since the NixOS test runner itself uses Qemu.
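With the NixOS test framework, such a test might look roughly like the sketch below. The machine configuration and the Solo5 invocation (binary name, unikernel path, success marker) are all assumptions, not working code:

```nix
# Rough sketch of a NixOS VM test; assumes the Python test driver.
import <nixpkgs/nixos/tests/make-test-python.nix> ({ pkgs, ... }: {
  name = "halvm-hello";

  nodes.machine = { ... }: {
    virtualisation.memorySize = 1024;
  };

  testScript = ''
    machine.wait_for_unit("multi-user.target")
    # Run the unikernel directly inside the test VM and check its
    # console output for a success marker (all names hypothetical).
    machine.succeed("solo5-hvt /examples/hello.hvt | grep -q 'Hello'")
  '';
})
```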
As for packaging, Nix can also be used to build rpm/deb packages. This could probably shed some of the build code in the Makefile that we have.
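nixpkgs ships helpers for this; a rough sketch using releaseTools.rpmBuild (attribute names from memory and worth double-checking, and the disk image name is a guess):

```nix
{ pkgs ? import <nixpkgs> {} }:

# Builds an RPM inside a throwaway VM; assumes a spec file in the source tree.
pkgs.releaseTools.rpmBuild {
  name = "halvm-rpm";
  src = ./.;
  diskImage = pkgs.vmTools.diskImages.fedora27x86_64;  # image name is a guess
}
```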
NixOS has a test framework that can be used to spawn networks of virtual machines. We can load the Xen kernel into one of these machines, mount the /nix/store to it, and attempt to run the examples from the store.
Sounds good. Is that test framework based on some VM software as well?
The problem with this is that once we enter dom0 we lose control of the VM with the NixOS test runner due to nested virtualization.
Can you elaborate on this?
NixOS has a test framework that can be used to spawn networks of virtual machines. We can load the Xen kernel into one of these machines, mount the /nix/store to it, and attempt to run the examples from the store.
Sounds good. Is that test framework based on some VM software as well?
Yes, it seems to be based on Qemu.
The problem with this is that once we enter dom0 we lose control of the VM with the NixOS test runner due to nested virtualization.
Can you elaborate on this?
Sure, it seems the Qemu serial console doesn't understand the hypercalls necessary to communicate with a guest, so it hits a brick wall. Ideally, it would be nice if Qemu could expose a mocked hypercall interface for guests, so we could run unikernels on it directly without a hypervisor. I don't think this currently exists, but I would love to be proved wrong.
We need Xen to build the package and test the examples. I am not sure whether AWS or a similar provider will support this infrastructure.