intel / ccloudvm

Configurable Cloud VM is a small command line tool for automatically creating development and demo environments for complex projects. The tool sets up these development environments inside a virtual machine which it automatically creates on the user’s host computer. This avoids polluting the user’s host machine with components from the chosen development environment and provides a clean, predictable and repeatable environment in which this development environment can run.

Add a way to boot with a custom kernel/initrd #105

Open kaccardi opened 5 years ago

kaccardi commented 5 years ago

It would be nice to be able to use ccloudvm for easy kernel testing/development. This would mean being able to replace the kernel in whatever base image you have - for example, something like this:

ccloudvm start fedora28 --kernel=~/src/mylinux/vmlinux --initrd=~/path/to/initrd

where the initrd would be optional.

markdryan commented 5 years ago

@kaccardi I had a play around with this on the command line this morning and was able to get it to work, with the caveat that it worked best when using the kernel and initrd of the original cloud image. I needed to provide an --initrd and an --append="root=/dev/vda" to get this to work (with xenial).

So I created a VM the normal way, shut it down and then started the instance by invoking the qemu command directly, using the rootfs created by "ccloudvm create" but with an external kernel and initrd. The initrd and kernel I used were compatible with the cloud image, which I guess they need to be, otherwise you won't be able to load the modules in /lib/modules. I, for example, wasn't able to load the cdrom driver when using a new kernel version, which upset cloud-init.
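In qemu terms, the manual experiment was roughly of the following shape; the rootfs path, kernel/initrd paths and memory/CPU sizes here are illustrative, not the exact values ccloudvm generates:

  # Boot the ccloudvm-created rootfs with an external kernel and initrd
  # (all paths are illustrative; the root device depends on the image's
  # partition layout).
  qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
    -drive file=./kernel_test-rootfs.qcow2,if=virtio,format=qcow2 \
    -kernel ./xenial-vmlinuz \
    -initrd ./xenial-initrd.img \
    -append "root=/dev/vda1 ro console=ttyS0" \
    -nographic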

So this should be fairly easy to add. I'm just wondering what the advantage over simply building and installing a new kernel from inside a VM and then rebooting it would be. Testing multiple kernels without having to maintain multiple VMs I guess.

Also, what would the standard workflow be?

  1. ccloudvm create xenial -name kernel_test
  2. ccloudvm stop kernel_test
  3. Download xenial kernel sources and build on host
  4. ccloudvm start kernel_test --kernel=new-kernel --initrd=initrd --append="root=/dev/vda1 ro"
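For step 3, the host-side build might look roughly like this (the source path matches the earlier example; the exact config and make targets depend on what is being tested):

  # Build a test kernel on the host (illustrative).
  cd ~/src/mylinux
  make olddefconfig
  make -j"$(nproc)" bzImage modules
  # Step 4's --kernel would then point at arch/x86/boot/bzImage (or the
  # uncompressed vmlinux, as in the original request).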

I'm also wondering what to do about the kernel command line. Is it okay for ccloudvm to always specify --append="root=/dev/vda1 ro", or would it be better to require the user to provide this information when booting his/her own kernel?

kaccardi commented 5 years ago

@markdryan I think one thing that could be done to address the modules issue would be to allow providing a directory from the host to mount under /lib/modules/$(uname -r).
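A rough sketch of that idea, with the host exporting a staging directory of freshly built modules over 9P (the mount tag, staging path and version directory are illustrative):

  # Host side: stage the modules and export them over 9P, e.g.
  #   make -C ~/src/mylinux modules_install INSTALL_MOD_PATH=~/src/mylinux/mods
  # and add something like this to the qemu command line:
  #   -virtfs local,path=$HOME/src/mylinux/mods/lib/modules/<version>,mount_tag=modules,security_model=none

  # Guest side: mount the share where modprobe expects to find the modules.
  sudo mkdir -p /lib/modules/$(uname -r)
  sudo mount -t 9p -o trans=virtio,version=9p2000.L modules /lib/modules/$(uname -r)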

The reason I would like it vs. building and installing the new kernel from inside the VM is that it's faster to just build it on the host and use the VM only for testing the change. And yes, that way I don't have to have multiple VMs lying around to test different kernels and compare. My workflow would be for replacing a distro kernel with an upstream one, but I don't think that really matters; it would be essentially the same. Yes, it would be nice to be able to append kernel parameters in addition to "root=/dev/vda1 ro".

markdryan commented 5 years ago

I think one thing that could be done to address the modules issue would be to allow providing a directory from the host to mount under /lib/modules/$(uname -r)

The mounting is done with 9P. Doesn't that require the guest kernel to load a module? Maybe 9P could be compiled directly into the kernel under test.
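If the built-in route were taken, something like the following could be run before the build in step 3; scripts/config ships with the kernel tree, and the symbols cover 9P over virtio plus virtio-blk for the root disk (a sketch, not a complete config for every image):

  # Make 9P-over-virtio and virtio-blk built-in rather than modular in the
  # kernel under test, then rebuild as in the earlier step.
  cd ~/src/mylinux
  ./scripts/config -e NET_9P -e NET_9P_VIRTIO -e 9P_FS -e VIRTIO_PCI -e VIRTIO_BLK
  make olddefconfig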