krisnova opened 1 year ago
I think this example should give us what we need to run a simple Linux kernel and schedule auraed as /bin/init.
So here is where I think we start.
The `try_from` function here: it looks like we can pass Boot Arguments and Init Arguments to the linux-loader crate, which gives us the ability to define our init process similar to any bootloader.
We can hook in here and generate the string to boot a nested auraed as a guest for a pod.
I was going to take a shot at this. Wondering, though, if it makes sense to just implement the VmsService and then build the PodSandbox stuff on top. This keeps the scope somewhat contained, and we need it anyway. Happy to create a new issue for that work and link that issue here. Thoughts?
Issue for VmsService which we can then leverage for the "Pod Sandbox": https://github.com/aurae-runtime/aurae/issues/439
Can we maybe create a good abstraction so we can replace the virtualization implementation later on? I have great sympathy for Firecracker as this is used in production by AWS. When I look at the current state of the aurae project, I think we should try to not get distracted by implementing/extending a hypervisor.
I think staying out of the hypervisor details is a good move for right now -- I do think it should remain compiled into the auraed binary -- but ideally we should be able to consider other hypervisor implementations at compile time.
The more I look at the FC code, the more I do not want to implement our own hypervisor :) I will create an RFC once I have better organized my thoughts around this topic. I'm currently exploring Dragonball, which might or might not suit our needs better. https://github.com/kata-containers/kata-containers/tree/main/src/dragonball
Can we maybe create a good abstraction so we can replace the virtualization implementation later on?
This is what kata containers does as well, they abstract the hypervisor and make it pluggable.
@JeroenSoeters what do you think about using cloud-hypervisor for this? I think we should create a nice interface and then write an implementation that leverages cloud-hypervisor underneath. This way we could replace cloud-hypervisor with something else later on. Also, I'd like to have support for classical VMs -- which would be a problem with Firecracker, as it supports only a very limited set of (virtual) hardware.
Last time I looked at this, cloud-hypervisor seemed like the best choice, yeah, because of what you mention as well as vhost-net support. I had started some of that work around an interface; I believe the next step was creating TUN/TAP devices from our networking code.
Looks like we've started landing on cloud-hypervisor (which is good).
Once that's in place we should circle back to the Pod service per the original issue.
We need to form an opinion on which virtualization library to use, as mentioned in #433.
Options that I am aware of (the ones discussed in this thread):
- Firecracker
- cloud-hypervisor
- Dragonball
After we establish a way of running a virtualized workload, we need to replace the current pod sandbox implementation detail with two things:
- `init` (a nested auraed booted as the guest's init process)
- a crate that allows us to detect if virtualization is possible at runtime