yisun-git opened this issue 5 years ago
Hi @yisun-git, thank you for providing this additional detail on the VMM/hypervisor crates!
Some thoughts:
I like the VMM and the VM traits you've outlined and the functionality they provide. I see these working nicely with the VCPU trait and providing a nice hierarchy of functionality and solid building blocks for building hypervisors.
I see dynamic dispatch being utilized in the `create` functions of `Hypervisor` and `Vm`. I think these might be better served using static dispatch and trait generics, rather than boxing the references to the other trait implementations. The overhead introduced by dynamic dispatch and the virtual functions may not be necessary, and as you said, there will only be one type or implementation of hypervisor <-> VM <-> VCPU (as built at the highest level through conditional compilation) for each scenario. A statically-dispatched return value of a trait generic can be achieved with the `impl Trait` syntax. For example:
```rust
pub trait Vcpu {
    // …
}

pub trait Vm {
    fn create_vcpu(&self, id: u8) -> impl Vcpu;
    // …
}
```
You've created the issue here due to its connection to the Vcpu crate, but you can also feel free to move the discussion to the rust-vmm community issues to ensure that a larger audience sees it as well.
Hi, @jennymankin,
Thanks for your comments! The suggestion to convert dynamic dispatch to static dispatch is good. Let me have a try!
Regarding separate crates, there are dependencies, I think. I.e. the Hypervisor trait depends on the Vm trait, and Vm depends on the Vcpu trait. Is it possible that a concrete VMM (e.g. Firecracker/crosvm) implements only some of these traits but not all of them? So I am not sure whether we should separate the hypervisor/vm/vcpu traits. What do you think?
Thanks a lot for your open mind on this issue! I will raise it with the community to see if others have any comments.
Hi @yisun-git, yup, you are correct that the Hypervisor crate would depend on the Vm crate, and the Vm crate on the Vcpu crate. But I can still see VMM implementations implementing the lower-level traits but not the higher-level ones. For example, as a first pass I would convert the libwhp Hyper-V crate to use only the `Vcpu` trait to start, since its reference hypervisor code is implemented differently from all the functionality provided by Firecracker/crosvm. Other projects might also find the lower-level crates useful as building blocks, without pulling in the whole thing.
Moved this discussion over to https://github.com/rust-vmm/community/issues/50! Thanks :)
Hi, @jennymankin
I did a quick test with `impl Trait`, but it doesn't seem to work for my requirements.
First, I want to declare a field with a generic type in struct Vmm (I tried `impl Hypervisor` too):
```rust
struct Vmm {
    hyp: Hypervisor,
}
```
But this is not allowed; the compiler reports:
```
error[E0277]: the size for values of type `(dyn hypervisor::Hypervisor + 'static)` cannot be known at compilation time
```
Second, it seems `impl Trait` cannot be used in a trait definition:
```rust
pub trait Hypervisor {
    fn create_vm(&self) -> impl Vm;
}
```
Error reported:
```
error[E0562]: `impl Trait` not allowed outside of function and inherent method return types
```
Any suggestion? Thanks!
Addressing the second error, since getting the Vm and Hypervisor abstractions working is a prerequisite for the Vmm crate (and I actually need to think more about the Vmm crate in general):
Ah right, I forgot about the limitation on using `impl Trait` in a trait definition. Maybe an associated type is appropriate here.
(Also note that each of these `create_*` functions should probably return a `Result<thing>` rather than just a `thing`, so I've prototyped both here.)
Trait definition:
```rust
pub trait Vm {
    type VcpuType;
    fn create_vcpu2(&self, id: u8) -> Self::VcpuType;
    fn create_vcpu3(&self, id: u8) -> Result<Self::VcpuType>;
}
```
Vm trait implementation for kvm-ioctls:
```rust
impl Vm for VmFd {
    type VcpuType = VcpuFd;

    fn create_vcpu2(&self, id: u8) -> VcpuFd {
        let vcpu = unsafe { File::from_raw_fd(0) }; // yeah these two lines are nonsense, just want to make it build :)
        let kvm_run_ptr = KvmRunWrapper::mmap_from_fd(&vcpu, self.run_size).unwrap();
        new_vcpu(vcpu, kvm_run_ptr)
    }

    fn create_vcpu3(&self, id: u8) -> Result<VcpuFd> {
        let vcpu = unsafe { File::from_raw_fd(0) }; // yeah these two lines are nonsense, just want to make it build :)
        let kvm_run_ptr = KvmRunWrapper::mmap_from_fd(&vcpu, self.run_size)?;
        Ok(new_vcpu(vcpu, kvm_run_ptr))
    }
}
```
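One nice property of the associated type is that callers written against the trait stay hypervisor agnostic. A small sketch, assuming the same `Result` alias used in the trait definition above (`boot_vcpu` is just an illustrative helper name):

```rust
// Generic helper: `V::VcpuType` resolves to VcpuFd for the kvm-ioctls
// implementation above (or to a Hyper-V type for that implementation),
// with no dynamic dispatch involved.
fn boot_vcpu<V: Vm>(vm: &V, id: u8) -> Result<V::VcpuType> {
    vm.create_vcpu3(id)
}
```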
This works under the assumption that a given implementation of the Vm trait would only want to implement a single type of Vcpu--that a Vm trait implementation would not want to interchange different types of Vcpus. Without thinking too deeply, I think this is a reasonable assumption, in that a KVM Vm will want a KVM Vcpu, a Hyper-V Vm will want a Hyper-V Vcpu, etc.
However, if something more generic is desired, there is a Rust RFC for expanding the `impl Trait` concept so it can be used as the definition of an associated type. It looks like there is active, recent development on this work and even a PR up, so if it looks like something that'd be useful long-term, it might not be far away:
https://github.com/rust-lang/rfcs/pull/2515
(The comments/details toward the end of the PR drive home the syntax and use cases)
Hi, @jennymankin
Thanks for the ideas! But one problem remains unsolved. In vmm/src/vstate.rs (Firecracker), the struct Vcpu is declared as below. VcpuFd is a KVM-specific struct.
```rust
pub struct Vcpu {
    // ...
    fd: VcpuFd,
    // ...
}
```
The main purpose of the hypervisor abstraction is to make the upper layer hypervisor agnostic. So I redesigned the struct Vcpu as below. Then the Vmm crate only knows about the Vcpu trait, not the KVM-specific VcpuFd, so the Vmm crate does not need to include the kvm-ioctls crate.
```rust
pub struct GuestVcpu {
    // ...
    fd: Box<dyn Vcpu>,
    // ...
}
```
From your sample code, I do not see how to avoid including the KVM-specific type, i.e. VcpuFd. Maybe I missed something?
I think you're right. To add a GuestVcpu-like wrapper struct around the thing that implements the `Vcpu` functionality (the `VcpuFd`, for example), dynamic dispatch by way of a `Box<>` is the only way to make the wrapper hypervisor agnostic.
But I've stewed over this, and I don't think a hypervisor-agnostic solution needs this extra GuestVcpu wrapper.
The reason is that there's a difference between what the hypervisor-agnostic crates (like a VMM, VM, and VCPU) provide, and the implementation of each of those traits. For example, a KVM implementation of the traits vs. a Hyper-V implementation of the traits. And then at a level higher than those, you have a specific implementation of the whole stack, for example Firecracker.
And this `struct Vcpu` case is interesting, because the `struct Vcpu` wrapper around the `VcpuFd` is a Firecracker thing--an extra layer--and not a KVM thing (e.g., it's not part of the usual VMM -> VM -> VCPU stack). You can see in all the examples provided in the kvm-ioctls crate that the VcpuFd can be used directly after creating a VM:
```rust
extern crate kvm_ioctls;
use kvm_ioctls::{Kvm, VmFd, VcpuFd};

let kvm = Kvm::new().unwrap();
let vm = kvm.create_vm().unwrap();
// Create one vCPU with the ID=0.
let vcpu = vm.create_vcpu(0);
```
So while the building blocks for a hypervisor-agnostic VMM/VM/VCPU are there, Firecracker adds its own layers. Since Firecracker itself is implementing the VMM, at some point in the hierarchy there was always going to have to be a declaration of a hard implementation of the traits. That is, Firecracker was always going to have to create an instance of the KVM implementation of the VMM, VM, and VCPU traits. To support another hypervisor (say Hyper-V), this would have to be conditionally compiled, and Firecracker would also (for example) create an instance of the Hyper-V implementation of the VMM, VM, and VCPU traits.
So I think it's fair to have the `Vcpu` struct also be conditionally compiled to contain either a KVM-like `VcpuFd` or another implementation of a Vcpu, like the Hyper-V one. In short, no extra level of abstraction is required.
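A sketch of what that conditional compilation might look like in Firecracker's vstate.rs; the Cargo feature names and the Hyper-V type below are hypothetical, purely for illustration:

```rust
// Sketch only: "kvm" / "hyperv" are hypothetical Cargo feature names, and the
// Hyper-V type is a placeholder rather than a real libwhp identifier.
#[cfg(feature = "kvm")]
use kvm_ioctls::VcpuFd as VcpuImpl;
#[cfg(feature = "hyperv")]
use some_hyperv_crate::SomeHypervVcpu as VcpuImpl;

pub struct Vcpu {
    fd: VcpuImpl,
    // ...other fields unchanged...
}
```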
Let me know if this is unclear, I'm not sure if I'm explaining clearly what I mean :)
Proposal
The vmm-vcpu crate has made Vcpu handling hypervisor agnostic, but there is still work to do to make the whole of rust-vmm hypervisor agnostic. So here is a proposal to extend vmm-vcpu into a Hypervisor crate that makes rust-vmm hypervisor agnostic.
Short Description
The Hypervisor crate abstracts the interfaces of different hypervisors (e.g. KVM ioctls) to provide unified interfaces to the upper layer. A concrete hypervisor crate (e.g. KVM/Hyper-V) implements the traits to provide the hypervisor-specific functions.
The upper layer (e.g. the Vmm) creates a Hypervisor instance which links to the running hypervisor. It then calls the running hypervisor's interfaces through the Hypervisor instance, which keeps the upper layer hypervisor agnostic.
Why is this crate relevant to the rust-vmm project?
rust-vmm should work with all hypervisors, e.g. KVM/Hyper-V/etc. A hypervisor abstraction crate is therefore necessary to encapsulate the hypervisor-specific operations so that the upper layers can stay hypervisor agnostic.
Design
Relationships of crates
Compilation arguments
Create the concrete hypervisor instance for Hypervisor users (e.g. the Vmm) through a compilation argument, because only one hypervisor is running in a given cloud scenario.
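For example, the selection could be driven by Cargo features. This is only a sketch: the feature names, the `create_hypervisor` constructor, and the `KvmHypervisor`/`HypervHypervisor` wrapper types are illustrative, not an agreed interface (a `KvmHypervisor` sketch appears further down):

```rust
// Sketch: choose the concrete hypervisor at build time via Cargo features,
// so the rest of the stack only ever sees `impl Hypervisor`.
#[cfg(feature = "kvm")]
pub fn create_hypervisor() -> impl Hypervisor {
    kvm::KvmHypervisor::new()
}

#[cfg(feature = "hyperv")]
pub fn create_hypervisor() -> impl Hypervisor {
    hyperv::HypervHypervisor::new()
}
```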
Hypervisor crate
This crate itself is simple: it exposes three public traits, Hypervisor, Vm, and Vcpu. The crate is used by the KVM/Hyper-V/etc. crates. The interfaces defined below are used to show the mechanism; they are taken from Firecracker and are rather KVM specific, so we may change them per requirements.
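The full interface listings from the original proposal are not reproduced here; the following is a minimal sketch of the shape they take. The method names are illustrative, drawn from the discussion above, and `Result` is assumed to be a crate-local alias:

```rust
// Illustrative sketch only; the actual proposal lists the full
// Firecracker-derived method sets for each trait.
pub type Result<T> = std::result::Result<T, std::io::Error>; // assumed crate-local alias

pub trait Vcpu {
    /// Run the vCPU until the next exit.
    fn run(&self) -> Result<()>;
    // ...register get/set, etc.
}

pub trait Vm {
    type VcpuType: Vcpu;
    /// Create a vCPU with the given id.
    fn create_vcpu(&self, id: u8) -> Result<Self::VcpuType>;
    // ...guest memory and irqchip setup, etc.
}

pub trait Hypervisor {
    type VmType: Vm;
    /// Create a guest VM handle.
    fn create_vm(&self) -> Result<Self::VmType>;
    // ...capability checks, etc.
}
```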
Note: The Vcpu part follows [1] and [2] with some changes.
[1] While the data types themselves (VmmRegs, SpecialRegisters, etc) are exposed via the trait with generic names, under the hood they can be kvm_bindings data structures, which are also exposed from the same crate via public redefinitions:
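For example, the kind of public redefinition meant here might look like the following sketch (the type names are taken from the note above; the exact re-exports in vmm-vcpu may differ):

```rust
// Under a KVM build, the generically named register types can simply be
// public redefinitions of the kvm_bindings structures.
pub use kvm_bindings::kvm_regs as VmmRegs;
pub use kvm_bindings::kvm_sregs as SpecialRegisters;
```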
Sample codes to show how it works
Kvm crate
Below is sample code for the Kvm crate to show how to implement the above traits.
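A sketch of what such an implementation could look like on top of kvm-ioctls. `KvmHypervisor` is an illustrative wrapper name, and the trait signatures follow the sketch above rather than any agreed definition:

```rust
use kvm_ioctls::{Kvm, VmFd};

// Illustrative wrapper around kvm_ioctls::Kvm that implements the Hypervisor
// trait sketched above; a real crate would flesh out error handling.
pub struct KvmHypervisor {
    kvm: Kvm,
}

impl KvmHypervisor {
    pub fn new() -> Self {
        KvmHypervisor { kvm: Kvm::new().unwrap() }
    }
}

impl Hypervisor for KvmHypervisor {
    // Assumes VmFd implements the Vm trait, e.g. along the lines of the
    // earlier prototype in this thread.
    type VmType = VmFd;

    fn create_vm(&self) -> Result<VmFd> {
        // Error conversion elided; kvm_ioctls has its own error type.
        self.kvm
            .create_vm()
            .map_err(|_| std::io::Error::from(std::io::ErrorKind::Other))
    }
}
```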
Vmm crate
Below is sample code for the Vmm crate to show how to work with the Hypervisor crate.
When the Vmm starts, it creates the concrete hypervisor instance according to the compilation argument, sets it on the Vmm, and starts the flow: create guest vm -> create guest vcpus -> run.
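A sketch of that flow with static dispatch (the Vmm is generic over the Hypervisor trait), reusing the illustrative trait definitions above:

```rust
// Sketch: the Vmm never names a KVM- or Hyper-V-specific type; the concrete
// hypervisor type is fixed at build time.
pub struct Vmm<H: Hypervisor> {
    hyp: H,
}

impl<H: Hypervisor> Vmm<H> {
    pub fn new(hyp: H) -> Self {
        Vmm { hyp }
    }

    // create guest vm -> create guest vcpus -> run
    pub fn start(&self) -> Result<()> {
        let vm = self.hyp.create_vm()?;
        let vcpu = vm.create_vcpu(0)?;
        vcpu.run()
    }
}
```

With the conditionally compiled `create_hypervisor()` from the earlier sketch, startup becomes something like `Vmm::new(create_hypervisor()).start()`, and no hypervisor-specific type appears anywhere in the Vmm crate.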
References:
[1] https://github.com/rust-vmm/community/issues/40
[2] https://github.com/rust-vmm/vmm-vcpu