Closed: vireshk closed this 1 year ago
Adding the `MmapInternal` trait generates a lot of duplicated code across all platforms, and I think it's making the code harder to maintain in the long run. I would suggest instead declaring a `xen` feature and defining the Xen-specific functions in the existing mmap implementation, without altering existing APIs. The `translate` function should only be declared for Xen, because it results in a no-op for the other platforms. The `build` function on `MmapRegionBuilder` can have an additional call to the needed `ioctl` that is only made under the `xen` feature.
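A minimal sketch of how that suggestion could look (the type and method names are illustrative, not the actual vm-memory API): `translate` compiles to a plain offset addition unless the `xen` feature is enabled, and the builder would issue the extra `ioctl` only on Xen builds.

```rust
// Hypothetical sketch of the suggested `xen` feature gating; names are
// invented for illustration, not the real vm-memory API.

pub struct MmapRegion {
    addr: u64,
}

impl MmapRegion {
    /// Without the `xen` feature, translation is effectively a no-op:
    /// the offset maps directly onto the host mapping.
    #[cfg(not(feature = "xen"))]
    pub fn translate(&self, offset: u64) -> u64 {
        self.addr + offset
    }

    /// With the `xen` feature, translation would go through the Xen
    /// foreign/grant mapping established via the privcmd ioctl (elided).
    #[cfg(feature = "xen")]
    pub fn translate(&self, offset: u64) -> u64 {
        // ...Xen-specific lookup would live here...
        self.addr + offset
    }
}

pub struct MmapRegionBuilder {
    addr: u64,
}

impl MmapRegionBuilder {
    pub fn build(self) -> MmapRegion {
        // Under the `xen` feature, `build` would additionally issue the
        // mapping ioctl before returning the region.
        #[cfg(feature = "xen")]
        {
            // xen_mmap_ioctl(self.addr);
        }
        MmapRegion { addr: self.addr }
    }
}

fn main() {
    let region = MmapRegionBuilder { addr: 0x1000 }.build();
    println!("{:#x}", region.translate(0x10)); // prints 0x1010
}
```

The existing public API stays untouched this way; only the bodies differ per build.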
I certainly think we should gate this on a `xen` feature flag, but I suspect we still want to build binaries that can work in both modes (for example, distros that support Xen and KVM don't want to ship different userspaces). Do we need a common feature flag that we use across all the crates? I think the vhost-user crate wants to export some flags to the backends to indicate to the other end of the vhost-user link what that build of the library (and its dependent vm-memory) can support.
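One way that capability export could look, sketched with made-up names (the constant and its bit position are invented here; the real value would come from the vhost-user spec): the backend advertises a protocol-feature bit only when the library stack was built with Xen support.

```rust
// Hypothetical sketch: a backend advertising build-time Xen mmap support
// via a vhost-user protocol-feature bit. The constant name and bit
// position are illustrative only.

const PROTOCOL_F_XEN_MMAP_SKETCH: u64 = 17; // illustrative bit number

fn supported_protocol_features() -> u64 {
    let mut features = 0u64;
    // Only set when this crate (and its vm-memory dependency) were built
    // with the `xen` feature enabled.
    if cfg!(feature = "xen") {
        features |= 1 << PROTOCOL_F_XEN_MMAP_SKETCH;
    }
    features
}

fn main() {
    // In a non-Xen build, the bit stays clear.
    println!("{}", supported_protocol_features());
}
```

The frontend can then check this bit during feature negotiation instead of assuming anything about how the backend binary was built.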
> Adding the `MmapInternal` trait generates a lot of duplicated code across all platforms, and I think it's making the code harder to maintain in the long run. I would suggest instead declaring a `xen` feature and defining the Xen-specific functions in the existing mmap implementation, without altering existing APIs.
Hi @andreeaflorescu
IIUC, you are suggesting that we build either standard Unix mapping support or Xen support, but not both, right?
We have two entities that need this support: the xen-vhost-frontend and vhost-device crates. xen-vhost-frontend will always need the Xen-specific stuff, so it may work for it. But I am not sure what's the best way forward for vhost-device.
We always wanted the backends to be hypervisor agnostic, and needing to build them differently based on which hypervisor we use them with looks a bit tricky and goes against that idea. Also, almost nothing in the vhost-device crate changes per hypervisor: apart from the Cargo.toml, which would select the Xen-specific feature of the vhost and vm-memory crates, the code remains the same.
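For reference, the Cargo.toml change described above could look roughly like this (the feature names are assumptions, not the crates' actual feature lists):

```toml
# Hypothetical vhost-device Cargo.toml fragment: the only per-hypervisor
# difference is which features the dependencies are built with.
[features]
# Building with `--features xen` flips the whole dependency stack to the
# Xen mapping implementation; the vhost-device code itself is unchanged.
xen = ["vhost/xen", "vm-memory/xen"]
```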
And of course what @stsquad commented is a valid concern too.
I think that the interface can stay the same on both platforms, can't it? Is there anything Xen specific that needs to be sent as a parameter?
> I think that the interface can stay the same on both platforms, can't it? Is there anything Xen specific that needs to be sent as a parameter?
The Xen-specific data is sent by the frontend along with the regions over the unix socket. The vhost-device backends don't do anything special (apart from setting the XEN_MMAP feature, which is always enabled now), but they need to be built against a vm-memory crate that supports either Xen or standard Unix mapping, right?
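To make that data flow concrete, a rough sketch (the field names are invented; the actual layout is defined by the vhost-user protocol extension): the frontend's region message carries extra Xen fields, and the backend just forwards them into vm-memory without interpreting them.

```rust
// Hypothetical sketch of a memory-region message carrying extra
// Xen-specific data alongside the usual fields. Field names are invented
// for illustration only.

#[derive(Debug, PartialEq)]
struct MemoryRegionMsg {
    guest_phys_addr: u64,
    memory_size: u64,
    userspace_addr: u64,
    mmap_offset: u64,
    // Extra data the frontend sends when Xen mappings are negotiated,
    // e.g. the mapping kind (foreign vs. grant) and the source domain.
    xen_mmap_flags: u32,
    xen_domid: u32,
}

fn main() {
    // The backend passes these fields through to vm-memory unchanged,
    // which is why the backend code can stay hypervisor agnostic as long
    // as vm-memory itself is built with Xen support.
    let msg = MemoryRegionMsg {
        guest_phys_addr: 0x4000_0000,
        memory_size: 0x1000_0000,
        userspace_addr: 0,
        mmap_offset: 0,
        xen_mmap_flags: 0x1, // e.g. "foreign mapping" in this sketch
        xen_domid: 1,
    };
    println!("{}", msg.xen_domid);
}
```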
We had a chat about this offline together with @stsquad @vireshk @lauralt.
Here are the meeting notes:
Changes needed for Xen
Status: Superseded by https://github.com/rust-vmm/vm-memory/pull/231
This pull request adds support to allow Xen specific memory mappings to be performed for specific memory regions.
The changes to the vhost-user protocol were shared here and are already Acked, though they aren't applied yet.
https://lore.kernel.org/all/cover.1678351495.git.viresh.kumar@linaro.org/