rbradford closed this 3 years ago
Consensus in #100 seems to be that this is the right approach. It would be nice if we could drop our fork of vm-memory.
Now the vm-memory crate has atomic interfaces for atomic access. Would it make sense to split the existing vm-memory interfaces into byte-stream access and object access? Currently, read_obj()/write_obj() could be classified as object access, and all the other existing interfaces of GuestMemory are byte-stream accesses. So instead of the heuristic algorithm, we could use:
1. copy_slice() for read_obj()/write_obj() with a correctly aligned address
2. the original slice.write(buf) for the other access methods.
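To make the proposed split concrete, here is a minimal sketch; the helper names (read_obj_u32, read_bytes) are hypothetical and not part of the vm-memory API:

```rust
use std::sync::atomic::{AtomicU32, Ordering};

/// Object access: a naturally aligned, width-sized load that common
/// hardware performs atomically (AtomicU32 used here for illustration).
fn read_obj_u32(mem: &[u8], offset: usize) -> Option<u32> {
    if offset.checked_add(4)? > mem.len() {
        return None;
    }
    let ptr = mem[offset..].as_ptr();
    if (ptr as usize) % 4 != 0 {
        return None; // object access requires a correctly aligned address
    }
    // SAFETY: bounds and alignment were checked above.
    let atomic = unsafe { &*(ptr as *const AtomicU32) };
    Some(atomic.load(Ordering::Relaxed))
}

/// Byte-stream access: a plain memcpy with no atomicity guarantee,
/// i.e. the original slice.write(buf) behaviour.
fn read_bytes(mem: &[u8], offset: usize, buf: &mut [u8]) -> Option<()> {
    let end = offset.checked_add(buf.len())?;
    buf.copy_from_slice(mem.get(offset..end)?);
    Some(())
}
```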
Sorry @jiangliu I don't really follow what you are saying. Could you elaborate more, giving some concrete examples? I think you're saying that we need to check everywhere that the code is being used to ensure we're using the correct API?
Sorry for my poor English. What I mean is:
1. revert #95
2. only use copy_slice() (introduced by #95) for read_obj()/write_obj().
Ah, cool, I get it. I think that might solve our problem as we use the .write_slice()/.read_slice() entry points on our critical paths. Are you going to propose a patch?
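For context, a short sketch of those bulk entry points; the function and variable names are illustrative, and the Bytes trait signatures should be double-checked against the vm-memory version in use:

```rust
use vm_memory::{Bytes, GuestAddress, GuestMemoryMmap};

// `mem` is assumed to be a guest memory object constructed elsewhere.
fn copy_packet(mem: &GuestMemoryMmap, addr: GuestAddress, pkt: &[u8]) {
    // Byte-stream path: plain memcpy semantics, no atomicity required.
    mem.write_slice(pkt, addr).unwrap();

    // Read it back the same way.
    let mut readback = vec![0u8; pkt.len()];
    mem.read_slice(&mut readback, addr).unwrap();
    assert_eq!(readback, pkt);
}
```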
We are on Chinese National Holiday this week, so it would be great if you could help with this :)
Hi, sorry for the late review. The changes look good! #95 was necessary because we were using read/write_obj as if they had atomic access semantics (which is required in several places, such as virtio queue handling). The recently published v0.3.0 adds the load and store methods to Bytes, which are guaranteed to perform atomic accesses (with a user-specified ordering). At this point we can go back to a world where read/write_obj do not have to provide any guarantees w.r.t. concurrent access, which simplifies the code and reverts the performance impact. However, we first have to ensure (to the extent possible) that vm-memory users switch to the new methods for accesses that need to be atomic. We can use this patch while the transition is going on. Does that sound reasonable?
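A hedged sketch of what switching to the new accessors might look like, e.g. for a virtio available-ring index; the helper names and address handling are made up, and the signatures should be checked against vm-memory v0.3.0:

```rust
use std::sync::atomic::Ordering;
use vm_memory::{Bytes, GuestAddress, GuestMemoryMmap};

// `mem` is assumed to be a guest memory object constructed elsewhere.
fn publish_avail_idx(mem: &GuestMemoryMmap, idx_addr: GuestAddress, idx: u16) {
    // Atomic store with Release ordering, as virtio ring index updates require.
    mem.store(idx, idx_addr, Ordering::Release).unwrap();
}

fn fetch_avail_idx(mem: &GuestMemoryMmap, idx_addr: GuestAddress) -> u16 {
    // Matching atomic load with Acquire ordering.
    mem.load(idx_addr, Ordering::Acquire).unwrap()
}
```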
Hi again! Should we merge this? Also, what do you think about reverting #95 entirely at some point in the future? (Including for read/write_obj, since they are not actually supposed to be used for atomic accesses; we need to clarify that as part of the interface documentation as well.)
It sounds reasonable to merge this PR as a transient solution to avoid regressions. And eventually we should revert to copy_from_slice.
I was wondering when would be the right time to revert #95? Perhaps we can use the recent jump to 0.3.x as an opportunity to do it now? Users needing the old behavior can stay at 0.2.2.
I think that reverting should take place as part of a subsequent series (i.e. 0.4.x), so that people can use 0.3.x to transition to the new atomic methods while the old read/write_obj behaviour remains in place as well. Afterwards, the main concerns are that consumers which still rely on the old semantics should be careful not to inadvertently upgrade to the newer versions, and we'll probably have to backport stuff for a while. I guess we should start by engaging the known customers of vm-memory (and also open an issue about it) to see how fast we can move forward. I'll start doing that (plus double-check once more and merge this PR if there are no further comments) by tomorrow.
@alexandruag Sounds like a sensible strategy to me.
Where small objects are those objects that are less than the native data width for the platform. This ensures that volatile and alignment-safe reads/writes are used when updating structures that are sensitive to this, such as virtio devices, where the spec requires writes to be atomic.
Fixes: cloud-hypervisor/cloud-hypervisor#1258 Fixes: #100
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
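A rough sketch (not the actual patch) of the small-object dispatch the commit message describes; the helper name is hypothetical and the exact size cutoff is an assumption:

```rust
use std::mem::size_of;
use std::ptr;

// Hypothetical helper: dispatch between a volatile, alignment-safe
// store for small objects and a plain byte copy for everything else.
//
// SAFETY: the caller must ensure `dst` is valid for writes of T.
unsafe fn write_obj_sketch<T: Copy>(dst: *mut T, val: T) {
    let small = size_of::<T>() > 0
        && size_of::<T>() <= size_of::<usize>()   // cutoff is an assumption
        && (dst as usize) % size_of::<T>() == 0;  // size == alignment for the
                                                  // primitive types of interest
    if small {
        // Volatile write: not elided by the compiler, and an aligned,
        // word-sized store is atomic on common platforms.
        ptr::write_volatile(dst, val);
    } else {
        // Larger or misaligned objects: byte-wise copy, no atomicity.
        ptr::copy_nonoverlapping(
            &val as *const T as *const u8,
            dst as *mut u8,
            size_of::<T>(),
        );
    }
}
```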