alicefr closed this pull request 5 months ago
/cc @mhenriks @awels @acardace @vladikr
Hopefully, this time will be the good one :)
@awels @mhenriks could you please check the part related to the datavolumes and tell me if you agree with the limitations?
Are you planning to show how migration status is reported?
Great question by @mhenriks - Yes. How do we track progress?
> Are you planning to show how migration status is reported?
The status is the status of the VM migration. I don't think we show today the percentage of the completion in the VMI migration object, but we could add it.
> The status is the status of the VM migration.

How do we know which migration is related to a spec change?
> I don't think we show today the percentage of the completion in the VMI migration object, but we could add it.
Maybe we do not want to solve this today (we could not solve it in the past 3 years or so ;) )
> The status is the status of the VM migration.
> How do we know which migration is related to a spec change?
I could put this information in the VMI migration object somehow, for example as a label. However, I don't think we have this possibility for CPU and memory hotplug either. @acardace am I correct?
Re-reading this proposal, a couple of points:
- vol changes, LM is triggered --> here we might really want to set an annotation to express the "cause". This is something that we should use in other areas as well (virtctl triggered --> cause virtctl, eviction triggered --> cause eviction, …)
Can we target this for a second iteration? I could add it to the implementation phase. We might want to give more thought to the other cases where we trigger the migration, like updates and node eviction. I wouldn't use an annotation here, but rather a condition in the status of the VMI migration object.
- I suppose that if the LM fails, this will be expressed in a condition? This design should define the condition that is being used.
I can add a condition on the VM that expresses whether the volume update was successful. However, there are already two types of conditions here: HotVCPUChange and HotMemoryChange.
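For illustration, a hedged sketch of what such a condition could look like in the VM status, by analogy with the existing hotplug conditions (the condition type, reason, and message here are assumptions for discussion, not an agreed API):

```yaml
status:
  conditions:
  - type: VolumesChange              # assumed name, by analogy with HotVCPUChange
    status: "True"
    reason: VolumesUpdateInProgress  # assumed reason
    message: migrating volumes of the VM
```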
@fabiand @mhenriks I was suggesting here to use a label because it is something we can filter on. With the new proposal with the update strategy, it isn't straightforward to identify which migration is associated with the volume update.
I think it would be helpful for this doc to provide an example of how a user can monitor the status of volume migration, i.e., how to find the matching migration object and interpret its status.
@mhenriks I added an example of how we can use a label to get the corresponding VMI migration. The information regarding the update and storage migration can be inferred from the status of the VMI migration object.
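To make the flow concrete, a sketch of how such a label could look on the migration object (the label key `kubevirt.io/volume-update-in-progress` and all names here are assumptions for illustration, not the final API):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: volume-update-xyz            # hypothetical migration created by the controller
  labels:
    kubevirt.io/volume-update-in-progress: my-vm   # assumed label key
spec:
  vmiName: my-vm
```

A user could then find the matching migration with something like `kubectl get vmim -l kubevirt.io/volume-update-in-progress=my-vm` and follow its `status.phase` to interpret progress.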
@fabiand @mhenriks @acardace are you satisfied with the current state of the design? I'd like to start implementing it.
/approve
I am, thanks Alice for the hard work and the patience especially.
/lgtm
forgot I don't have approval rights here.
woot. :tada:
/lgtm
/lgtm
@vladikr @fabiand this proposal already has 2 lgtms. Would you like to take another look and eventually approve it?
It looks good to me already. Let's wait for Fabian's comments to be resolved and we can get this in.
I tried to address the last comments, PTAL, thanks!
Thanks lgtm!
/approve

Looks good to me as well. Thank you @alicefr!
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: acardace, vladikr
The full list of commands accepted by this bot can be found here.
The pull request process is described here
/lgtm
This is the third attempt and version of the volume migration proposal.
After several discussions, the API and solution diverged quite a lot from the original idea and CRD proposed in https://github.com/kubevirt/community/pull/245. Therefore, I decided to close the previous proposal and open a new PR.
This version introduces a new update strategy for volumes under the VM spec and tries to follow the same flow as CPU and memory hotplug, described in the cpu-hotplug proposal.
I think the most controversial part of this proposal is the datavolume restriction. In the previous proposal, the controller was the one getting rid of the datavolume parts. Now, we want only the user to touch the VM spec, so the DV declarations need to be removed manually. Please consider that we plan to build an overlay tool that should simplify the use of this API.
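To illustrate the flow under these assumptions (the field and volume names below are hypothetical examples, not the final API), a user updating a volume would edit the VM spec roughly like this, switching the volume source to the destination and manually removing the old entry from `dataVolumeTemplates`:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: my-vm
spec:
  updateVolumesStrategy: Migration   # assumed field introduced by this proposal
  # the DataVolume previously listed under dataVolumeTemplates has been
  # removed manually by the user, per the restriction discussed above
  template:
    spec:
      volumes:
      - name: rootdisk
        persistentVolumeClaim:
          claimName: dest-pvc        # hypothetical destination volume
```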