erictune opened this issue 7 years ago (status: Open)
Possible approach 1: when pod-1 is to be removed, scale up so that pod-2 is created. pod-1 discovers pod-2 via DNS, then syncs with it, then exits. If pod-1 is later created, sync the other way, and scale back down.

Advantages of this approach:

Drawbacks to this approach:
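A minimal, in-memory Go sketch of the approach-1 handoff (all names are hypothetical; `discover` stands in for the DNS lookup a real pod would do against a headless service, and no actual Kubernetes API is called):

```go
// Sketch of approach 1: before the old pod exits, it discovers its
// replacement and pushes its application state to it.
package main

import "fmt"

// Pod models just enough of a pod for the handoff: a name and the
// application state that must survive the migration.
type Pod struct {
	Name  string
	State map[string]string
}

// discover stands in for a DNS lookup; here it simply consults a map.
func discover(registry map[string]*Pod, name string) (*Pod, bool) {
	p, ok := registry[name]
	return p, ok
}

// handoff copies every key of the old pod's state into the replacement,
// then clears the old pod so it can exit cleanly.
func handoff(old, repl *Pod) {
	for k, v := range old.State {
		repl.State[k] = v
	}
	old.State = map[string]string{} // old pod now holds nothing
}

func main() {
	pod1 := &Pod{Name: "pod-1", State: map[string]string{"shard": "a"}}
	pod2 := &Pod{Name: "pod-2", State: map[string]string{}}
	registry := map[string]*Pod{"pod-2": pod2}

	// Scale-up has created pod-2; pod-1 finds it "via DNS" and syncs.
	if peer, ok := discover(registry, "pod-2"); ok {
		handoff(pod1, peer)
	}
	fmt.Println(pod2.State["shard"]) // prints "a"
}
```

In a real cluster the sync step would run application-level replication (e.g. in a preStop hook), but the control flow is the same: discover, sync, then exit.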
Possible approach 2: suppose the pod is named pod-32jdg. When pod-32jdg is to be deleted, a replacement such as pod-m2k87 is created for it. It uses the same labels. pod-32jdg exits gracefully when migration is done.

Advantages of this approach:

Drawbacks to this approach:
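A small Go sketch of the approach-2 flow (hypothetical names, in-memory only; the migration step itself is elided). The key ordering constraint is that the replacement carries the same labels and the old pod only exits after migration completes:

```go
// Sketch of approach 2: create a same-labeled replacement first, migrate,
// and only then let the old pod exit gracefully.
package main

import "fmt"

type Pod struct {
	Name    string
	Labels  map[string]string
	Running bool
}

// replaceThenDrain creates a replacement pod carrying the same labels,
// performs the (elided) application-level migration, and only then marks
// the old pod as exited.
func replaceThenDrain(old *Pod, newName string) *Pod {
	repl := &Pod{Name: newName, Labels: old.Labels, Running: true}
	// ... application-level state migration would happen here ...
	old.Running = false // old pod exits gracefully once migration is done
	return repl
}

func main() {
	old := &Pod{Name: "pod-32jdg", Labels: map[string]string{"app": "db"}, Running: true}
	repl := replaceThenDrain(old, "pod-m2k87")
	fmt.Println(repl.Labels["app"], old.Running) // prints "db false"
}
```

Because the labels match, a Service selecting on them would pick up the replacement automatically once it becomes ready.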
Approach 1 seems like a custom strategy: https://github.com/kubernetes/kubernetes/issues/14510
cc: @kubernetes/sig-apps-feature-requests
This proposal could simplify both approaches. I believe future Operators can become more lightweight if we allow a more elaborate cleanup mechanism.
Ref #3949
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten
/remove-lifecycle stale
@erictune have you tried approach 2? I'm actually investigating a similar method now, to migrate runv containers from one node to another.
A user wants to extend Kubernetes to allow for application-specific migration in response to pod deletion events, whenever possible.
Suppose pod-1 exists. When pod-1 is deleted, then a replica, pod-2, should be created. Before pod-1 is actually terminated, it will discover pod-2 and they will do an application-level handoff of state.

This issue is created to suggest possible ways to implement this pattern.