Open heprotecbuthealsoattac opened 4 years ago
Have you tried setting `volumeBindingMode: WaitForFirstConsumer` on your StorageClass? https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode It should prevent the situation where a volume provisioned by Kubernetes resides in one zone while the scheduler places the pod in another.
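For reference, a minimal sketch of such a StorageClass, assuming the EBS CSI driver is installed (the class name and volume type are illustrative):

```yaml
# Hypothetical StorageClass; delays volume binding until a pod is scheduled,
# so the EBS volume is created in the same AZ as the consuming pod.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-wait-sc
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```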
@wongma7 - This still won't solve the problem of an existing pod moving to a worker node in another AZ (this may happen for several reasons: upgrades, crashing due to OOM, etc.).
Any updates on this? Is this being looked into?
Tell us about your request Currently, when using EBS for PVs, the StatefulSet (or at least each of its pods) is bound to a single availability zone. Would it be possible to "migrate" the EBS volumes across availability zones when a new pod is scheduled in a different AZ?
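One snapshot-based workaround today: EBS snapshots are regional rather than zonal, so restoring a PVC from a VolumeSnapshot into a `WaitForFirstConsumer` StorageClass lets the new volume be created in whichever AZ the new pod lands in. A sketch, assuming the EBS CSI driver and the external snapshot controller are installed; every name below is illustrative:

```yaml
# Hypothetical snapshot of the StatefulSet's existing PVC.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-snapshot
spec:
  volumeSnapshotClassName: ebs-snapclass
  source:
    persistentVolumeClaimName: data-my-statefulset-0
---
# Restore into a new PVC; with WaitForFirstConsumer binding, the restored
# EBS volume is provisioned in the AZ of the pod that consumes it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-restored
spec:
  storageClassName: ebs-wait-sc
  dataSource:
    name: data-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 20Gi
```

This is a manual copy, not a live migration: the pod must be repointed at the restored claim, and writes made after the snapshot are lost.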
Which service(s) is this request for? EKS (PVC, PV, StorageClass)
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard? My application requires a persistent local filesystem. Right now I'm using StatefulSets to achieve that, and Kubernetes-wise it works well. However, if the node my pod runs on gets terminated, it stops working as expected. Let's go through an example:
Are you currently working around this issue? EFS is AWS's recommended workaround, but in this case NFS would be serving a single machine just to sidestep the AZ limitation; only EFS's multi-AZ capability is actually being used. To paraphrase Larry Wall: it's like trying to club someone to death with a loaded Uzi.
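For completeness, a sketch of that EFS workaround via the EFS CSI driver with dynamic provisioning through access points; the filesystem ID is a placeholder and must be replaced with a real EFS filesystem:

```yaml
# Hypothetical EFS StorageClass; volumes it provisions are multi-AZ, so pods
# can reattach from any zone -- at the cost of going through NFS.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-0123456789abcdef0   # placeholder filesystem ID
  directoryPerms: "700"
```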