sebgl opened this issue 3 years ago
Just in case someone encounters this like I did on AKS (Azure managed Kubernetes), here is my workaround in rough steps. So far I have only tried applying the new disk size while ECK was running, which led to the error above; I think you can avoid these errors entirely by changing the Elasticsearch CRD's desired disk size only at step 6 instead.
First, scale the operator down so it stops reconciling:

kubectl scale statefulset elastic-operator -n elastic-system --replicas=0

Then delete the StatefulSet with --cascade=orphan to stop it from removing the Pods:

kubectl delete statefulset my-nodeSet --cascade=orphan
It's important that you do NOT delete any Services associated with the StatefulSet. Hopefully this works for others. I am running Elasticsearch 6.8.13 with ECK 1.7.0 in this scenario.
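The steps above can be sketched as a single script. This is a rough, hedged sketch, not an official procedure: the node set name (`my-nodeSet`), the PVC name (following ECK's usual `elasticsearch-data-<statefulset>-<ordinal>` pattern), the namespace, and the target size of `2Ti` are all placeholder assumptions you must adapt to your cluster.

```shell
#!/usr/bin/env sh
# Sketch of the workaround, assuming hypothetical names and sizes.

# 1. Stop the ECK operator so it does not reconcile while we work.
kubectl scale statefulset elastic-operator -n elastic-system --replicas=0

# 2. Delete the StatefulSet without deleting its Pods (orphan cascade).
kubectl delete statefulset my-nodeSet --cascade=orphan

# 3. Expand each PVC directly (repeat per ordinal; name/size are assumptions).
kubectl patch pvc elasticsearch-data-my-nodeSet-0 \
  -p '{"spec":{"resources":{"requests":{"storage":"2Ti"}}}}'

# 4. Update the Elasticsearch CRD's volumeClaimTemplates to the same new size
#    (via kubectl edit or your manifests), then bring the operator back so it
#    recreates the StatefulSet with the new size.
kubectl scale statefulset elastic-operator -n elastic-system --replicas=1
```

These commands require a live cluster, so they are shown as a dry sketch rather than something runnable here. The key design point is step 2: `--cascade=orphan` removes only the StatefulSet object, leaving the Pods (and their Services) untouched so Elasticsearch keeps serving traffic throughout.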
Hello there, I just ran into this issue as well when I wanted to resize a disk to 4Ti. The resize failed because the storage class I am using had cachingmode=ReadOnly, which is only supported below 4Ti (4095Gi is okay).
Resetting the size to the old value, or to 4095Gi, does not work either.
Is the above workaround still the way to go, @sebgl?
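When a resize is rejected by the cloud provider like this, the reason usually surfaces as events on the PVC rather than in the Elasticsearch resource. A quick way to confirm it is the cachingmode/4Ti limit (PVC name here is a placeholder assumption):

```shell
# Show the PVC's current requested vs. actual size and any resize-failure
# events reported by the volume provisioner.
kubectl describe pvc elasticsearch-data-my-nodeSet-0

# Or list only the events, newest last.
kubectl get events --field-selector involvedObject.name=elasticsearch-data-my-nodeSet-0 \
  --sort-by=.metadata.creationTimestamp
```

Both commands need a live cluster; they only read state, so they are safe to run while debugging.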
The following situation happened to an Azure user:
At this point it is quite hard for the user to get back to a clean state, since ECK has already recreated the StatefulSet with the new size. Should the operator catch the PVC resize error event and allow a downsize to go through in that particular case?
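The stuck state described above can be made visible by comparing the size the recreated StatefulSet requests against what the PVC actually has. A minimal sketch, assuming the hypothetical names used earlier in this thread:

```shell
# Size the (recreated) StatefulSet template asks for:
kubectl get statefulset my-nodeSet \
  -o jsonpath='{.spec.volumeClaimTemplates[0].spec.resources.requests.storage}'

# Size the PVC currently requests, and the capacity actually bound:
kubectl get pvc elasticsearch-data-my-nodeSet-0 \
  -o jsonpath='{.spec.resources.requests.storage} {.status.capacity.storage}'
```

If the template and the PVC request show the new (rejected) size while `status.capacity` still shows the old one, you are in exactly the inconsistent state the comment describes, since Kubernetes does not permit shrinking a PVC request back down.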