ialidzhikov opened this issue 3 years ago
I wrote a small script to detect the affected clusters and their owners:
```bash
#!/bin/bash
set -exo pipefail

# List all Azure Shoots that still use the legacy volume type "standard".
# A Shoot is emitted once per matching worker pool, hence the trailing `uniq`.
shoot_list=$(kubectl get shoots -A -o json \
  | jq -r '.items[]
      | select(.spec.provider.type == "azure")
      | select(.spec.provider.workers[] | .volume.type == "standard")
      | [.metadata.namespace, .metadata.name, .metadata.creationTimestamp]
      | @json' \
  | sed 's/garden-//g' \
  | uniq)

for shoot in $shoot_list; do
  echo "---"
  echo "Project: $(echo "$shoot" | jq -r '.[0]')"
  echo "Shoot: $(echo "$shoot" | jq -r '.[1]')"
  echo "Owner: $(kubectl get project "$(echo "$shoot" | jq -r '.[0]')" -o json | jq -r '.spec.owner.name')"
  echo "Created at: $(echo "$shoot" | jq -r '.[2]')"
done
```
/assign
/remove-lifecycle rotten
https://github.com/gardener-attic/gardener-extensions/pull/401 introduces the following code fragment for backwards-compatibility reasons:
https://github.com/gardener/gardener-extension-provider-azure/blob/b4c364899891e03886d607f2578409e256bbd9f4/pkg/controller/worker/machines.go#L306-L315
Prior to https://github.com/gardener-attic/gardener-extensions/pull/401, Azure machines were always created with the default OS disk that belongs to the requested machine type.
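As a rough illustration, here is a minimal Go sketch of what such a backwards-compatibility fallback could look like, assuming the fragment passes known Azure storage account types through verbatim and otherwise omits the OS disk type; the function name and structure are assumptions, the authoritative code is in the machines.go lines linked above:

```go
package worker

// legacyCompatibleOSDiskType is a hypothetical sketch of the
// backwards-compatibility handling referenced above; the name and shape
// are assumptions, not the actual machines.go implementation.
func legacyCompatibleOSDiskType(volumeType string) (string, bool) {
	switch volumeType {
	case "Standard_LRS", "StandardSSD_LRS", "Premium_LRS":
		// A valid Azure storage account type is used verbatim for the OS disk.
		return volumeType, true
	default:
		// Legacy values such as "standard" predate the explicit list; the OS
		// disk type is left unset so the machine type's default disk applies.
		return "", false
	}
}
```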
We already have validation in place that prevents the creation of a new cluster with a volume type other than `["Standard_LRS", "StandardSSD_LRS", "Premium_LRS"]` (see the sketch below).
We still have a small number of legacy Shoots that use `.spec.provider.workers[].volume.type=standard`. We should actively ping their owners to migrate (changing `volume.type` triggers a rolling update of the Nodes). After that, we can clean up the explicit backwards-compatibility check for `["Standard_LRS", "StandardSSD_LRS", "Premium_LRS"]` shown above.
/kind cleanup
/platform azure
/priority normal
/cc @dkistner