esierra
Sep 12th at 11:53 PM
I think the --vm-type flag is not overriding as expected
esierra
Sep 12th at 11:56 PM
When I change that parameter to standard in /etc/kubernetes/azure.json, it starts working properly
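For reference, this is the parameter in question — a trimmed sketch of /etc/kubernetes/azure.json with only the fields that appear in the driver's startup log (credentials and other fields omitted); the exact file contents are an assumption:

```json
{
  "cloud": "AzurePublicCloud",
  "location": "westeurope",
  "resourceGroup": "esierra-dev-vms",
  "vmType": "vmss"
}
```

Changing "vmType" here to "standard" is the workaround described above.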
esierra
Sep 13th at 6:25 PM
Here's the proof of what I mentioned:
csi-azuredisk-controller v1.30.3
azuredisk configuration:
Args:
--v=5
--endpoint=$(CSI_ENDPOINT)
--metrics-address=0.0.0.0:29604
--disable-avset-nodes=false
--vm-type=standard
--drivername=disk.csi.azure.com
--cloud-config-secret-name=azure-cloud-provider
--cloud-config-secret-namespace=kube-system
--custom-user-agent=
--user-agent-suffix=OSS-helm
--allow-empty-cloud-config=false
--vmss-cache-ttl-seconds=-1
--enable-traffic-manager=false
--traffic-manager-port=7788
--enable-otel-tracing=false
--check-disk-lun-collision=true
Logs from azuredisk:
I0913 09:38:05.003789 1 azuredisk.go:226] override VMType(vmss) in cloud config as standard
(the override message above is expected, since vmType is set to vmss in azure.json)
I0913 09:38:05.003809 1 azuredisk.go:245] cloud: AzurePublicCloud, location: westeurope, rg: esierra-dev-vms, VMType: standard, PrimaryScaleSetName: , PrimaryAvailabilitySetName: , DisableAvailabilitySetNodes: false
And the error:
E0913 09:42:01.872893 1 azure_controller_common.go:459] error of getting data disks for node worker1-md-2-ltm2g-mmm6j: not a vmss instance
It seems that the overridden VMType value is not taken into account: the flow continues with the vmss VMType, which is used in the following function: https://github.com/kubernetes-sigs/azuredisk-csi-driver/blob/48761a631c9149f198258[…]c/vendor/sigs.k8s.io/cloud-provider-azure/pkg/provider/azure.go
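A hypothetical sketch of how this kind of bug can happen (this is not the actual driver code — the types and functions below are invented for illustration): if the VMType override is applied to the cloud config after the provider has already captured the original value, the startup log reports "standard" while the attach path keeps using "vmss".

```go
package main

import "fmt"

// CloudConfig stands in for the parsed azure.json.
type CloudConfig struct {
	VMType string
}

// Provider stands in for the cloud provider; it captures
// the VMType once, at initialization.
type Provider struct {
	vmType string
}

func NewProvider(cfg CloudConfig) *Provider {
	return &Provider{vmType: cfg.VMType}
}

func main() {
	cfg := CloudConfig{VMType: "vmss"} // value from /etc/kubernetes/azure.json
	provider := NewProvider(cfg)       // provider initialized before the override

	// Override applied afterwards, as --vm-type=standard intends:
	cfg.VMType = "standard"

	fmt.Println("config VMType:", cfg.VMType)        // "standard" — what the log reports
	fmt.Println("provider VMType:", provider.vmType) // "vmss" — what the attach path still uses
}
```

If something like this is happening, the fix would be to apply the override before the provider consumes the config, or to propagate it into the already-initialized provider.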
What happened:
Full thread with configuration and logs above. In short: with vmType set to vmss in /etc/kubernetes/azure.json, passing --vm-type=standard to csi-azuredisk-controller v1.30.3 logs that the VMType is overridden to standard, but disk operations still fail with "not a vmss instance". Changing vmType to standard directly in azure.json makes it work.
esierra
Sep 13th at 6:27 PM
What you expected to happen:
The --vm-type=standard flag should override the vmType from azure.json for all driver operations, including the disk attach/detach path, not only in the startup log.
How to reproduce it:
Deploy csi-azuredisk-controller v1.30.3 on nodes that are not VMSS instances while /etc/kubernetes/azure.json has vmType set to vmss, pass --vm-type=standard to the controller, and attach a disk to a node. The attach fails with "not a vmss instance".
Anything else we need to know?:
Environment:
- CSI Driver version: v1.30.3
- Kubernetes version (use kubectl version):
- Kernel (e.g. uname -a):

/kind bug