Closed liudalibj closed 6 months ago
Hi @liudalibj .
Hi @liudalibj. Actually, that is mustache templating, and it won't work as-is because of all the braces referencing the config file. For the time being, please replace them with the default cpu and memory values that are already specified there. We will raise a PR to fix it with default values.
Thanks for the reply @GunaKKIBM. I tried to replace all {{...}} placeholders with an empty string so that the default cpu and memory values are used:
pattern="\{\{[^}]*\}\}"
sed -E -i "s/$pattern//g" "$filepath"
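To sanity-check that the sed pass removed every placeholder, grepping for the same pattern afterwards works as a quick verification. A minimal sketch (the deploy-example.yaml content below is a hypothetical one-line example, not the real driver manifest):

```shell
# create a tiny example file containing a mustache placeholder (hypothetical)
filepath="deploy-example.yaml"
printf 'cpu: {{kube-system.addon-vpc-block-csi-driver-configmap.cpu}}\n' > "$filepath"

# strip every {{...}} placeholder, as in the snippet above
pattern="\{\{[^}]*\}\}"
sed -E -i "s/$pattern//g" "$filepath"

# grep exits non-zero when nothing matches, i.e. the file is clean
if grep -Eq "$pattern" "$filepath"; then echo "placeholders remain"; else echo "clean"; fi
```

Note this uses GNU sed's in-place flag; on BSD/macOS sed the equivalent is sed -E -i '' "s/$pattern//g".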
The csi-driver pods are created with the fixed deployment yaml file, but the ibm-vpc-block-csi containers fail to start:
kube-system ibm-vpc-block-csi-controller-bdfdf4657-qgdvr 5/6 CrashLoopBackOff 11 (4m32s ago) 36m
kube-system ibm-vpc-block-csi-node-tfpb6 2/3 CrashLoopBackOff 11 (4m43s ago) 36m
kube-system ibm-vpc-block-csi-node-tnnsb 2/3 CrashLoopBackOff 11 (4m37s ago) 36m
the controller logs:
root@liudali-csi-amd64-node-0:~# kubectl logs -n kube-system ibm-vpc-block-csi-controller-bdfdf4657-qgdvr -c iks-vpc-block-driver
{"level":"info","timestamp":"2023-06-16T05:57:31.418Z","caller":"cmd/main.go:87","msg":"IBM CSI driver version","name":"ibm-vpc-block-csi-driver","CSIDriverName":"IBM VPC block driver","DriverVersion":"vpcBlockDriver-"}
{"level":"info","timestamp":"2023-06-16T05:57:31.418Z","caller":"cmd/main.go:88","msg":"Controller Mutex Lock enabled","name":"ibm-vpc-block-csi-driver","CSIDriverName":"IBM VPC block driver","LockEnabled":false}
{"level":"info","timestamp":"2023-06-16T05:57:31.419Z","caller":"ibmcloudprovider/volume_provider.go:50","msg":"NewIBMCloudStorageProvider-Reading provider configuration...","name":"ibm-vpc-block-csi-driver","CSIDriverName":"IBM VPC block driver"}
{"level":"error","timestamp":"2023-06-16T05:57:31.428Z","caller":"config/config.go:172","msg":"Failed to parse config","name":"ibm-vpc-block-csi-driver","CSIDriverName":"IBM VPC block driver","error":"toml: line 9 (last key \"vpc\"): expected a top-level item to end with a newline, comment, or EOF, but got '0' instead"}
{"level":"error","timestamp":"2023-06-16T05:57:31.428Z","caller":"config/config.go:62","msg":"Error parsing config","name":"ibm-vpc-block-csi-driver","CSIDriverName":"IBM VPC block driver","error":"toml: line 9 (last key \"vpc\"): expected a top-level item to end with a newline, comment, or EOF, but got '0' instead"}
{"level":"error","timestamp":"2023-06-16T05:57:31.428Z","caller":"ibmcloudprovider/volume_provider.go:54","msg":"Error loading configuration","name":"ibm-vpc-block-csi-driver","CSIDriverName":"IBM VPC block driver"}
{"level":"fatal","timestamp":"2023-06-16T05:57:31.428Z","caller":"cmd/main.go:97","msg":"Failed to instantiate IKS-Storage provider","name":"ibm-vpc-block-csi-driver","CSIDriverName":"IBM VPC block driver","error":"toml: line 9 (last key \"vpc\"): expected a top-level item to end with a newline, comment, or EOF, but got '0' instead"}
It seems that the master-branch code fails to handle the file from https://github.com/kubernetes-sigs/ibm-vpc-block-csi-driver/blob/master/deploy/kubernetes/driver/kubernetes/slclient_Gen2.toml?
Checking the latest config format at https://github.com/IBM/secret-utils-lib/blob/master/secrets/storage-secret-store/slclient.toml, maybe we should use
...
[VPC]
...
?
The csi containers started once slclient_Gen2.toml was changed as follows:
[VPC]
iam_client_id = "bx"
iam_client_secret = "bx"
g2_token_exchange_endpoint_url = "https://iam.cloud.ibm.com"
g2_riaas_endpoint_url = "https://eu-gb.iaas.cloud.ibm.com"
g2_resource_group_id = "0013399570e34af29d51788b600fc617"
g2_api_key = "myapikey"
provider_type = "g2"
root@liudali-csi-amd64-node-0:~/ibm-vpc-block-csi-driver/scripts# kubectl get po -A |grep vpc-block
kube-system ibm-vpc-block-csi-controller-bdfdf4657-m4n7v 6/6 Running 0 3m34s
kube-system ibm-vpc-block-csi-node-9dqkd 3/3 Running 0 3m32s
kube-system ibm-vpc-block-csi-node-phrfb 2/3 CrashLoopBackOff 5 (33s ago) 3m34s
root@liudali-csi-amd64-node-0:~/ibm-vpc-block-csi-driver/scripts#
There is one CrashLoopBackOff, which is expected.
I am sorry, looks like I am missing something. Why is that one CrashLoopBackOff expected?
Sorry for the confusion @GunaKKIBM. I have two worker nodes: one is amd64 and the other is s390x, so the csi-node pod failing to start on the s390x node is expected.
@liudalibj
Very soon, maybe in a couple of weeks, we will be supporting multi-arch images and will update the docs accordingly. For now, you might need to build an image for that architecture and create a manifest for it yourself.
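Until multi-arch images land, one way to see which architectures an image ships for is to inspect its manifest list (for example with docker manifest inspect <image>), which contains one entry per platform. A minimal sketch parsing a trimmed sample manifest (the JSON below is illustrative, not the real driver image's manifest):

```shell
# sample manifest-list JSON (illustrative); a real one would come from
#   docker manifest inspect <image>
cat > manifest.json <<'EOF'
{"manifests":[
  {"platform":{"architecture":"amd64","os":"linux"}},
  {"platform":{"architecture":"s390x","os":"linux"}}
]}
EOF

# list the architectures present in the manifest list
python3 -c 'import json; m=json.load(open("manifest.json")); print(" ".join(e["platform"]["architecture"] for e in m["manifests"]))'
```

If s390x is missing from the real image's list, a CrashLoopBackOff on the s390x node is the expected symptom.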
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
When I followed https://github.com/kubernetes-sigs/ibm-vpc-block-csi-driver/blob/master/README.md#deploy-csi-driver-on-your-cluster to install ibm-vpc-block-csi-driver on my IKS cluster,
I hit the following error:
I tried to fix the issue by replacing
patches:
with
patchesStrategicMerge:
in deploy/kubernetes/driver/kubernetes/overlays/stage/kustomization.yaml. The error message changed to:
I tried to fix the new issue by adding the namespace
namespace: kube-system
to controller-server-images.yaml and node-server-images.yaml under deploy/kubernetes/driver/kubernetes/overlays/stage.
The new error looks like:
I tried to fix the new issue by updating
StatefulSet
to
Deployment
in deploy/kubernetes/driver/kubernetes/overlays/stage/controller-server-images.yaml.
The new error looks like:
Looking at the built-out yaml file, I found that the resources section breaks the deploy; there are many {{kube-system.addon-vpc-block-csi-driver-configmap.xxx}} placeholders that need to be replaced?
I installed
kustomize
by command; the version is v5.0.3.
The go path