Closed: NormJohnIV closed this 3 years ago.
Except for the comment above, code looks good.
@enderm re-test based on Norm's information above.
Hm, things failed for me on a brand-new cluster. After deleting the driver, the helm install works. Also, after doing a "baseline,uninstall" I can successfully install the efs-csi driver. BUT, against a brand-new cluster, I still get the error. It looks like the csi driver is being installed by default with EKS, at least when using the module we use in viya4-iac-aws. Here is what I have:
$ kubectl get csidrivers.storage.k8s.io -o yaml
apiVersion: v1
items:
Correction: the ansible 'uninstall, baseline' run does NOT actually remove the pre-installed csi driver.
@enderm: From further testing, we do not actually need the aws efs csi driver, so it has been removed. You should be able to deploy fine without it now.
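If it helps, a quick way to confirm the driver object is gone before re-deploying (just a sketch; efs.csi.aws.com is the object name the AWS EFS CSI driver registers):

```sh
# Check for a leftover EFS CSI driver object on the cluster.
kubectl get csidrivers.storage.k8s.io efs.csi.aws.com \
  || echo "efs.csi.aws.com not present - safe to deploy"
```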
Sigh, no luck with this latest one either. Against a fresh cluster:

TASK [efs-provisioner : Add helm repos] ****
changed: [localhost]
Wednesday 09 December 2020 01:38:07 +0000 (0:00:00.734) 0:00:04.704 ****

TASK [Deploy efs-provisioner] **
fatal: [localhost]: FAILED! => {"changed": false, "command": "/usr/local/bin/helm --kubeconfig /tmp/ansible.0nz3ad0w/.kube --namespace=efs-provisioner --version=0.12.0 show chart stable/efs-provisioner", "msg": "Failure when executing Helm command. Exited 1.\nstdout: \nstderr: Error: failed to download \"stable/efs-provisioner\" at version \"0.12.0\" (hint: running helm repo update may help)\n", "stderr": "Error: failed to download \"stable/efs-provisioner\" at version \"0.12.0\" (hint: running helm repo update may help)\n", "stderr_lines": ["Error: failed to download \"stable/efs-provisioner\" at version \"0.12.0\" (hint: running helm repo update may help)"], "stdout": "", "stdout_lines": []}
@enderm are you running from within the docker container? If so, can you jump into the container and run the helm command by hand to see if the helm repo might be the problem? Thx.
yes, I'm using the container.
Running the helm command manually gets the same result:
$ /usr/local/bin/helm --kubeconfig /tmp/ansible.uy2bcx18/.kube --namespace=efs-provisioner --version=0.12.0 show chart stable/efs-provisioner
Error: failed to download "stable/efs-provisioner" at version "0.12.0" (hint: running helm repo update may help)
and:
$ helm repo update
Error: no repositories found. You must add one before updating
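(For reference, a sketch of what recovering by hand inside the container would look like; the URL below is the archived location of the old helm stable repo, and whether chart version 0.12.0 can still be pulled from it is an assumption:)

```sh
# Manually register the (deprecated) stable repo, refresh the cache, and retry.
helm repo add stable https://charts.helm.sh/stable
helm repo update
helm show chart stable/efs-provisioner --version 0.12.0
```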
The repo is added only when the tool runs; by default, the container has no repos. Doing some searching, we see that the efs-provisioner from the new helm stable repo is now deprecated. The only active efs-provisioner is provided by Banzai Cloud, at version 0.0.2, so I will look into switching over to using it.
The Banzai Cloud version is quite outdated and not a "proper" helm chart, so the better option is to package the tgz of the latest stable version in this repo.
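Roughly, the vendoring step would look something like this (a sketch only; the destination path inside the role is an assumption, not necessarily what gets committed):

```sh
# Pull the last published chart from the archived stable repo and vendor the
# .tgz into the role so the playbook never needs a configured helm repo.
helm repo add stable https://charts.helm.sh/stable
helm pull stable/efs-provisioner
mv efs-provisioner-*.tgz roles/efs-provisioner/files/
```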
The EFS helm chart has been added to the efs-provisioner role's files.
TASK [Deploy efs-provisioner] **
fatal: [localhost]: FAILED! => {"changed": false, "command": "/usr/local/bin/helm --kubeconfig /tmp/ansible.okd7poq7/.kube repo update", "msg": "Failure when executing Helm command. Exited 1.\nstdout: \nstderr: Error: no repositories found. You must add one before updating\n", "stderr": "Error: no repositories found. You must add one before updating\n", "stderr_lines": ["Error: no repositories found. You must add one before updating"], "stdout": "", "stdout_lines": []}
I disabled update_repo_cache on the efs-provisioner task since there is no helm repo in the docker container when it runs, and the chart is local anyway.
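Installing straight from the vendored archive never touches a repo at all; a minimal sketch of the equivalent helm invocation (release name, namespace, and chart path are assumptions):

```sh
# Install directly from the local .tgz shipped with the role; a chart referenced
# by path needs no `helm repo add` or `helm repo update`.
helm --kubeconfig "$KUBECONFIG" upgrade --install efs-provisioner \
  roles/efs-provisioner/files/efs-provisioner-*.tgz \
  --namespace efs-provisioner --create-namespace
```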
looks good now.
@enderm That error states you already had a csidriver installed that is missing the helm annotations. I believe this is an artifact from the previous install attempts with the faulty helm manifest. This can be cleared by running "kubectl delete csidrivers.storage.k8s.io efs.csi.aws.com". Also, delete the secret "sh.helm.release.v1.aws-efs-csi-driver.v1" if it exists in the kube-system namespace.
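Putting both cleanup steps together (same object and secret names as above; --ignore-not-found just makes the commands safe to re-run):

```sh
# Remove the leftover csidriver object and the stale helm release secret.
kubectl delete csidrivers.storage.k8s.io efs.csi.aws.com --ignore-not-found
kubectl -n kube-system delete secret sh.helm.release.v1.aws-efs-csi-driver.v1 --ignore-not-found
```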