admirito / gvm-containers

Greenbone Vulnerability Management Containers

Error: failed pre-install: timed out waiting for the condition #27

Closed itsec207 closed 3 years ago

itsec207 commented 3 years ago

When I tried to install GVM with Helm I ran into some issues and the installation failed. I don't know why.

Command: helm install gvm ./gvm-*.tgz --namespace wazuh --timeout 15m --set gvmd-db.postgresqlPassword="mypassword"

- installed into a namespace which I had created previously

Output:

Error: failed pre-install: timed out waiting for the condition

admirito commented 3 years ago

There is a pre-install hook that downloads the GVM feeds before the chart installation. It seems to be failing, or at least too slow to finish within the 15-minute timeout (maybe the internet connection is slow?).

You can increase the timeout on the command line, e.g. 30m instead of 15m, but it is better to investigate the problem during the chart installation by looking at the logs of the pre-install hook pod:

kubectl get pod

kubectl logs -f <gvm-feeds-sync pod name found in the previous command>
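
For reference, a retry with a longer timeout could look like this (only a sketch that reuses the release name, namespace and password flag from the command above):

helm install gvm ./gvm-*.tgz --namespace wazuh --timeout 30m --set gvmd-db.postgresqlPassword="mypassword"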

The warning "skipped value for extraEnv: Not a table" is related to the default values in the postgresql chart and is irrelevant (just a warning that you can ignore).

admirito commented 3 years ago

Well, it seems there is a problem with the pre-install hook for the GVM feeds sync, and the hook pod stays in the Pending state forever. Can you try again with the new patch in #28, @itsec207?

itsec207 commented 3 years ago

Hello,

From the logs I saw the message "Warning FailedScheduling 51s (x10 over 9m15s) default-scheduler 0/1 nodes are available: 1 persistentvolumeclaim "gvm" not found." I also noticed that https://github.com/admirito/gvm-containers/releases/download/chart-1.0.1/gvm-1.0.1.tgz does not exist.
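
A quick way to confirm whether that claim exists in the release namespace (a sketch, assuming the wazuh namespace from the original command):

kubectl get pvc --namespace wazuh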

admirito commented 3 years ago

Well, I haven't released the binary yet, but you can package the chart manually or use this binary that I just made: gvm-1.0.1.tar.gz

helm install ./gvm-1.0.1.tar.gz ...
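
If you want to build the package yourself instead, something along these lines should work; this is only a sketch, and the chart/ subdirectory name is an assumption about the repository layout:

git clone https://github.com/admirito/gvm-containers.git
cd gvm-containers
helm package ./chart
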
Simon3 commented 3 years ago

With the 1.0.1 chart, the gvm-feeds-sync pod is stuck in ContainerCreating state (fresh install):

Events:
  Type     Reason              Age                  From                                       Message
  ----     ------              ----                 ----                                       -------
  Warning  FailedScheduling    12m (x3 over 12m)    default-scheduler                          0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled           12m                  default-scheduler                          Successfully assigned gvm/gvm-feeds-sync-5dls2 to gke-kube-1-n2-pool-29a1f059-5v0g
  Warning  FailedAttachVolume  12m                  attachdetach-controller                    Multi-Attach error for volume "pvc-b0b0559b-a024-4659-9e57-4d6d3a6bbba4" Volume is already used by pod(s) gvm-openvas-75465d577-864jl, gvm-gvmd-68f48968b7-96wfc
  Warning  FailedMount         5m45s (x3 over 10m)  kubelet, gke-kube-1-n2-pool-29a1f059-5v0g  Unable to attach or mount volumes: unmounted volumes=[data-volume], unattached volumes=[run-dir data-volume default-token-6xh96]: timed out waiting for the condition
  Warning  FailedMount         3m29s                kubelet, gke-kube-1-n2-pool-29a1f059-5v0g  Unable to attach or mount volumes: unmounted volumes=[data-volume], unattached volumes=[data-volume default-token-6xh96 run-dir]: timed out waiting for the condition
  Warning  FailedMount         74s                  kubelet, gke-kube-1-n2-pool-29a1f059-5v0g  Unable to attach or mount volumes: unmounted volumes=[data-volume], unattached volumes=[default-token-6xh96 run-dir data-volume]: timed out waiting for the condition

It might be because I'm on a multi-node cluster.

ReadWriteOnce – the volume can be mounted as read-write by a single node

But we have 3 different pods accessing the same volume (same PVC): openvas-deployment, gvmd-deployment, feeds-sync-hook.
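
For context, the access mode is declared on the claim itself; a minimal PersistentVolumeClaim with that mode looks roughly like this (a sketch with a hypothetical name and size, not the chart's actual template):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gvm-data          # hypothetical name for illustration
spec:
  accessModes:
    - ReadWriteOnce       # read-write, but only from a single node
  resources:
    requests:
      storage: 5Gi        # hypothetical size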

Edit: indeed that was the problem, specifying a required node affinity solved my problem. If it's intended, maybe it should at least be documented that all the pods need to be scheduled on the same node for the chart to work.

admirito commented 3 years ago

I have merged 3926879, which fixes the timeout issue, and moved the multi-node problem to #34. Thank you @Simon3 for your feedback.

jorotg commented 2 years ago

@admirito, I just installed the helm chart by cloning git@github.com:admirito/gvm-containers.git and I am getting the same error.

gvm-gvmd-95bdd85f6-69lc6 0/2 ContainerCreating 0 23m

23m     Warning  FailedAttachVolume  pod/gvm-gvmd-95bdd85f6-69lc6  Multi-Attach error for volume "pvc-8b3800fc-6df6-4c79-b3dc-ccb00791bc9d" Volume is already used by pod(s) gvm-openvas-667d4657f4-598tm
36s     Warning  FailedMount         pod/gvm-gvmd-95bdd85f6-69lc6  Unable to attach or mount volumes: unmounted volumes=[data-volume], unattached volumes=[run-dir data-volume kube-api-access-6vngr]: timed out waiting for the condition
5m10s   Warning  FailedMount         pod/gvm-gvmd-95bdd85f6-69lc6  Unable to attach or mount volumes: unmounted volumes=[data-volume], unattached volumes=[kube-api-access-6vngr run-dir data-volume]: timed out waiting for the condition
9m43s   Warning  FailedMount         pod/gvm-gvmd-95bdd85f6-69lc6  Unable to attach or mount volumes: unmounted volumes=[data-volume], unattached volumes=[data-volume kube-api-access-6vngr run-dir]: timed out waiting for the condition

Quoting @Simon3's edit above: "Edit: indeed that was the problem, specifying a required node affinity solved my problem. If it's intended, maybe it should at least be documented that all the pods need to be scheduled on the same node for the chart to work."

How exactly did you specify this affinity?
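
For anyone looking for the shape of such a required node affinity, a pod-spec snippet would look roughly like this. It is only a sketch: the node name is just the one from the events above, and whether (and where) the chart exposes an affinity value for each component is an assumption, not something confirmed in this thread:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - gke-kube-1-n2-pool-29a1f059-5v0g   # replace with a node in your cluster

Applied to the openvas, gvmd and feeds-sync pods, a block like this forces all of them onto the same node, so the ReadWriteOnce volume can be attached.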