admirito / gvm-containers

Greenbone Vulnerability Management Containers

Default Scan Configs Not Found #48

Open ghost opened 2 years ago

ghost commented 2 years ago

Versions:

Storage: Rook-Ceph Cluster running CephFS

After updating all the feeds (under Administration > Feed Status everything shows as current), I'm not able to create my own scan config (Configuration > Scan Configs) because the default scan configs are missing. I get this error when creating my own config:

Failed to find config 'd21f6c81-2b88-4ac1-b7b4-a2a9f2ad4663'
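One way to confirm whether the default configs ever made it into the database is to query gvmd's Postgres directly. This is only a sketch: the database and role names below are the usual gvmd defaults and the table layout can differ between gvmd versions.

# hypothetical check; adjust namespace, database and role to your deployment
kubectl exec -it openvas-gvmd-db-0 -- psql -U gvmd -d gvmd -c "SELECT uuid, name FROM configs;"
# an empty result would mean the scan config feed objects were never imported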

Here is my "stack" status:

NAME                                   READY   STATUS    RESTARTS   AGE
openvas-gvm-gsad-7bb9878549-kzxhd      1/1     Running   0          16m
openvas-gvm-gvmd-574594678d-4fxrn      2/2     Running   0          16m
openvas-gvm-openvas-697bddb54b-7pl7f   5/5     Running   0          16m
openvas-gvmd-db-0                      1/1     Running   0          10h
openvas-openvas-redis-master-0         1/1     Running   0          10h

Here is what my persistent storage looks like:

NAME                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-openvas-gvmd-db-0                      Bound    pvc-ce0112d6-02d4-4d90-a4a0-c686c74577d9   8Gi        RWO            cephfs         10h
openvas-gvm                                 Bound    pvc-2428fc36-f004-4f0a-9fea-fb7cc732eb9f   5Gi        RWX            cephfs         10h
redis-data-openvas-openvas-redis-master-0   Bound    pvc-3c9a7e91-4769-4618-b03d-4ff89089f687   8Gi        RWO            cephfs         10h

I did see some chatter under the openvas project about something similar, but that turned out to be a redis issue.

Here are my redis-connector logs:

2021/12/14 13:57:23 socat[1] N listening on AF=1 "/run/redis/redis.sock"
2021/12/14 13:57:24 socat[1] N accepting connection from AF=1 "<anon>" on AF=1 "/run/redis/redis.sock"
2021/12/14 13:57:24 socat[1] N forked off child process 9
2021/12/14 13:57:24 socat[1] N listening on AF=1 "/run/redis/redis.sock"
2021/12/14 13:57:24 socat[9] N opening connection to AF=2 10.23.3.149:6379
2021/12/14 13:57:24 socat[9] N successfully connected from local address AF=2 10.100.3.190:48860
2021/12/14 13:57:24 socat[9] N starting data transfer loop with FDs [6,6] and [5,5]
2021/12/14 13:57:35 socat[1] N accepting connection from AF=1 "<anon>" on AF=1 "/run/redis/redis.sock"
2021/12/14 13:57:35 socat[1] N forked off child process 10
2021/12/14 13:57:35 socat[1] N listening on AF=1 "/run/redis/redis.sock"
2021/12/14 13:57:35 socat[10] N opening connection to AF=2 10.23.3.149:6379
2021/12/14 13:57:35 socat[10] N successfully connected from local address AF=2 10.100.3.190:48886
2021/12/14 13:57:35 socat[10] N starting data transfer loop with FDs [6,6] and [5,5]
2021/12/14 14:06:10 socat[10] N socket 1 (fd 6) is at EOF
2021/12/14 14:06:10 socat[10] N socket 1 (fd 6) is at EOF
2021/12/14 14:06:10 socat[10] N socket 2 (fd 5) is at EOF
2021/12/14 14:06:10 socat[10] N exiting with status 0
2021/12/14 14:06:10 socat[1] N childdied(): handling signal 17
2021/12/14 14:06:10 socat[1] N accepting connection from AF=1 "<anon>" on AF=1 "/run/redis/redis.sock"
2021/12/14 14:06:10 socat[1] N forked off child process 11
2021/12/14 14:06:10 socat[1] N listening on AF=1 "/run/redis/redis.sock"
2021/12/14 14:06:10 socat[11] N opening connection to AF=2 10.23.3.149:6379
2021/12/14 14:06:10 socat[11] N successfully connected from local address AF=2 10.100.3.190:50358
2021/12/14 14:06:10 socat[11] N starting data transfer loop with FDs [6,6] and [5,5]
2021/12/14 14:06:10 socat[1] N accepting connection from AF=1 "<anon>" on AF=1 "/run/redis/redis.sock"
2021/12/14 14:06:10 socat[1] N forked off child process 12
2021/12/14 14:06:10 socat[1] N listening on AF=1 "/run/redis/redis.sock"
2021/12/14 14:06:10 socat[12] N opening connection to AF=2 10.23.3.149:6379
2021/12/14 14:06:10 socat[12] N successfully connected from local address AF=2 10.100.3.190:50360
2021/12/14 14:06:10 socat[12] N starting data transfer loop with FDs [6,6] and [5,5]
2021/12/14 14:06:10 socat[11] N socket 1 (fd 6) is at EOF
2021/12/14 14:06:10 socat[11] N socket 1 (fd 6) is at EOF
2021/12/14 14:06:10 socat[11] N socket 2 (fd 5) is at EOF
2021/12/14 14:06:10 socat[11] N exiting with status 0
2021/12/14 14:06:10 socat[12] N childdied(): handling signal 17

So it looks like we have connectivity here. Let me know if there is anything else you need to see for debugging.
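For what it's worth, a more direct check of the relay is to ping redis over that socket from inside the openvas pod. A minimal sketch, assuming redis-cli is available in the image and guessing the container name:

# the -c container name is a guess; use whichever container mounts /run/redis
kubectl exec -it openvas-gvm-openvas-697bddb54b-7pl7f -c openvas -- redis-cli -s /run/redis/redis.sock ping
# expected reply: PONG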

pabloalbea commented 2 years ago

Same problem here. All feeds update fine, but Scan Configs and NVTs are empty. One thing that caught my attention is that the version of the gsad container (21.4.3) is not the same as the gvmd and openvas containers (21.4.4).
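A quick way to list exactly which image tags each pod is running (the namespace is whatever the chart was installed into):

kubectl -n gvm get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'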

pabloalbea commented 2 years ago

Solved with:

kubectl -n gvm edit deployments.apps gvm-gvmd

and changing:

- UNIX-LISTEN:/run/ospd/ospd.sock,fork

to:

- UNIX-LISTEN:/run/ospd/ospd-openvas.sock,fork
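For reference, that edit lands in the args of the socat sidecar in the gvm-gvmd Deployment, roughly like the fragment below. Sidecar and container names vary by chart version; only the UNIX-LISTEN path changes.

args:
  - UNIX-LISTEN:/run/ospd/ospd-openvas.sock,fork   # was UNIX-LISTEN:/run/ospd/ospd.sock,fork
  # the remote-side socat address that follows is left unchanged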

ghost commented 2 years ago

Solved with: kubectl -n gvm edit deployments.apps gvm-gvmd and changing "- UNIX-LISTEN:/run/ospd/ospd.sock,fork" to "- UNIX-LISTEN:/run/ospd/ospd-openvas.sock,fork"

Unfortunately, that did not work in my case.

I spent some time on this yesterday and found that I was running out of resources on my node, so my pods were getting rescheduled in the middle of the feed sync. The GVM pods take a fair amount of RAM, and node resources can max out quickly if there isn't enough headroom. Once I had enough resources, I re-ran the sync job and everything worked.
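For anyone hitting the same thing, it's worth checking for evictions and node pressure first, and then giving gvmd explicit resource requests/limits so the scheduler keeps it in place during a feed sync. A sketch only; the values below are illustrative, not chart defaults:

# look for memory pressure and evicted pods
kubectl top nodes
kubectl -n gvm get events | grep -i evict

# illustrative resources block for the gvmd container in the gvm-gvmd Deployment
resources:
  requests:
    memory: "4Gi"
    cpu: "500m"
  limits:
    memory: "8Gi"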