Did you run greenbone-nvt-sync by uncommenting the nvt-sync container in the docker-compose.yml at least once? If yes, what is the output of docker exec gvmcontainers_gvmd_1 ls /var/lib/openvas/plugins? It should print the names of the downloaded .nasl files.
The greenbone-nvt-sync binary is not available in the gvmd container, but the openvassd volumes (e.g. /var/lib/openvas, which contains the NVTs) should be mounted in the gvmd container.
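In the compose file that sharing looks roughly like this (a sketch from memory; the volume and service names may differ from the actual file):

    volumes:
      openvas-data:

    services:
      openvassd:
        volumes:
          - openvas-data:/var/lib/openvas   # the nvt-sync/scanner side downloads NVTs here
      gvmd:
        volumes:
          - openvas-data:/var/lib/openvas   # gvmd reads the same NVT files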
Yes, I uncommented both entries in docker-compose.yml. I just repeated everything from scratch this morning. The nvt-sync container ran fine, and the command shows a long list of .nasl files.
The cert-scap-sync container failed the first time:
cert-scap-sync_1 | dfn-cert-2019.xml
1,389,110 100% 1.22MB/s 0:00:01 (xfr#20, to-chk=4/25)
cert-scap-sync_1 | sha1sums
1,193 100% 13.09kB/s 0:00:00 (xfr#21, to-chk=3/25)
cert-scap-sync_1 | sha256sums
1,697 100% 18.62kB/s 0:00:00 (xfr#22, to-chk=2/25)
cert-scap-sync_1 | sha256sums.asc
819 100% 8.99kB/s 0:00:00 (xfr#23, to-chk=1/25)
cert-scap-sync_1 | timestamp
13 100% 0.14kB/s 0:00:00 (xfr#24, to-chk=0/25)
cert-scap-sync_1 |
cert-scap-sync_1 | sent 543 bytes received 59,166,276 bytes 1,300,369.65 bytes/sec
cert-scap-sync_1 | total size is 59,150,238 speedup is 1.00
cert-scap-sync_1 | rsync: failed to connect to feed.openvas.org (89.146.224.58): Connection refused (111)
cert-scap-sync_1 | rsync error: error in socket IO (code 10) at clientserver.c(125) [Receiver=3.1.2]
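(The Connection refused from feed.openvas.org looks like a transient mirror failure; a retry loop along these lines could probably automate the re-run, assuming the service name from this compose project:)

    for attempt in 1 2 3; do
        # --exit-code-from makes docker-compose return the sync container's exit status
        docker-compose up --exit-code-from cert-scap-sync cert-scap-sync && break
        echo "feed sync failed, retrying in 60s..." >&2
        sleep 60
    done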
Running it a second time did seem to work. Then I had to wait a long time for the downloaded data to be processed:
gvmcontainers_cert-scap-sync_1 exited with code 0
gvmd_1 | md manage: INFO:2019-05-27 08h22.56 utc:389: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2011.xml
gvmd_1 | md manage: INFO:2019-05-27 08h49.04 utc:389: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2007.xml
gvmd_1 | md manage: INFO:2019-05-27 08h49.20 utc:389: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2017.xml
gvmd_1 | md manage: INFO:2019-05-27 09h03.43 utc:389: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2018.xml
gvmd_1 | md manage: INFO:2019-05-27 09h30.10 utc:389: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2014.xml
gvmd_1 | md manage: INFO:2019-05-27 09h31.40 utc:389: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2012.xml
gvmd_1 | md manage: INFO:2019-05-27 09h33.29 utc:389: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2013.xml
gvmd_1 | md manage: INFO:2019-05-27 09h35.11 utc:389: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2016.xml
gvmd_1 | md manage: INFO:2019-05-27 09h36.31 utc:389: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2006.xml
gvmd_1 | md manage: INFO:2019-05-27 09h36.50 utc:389: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2005.xml
gvmd_1 | md manage: INFO:2019-05-27 09h37.04 utc:389: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2008.xml
gvmd_1 | md manage: INFO:2019-05-27 09h37.36 utc:389: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2015.xml
gvmd_1 | md manage: INFO:2019-05-27 09h38.23 utc:389: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2003.xml
gvmd_1 | md manage: INFO:2019-05-27 09h38.27 utc:389: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2002.xml
gvmd_1 | md manage: INFO:2019-05-27 09h38.39 utc:389: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2004.xml
gvmd_1 | md manage: INFO:2019-05-27 09h38.49 utc:389: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2019.xml
gvmd_1 | md manage: INFO:2019-05-27 09h42.18 utc:389: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2010.xml
gvmd_1 | md manage: INFO:2019-05-27 09h44.46 utc:389: Updating /var/lib/gvm/scap-data/nvdcve-2.0-2009.xml
gvmd_1 | md manage: INFO:2019-05-27 09h45.44 utc:389: Updating OVAL data
gvmd_1 | md manage: INFO:2019-05-27 09h46.46 utc:389: Updating /var/lib/gvm/scap-data/oval/5.10/org.mitre.oval/c/oval.xml
gvmd_1 | md manage: INFO:2019-05-27 09h46.46 utc:389: Updating /var/lib/gvm/scap-data/oval/5.10/org.mitre.oval/m/oval.xml
gvmd_1 | md manage: INFO:2019-05-27 09h46.46 utc:389: Updating /var/lib/gvm/scap-data/oval/5.10/org.mitre.oval/v/family/ios.xml
gvmd_1 | md manage: INFO:2019-05-27 09h46.46 utc:389: Updating /var/lib/gvm/scap-data/oval/5.10/org.mitre.oval/v/family/pixos.xml
gvmd_1 | md manage: INFO:2019-05-27 09h46.46 utc:389: Updating /var/lib/gvm/scap-data/oval/5.10/org.mitre.oval/p/oval.xml
gvmd_1 | md manage: INFO:2019-05-27 09h48.42 utc:389: Updating /var/lib/gvm/scap-data/oval/5.10/org.mitre.oval/i/oval.xml
gvmd_1 | md manage: INFO:2019-05-27 09h48.43 utc:389: Updating /var/lib/gvm/scap-data/oval/5.10/org.mitre.oval/v/family/macos.xml
gvmd_1 | md manage: INFO:2019-05-27 09h48.43 utc:389: Updating /var/lib/gvm/scap-data/oval/5.10/org.mitre.oval/v/family/unix.xml
gvmd_1 | md manage: INFO:2019-05-27 09h48.48 utc:389: Updating /var/lib/gvm/scap-data/oval/5.10/org.mitre.oval/v/family/windows.xml
gvmd_1 | md manage: INFO:2019-05-27 09h48.54 utc:389: Updating user OVAL definitions.
gvmd_1 | md manage: INFO:2019-05-27 09h48.54 utc:389: Updating CVSS scores and CVE counts for CPEs
gvmd_1 | md manage: INFO:2019-05-27 09h49.19 utc:389: Updating CVSS scores for OVAL definitions
gvmd_1 | md manage: INFO:2019-05-27 09h49.20 utc:389: Updating placeholder CPEs
gvmd_1 | md manage: INFO:2019-05-27 09h49.27 utc:389: sync_scap: Updating SCAP info succeeded
After that, docker stats seemed to indicate everything was done, so I connected to the web interface.
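(Grepping the gvmd log for the final sync message is probably a more reliable check than docker stats:)

    docker-compose logs --tail=100 gvmd | grep -i "succeeded"
    # expect: md manage: ... sync_scap: Updating SCAP info succeeded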
I think the log entry about not finding /usr/sbin/greenbone-nvt-sync is created when you select "Extras" -> "Feed Status" in the web interface. That page also doesn't show the NVTs, only the SCAP and CERT feeds.
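(A quick check along these lines should confirm the binary is only present in the scanner container; the container names are the ones docker-compose generated here:)

    docker exec gvmcontainers_gvmd_1 ls /usr/sbin/greenbone-nvt-sync      # expect: No such file or directory
    docker exec gvmcontainers_openvassd_1 ls /usr/sbin/greenbone-nvt-sync # expect: the binary is listed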
After this I created targets for the Docker network and a Discovery scan, and started it. Same result: the scanner sits idle after some very brief activity:
gvmd_1 | md manage:WARNING:2019-05-27 09h53.08 UTC:2860: Failed to execute /usr/sbin/greenbone-nvt-sync: Failed to execute child process “/usr/sbin/greenbone-nvt-sync” (No such file or directory)
gvmd_1 | event target:MESSAGE:2019-05-27 09h57.19 UTC:2978: Target Internal (79811743-e3d3-4d28-9026-4c5418014fe9) has been created by admin
gvmd_1 | event task:MESSAGE:2019-05-27 09h57.51 UTC:3042: Status of task (546debed-a18a-445b-93e0-56a2d6fb54eb) has changed to New
gvmd_1 | event task:MESSAGE:2019-05-27 09h57.51 UTC:3042: Task Internal - Discovery (546debed-a18a-445b-93e0-56a2d6fb54eb) has been created by admin
gvmd_1 | event task:MESSAGE:2019-05-27 09h57.55 UTC:3060: Status of task Internal - Discovery (546debed-a18a-445b-93e0-56a2d6fb54eb) has changed to Requested
gvmd_1 | event task:MESSAGE:2019-05-27 09h57.55 UTC:3060: Task Internal - Discovery (546debed-a18a-445b-93e0-56a2d6fb54eb) has been requested to start by admin
openvassd_1 | sd main:MESSAGE:2019-05-27 09h57.55 utc:696: Starts a new scan. Target(s) : 172.17.0.0/24, with max_hosts = 20 and max_checks = 4
openvassd_1 | sd main:MESSAGE:2019-05-27 09h57.55 utc:698: Testing 172.17.0.2 (Vhosts: 1d8a7dbdd9b3) [698]
openvassd_1 | sd main:MESSAGE:2019-05-27 09h57.55 utc:703: Testing 172.17.0.7 [703]
openvassd_1 | sd main:MESSAGE:2019-05-27 09h57.55 utc:700: Testing 172.17.0.4 [700]
openvassd_1 | sd main:MESSAGE:2019-05-27 09h57.55 utc:702: Testing 172.17.0.6 [702]
openvassd_1 | sd main:MESSAGE:2019-05-27 09h57.55 utc:699: Testing 172.17.0.3 [699]
openvassd_1 | sd main:MESSAGE:2019-05-27 09h57.55 utc:704: Testing 172.17.0.8 [704]
openvassd_1 | sd main:MESSAGE:2019-05-27 09h57.55 utc:697: Testing 172.17.0.1 [697]
openvassd_1 | sd main:MESSAGE:2019-05-27 09h57.55 utc:708: Testing 172.17.0.12 [708]
openvassd_1 | sd main:MESSAGE:2019-05-27 09h57.55 utc:710: Testing 172.17.0.14 [710]
openvassd_1 | sd main:MESSAGE:2019-05-27 09h57.55 utc:709: Testing 172.17.0.13 [709]
openvassd_1 | sd main:MESSAGE:2019-05-27 09h57.55 utc:706: Testing 172.17.0.10 [706]
openvassd_1 | sd main:MESSAGE:2019-05-27 09h57.55 utc:701: Testing 172.17.0.5 [701]
openvassd_1 | sd main:MESSAGE:2019-05-27 09h57.55 utc:705: Testing 172.17.0.9 [705]
openvassd_1 | sd main:MESSAGE:2019-05-27 09h57.55 utc:707: Testing 172.17.0.11 [707]
gvmd_1 | event task:MESSAGE:2019-05-27 09h57.56 UTC:3064: Status of task Internal - Discovery (546debed-a18a-445b-93e0-56a2d6fb54eb) has changed to Running
openvassd_1 | sd main:MESSAGE:2019-05-27 09h58.00 utc:698: Finished testing 172.17.0.2. Time : 4.23 secs
openvassd_1 | sd main:MESSAGE:2019-05-27 09h58.02 utc:707: The remote host 172.17.0.11 is dead
openvassd_1 | sd main:MESSAGE:2019-05-27 09h58.02 utc:709: The remote host 172.17.0.13 is dead
openvassd_1 | sd main:MESSAGE:2019-05-27 09h58.02 utc:706: The remote host 172.17.0.10 is dead
openvassd_1 | sd main:MESSAGE:2019-05-27 09h58.02 utc:700: The remote host 172.17.0.4 is dead
openvassd_1 | sd main:MESSAGE:2019-05-27 09h58.02 utc:709: Finished testing 172.17.0.13. Time : 6.41 secs
openvassd_1 | sd main:MESSAGE:2019-05-27 09h58.02 utc:707: Finished testing 172.17.0.11. Time : 6.41 secs
openvassd_1 | sd main:MESSAGE:2019-05-27 09h58.02 utc:706: Finished testing 172.17.0.10. Time : 6.42 secs
openvassd_1 | sd main:MESSAGE:2019-05-27 09h58.02 utc:710: The remote host 172.17.0.14 is dead
openvassd_1 | sd main:MESSAGE:2019-05-27 09h58.02 utc:700: Finished testing 172.17.0.4. Time : 6.44 secs
openvassd_1 | sd main:MESSAGE:2019-05-27 09h58.02 utc:701: Finished testing 172.17.0.5. Time : 6.44 secs
openvassd_1 | sd main:MESSAGE:2019-05-27 09h58.02 utc:697: Finished testing 172.17.0.1. Time : 6.45 secs
openvassd_1 | sd main:MESSAGE:2019-05-27 09h58.02 utc:705: The remote host 172.17.0.9 is dead
openvassd_1 | sd main:MESSAGE:2019-05-27 09h58.02 utc:710: Finished testing 172.17.0.14. Time : 6.47 secs
openvassd_1 | sd main:MESSAGE:2019-05-27 09h58.02 utc:708: The remote host 172.17.0.12 is dead
openvassd_1 | sd main:MESSAGE:2019-05-27 09h58.02 utc:703: The remote host 172.17.0.7 is dead
openvassd_1 | sd main:MESSAGE:2019-05-27 09h58.02 utc:705: Finished testing 172.17.0.9. Time : 6.50 secs
openvassd_1 | sd main:MESSAGE:2019-05-27 09h58.02 utc:708: Finished testing 172.17.0.12. Time : 6.52 secs
openvassd_1 | sd main:MESSAGE:2019-05-27 09h58.02 utc:703: Finished testing 172.17.0.7. Time : 6.54 secs
openvassd_1 | sd main:MESSAGE:2019-05-27 09h58.02 utc:704: Finished testing 172.17.0.8. Time : 6.57 secs
openvassd_1 | sd main:MESSAGE:2019-05-27 09h58.02 utc:699: Finished testing 172.17.0.3. Time : 6.57 secs
openvassd_1 | sd main:MESSAGE:2019-05-27 09h58.02 utc:702: Finished testing 172.17.0.6. Time : 6.61 secs
The command docker-compose top shows this:
gvmcontainers_gsad_1
UID PID PPID C STIME TTY TIME CMD
------------------------------------------------------------------------------------------------------------------------------
root 10947 10913 0 10:01 ? 00:00:01 gsad -f --listen=0.0.0.0 --port=80 --http-only --mlisten=gvmd --mport=9390
gvmcontainers_gvm-postgres_1
UID PID PPID C STIME TTY TIME CMD
---------------------------------------------------------------------------------------------------
999 6447 10258 0 11:57 ? 00:00:01 postgres: gvmduser gvmd 172.17.0.6(58096) idle
999 6672 10258 0 11:57 ? 00:00:00 postgres: gvmduser gvmd 172.17.0.6(58144) idle
999 10258 10209 0 10:01 ? 00:00:01 postgres
999 11007 10258 0 10:01 ? 00:00:03 postgres: checkpointer process
999 11008 10258 0 10:01 ? 00:00:17 postgres: writer process
999 11009 10258 0 10:01 ? 00:00:05 postgres: wal writer process
999 11010 10258 0 10:01 ? 00:00:00 postgres: autovacuum launcher process
999 11011 10258 0 10:01 ? 00:00:03 postgres: stats collector process
999 11012 10258 0 10:01 ? 00:00:00 postgres: bgworker: logical replication launcher
999 11217 10258 0 10:01 ? 00:00:02 postgres: gvmduser gvmd 172.17.0.6(49210) idle
gvmcontainers_gvmd_1
UID PID PPID C STIME TTY TIME CMD
------------------------------------------------------------------------------------------------------------------------------
root 6446 10640 0 11:57 ? 00:00:00 gvmd: OTP: Handling scan bfee1f75-e470-4b71-bae4-eba982b3ca42
root 6662 10640 0 11:57 ? 00:00:00 gvmd: Reloading NVTs
root 6667 6662 0 11:57 ? 00:00:00 gvmd: Updating NVT cache
root 10640 10572 0 10:01 ? 00:00:05 gvmd: Waiting for incoming connections
root 11211 10640 0 10:01 ? 00:00:00 gpg-agent --homedir /var/lib/gvm/gvmd/gnupg --use-standard-socket --daemon
gvmcontainers_openvassd_1
UID PID PPID C STIME TTY TIME CMD
-----------------------------------------------------------------------------------------------
root 6442 10253 0 11:57 ? 00:00:00 openvassd: Serving /var/run/openvassd.sock
root 10253 10204 0 10:01 ? 00:00:00 openvassd: Waiting for incoming connections
gvmcontainers_redis_1
UID PID PPID C STIME TTY TIME CMD
-------------------------------------------------------------------
999 10621 10555 0 10:01 ? 00:00:11 redis-server *:0
I've had tasks that ran fine for single hosts, but got stuck at 1% when a whole class C subnet was given as the target (even though it was very sparse, i.e. few IPs actually in use). This was fixed by increasing "databases" to 128 in the Redis config. Interestingly, the previous OpenVAS version (9) was able to work through the same target with Redis's default setting of 16 databases (a different Redis version, though).
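For reference, this is the relevant part of the Redis config (an illustrative redis.conf excerpt; the socket path depends on the image):

    # redis.conf
    databases 128                          # roughly one Redis DB per concurrently scanned host, plus bookkeeping
    unixsocket /var/run/redis/redis.sock   # openvassd talks to Redis over a Unix socket (note port 0, i.e. no TCP, in the process list above)
    unixsocketperm 700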
You may want to try again with GVM 11, as I've just updated the repo to use it.
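(Roughly: pull the repo changes and recreate the stack; whether the images are pulled or rebuilt depends on how the compose file defines them:)

    git pull
    docker-compose down
    docker-compose pull    # or docker-compose build, if the images are built locally
    docker-compose up -d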
I've created a simple "Discovery" scan which scans the internal Docker network, using the "Discovery" configuration. After starting, it scans a few IPs and then seems to stop. The web interface shows a progress of 1% (I left it running overnight).
The logs of gvmd:
The logs of openvassd:
The output of docker-compose top:
One thing I also noticed in the gvmd logs was this line:
This is probably because this command is not located in the gvmd container but in the openvassd container.