immauss / openvas

Containers for running the Greenbone Vulnerability Manager. Run as a single container with all services, or as separate single-application containers via docker-compose.
GNU Affero General Public License v3.0
337 stars 97 forks

Upgrading to IMMauss/Openvas from DeineAgenturUG/gvm #211

Open markdesilva opened 11 months ago

markdesilva commented 11 months ago

Hi,

Might not be the best topic for discussion, but some of us have come from using https://github.com/DeineAgenturUG/greenbone-gvm-openvas-for-docker/ image and I was wondering if there was a way to retain our old reports, notes, overrides and customization without having to start everything from scratch.

My apologies if my question has offended.

Thank you.

Regards, Mark

immauss commented 11 months ago

No offense taken ...

Happy to see you moving over in fact.

You "should" be able to do a DB restore.

"Should" being the operative word here. I know this will get the majority of it.

However, there "may" be some bits that are stored outside of the DB.

In my implementation, I've done my best to make sure all of those bits are stored on the volume if you choose to use an external volume, making backup and restore pretty easy. There is a procedure for restoring from an external DB in the docs that should get you through that part of it. You might need to piecemeal the rest if there is something outside the DB from the old install that you need.

Happy to help you through. Please feel free to post any additional questions here, and I'll do my best to help you get there.

The most important thing, though:

BACKUP BACKUP BACKUP !!

Don't trust anything until you have proven it and make sure you have multiple copies of the original before you start anything.
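In that spirit, here is a minimal sketch of backing up a volume before starting. The paths are examples and `backup_volume` is a hypothetical helper, not part of the image; the key idea is to verify the archive immediately after creating it.

```shell
#!/bin/sh
# Hypothetical helper: archive a directory (e.g. a Docker volume's _data dir)
# and verify the archive is readable before trusting it.
backup_volume() {
  src="$1"    # directory to back up, e.g. /var/lib/docker/volumes/openvas/_data
  dest="$2"   # archive to create, e.g. /backups/openvas-backup.tgz
  tar zcf "$dest" -C "$(dirname "$src")" "$(basename "$src")" || return 1
  # A backup you cannot list is not a backup: verify it immediately.
  tar ztf "$dest" > /dev/null || return 1
  echo "backup OK: $dest"
}
```

Run it (as root, for Docker-owned paths) before touching anything, and keep more than one copy of the result.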

And please let me know how it goes. It would be great to add a section to the docs on migrating from the old to new.

-Scott

markdesilva commented 11 months ago

Many thanks for your reply Scott.

I will attempt to backup and restore the db and document what I do and share it here if I can get it done.

Thank you.

markdesilva commented 11 months ago

Hi Scott,

I have spent the whole day testing and trying to get everything in working order, and doing a backup before attempting to restore the db from my DeineAgenturUG/gvm image.

Two questions I have:

1) What is the password for the database, and can I change it using the environment variables from the docker command?

2) Is there an image for a standalone scanner, so that I can add scanners to the main scanner? And if so, is there a script I have to run to add a scanner?

Q2 is a feature similar to what Securecompliance and DeineAgenturUG had. I thought I recalled seeing a scanner image for yours, but I might have been mistaken; I can't find it now.

Many thanks for your patience and assistance and for your work on this.

best regards, Mark

markdesilva commented 11 months ago

Hi,

Reporting back on migrating from DAUG/gvm to IMMauss/Openvas. It turned out to be simpler than expected.

DAUG/gvm uses a separate volume for the psql db, which can be, e.g., /storage/database. It turns out that if you move that entire database directory to /var/lib/docker/volumes/openvas/_data/database, change the ownership as necessary (see the user:group of the original database directory), and then restart the container, gvmd picks everything up, though it takes some time to migrate the database.

Previous reports, overrides, notes, users, etc. are all there, and the speed of new scans is just like the DAUG/gvm container. Everything works a treat EXCEPT I can't get the mail working. Pretty sure it's a firewall thing that I'm not seeing, so I need to work that out. Other than that, I need to add scanners, which I don't see an image for. Do I need to redeploy the whole openvas container on different machines and add them?

I encountered these "errors", but they don't seem to affect operations:

Now the long part, migrating the databse.
md manage-Message: 19:02:14.605: cleanup_old_sql_functions: cleaning up SQL functions now included in pg-gvm extension
2023-08-04 11:30:17.715 UTC [893] ERROR: canceling autovacuum task
2023-08-04 11:30:17.715 UTC [893] CONTEXT: while vacuuming index "cpes_uuid_key" of relation "scap2.cpes"
	automatic vacuum of table "gvmd.scap2.cpes"
2023-08-04 11:30:21.264 UTC [957] WARNING: skipping "pg_toast_1260_index" --- only superuser can analyze it
2023-08-04 11:30:21.264 UTC [957] WARNING: skipping "pg_toast_1262_index" --- only superuser can analyze it
[... dozens of similar "only superuser can analyze it" warnings for pg_authid, pg_database, and other system catalogs, their indexes, and toast tables ...]
2023-08-04 11:31:25.127 UTC [958] ERROR: canceling autovacuum task
2023-08-04 11:31:25.127 UTC [958] CONTEXT: while vacuuming index "dfn_cert_advs_idx" of relation "cert.dfn_cert_advs"
	automatic vacuum of table "gvmd.cert.dfn_cert_advs"

I'm thinking these errors are because I didn't do a dump and restore, but as they don't affect operations, I'm inclined to leave it as it is.

Thank you!

best regards, Mark

immauss commented 11 months ago

Mark, that's great news. Sorry I didn't get back to you quicker on the first round of questions. For future reference, gvmd accesses postgresql via socket, so it doesn't use a password. I recently saw something about gvmd being able to connect via TCP, but the last I looked, there was no way to set a password for it, so it was not a viable option. I 'think' that may have changed, but I have not had a chance to dig into it yet. If it ever does, it will make the multi-container implementation much easier.

The scanner is already set up in the container. If you need it separate, it "should" be possible to set up an external scanner, but I've not yet done any testing with it. The multi-container implementation uses the same image for each service, but runs a different script on startup. So ... you could feasibly start a container with just the scanner by giving it the scanner option on the command line. I'd have to go back and look at a few pieces in detail to be sure that would work though. To make things more complicated, Greenbone added the notus-scanner with 21.4. It does all the local checks on machines, and openvas handles the network scans. (At least that's my understanding of it.) So you would need both for complete scans. To be honest, this would probably require a bit more than that.

As for those errors: if they continue on future startups, let me know via a separate issue so we can track them down. I've put a lot of time into making sure DB version upgrades go smoothly; if I've missed something, I want to track it down.

markdesilva commented 11 months ago

I'm going to try to create a new volume for the databases with

--volume /storage/database:/data/database

and see if that also works. I think it should.

I noticed some issues with the ssl certs as well, but I want to confirm with a fresh try.

DAUG/gvm uses a separate volume for the psql db, which can be, e.g., /storage/database. It turns out that if you move that entire database directory to /var/lib/docker/volumes/openvas/_data/database, change the ownership as necessary (see the user:group of the original database directory), and then restart the container, gvmd picks everything up, though it takes some time to migrate the database.

markdesilva commented 11 months ago

Thank you Scott!

The scanner is already set up in the container. If you need it separate, it "should" be possible to set up an external scanner, but I've not yet done any testing with it. The multi-container implementation uses the same image for each service, but runs a different script on startup. So ... you could feasibly start a container with just the scanner by giving it the scanner option on the command line. I'd have to go back and look at a few pieces in detail to be sure that would work though. To make things more complicated, Greenbone added the notus-scanner with 21.4. It does all the local checks on machines, and openvas handles the network scans. (At least that's my understanding of it.) So you would need both for complete scans. To be honest, this would probably require a bit more than that.

I see, let me also try to play around with it and see if I can cobble something together.

As for those errors: if they continue on future startups, let me know via a separate issue so we can track them down. I've put a lot of time into making sure DB version upgrades go smoothly; if I've missed something, I want to track it down.

Will do, I'm pretty sure these errors happen because I didn't do the dump then restore, but I will confirm.

Thank you again!

immauss commented 10 months ago

@markdesilva Is all well? Any more issues with the upgrade?

markdesilva commented 10 months ago

@markdesilva Is all well? Any more issues with the upgrade?

Hi Scott,

Been testing it for the last week(?) and all seems good! I did two sets of scheduled scans at the end of last week, am doing one today, and another on Monday.

I am actually writing a sort of "guide" for the upgrade. I will post it here for you to review and see if you want to add it to your documentation.

Thank you once again!

markdesilva commented 10 months ago

Here are the steps I took to migrate from DeineAgenturUG/gvm to IMMauss/openvas, but first a shoutout to @deineagenturug as well for his work on his image and also for helping many of us before.

I will refer to DeineAgenturUG/gvm as "dauggvm" and IMMauss/openvas as "immovas" for ease of reference.

For dauggvm, the database is kept in a separate volume as defined by the user. In my case, it was kept in /storage/database.

In immovas, the database is kept in the container volume /var/lib/docker/volumes/openvas/_data/database

1) Start immovas with the basic options and some name, e.g. "ovas" (we will remove this container "ovas" after migrating the db). You will now have /var/lib/docker/volumes/openvas/_data/, and there will be some folders in there, including a database folder.

2) To start the migration, stop dauggvm and tar up the database in /storage/database (or wherever you had your dauggvm database volume) - you only want the database folder and everything in it:

cd /storage; tar zcf dauggvm-db.tgz database

3) copy dauggvm-db.tgz to /var/lib/docker/volumes/openvas/_data/

cp /storage/dauggvm-db.tgz /var/lib/docker/volumes/openvas/_data/

4) check the ownership of the immovas database folder:

cd /var/lib/docker/volumes/openvas/_data; ls -aild database

You should see something like this:

drwxr-x--- 19 _apt kvm 4096 Aug 12 08:00 database

The user and group names shown on the host will be different from those shown inside the container, but you need to make sure the database you are migrating over ends up with the same ownership as the existing immovas database directory.

5) Stop the "ovas" container that is running immovas version

6) While in /var/lib/docker/volumes/openvas/_data, rename the current database folder to something else, eg: database.org

mv database database.org

7) Untar the dauggvm db that you tarred up in step (2) and copied to /var/lib/docker/volumes/openvas/_data/ in step (3)

tar zxf dauggvm-db.tgz

You will now have the dauggvm database, but the ownership might be wrong, so you need to change it to what the immovas database ownership was in step (4); in this case it was owner "_apt", group "kvm". You need to do this for the database folder and all the files in it, so use the recursive argument.

chown -R _apt:kvm database
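If you'd rather not hard-code the owner, one option is to copy it from the directory you renamed aside. This is a sketch, not from the repo's docs; `stat -c` is GNU coreutils, so BSD/macOS would need different flags.

```shell
#!/bin/sh
# Apply the owner:group of an existing reference directory to a target tree.
same_owner_as() {
  ref="$1"     # e.g. the renamed original "database.org" dir
  target="$2"  # e.g. the freshly untarred dauggvm "database" dir
  chown -R "$(stat -c '%U:%G' "$ref")" "$target"
}
```

For example, `same_owner_as database.org database` after the untar.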

8) Restart the "ovas" container

9) Tailing the logs (docker logs -f ovas) will show that the databases are being migrated:

Now the long part, migrating the databse.
md manage-Message: 19:02:14.605: cleanup_old_sql_functions: cleaning up SQL functions now included in pg-gvm extension
2023-08-04 11:30:17.715 UTC [893] ERROR: canceling autovacuum task
2023-08-04 11:30:17.715 UTC [893] CONTEXT: while vacuuming index "cpes_uuid_key" of relation "scap2.cpes"
	automatic vacuum of table "gvmd.scap2.cpes"
2023-08-04 11:30:21.264 UTC [957] WARNING: skipping "pg_toast_1260_index" --- only superuser can analyze it
2023-08-04 11:30:21.264 UTC [957] WARNING: skipping "pg_toast_1262_index" --- only superuser can analyze it
....

10) When your logs show

+ Your GVM/openvas/postgresql container is now ready to use! +

You can log into the portal as normal and check that all your past scan reports, notes, overrides, targets, etc are all there.

11) Once you confirm everything is in order, log out and stop the "ovas" container.

12) Go to /var/lib/docker/volumes/openvas/_data and MOVE the database folder to the location where you want the database files to live; for example, I will move them to /ovasvolumes. So now you will have:

/ovasvolumes/database

which will have the correct ownership as determined in step (4) and set in step (7)

13) Now you can delete the "ovas" container that was started without any customized options

docker rm ovas

14) Now start a brand new container with the IMMauss/openvas image, adding your password, port, https, etc., but ALSO add the following volume:

--volume /ovasvolumes/database:/data/database

eg:

docker run --detach --publish 9392:9392 --volume openvas:/data --volume /ovasvolumes/database:/data/database --env HTTPS="true" --env PASSWORD="youradminpass" --name openvas immauss/openvas

This way the database is always kept separate, and you can do updates/upgrades to the image without worrying about affecting your database or running out of space as your database grows.
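For reference, the file-system side of steps (2)-(7) can be condensed into one sketch. The paths and the _apt:kvm owner are the ones from this thread, not universal; adjust them for your system, and stop/start the containers around it as described in the steps above.

```shell
#!/bin/sh
# Move a dauggvm Postgres data directory into the immovas volume, keeping
# the fresh immovas database as a fallback. Run with both containers stopped.
# Assumes the source directory is literally named "database", so the untar
# lands as $vol/database.
migrate_db() {
  old_db="$1"   # e.g. /storage/database (dauggvm volume)
  vol="$2"      # e.g. /var/lib/docker/volumes/openvas/_data
  owner="$3"    # e.g. _apt:kvm, whatever "ls -aild database" showed in step (4)
  tar zcf "$vol/dauggvm-db.tgz" -C "$(dirname "$old_db")" "$(basename "$old_db")" || return 1
  mv "$vol/database" "$vol/database.org"   # keep the fresh DB as a fallback
  tar zxf "$vol/dauggvm-db.tgz" -C "$vol" || return 1
  chown -R "$owner" "$vol/database"
}
```

After it runs, restart the container and watch the logs for the migration to finish, as in steps (8)-(10).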

I hope this makes sense and helps!

Cheers!

markdesilva commented 10 months ago

@immauss I hope you don't mind - I modified the image's /scripts/single.sh and added the following:

@24,25
+CERTIFICATE=${CERTIFICATE:-none}
+CERTIFICATE_KEY=${CERTIFICATE_KEY:-none}
@34
-for var in USERNAME PASSWORD RELAYHOST SMTPPORT REDISDBS QUIET NEWDB SKIPSYNC RESTORE DEBUG HTTPS GSATIMEOUT ; do
+for var in USERNAME PASSWORD RELAYHOST SMTPPORT REDISDBS QUIET NEWDB SKIPSYNC RESTORE DEBUG HTTPS CERTIFICATE CERTIFICATE_KEY GSATIMEOUT ; do
@417,418
+                    --ssl-certificate=$CERTIFICATE \
+                    --ssl-private-key=$CERTIFICATE_KEY \

This is so I can have a separate volume for my ssl certs for easier maintenance and add them from the docker command with the following:

--volume /ovasvolumes/ssl:/data/ssl --env HTTPS="true" --env CERTIFICATE="/data/ssl/fullchain.pem" --env CERTIFICATE_KEY="/data/ssl/privkey.pem"

I am also looking at separating the feeds, so that updating the image or changing it won't require pulling all the feeds again.

Eventually I should have separate volumes for

  1. database
  2. ssl certs
  3. feeds

I think this makes management and future maintenance easier.
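Putting the pieces together, a hypothetical full invocation with the database and SSL volumes split out might look like this. It assumes the CERTIFICATE/CERTIFICATE_KEY patch to single.sh above is in place; a separate feeds volume would need similar (untested) changes, so it is not shown.

```shell
docker run --detach --publish 9392:9392 \
  --volume openvas:/data \
  --volume /ovasvolumes/database:/data/database \
  --volume /ovasvolumes/ssl:/data/ssl \
  --env HTTPS="true" \
  --env CERTIFICATE="/data/ssl/fullchain.pem" \
  --env CERTIFICATE_KEY="/data/ssl/privkey.pem" \
  --env PASSWORD="youradminpass" \
  --name openvas immauss/openvas
```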

Thank you!

immauss commented 10 months ago

Mark, you gave me a new idea with this post ... but I'm strapped for time at the moment. I want to go through what you have written here to make sure I understand it well, but it might be a week or so before I get the time to do it properly. If you run into anything else in the meantime, please don't hesitate to ask.

Thanks, -Scott

immauss commented 9 months ago

I haven't forgotten this ... just hella busy the last month or so.