By the way, this is for the latest pull (v20.08).
I think this is because your image was upgraded and the database still has the info for the old report locations, but I am not sure. I will try to reproduce the issue.
On a clean system the image runs properly, so you're probably right about the database holding the older v11 info and causing the issues.
Any idea how to fix it? It seems v20.08 changes a few things from v11 in the db, with the ospd.sock path and the report locations being just two things discovered so far.
Is there an admin user and password for us to access the pgdb? I didn't see that on the env variables list or anywhere else in the documentation, unless I missed it.
Thanks for your help solving this and all the effort - appreciate the work you do maintaining this.
Hi, sorry for not responding sooner. There is an env variable for the database user's password, DB_PASSWORD, and the username is gvm. Then you just need to connect to the PostgreSQL database on port 5432.
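For example, a connection could look something like this. This is only a sketch: the container name "gvm" and the database name "gvmd" are assumptions, not confirmed in this thread.

```
# Hedged sketch: connect to the container's PostgreSQL instance.
# The container name "gvm" and database name "gvmd" are assumptions;
# substitute your actual values.
docker exec -it gvm psql -U gvm -d gvmd -p 5432
```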
I believe I have found a solution for this issue. I will try to get a fix out tomorrow.
Never mind about tomorrow; I might already have a fix in the master image. If you could try the master image and see whether it fixes this issue, that would be helpful.
Thanks. Sorry for the late reply; I was stuck in meetings all day.
Will give it a try and get back to you.
Just tried out the master image; no go. The "An error occurred on this page. Please try again." message still appears.
Same error details, "TypeError: s is undefined".
Thanks.
Well then, back to the drawing board.
Is the "Could not connect to Scanner at /var/run/ospd/ospd.sock" issue fixed?
There is a soft link from /tmp to /var/run/ospd, so that fixes it, but the reports still won't show.
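For reference, the kind of link described is roughly this. The direction and the exact filename on the /tmp side are assumptions based on the discussion; adjust to where the socket actually lives on your system.

```
# Sketch of the compatibility link: the old v11 path in /tmp points at the
# new v20.08 socket location. The filename "ospd.sock" on the /tmp side is
# an assumption; adjust to what your database expects.
ln -s /var/run/ospd/ospd.sock /tmp/ospd.sock
```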
I don't keep the previous database, so I removed the container and volume. After removing them I installed a new gvm container, and the reports showed. But by the second day the reports stopped showing again, and scan tasks stop after a few seconds.
Which is weird. If you delete the container and the physical volume (/var/lib/docker/volumes/gvm-data/_data), then you shouldn't need to bother about the location of the /var/run/ospd/ospd.sock. The soft link is only necessary if you are upgrading because the v11 sock location is in /tmp/ and this is stored in the database (which will be kept in /var/lib/docker/volumes/gvm-data/_data). For the same reason, upgrading also causes the reports to fail to show.
If you run a clean install (delete the previous gvm container, delete /var/lib/docker/volumes/gvm-data), everything (scans, reports, etc.) works out of the box; no changes necessary.
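In Docker terms, that clean install is roughly the following. This is illustrative only; the container, volume, and image names are placeholders, not the project's documented commands.

```
# Illustrative clean-install sequence; substitute your real container,
# volume, and image names.
docker stop gvm && docker rm gvm
docker volume rm gvm-data   # same as deleting /var/lib/docker/volumes/gvm-data
docker run -d --name gvm --volume gvm-data:/data <your-gvm-image>
```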
Hi, because of this I tried to go back to 11.0.1-v1 but the db will no longer connect...
(gvmd:74): md manage-WARNING **: 13:57:14.605: sql_open: PQerrorMessage (conn): FATAL: role "gvm" does not exist
Do I have to apply a backup, or is there an easy fix? In the future, how will we know if an update breaks everything?
Probably apply the backup. 20.08 makes some updates to the db, even apart from the socket and report locations and settings.
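If you have a plain-SQL dump taken with pg_dump, a restore might look something like this. This is a sketch under assumptions: the superuser name, the database name "gvmd", the container name, and the dump filename are all guesses; adjust to your setup.

```
# Hedged sketch: recreate the missing role, then restore a plain-SQL dump.
# All names here (postgres, gvm, gvmd, the container, the dump file) are
# assumptions.
docker exec -it gvm psql -U postgres -c "CREATE ROLE gvm WITH LOGIN;"
docker exec -i gvm psql -U postgres -d gvmd < gvmd-backup.sql
```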
I got things running on a VM so I made a snapshot before updating. Once I determined 20.08 wasn't working for me, I reverted to the snapshot and got back to v11 without fuss.
This issue seems to be related to a Podman-specific bug.
Specifically: https://github.com/containers/podman/issues/4318
When mounting your volumes, use this syntax instead: --volume gvm-data:/data:exec. This allows the report-building scripts to be executed from the volume.
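A fuller invocation might look like this; the image name is a placeholder.

```
# Podman mounts named volumes noexec by default; :exec overrides that.
# Image and container names are placeholders.
podman run -d --name gvm --volume gvm-data:/data:exec <your-gvm-image>
```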
This maybe fixes Podman, but not Docker? From the link you gave, it's mentioned that Docker's default is to mount volumes with exec already (https://github.com/containers/podman/issues/4318#issuecomment-554677665).
Also, the 'exec' option doesn't work with Docker; I get this:
docker: Error response from daemon: invalid mode: exec.
Which seems to match what was said in the link you gave, as Docker has no such 'exec' flag (https://github.com/containers/podman/issues/4318#issuecomment-545137221).
Describe the bug: Click on 'Scans -> Reports', or click on any report from the 'Tasks' view, and get the "An error has occurred on this page" message.
To Reproduce: Open 'Scans -> Reports', or open any report from the 'Tasks' view, as described above.
Expected behavior: See the list of reports.
Screenshots: N/A
Additional context: /usr/local/var/log/gvm/gvmd.log shows this:
md manage:WARNING:2020-09-16 17h11.48 +08:2850: run_report_format_script: No generate script found at /usr/local/var/lib/gvm/gvmd/report_formats/generate
/usr/local/var/lib/gvm/gvmd only has the gnupg subdirectory; it doesn't have a report_formats subdirectory.
I found a 'report_formats' subdirectory in:
/usr/local/var/lib/gvm/data-objects/gvmd/21.04/report_formats
/usr/local/var/lib/gvm/data-objects/gvmd/20.08/report_formats
but neither of those directories has the generate script.
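For anyone checking their own container, here is a quick (hypothetical) way to confirm whether any generate scripts exist under those trees:

```
# List any report-format "generate" scripts under the gvm data directories.
find /usr/local/var/lib/gvm -type f -name generate 2>/dev/null
```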
Error details that are available on the reports page are: