admirito / gvm-containers

Greenbone Vulnerability Management Containers

[Solution for other humans to find] password authentication failed for user "gvmduser" password does not match for user "gvmduser" #53

Open BloodyIron opened 2 years ago

BloodyIron commented 2 years ago

This is not a bug report. I'm posting it as a solution for others, because there is no fix documented anywhere else on the internet, and hopefully it will help someone else.

I'm using gvm with the helm chart in my dev homelab environment (with rancher btw), and somewhere along the line I messed something up. I treat my testing environment abusively so I can learn where things fail. Turns out I found a failing point.

I think the password for the "gvmduser" role got corrupted somewhere along the line. The gvmd pod would keep failing, and the gvmd-db pod logs would keep spouting:

2022-02-10 21:40:36.480 UTC [267] FATAL:  password authentication failed for user "gvmduser"
2022-02-10 21:40:36.480 UTC [267] DETAIL:  Password does not match for user "gvmduser".
    Connection matched pg_hba.conf line 99: "host all all all md5"

It would spit that out non-stop.
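(Side note for anyone debugging the same thing: the log names the matching pg_hba.conf line, but if your PostgreSQL is version 10 or newer you can ask the server directly which rules it loaded, instead of counting lines in the file by hand. This is stock PostgreSQL, nothing specific to this chart; run it in psql as a superuser.)

-- pg_hba_file_rules is a built-in view in PostgreSQL 10+
SELECT line_number, type, database, user_name, address, auth_method
FROM pg_hba_file_rules;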

I checked everywhere: the password hadn't been changed, the correct one was being injected at runtime, and even inside the containers the environment variables held the correct password. I even went into the database container command line...

su - postgres
psql -U gvmduser gvmd

Now that I had a postgresql command line, I tried a bunch of things... to no avail. I even changed the password to the same one (and yes, I'm running defaults; this is a test environment after all):

ALTER USER gvmduser WITH PASSWORD 'mypassword';

That didn't help at all!
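(One cheap diagnostic at this point, for anyone stuck in the same hole: look at the stored password hash itself, to see whether it is md5 or SCRAM and whether it actually changes when you run ALTER USER. I can't say this is what broke here, but it at least tells you whether your ALTER statements are landing. The catalog query is plain PostgreSQL and needs superuser rights; 'gvmduser' is this chart's role name.)

-- an md5 hash starts with 'md5', a SCRAM one with 'SCRAM-SHA-256$'
SELECT rolname, rolpassword FROM pg_authid WHERE rolname = 'gvmduser';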

So after multiple hours of exhaustive research (I'm a stubborn bastard and I'm on the case!), I modified the "pg_hba.conf" in the volume. It's on NFS storage, so I had direct file access on my NAS (love you TrueNAS and iX Systems). As a testing method, I changed the line in that file...

from: "host all all all md5" to "host all all all trust" DISCLAIMER: THIS IS AN INSECURE CONFIGURATION. DO NOT USE THIS BEYOND TESTING/REPAIR.

And then, in the postgresql command line, I triggered a config reload:

SELECT pg_reload_conf();

I then set the gvmd pod to a desired count of 0, and then back to 1. Watched the gvm-db logs, and no authentication errors... after a minute, IT'S WORKING AGAIN! I can log into gvm, I see content, yay!
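(For reference, the equivalent of the Rancher desired-count dance with plain kubectl. I'm assuming the chart created a deployment named gvmd in a namespace named gvm; adjust both to whatever helm actually named yours.)

# scale the workload to zero and back so it reconnects to the database
kubectl -n gvm scale deployment gvmd --replicas=0
kubectl -n gvm scale deployment gvmd --replicas=1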

But wait, I need to go back to a secure configuration. So I changed that pg_hba.conf file again...

from: "host all all all trust" back to: "host all all all md5"

And then reloaded the config from the postgresql command line: SELECT pg_reload_conf();

And then at the postgresql CLI, I did something different: I changed the password to something random ("asdf"), and then back to the password that was being used. Then I set the gvmd pod to desired 0, then back to desired 1, and BOOM, it all came back up! With the correct password, the secure configuration, and no database errors in the log!
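(Spelled out as SQL, the whole password cycle looks like this; 'mypassword' stands in for whatever password your deployment actually injects:)

-- cycle the password: throwaway value first, then the real one again
ALTER USER gvmduser WITH PASSWORD 'asdf';
ALTER USER gvmduser WITH PASSWORD 'mypassword';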

YAY! Hopefully this helps someone else. I really didn't want to delete my container volume :/

Feel free to close this issue. I'm posting this here to help others, and myself if I forget this solution lol.

BloodyIron commented 2 years ago

Okay, this breaks a lot easier than I thought, and I'm not sure about this silver bullet. It happened again, and... the fix doesn't seem to work. Changing the password to something else and back, and it's still being rejected. This is so stupid, and I don't understand why this is breaking from shutting a node down gracefully >:(

BloodyIron commented 2 years ago

Okay, I can't fuck around with this any more. I'm going to set it to trust until I hear back on wtf the real solution is... this shit breaks way too easily...

BloodyIron commented 2 years ago

Yeah, tried this again today after letting gvmd fully run and update. Still not accepting the password, even after changing it away and back. I have no clue how this broke or why...