Thank you for reporting your feedback!
The internal ticket has been created: https://warthogs.atlassian.net/browse/IAM-995.
This message was autogenerated
I managed to run glauth locally and connect it to the charm's database. It turns out that, on search, glauth tries to fetch ALL the users from the database with all their attributes and parses them in memory (see https://github.com/glauth/glauth/blob/master/v2/pkg/plugins/basesqlhandler.go#L248).
By adding some log statements to the code (I couldn't get the debugger to work), I saw the following error at https://github.com/glauth/glauth/blob/v2.3.2/v2/pkg/handler/ldapopshelper.go#L441:

```
sql: Scan error on column index 3, name "passbcrypt": converting NULL to string is unsupported
```

This is what caused the bug. By setting the `passbcrypt` default value to `""` in the charm's SQLAlchemy model, the bug seems to be fixed. I will open a PR for this.
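For reference, a minimal sketch of the same workaround applied directly at the database level, assuming the glauth `users` schema and an illustrative `DB_URI` connection string:

```shell
# Rows with a NULL passbcrypt are what trigger the "converting NULL to
# string" Scan error during glauth's full-table read on search.
psql "$DB_URI" -c "SELECT name FROM users WHERE passbcrypt IS NULL;"

# Backfilling an empty string (the same value the new column default would
# produce for fresh rows) clears the error for existing rows.
psql "$DB_URI" -c "UPDATE users SET passbcrypt = '' WHERE passbcrypt IS NULL;"
```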
This behavior from glauth seems weird; perhaps we should bring it up with upstream (cc @shipperizer @wood-push-melon).
The reason seems to be that glauth uses the customattr field, which is a JSON field. That probably limits the operations a base SQL handler can do: it cannot assume anything more complex than in-memory filtering, given how the various database dialects deal with JSON.

Anyway, it is worth asking upstream for an enhancement.
I see your point, but Postgres implements indexes on jsonb columns. Other than that, for our use case (which I believe is pretty common) the query is done on the cn, which is a unique field of the table, not part of the JSON. Finally, even if Postgres just loaded the JSON into memory and did the filtering, I think it would do it in a smarter way than this. Loading a whole table into memory and parsing it does not scale well.
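For example, a hedged sketch of such an index, taking the `users`/`customattr` names from this thread (the index name and the attribute key are made up):

```shell
# PostgreSQL can index jsonb directly, so attribute filters do not have to
# happen in application memory.
psql "$DB_URI" -c "CREATE INDEX IF NOT EXISTS users_customattr_gin ON users USING GIN (customattr);"

# A containment query like this one can then be served from the index:
psql "$DB_URI" -c "SELECT name FROM users WHERE customattr @> '{\"employeetype\": [\"Intern\"]}';"
```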
When using the LDAP integration, something seems to be broken in the database.

To try this you will need an LDAP requirer; one easy way to reproduce the issue is to use the test setup. Wait for the tests to finish the deployment, then get into Postgres and inject some users by running the INSERT statements from https://glauth.github.io/docs/databases.html. After seeding, the users table should contain those sample rows.
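A hedged sketch of the seeding step follows; the authoritative INSERT statements (including the real password hashes) are in the glauth docs linked above, and `DB_URI` plus the hash placeholders are illustrative:

```shell
# Seed the glauth tables; table/column names follow the glauth docs,
# adjust them if your schema differs.
psql "$DB_URI" <<'SQL'
-- A sample group for the users below (the ids are illustrative).
INSERT INTO ldapgroups(name, gidnumber) VALUES ('superheros', 5501);

-- The "hackers" user that the ldapsearch below queries for; replace the
-- placeholder with the sha256 hash from the docs.
INSERT INTO users(name, uidnumber, primarygroup, passsha256)
VALUES ('hackers', 5001, 5501, '<sha256-hash-from-the-docs>');

-- The service account used to bind in the ldapsearch below.
INSERT INTO users(name, uidnumber, primarygroup, passbcrypt)
VALUES ('serviceuser', 5003, 5502, '<bcrypt-hash-from-the-docs>');
SQL
```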
Try to run ldapsearch to get the `hackers` user:

```shell
ldapsearch -LLL -H ldap://10.152.183.27:3893 -D cn=serviceuser,ou=svcaccts,dc=glauth,dc=com -w mysecret -x -b dc=glauth,dc=com cn=hackers -v
```
You will not get any users back. If you now go into the database and delete the `any-charm` user, the query will succeed:
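A sketch of that cleanup, assuming psql access and that the requirer-created row is named `any-charm` (both are assumptions):

```shell
# Remove the row created by the any-charm requirer; its NULL passbcrypt is
# what breaks the full-table scan glauth performs on every search.
psql "$DB_URI" -c "DELETE FROM users WHERE name = 'any-charm';"
```

After that, the same ldapsearch returns the `hackers` user.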