canonical / glauth-k8s-operator

A Charmed Operator for running GLAuth on Kubernetes
https://charmhub.io/glauth-k8s
Apache License 2.0

LDAP integration seems to break the database #55

Closed · nsklikas closed this 3 months ago

nsklikas commented 3 months ago

When using the LDAP integration, something seems to be broken in the database.

To try this you will need an LDAP requirer; one easy way to reproduce it is to use the test setup:

tox -e integration -- --keep-models

Wait for the tests to finish the deployment.

Get into the postgres database and inject some users by running:

INSERT INTO ldapgroups(name, gidnumber)
  VALUES('superheros', 5501);
INSERT INTO ldapgroups(name, gidnumber)
  VALUES('svcaccts', 5502);
INSERT INTO ldapgroups(name, gidnumber)
  VALUES('civilians', 5503);
INSERT INTO ldapgroups(name, gidnumber)
  VALUES('caped', 5504);
INSERT INTO ldapgroups(name, gidnumber)
  VALUES('lovesailing', 5505);
INSERT INTO ldapgroups(name, gidnumber)
  VALUES('smoker', 5506);
INSERT INTO includegroups(parentgroupid, includegroupid)
  VALUES(5503, 5501);
INSERT INTO includegroups(parentgroupid, includegroupid)
  VALUES(5504, 5502);
INSERT INTO includegroups(parentgroupid, includegroupid)
  VALUES(5504, 5501);
INSERT INTO users(name, uidnumber, primarygroup, passsha256)
  VALUES('hackers', 5001, 5501,
    '6478579e37aff45f013e14eeb30b3cc56c72ccdc310123bcdf53e0333e3f416a');
INSERT INTO users(name, uidnumber, primarygroup, passsha256)
  VALUES('johndoe', 5002, 5502,
    '6478579e37aff45f013e14eeb30b3cc56c72ccdc310123bcdf53e0333e3f416a');
INSERT INTO users(name, mail, uidnumber, primarygroup, passsha256)
  VALUES('serviceuser', 'serviceuser@example.com', 5003, 5502,
    '652c7dc687d98c9889304ed2e408c74b611e86a40caa51c4b43f1dd5913c5cd0');
INSERT INTO users(name, uidnumber, primarygroup, passsha256, othergroups)
  VALUES('user4', 5004, 5504,
    '652c7dc687d98c9889304ed2e408c74b611e86a40caa51c4b43f1dd5913c5cd0',
    '5505,5506');
INSERT INTO capabilities(userid, action, object)
  VALUES(5001, 'search', 'ou=superheros,dc=glauth,dc=com');
INSERT INTO capabilities(userid, action, object)
  VALUES(5003, 'search', '*');

(taken from https://glauth.github.io/docs/databases.html). The table should look like this: [screenshot of the populated tables]

Try to run ldapsearch to get the hackers user:

ldapsearch -LLL -H ldap://10.152.183.27:3893 -D cn=serviceuser,ou=svcaccts,dc=glauth,dc=com -w mysecret -x -b dc=glauth,dc=com cn=hackers -v

You will not get any users back. If you now go into the database and delete the any-charm user, the query will succeed. [screenshot]
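
For scripting the same check, an equivalent search from Python might look like this (a sketch assuming the ldap3 client library and the same address and bind credentials as the ldapsearch above):

from ldap3 import ALL, Connection, Server

# Same endpoint and bind user as the ldapsearch invocation above.
server = Server("10.152.183.27", port=3893, get_info=ALL)
conn = Connection(
    server,
    user="cn=serviceuser,ou=svcaccts,dc=glauth,dc=com",
    password="mysecret",
    auto_bind=True,
)
conn.search("dc=glauth,dc=com", "(cn=hackers)")
print(conn.entries)  # empty while the bug is present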

syncronize-issues-to-jira[bot] commented 3 months ago

Thank you for reporting your feedback to us!

The internal ticket has been created: https://warthogs.atlassian.net/browse/IAM-995.

This message was autogenerated

nsklikas commented 3 months ago

I managed to run glauth locally and connect it to the charm's database. It turns out that, on search, glauth tries to fetch ALL the users from the database with all their attributes and parse them in memory (see https://github.com/glauth/glauth/blob/master/v2/pkg/plugins/basesqlhandler.go#L248).
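
Conceptually, the behavior is something like the following (a loose, hypothetical Python paraphrase of the linked handler; the real code is Go, and the names here are illustrative only):

def search_users(conn, wanted_cn):
    # Paraphrase: select every user row up front, with all attributes...
    rows = conn.execute(
        "SELECT name, uidnumber, primarygroup, passbcrypt FROM users"
    ).fetchall()
    # ...then evaluate the LDAP filter in application memory,
    # instead of pushing a WHERE clause down to the database.
    return [row for row in rows if row[0] == wanted_cn]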

By adding some log statements to the code (I couldn't get the debugger to work), I saw this error happening:

sql: Scan error on column index 3, name "passbcrypt": converting NULL to string is unsupported

at https://github.com/glauth/glauth/blob/v2.3.2/v2/pkg/handler/ldapopshelper.go#L441, which caused this bug. By setting the passbcrypt default value to "" in the charm's SQLAlchemy model, the bug seems to be fixed. I will open a PR for this.
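
A minimal sketch of what that model change could look like (assuming SQLAlchemy's declarative style; all names except passbcrypt are illustrative, and the real charm model may differ):

from sqlalchemy import Column, Integer, Text
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"

    id = Column(Integer, primary_key=True)
    name = Column(Text, nullable=False)
    # Default at both the ORM and the DDL level, so rows inserted outside
    # the charm (e.g. the raw SQL above, which omits passbcrypt) store ''
    # instead of NULL, which glauth cannot Scan into a Go string.
    passbcrypt = Column(Text, default="", server_default="")

With the server_default in place, the manual INSERTs above would get '' for passbcrypt rather than NULL, so the Scan error should disappear.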

This behavior from glauth seems weird; perhaps we should bring it up with upstream (cc @shipperizer @wood-push-melon).

shipperizer commented 3 months ago

The reason seems to be that it's using the customattr field, which is a JSON field; that probably limits the operations a base SQL handler can do, since it cannot assume anything more complex than in-memory filtering, given how DB dialects differ in their commands, technology, and handling of JSON.

Anyway, worth asking upstream for an enhancement.

nsklikas commented 2 months ago

The reason seems to be that it's using the customattr field, which is a JSON field; that probably limits the operations a base SQL handler can do, since it cannot assume anything more complex than in-memory filtering, given how DB dialects differ in their commands, technology, and handling of JSON.

Anyway, worth asking upstream for an enhancement.

I see your point, but Postgres implements indexes on jsonb columns. Other than that, for our use case (which I believe is pretty common) the query is done on the cn, which is a unique field of the table and not part of the JSON. Finally, even if Postgres just loaded the JSON into memory and did the filtering, I think it would do it in a smarter way than this. Loading a whole table into memory and parsing it does not scale very well.
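
For illustration, declaring a GIN index on a jsonb column in SQLAlchemy could look like this (a sketch; treating customattr as the jsonb column is an assumption about the schema):

from sqlalchemy import Column, Index, Integer, Text
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"

    id = Column(Integer, primary_key=True)
    name = Column(Text, unique=True)  # the cn; lookups can use this unique index
    customattr = Column(JSONB)
    # A GIN index lets Postgres answer jsonb containment queries itself,
    # rather than the application filtering rows in memory.
    __table_args__ = (
        Index("ix_users_customattr", "customattr", postgresql_using="gin"),
    )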