rook / rook

Storage Orchestration for Kubernetes
https://rook.io
Apache License 2.0
12.1k stars · 2.66k forks

Support RGW LDAP integration #4315

Open LouNauSRC opened 4 years ago

LouNauSRC commented 4 years ago

Is this a bug report or feature request?

What should the feature do:
The Ceph Object Gateway supports integrating with LDAP for authenticating and creating users, see: https://docs.ceph.com/docs/master/radosgw/ldap-auth/. It would be nice if this was directly supported in the "object" part of the rook-ceph integration.

What is the use case behind this feature:
Providing this feature would enable easy integration with a corporate LDAP server for use of the object store, including its S3-compatible interface. This would avoid having to create individual user accounts via "object-user.yaml" for every user that needs access to the object store, and allow for access control based on group membership(s) in the LDAP server.

Environment:

This can currently be mostly achieved using existing "escape hatches" in rook. The LDAP integration can be turned on by adding config values to the "rook-config-override" configmap similar to the following (actual values removed):

[global]
rgw ldap binddn =
rgw_ldap_secret = /etc/ceph/ldap/bindpass.secret
rgw ldap uri =
rgw ldap searchdn =
rgw ldap dnattr =
rgw_s3_auth_use_ldap = true
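For context, a minimal sketch of what that override might look like as a full ConfigMap (the `config` key is what rook reads; the values shown here are placeholders, not from this thread):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override
  namespace: rook-ceph
data:
  config: |
    [global]
    rgw_s3_auth_use_ldap = true
    rgw ldap uri = ldaps://ldap.example.com:636
    rgw ldap binddn = "uid=svc_rgw,cn=users,dc=example,dc=com"
    rgw ldap searchdn = "dc=example,dc=com"
    rgw ldap dnattr = "uid"
    rgw_ldap_secret = /etc/ceph/ldap/bindpass.secret
```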

The main problem comes from the fact that "rgw_ldap_secret" needs to point to a file containing the unencrypted/unencoded password for the account, specified by "rgw_ldap_binddn", that is used to query the LDAP server. Mounting this file into the "rook-ceph-rgw-my-store" pod is difficult/fragile at the moment. By creating a secret that contains the password, we can mount that secret as a volume in the pod, but that requires modifying/patching the "rook-ceph-rgw-my-store" deployment similar to below:

spec:
  template:
    spec:
      containers:
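The patch body above was truncated in this thread; presumably it adds a volume and mount along these lines (the container name, volume name, and secret name here are assumptions, not the poster's actual values):

```yaml
spec:
  template:
    spec:
      containers:
        - name: rgw
          volumeMounts:
            - name: ldap-bindpass          # hypothetical volume name
              mountPath: /etc/ceph/ldap
              readOnly: true
      volumes:
        - name: ldap-bindpass
          secret:
            secretName: rgw-ldap-bindpass  # hypothetical secret holding bindpass.secret
```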

This works, and the LDAP integration succeeds. But it is fragile, in that anything that causes the operator to restart/redeploy the "rook-ceph-rgw-my-store" deployment overwrites the above change, and it needs to be reapplied.

stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

LouNauSRC commented 4 years ago

I have a working "hack" based on 1.1.8 that creates an optional mount for the password secret in the RGW deployment so that it survives restarts of the operator. Working on trying to make the path to the secret configurable, but I am not a Go programmer. If I get something that is reasonable I will update.

sanminaben commented 4 years ago

> I have a working "hack" based on 1.1.8 that creates an optional mount for the password secret in the RGW deployment so that it survives restarts of the operator. Working on trying to make the path to the secret configurable, but I am not a Go programmer. If I get something that is reasonable I will update.

HashiCorp Vault allows mounting secrets as templated files directly into the pod (injected before the target container runs), so you could use that instead.
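For example, with the Vault Agent injector the password file can be rendered into the pod via annotations (the Vault path and key below are illustrative):

```yaml
metadata:
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    # renders to /vault/secrets/bindpass.secret inside the pod
    vault.hashicorp.com/agent-inject-secret-bindpass.secret: "secret/data/rgw-ldap"
    vault.hashicorp.com/agent-inject-template-bindpass.secret: |
      {{- with secret "secret/data/rgw-ldap" -}}{{ .Data.data.password }}{{- end -}}
```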

stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

jhoblitt commented 3 years ago

@LouNauSRC I appreciate you posting a workaround.

I too am interested in rook having native support for setting up LDAP integration.

travisn commented 3 years ago

@jhoblitt Thanks for the reminder. What about this?

Seems like the following should be sufficient:

@thotz Is this something you'd be able to take a look at?

jhoblitt commented 3 years ago

@travisn That sounds about right. CephObjectStore is clearly the right scope to set rgw_ldap_uri, etc. My personal preference is to allow the secret holding the ldap credentials to be explicitly listed in the CephObjectStore resource rather than looking for it heuristically.

Maybe something like this?

apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPool:
    failureDomain: host
    erasureCoded:
      dataChunks: 2
      codingChunks: 1
  preservePoolsOnDelete: true
  gateway:
    type: s3
    sslCertificateRef:
    port: 80
    # securePort: 443
    instances: 1
  healthCheck:
    bucket:
      disabled: false
      interval: 60s
  ldap:
    enable: true
    # all params at the same tier or nest the literal ceph.conf values?
    global:
      # rgw_s3_auth_use_ldap = true -- set by `ldap.enable` ? 
      # maybe s/^rgw_ldap_// for the [global] config params?
      uri: ldaps://<fqdn>:636
      binddn: "<binddn>"
      searchdn: "<searchdn>"
      dnattr: "cn"
      #  rgw_ldap_secret = "/etc/bindpass" -- set by the operator to where the secret is mounted? 
    # ingress style
    secretName: rook-ceph-rgw-ldap...

travisn commented 2 years ago

@thotz could you take a look?

thotz commented 2 years ago

@travisn: Sure, I will take a look. The changes look straightforward, but they may require some amount of testing IMO.

travisn commented 2 years ago

Since the code changes are small and testing may be the more challenging part, you could also open the PR before testing is complete and ask the community to help validate. If the community asks for a feature, it's helpful anyway to get their validation. :)

thotz commented 2 years ago

I have created a PR to address this issue, can someone test this out? If needed I can provide a test image on top of your required version of rook.

travisn commented 2 years ago

@LouNauSRC PR #8750 needs to be updated and needs more testing. Before we spend more time on it, could you confirm if you are still interested in this feature and can help test it? thanks

threepw00d commented 2 years ago

I am interested in this feature and am willing to help test it.

thotz commented 2 years ago

@threepw00d Let me know if you want a rook image based on the above changes. You will need to update the CRs for the objectstore manually. I don't have much experience in creating helm charts or bundle images.

travisn commented 2 years ago

@thotz How about if you resolve the merge conflicts in #8750 then push an image to dockerhub that can be tested? Thanks

thotz commented 2 years ago

@LouNauSRC @threepw00d @jhoblitt is anyone interested in testing this feature? I can provide builds if needed. Otherwise I am planning to close this PR and issue.

jhoblitt commented 2 years ago

I'm willing to test.

thotz commented 2 years ago

> I'm willing to test.

@jhoblitt

docker pull quay.io/jthottan/rook-ceph:ldap-rgw
docker pull quay.io/jthottan/rook-ceph@sha256:34c9d6c0e40b59aaf5c5f7c81265cf7dbe05488e177b39af70ccf24d82fcae6e

Above is a docker image built on top of master.

thotz commented 2 years ago

@jhoblitt any update??

jhoblitt commented 2 years ago

Ugh. I haven't gotten to it yet...

thotz commented 1 year ago

Sorry to ping you again, any update?

cccsss01 commented 1 year ago

@thotz I'm not sure I'm the best candidate to test this, but I can give it a go.

jhoblitt commented 1 year ago

@thotz I apologize. I have been stuck in one fire drill after another for the last couple of months.

It looks like the registry repo isn't public:

 ~ $ docker pull quay.io/jthottan/rook-ceph:ldap-rgw
Error response from daemon: unauthorized: access to the requested resource is not authorized

Is that a direct build of #8750?

jhoblitt commented 1 year ago

Ugh.

go: github.com/libopenstorage/secrets@v0.0.0-20210709082113-dde442ea20ec requires
    github.com/hashicorp/vault@v1.4.2 requires
    github.com/hashicorp/go-kms-wrapping@v0.5.1 requires
    github.com/hashicorp/vault/sdk@v0.1.14-0.20191229212425-c478d00be0d6: invalid version: unknown revision c478d00be0d6
make: *** [build/makelib/golang.mk:174: go.mod.check] Error 1

jhoblitt commented 1 year ago

There doesn't appear to be a v0.1.14 tag in hashicorp/vault...

jhoblitt commented 1 year ago

The build failure was reported as #11045. With help from @travisn, I was able to make a build of #8750. Which I have published as jhoblitt/rook:ceph-amd64-ldaprgwsupport-25f319821 but haven't tested yet.

https://hub.docker.com/layers/jhoblitt/rook/ceph-amd64-ldaprgwsupport-25f319821/images/sha256-8d7e6ddc23fa1c789f4423720fca10364e9d7a1f209a45532e688f943587eba0?context=repo

thotz commented 1 year ago

> The build failure was reported as #11045. With help from @travisn, I was able to make a build of #8750. Which I have published as jhoblitt/rook:ceph-amd64-ldaprgwsupport-25f319821 but haven't tested yet.
>
> https://hub.docker.com/layers/jhoblitt/rook/ceph-amd64-ldaprgwsupport-25f319821/images/sha256-8d7e6ddc23fa1c789f4423720fca10364e9d7a1f209a45532e688f943587eba0?context=repo

@cccsss01 ^^ please try with this build

cccsss01 commented 1 year ago

I've modified the rook-ceph-operator deployment to use the image specified above, which recreated the operator. Then, when attempting to edit the cephobjectstore with the contents listed below, I receive a validation error. (I'm assuming the ldap uri will need to be changed.) Also, can I expect these to be listed in the rook-ceph-rgw-mystore /etc/ceph/ceph.conf?

zone:
  name: ""
ldap:
  enable: true
  global:
    uri = ldap://127.0.0.1:1389
    binddn = "cn=admin,dc=example,dc=org"
    dnattr = "cn"
    searchdn = "dc=example,dc=org"
  secretName: rook-ceph-rgw-ldap

(just created a simple openldap container, then proxy-forwarded to a local ip port)

jhoblitt commented 1 year ago

@cccsss01 I am just starting to look at this. In order to test it, you will need to update the CRDs to allow the ldap key to pass validation. E.g.

k apply -f https://raw.githubusercontent.com/rook/rook/25f319821c56bb9fc8e9f7f1a4fd126d6b6f1371/deploy/examples/crds.yaml

jhoblitt commented 1 year ago

The good news is that this does appear to be setting flags to rgw. E.g.

2022-09-27 22:31:40.574680 I | ceph-spec: CR has changed for "lfa". diff=  v1.ObjectStoreSpec{
    ... // 5 identical fields
    HealthCheck: {Bucket: {Interval: &{Duration: s"1m0s"}}},
    Security:    nil,
-   LDAP:        nil,
+   LDAP: &v1.LDAPSpec{
+       URI:                  "ldaps://ipa1.ls.example.com:636",
+       BindDN:               "uid=svc_ceph,cn=users,cn=accounts,dc=example,dc=com",
+       SearchDN:             "dc=example,dc=com",
+       DNattribute:          "uid",
+       CredentialSecretName: "lfa-ldap",
+   },
  }
$ k -n rook-ceph get pod -l app=rook-ceph-rgw,rgw=lfa -ojson | jq '.items[].spec.containers[].args'
[
  "--fsid=b64eca4b-a1fd-4d36-b481-3309f594ffbd",
  "--keyring=/etc/ceph/keyring-store/keyring",
  "--log-to-stderr=true",
  "--err-to-stderr=true",
  "--mon-cluster-log-to-stderr=true",
  "--log-stderr-prefix=debug ",
  "--default-log-to-file=false",
  "--default-mon-cluster-log-to-file=false",
  "--mon-host=$(ROOK_CEPH_MON_HOST)",
  "--mon-initial-members=$(ROOK_CEPH_MON_INITIAL_MEMBERS)",
  "--id=rgw.lfa.a",
  "--setuser=ceph",
  "--setgroup=ceph",
  "--foreground",
  "--rgw-frontends=beast port=8080",
  "--host=$(POD_NAME)",
  "--rgw-mime-types-file=/etc/ceph/rgw/mime.types",
  "--rgw-realm=lfa",
  "--rgw-zonegroup=lfa",
  "--rgw-zone=lfa",
  "--rgw-s3-auth-use-ldap=true",
  "--rgw-ldap-uri=ldaps://ipa1.ls.example.com:636",
  "--rgw-ldap-binddn=uid=svc_ceph,cn=users,cn=accounts,dc=example,dc=com",
  "--rgw-ldap-secret=/etc/ldap/rgw-ldap.secret",
  "--rgw-ldap-searchdn=dc=example,dc=com",
  "--rgw-ldap-dnattr=uid"
]
null
$ k -n rook-ceph get pod -l app=rook-ceph-rgw,rgw=lfa -ojson | jq '.items[].spec.volumes[] | select(.secret.secretName == "lfa-ldap")'
{
  "name": "rook-ceph-rgw-ldap",
  "secret": {
    "defaultMode": 420,
    "items": [
      {
        "key": "password",
        "path": "rgw-ldap.secret"
      }
    ],
    "secretName": "lfa-ldap"
  }
}

The bad news is I can't seem to get it to work. I followed the instructions for creating a token from the ceph docs: https://docs.ceph.com/en/quincy/radosgw/ldap-auth/

Running radosgw-token --encode in the toolbox has no output. It seems that an extra flag might be needed: radosgw-token --encode --ttype=ldap. With the extra flag, there is output, which base64 -d confirms is json that looks like the manually encoded example.
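For reference, the token described in the Ceph docs is just base64-encoded JSON, so it can also be produced by hand; a minimal sketch (the uid and password here are placeholders):

```python
import base64
import json

# Placeholder LDAP credentials -- substitute the real uid and bind password.
token = {"RGW_TOKEN": {"version": 1, "type": "ldap", "id": "jhoblitt-test", "key": "s3cr3t"}}

# This base64 blob is what `radosgw-token --encode --ttype=ldap` emits; it is
# then presented as the S3 access key when talking to rgw.
access_key = base64.b64encode(json.dumps(token).encode()).decode()
print(access_key)
```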

However, I haven't been able to get it to work.

 ~ $ aws s3 --endpoint-url "$S3_ENDPOINT" ls

An error occurred (InvalidAccessKeyId) when calling the ListBuckets operation: Unknown
 ~ $ s3cmd --host "$S3_ENDPOINT" ls
ERROR: S3 error: 403 (InvalidAccessKeyId)

And the 389ds / dirsrv instance I'm trying to bind to isn't showing anything in its logs. I am able to bind and query with ldapsearch using the same binddn / basedn.

jhoblitt commented 1 year ago

These are the rgw log messages:

debug 2022-09-28T00:44:25.984+0000 7f2e37188700  1 ====== starting new request req=0x7f2f08d69650 =====
debug 2022-09-28T00:44:26.223+0000 7f2e37188700  1 req 843918706480365044 0.239001542s op->ERRORHANDLER: err_no=-2028 new_err_no=-2028
debug 2022-09-28T00:44:26.223+0000 7f2e37188700  1 ====== req done req=0x7f2f08d69650 op status=0 http_status=403 latency=0.239001542s ======
debug 2022-09-28T00:44:26.223+0000 7f2e37188700  1 beast: 0x7f2f08d69650: 10.42.2.149 - - [28/Sep/2022:00:44:25.984 +0000] "GET / HTTP/1.1" 403 185 - - - latency=0.239001542s

I will try to debug it tomorrow.

jhoblitt commented 1 year ago

It doesn't look like rook has support for setting the rgw debug level? I was able to edit the rgw deploy and add in --debug-rgw=20/20.

jhoblitt commented 1 year ago

This is the ldap hash from the test cephobjectstore:

  ldap:
    uri: ldaps://ipa1.ls.example.com:636
    binddn: uid=svc_rancher,cn=users,cn=accounts,dc=example,dc=com
    searchdn: dc=example,dc=com
    dnattr: uid
    credentialsecret: lfa-ldap

The rgw debug logs are full of secrets, but this seems to be the relevant section:

debug 2022-09-28T17:05:10.774+0000 7f750f3d5700 12 auth search filter: (uid=jhoblitt-test)
debug 2022-09-28T17:05:10.774+0000 7f750f3d5700  5 auth ldap_search_s error uid=jhoblitt-test ldap err=-1
debug 2022-09-28T17:05:10.781+0000 7f750f3d5700  5 auth ldap_search_s error uid=jhoblitt-test ldap err=-1
debug 2022-09-28T17:05:10.781+0000 7f750f3d5700 20 req 265892899602024453 0.417005032s s3:list_buckets rgw::auth::s3::LDAPEngine denied with reason=-2028
debug 2022-09-28T17:05:10.781+0000 7f750f3d5700 20 req 265892899602024453 0.417005032s s3:list_buckets rgw::auth::s3::AWSv2ExternalAuthStrategy denied with reason=-2028
debug 2022-09-28T17:05:10.781+0000 7f750f3d5700 20 req 265892899602024453 0.417005032s s3:list_buckets rgw::auth::s3::AWSAuthStrategy: trying rgw::auth::s3::LocalEngine

Given that search filter and the config above, I am able to construct an ldapsearch equivalent which returns a valid result. E.g.:

$ ldapsearch -x -w'<base64 -d secret>' -D uid=svc_ceph,cn=users,cn=accounts,dc=example,dc=com -H ldaps://ipa1.ls.example.com:636 -b dc=example,dc=com '(uid=jhoblitt-test)'

...

# search result
search: 2
result: 0 Success

# numResponses: 3
# numEntries: 2

jhoblitt commented 1 year ago

I've made progress. The ldap client in rgw was refusing to connect to 389ds because ipa uses its own internal CA. Changing over to ldap://ipa1.ls.example.com:389 got past that error and I can now see ldap queries hitting the server.

debug 2022-09-28T17:34:28.065+0000 7f4b5b5a7700 12 auth search filter: (uid=jhoblitt-test)
debug 2022-09-28T17:34:28.104+0000 7f4b5b5a7700 20 req 12356448804687156202 0.040000163s s3:list_bucket rgw::auth::s3::LDAPEngine granted access
debug 2022-09-28T17:34:28.104+0000 7f4b5b5a7700 20 req 12356448804687156202 0.040000163s s3:list_bucket rgw::auth::s3::AWSv2ExternalAuthStrategy granted access
debug 2022-09-28T17:34:28.104+0000 7f4b5b5a7700 20 req 12356448804687156202 0.040000163s s3:list_bucket rgw::auth::s3::AWSAuthStrategy granted access
debug 2022-09-28T17:34:28.104+0000 7f4b5b5a7700 20 req 12356448804687156202 0.040000163s s3:list_bucket get_system_obj_state: rctx=0x7f4b12d14038 obj=lfa.rgw.meta:users.uid:jhoblitt-test$jhoblitt-test state=0x5579bbe974e0 s->prefetch_data=0
debug 2022-09-28T17:34:28.104+0000 7f4b5b5a7700 10 req 12356448804687156202 0.040000163s s3:list_bucket cache get: name=lfa.rgw.meta+users.uid+jhoblitt-test$jhoblitt-test : hit (negative entry)
debug 2022-09-28T17:34:28.104+0000 7f4b5b5a7700 20 req 12356448804687156202 0.040000163s s3:list_bucket get_system_obj_state: rctx=0x7f4b12d14038 obj=lfa.rgw.meta:users.uid:jhoblitt-test state=0x5579bbe974e0 s->prefetch_data=0
debug 2022-09-28T17:34:28.104+0000 7f4b5b5a7700 10 req 12356448804687156202 0.040000163s s3:list_bucket cache get: name=lfa.rgw.meta+users.uid+jhoblitt-test : hit (requested=0x16, cached=0x17)
debug 2022-09-28T17:34:28.104+0000 7f4b5b5a7700 20 req 12356448804687156202 0.040000163s s3:list_bucket get_system_obj_state: s->obj_tag was set empty
debug 2022-09-28T17:34:28.104+0000 7f4b5b5a7700 20 req 12356448804687156202 0.040000163s s3:list_bucket Read xattr: user.rgw.idtag
debug 2022-09-28T17:34:28.104+0000 7f4b5b5a7700 10 req 12356448804687156202 0.040000163s s3:list_bucket cache get: name=lfa.rgw.meta+users.uid+jhoblitt-test : hit (requested=0x13, cached=0x17)
debug 2022-09-28T17:34:28.104+0000 7f4b5b5a7700  2 req 12356448804687156202 0.040000163s s3:list_bucket normalizing buckets and tenants
debug 2022-09-28T17:34:28.104+0000 7f4b5b5a7700 10 req 12356448804687156202 0.040000163s s->object=<NULL> s->bucket=foo
debug 2022-09-28T17:34:28.104+0000 7f4b5b5a7700  2 req 12356448804687156202 0.040000163s s3:list_bucket init permissions
debug 2022-09-28T17:34:28.104+0000 7f4b5b5a7700 20 req 12356448804687156202 0.040000163s s3:list_bucket get_system_obj_state: rctx=0x7f4b12d14060 obj=lfa.rgw.meta:root:foo state=0x5579bbe974e0 s->prefetch_data=0
debug 2022-09-28T17:34:28.104+0000 7f4b5b5a7700 10 req 12356448804687156202 0.040000163s s3:list_bucket cache get: name=lfa.rgw.meta+root+foo : hit (negative entry)
debug 2022-09-28T17:34:28.104+0000 7f4b5b5a7700 10 req 12356448804687156202 0.040000163s s3:list_bucket init_permissions on <NULL> failed, ret=-2002
debug 2022-09-28T17:34:28.104+0000 7f4b5b5a7700  1 req 12356448804687156202 0.040000163s op->ERRORHANDLER: err_no=-2002 new_err_no=-2002
debug 2022-09-28T17:34:28.104+0000 7f4b51d94700 20 req 12356448804687156202 0.040000163s get_system_obj_state: rctx=0x7f4b12d14680 obj=lfa.rgw.log:script.postrequest. state=0x5579bbe97720 s->prefetch_data=0
debug 2022-09-28T17:34:28.105+0000 7f4b51d94700 10 req 12356448804687156202 0.041000169s cache get: name=lfa.rgw.log++script.postrequest. : hit (negative entry)
debug 2022-09-28T17:34:28.105+0000 7f4b51d94700  2 req 12356448804687156202 0.041000169s s3:list_bucket op status=0
debug 2022-09-28T17:34:28.105+0000 7f4b51d94700  2 req 12356448804687156202 0.041000169s s3:list_bucket http status=404
debug 2022-09-28T17:34:28.105+0000 7f4b51d94700  1 ====== req done req=0x7f4b12d15650 op status=0 http_status=404 latency=0.041000169s ======
debug 2022-09-28T17:34:28.105+0000 7f4b51d94700  1 beast: 0x7f4b12d15650: 10.42.1.47 - jhoblitt-test [28/Sep/2022:17:34:28.064 +0000] "GET /foo?list-type=2&prefix=&delimiter=%2F&encoding-type=url HTTP/1.1" 404 207 - "aws-cli/2.1.28 Python/3.8.8 Linux/5.19.9-200.fc36.x86_64 exe/x86_64.fedora.36 prompt/off command/s3.ls" - latency=0.041000169s

So I'm guessing I now need to figure out how permissions are supposed to be setup in ldap.

jhoblitt commented 1 year ago

The test cluster I'm using has another zone (used for multisite testing) and the ceph dashboard was able to list users from it, but not from the ldap test zone. I was able to list users via radosgw-admin and see that the jhoblitt-test user had been automatically created after authing against ldap, but I wasn't able to create a bucket via s3. I'm tearing down the test zone to try again from scratch.

jhoblitt commented 1 year ago

I have rebased #8750 on current master (74e7d3cc0) and re-published as jhoblitt/rook:ceph-amd64-ldaprgwsupport-6201bfdd1.

jhoblitt commented 1 year ago

After blowing away and recreating the cephobjectstore (and manually updating the keys for the dashboard-admin user), the ceph dashboard's inability to list users/buckets was resolved.

I have also sorted out the permission failure. It was user error: I was not specifying the region, which I think is not required when there is only a single rgw zone, but there are currently multiple zones in my test cluster. E.g.

 ~ $ aws s3 --endpoint-url "$S3_ENDPOINT" mb s3://foo
make_bucket failed: s3://foo An error occurred (InvalidLocationConstraint) when calling the CreateBucket operation: The specified location-constraint is not valid
 ~ $ aws s3 --endpoint-url "$S3_ENDPOINT" --region lfa mb s3://foo
make_bucket: foo

jhoblitt commented 1 year ago

The last hurdle before I declare this ready for use is getting ldaps working. The good news is that there is already a caBundleRef in the CRD. The bad news is that the public ca cert bundle + my ipa ca seems to be getting too close to the etcd size limit, and I'm getting an http 413.

$ ls -lah ipa-cabundle-secret.yaml 
-rw-r--r-- 1 jhoblitt jhoblitt 870K Sep 28 16:35 ipa-cabundle-secret.yaml
$ k apply -f ipa-cabundle-secret.yaml 
Error from server: error when creating "ipa-cabundle-secret.yaml": the server responded with the status code 413 but did not return more information (post secrets)

jhoblitt commented 1 year ago

I had to walk through the init container logic, but I now understand that a ca bundle from a secret is added to the existing default trust anchors by p11-kit, so I don't need to try to cram everything into the secret.

jhoblitt commented 1 year ago

I have a completely functional setup that is able to bind via ldaps/636 to a freeipa instance with this configuration:

  ldap:
    uri: ldaps://ipa1.ls.example.com
    binddn: uid=svc_rancher,cn=users,cn=accounts,dc=example,dc=com
    searchdn: dc=example,dc=com
    dnattr: uid
    credentialsecret: lfa-ldap
---
apiVersion: v1
kind: Secret
metadata:
  name: ipa-cabundle
  namespace: rook-ceph
stringData:
  cabundle: |
    -----BEGIN CERTIFICATE-----

@thotz This is working for me -- thank you! I am working on adding support for rgw_ldap_searchfilter on top of #8750 in https://github.com/rook/rook/compare/master...jhoblitt:rook:ldaprgwsupport
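Presumably the ipa-cabundle secret is wired into the object store via the gateway's caBundleRef field mentioned earlier; a sketch of the relevant fragment (matching the `cabundle` key used in the secret above):

```yaml
  gateway:
    # ...
    caBundleRef: ipa-cabundle
```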

jhoblitt commented 1 year ago

Build of #8750 rebased onto current master + support for searchfilter:

jhoblitt commented 1 year ago

I have opened https://github.com/rook/rook/pull/11091 with my changes. I presume that integration tests will still be required before this is ready for merge.

jhoblitt commented 1 year ago

Eh, I can't seem to find a good ldap server chart for an integration test. There doesn't appear to be an ipa chart in a working state. There are a few openldap charts being updated but I'm not sure if any of them are a good choice to deploy on a single node kind cluster.

thotz commented 1 year ago

Kudos @jhoblitt, it is good to know it is finally working fine. I guess we can have unit tests rather than integration tests if it's difficult to do end-to-end testing.

jhoblitt commented 1 year ago

@thotz You did all the hard work!

I think integration tests are certainly possible. It is simply a matter of finding a solution that will work on kind without consuming a lot of memory, wanting PVCs, etc.

I have made some progress experimenting with various openldap charts. However, I haven't used slapd in almost a decade and I think I'm wrestling with permissions issues.

This is able to create a working slapd instance with just a single pod:

VERSION="3.0.1"

helm upgrade --install \
  --atomic \
  openldap helm-openldap/openldap-stack-ha \
  --create-namespace --namespace openldap \
  --version "v${VERSION}" \
  --values - <<EOF
---
persistence:
  enabled: false
phpldapadmin:
  enabled: false
ltb-passwd:
  enabled: false
replication:
  enabled: false
replicaCount: 1

customLdifFiles:
  01-foo-user.ldif: |-
    dn: uid=foo,dc=example,dc=org
    uid: foo
    objectClass: top
    objectClass: person
    objectClass: posixaccount
    cn: foo
    sn: bar
    homeDirectory: /home/foo
    uidNumber: 70054
    gidNumber: 70054
    userPassword: {SHA}C+7Hteo/D9vJXQ3UfzxbwnXaijM=
EOF

This query (Not@SecurePassw0rd is the chart default) works:

ldapsearch -x -H ldap://openldap.openldap.svc.cluster.local:389 -D "cn=admin,dc=example,dc=org" -w Not@SecurePassw0rd -b dc=example,dc=org '(uid=foo)'

but this one fails (foo is the hashed value in the ldif):

ldapsearch -x -H ldap://openldap.openldap.svc.cluster.local:389 -D "uid=foo,dc=example,dc=org" -w foo -b dc=example,dc=org '(uid=foo)'

The user is able to bind but the query result is empty.
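The empty result on a successful bind smells like slapd ACLs; a hedged sketch of an olcAccess modification that would grant authenticated users read access (the database DN and this chart's defaults are assumptions, untested):

```ldif
dn: olcDatabase={2}mdb,cn=config
changetype: modify
replace: olcAccess
olcAccess: {0}to attrs=userPassword by self write by anonymous auth by * none
olcAccess: {1}to * by self read by users read by * none
```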

jhoblitt commented 1 year ago

An integration test is working in #11091.

github-actions[bot] commented 8 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

jhoblitt commented 8 months ago

2023 is whizzing by...

github-actions[bot] commented 6 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

github-actions[bot] commented 5 months ago

This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.