osism / issues

This repository is used for bug reports that are cross-project or not bound to a specific repository (or to an unknown repository).
https://www.osism.tech

S3: AccessDenied (RGW Keystone Integration) #1178

Closed Nils98Ar closed 15 hours ago

Nils98Ar commented 1 week ago

I am trying to communicate with the S3 API using s3cmd and this config (access and secret key from openstack ec2 credentials create --project <project name>):

[default]
access_key = <access key>
secret_key = <secret key>
host_base = https://<api ext domain>:6780
host_bucket = https://<api ext domain>:6780

I always get an AccessDenied, whereas in Horizon I can create containers without problems:

root@li01:~# s3cmd ls
ERROR: S3 error: 403 (AccessDenied)
root@li01:~# s3cmd mb s3://test2
ERROR: Access to bucket 'test2' was denied
ERROR: S3 error: 403 (AccessDenied)
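When chasing a 403 like this, s3cmd can show the exact request it signs and the raw error body RGW returns, which helps distinguish a signature mismatch from a Keystone rejection (a diagnostic sketch using standard s3cmd flags):

```shell
# Sketch: re-run the failing command with debug output to see the
# signed request headers and the raw XML error body from RGW.
s3cmd --debug ls
```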

The ceph_conf_overrides contains:

  "client.rgw.{{ hostvars[inventory_hostname]['ansible_hostname'] }}.rgw0":
    "rgw content length compat": "true"
    "rgw enable apis": "swift, s3, admin"
    "rgw keystone accepted roles": "member, admin"
    "rgw keystone accepted admin roles": "admin"
    "rgw keystone admin domain": "default"
    "rgw keystone admin password": "{{ ceph_rgw_keystone_password }}"
    "rgw keystone admin project": "service"
    "rgw keystone admin tenant": "service"
    "rgw keystone admin user": "ceph_rgw"
    "rgw keystone api version": "3"
    "rgw keystone url": "https://<int api domain>:5000"
    "rgw keystone verify ssl": "false"
    "rgw verify ssl": "false"
    "rgw keystone implicit tenants": "true"
    "rgw s3 auth use keystone": "true"
    "rgw swift account in url": "true"
    "rgw swift versioning enabled": "true"

Ceph RGW log says:

2024-11-07T14:09:26.525+0100 7fe72d0d9700  1 ====== starting new request req=0x7fe62e6db710 =====
2024-11-07T14:09:26.525+0100 7fe72b8d6700  0 req 6810719075705092821 0.000000000s s3:list_buckets No stored secret string, cache miss
2024-11-07T14:09:26.909+0100 7fe72b0d5700  0 req 6810719075705092821 0.383997619s s3:list_buckets No stored secret string, cache miss
2024-11-07T14:09:26.925+0100 7fe72b0d5700  1 req 6810719075705092821 0.399997503s op->ERRORHANDLER: err_no=-1 new_err_no=-1
2024-11-07T14:09:26.925+0100 7fe7288d0700  1 ====== req done req=0x7fe62e6db710 op status=0 http_status=403 latency=0.399997503s ======
2024-11-07T14:09:26.925+0100 7fe7288d0700  1 beast: 0x7fe62e6db710: 10.99.8.22 - - [07/Nov/2024:14:09:26.525 +0100] "GET / HTTP/1.1" 403 191 - - - latency=0.39999

Keystone log says:

2024-11-07 14:10:54.280 1016 ERROR keystone.common.fernet_utils [None req-c384f767-2328-43a9-8dbe-3cc69c4a09bc e29f7111ee5241a2b5dc8d151507fbbc 91c343c9091f4b8d9aacd4262f2560ae - - default default] Either [credential] key_repository does not exist or Keystone does not have sufficient permission to access it: /etc/keystone/credential-keys/
2024-11-07 14:10:54.285 1016 WARNING keystone.server.flask.application [None req-c384f767-2328-43a9-8dbe-3cc69c4a09bc e29f7111ee5241a2b5dc8d151507fbbc 91c343c9091f4b8d9aacd4262f2560ae - - default default] Authorization failed. The request you have made requires authentication. from 10.99.5.12: keystone.exception.Unauthorized: The request you have made requires authentication.

Any idea what I am doing wrong? I think this worked in the past, but we did not really need it until now. The only thing I know is that we used the user swift in the past; the new user ceph_rgw exists and looks good.

dragon@manager01:/opt/configuration/environments/ceph$ openstack endpoint list | grep -i swift
| 27f8e3874e42473cb74b71cf3d672172 | RegionOne | swift        | object-store   | True    | internal  | https://<api int domain>:6780/swift/v1/AUTH_%(project_id)s |
| e6dd6ad800494212a790e28828f1cfad | RegionOne | swift        | object-store   | True    | public    | https://<api ext domain>:6780/swift/v1/AUTH_%(project_id)s     |
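The AUTH_%(project_id)s part of these endpoint URLs is a literal template placeholder: Keystone's catalog expands it per request using Python %-style dict formatting, so each project gets its own account path. A minimal sketch of that expansion (domain and project ID are made up):

```python
# Keystone expands %(project_id)s in catalog endpoint URLs with
# Python %-style dict formatting; hypothetical domain and project ID.
template = "https://api.example.com:6780/swift/v1/AUTH_%(project_id)s"
url = template % {"project_id": "abc123"}
print(url)  # https://api.example.com:6780/swift/v1/AUTH_abc123
```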

dragon@manager01:/opt/configuration/environments/ceph$ openstack service list | grep -i swift
| 276cdb2a0d174acf8c2355909cc9610e | swift       | object-store   |

dragon@manager01:/opt/configuration/environments/ceph$ openstack user list | grep -i ceph_rgw
| e29f7111ee5241a2b5dc8d151507fbbc | ceph_rgw                 |

dragon@manager01:/opt/configuration/environments/ceph$ openstack role assignment list --user ceph_rgw --project service --names
+-------+------------------+-------+-----------------+--------+--------+-----------+
| Role  | User             | Group | Project         | Domain | System | Inherited |
+-------+------------------+-------+-----------------+--------+--------+-----------+
| admin | ceph_rgw@Default |       | service@Default |        |        | False     |
+-------+------------------+-------+-----------------+--------+--------+-----------+
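Since EC2 credentials are scoped to a single project, it is worth checking which project the access key you put into s3cmd is actually bound to (a sketch; requires access to the cloud, column names per python-openstackclient):

```shell
# Sketch: list EC2 credentials with their project scoping and confirm
# the Project ID matches the project you are targeting with s3cmd.
openstack ec2 credentials list -c Access -c "Project ID"
```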
fzakfeld commented 17 hours ago

Do you have credential encryption configured in any way? (AFAIK this is enabled by default)

https://docs.openstack.org/keystone/pike/admin/identity-credential-encryption.html

Can you access /etc/keystone/credential-keys/ from the keystone or keystone_fernet container?

Nils98Ar commented 17 hours ago

@fzakfeld Thanks for your response!

No, /etc/keystone/credential-keys/ does not exist in the keystone or keystone_fernet container. Should it?

Nils98Ar commented 17 hours ago
dragon@control01:~$ docker exec keystone ls /etc/keystone
default_catalog.templates
fernet-keys
keystone.conf
logging.conf.sample
README.txt
sso_callback_template.html
dragon@control01:~$ docker exec keystone_fernet ls /etc/keystone
default_catalog.templates
fernet-keys
keystone.conf
logging.conf.sample
README.txt
sso_callback_template.html
Nils98Ar commented 17 hours ago

Seems to be expected in kolla-ansible: https://bugs.launchpad.net/kolla/train/+bug/1863643

Nils98Ar commented 16 hours ago

So the problem seems to be:

2024-11-07 14:10:54.285 1016 WARNING keystone.server.flask.application [None req-c384f767-2328-43a9-8dbe-3cc69c4a09bc e29f7111ee5241a2b5dc8d151507fbbc 91c343c9091f4b8d9aacd4262f2560ae - - default default] Authorization failed. The request you have made requires authentication. from 10.99.5.12: keystone.exception.Unauthorized: The request you have made requires authentication.
Nils98Ar commented 16 hours ago

The config section [client.rgw.storage01.rgw0] on the storage nodes is duplicated. Could this be a problem? But the question is why it is duplicated...

[client.rgw.storage01.rgw0]
host = storage01
keyring = /var/lib/ceph/radosgw/ceph-rgw.storage01.rgw0/keyring
log file = /var/log/ceph/ceph-rgw-storage01.rgw0.log
rgw frontends = beast endpoint=X.X.X.X:8081
rgw thread pool size = 512

[mon]
mon allow pool delete = True

[client.rgw.storage01.rgw0]
rgw content length compat = true
rgw enable apis = swift, s3, admin
rgw keystone accepted roles = member, admin
rgw keystone accepted admin roles = admin
rgw keystone admin domain = default
rgw keystone admin password = <password>
rgw keystone admin project = service
rgw keystone admin tenant = service
rgw keystone admin user = ceph_rgw
rgw keystone api version = 3
rgw keystone url = https://<api domain>:5000
rgw keystone verify ssl = false
rgw verify ssl = false
rgw keystone implicit tenants = true
rgw s3 auth use keystone = true
rgw swift account in url = true
rgw swift versioning enabled = true
fzakfeld commented 16 hours ago

It is duplicated for us as well, but I believe this is just due to overlay files and should not cause issues.

Nils98Ar commented 15 hours ago

Stupid mistake. Everything was configured correctly. EC2 credentials created by the admin are not valid for all projects, only for the projects where the admin explicitly has the member or admin role.
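The resolution described above can be sketched as follows (project name is a placeholder; requires access to the cloud):

```shell
# Sketch: give the admin user an explicit role on the target project,
# then create EC2 credentials scoped to that project.
openstack role add --user admin --project <project name> member
openstack ec2 credentials create --project <project name>
```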