Do you have credential encryption configured in any way? (AFAIK this is enabled by default.)
https://docs.openstack.org/keystone/pike/admin/identity-credential-encryption.html
Can you access /etc/keystone/credential-keys/ from the keystone or keystone_fernet container?
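For reference, a check along those lines from the controller host might look like this (a sketch; the container names keystone and keystone_fernet are taken from this deployment):

docker exec keystone ls /etc/keystone/credential-keys/          # errors if the directory does not exist
docker exec keystone_fernet ls /etc/keystone/credential-keys/   # same check in the fernet container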
@fzakfeld Thanks for your response!
No, /etc/keystone/credential-keys/ does not exist in the keystone or keystone_fernet container. Should it?
dragon@control01:~$ docker exec keystone ls /etc/keystone
default_catalog.templates
fernet-keys
keystone.conf
logging.conf.sample
README.txt
sso_callback_template.html
dragon@control01:~$ docker exec keystone_fernet ls /etc/keystone
default_catalog.templates
fernet-keys
keystone.conf
logging.conf.sample
README.txt
sso_callback_template.html
Seems to be expected in kolla-ansible: https://bugs.launchpad.net/kolla/train/+bug/1863643
So the problem seems to be:
2024-11-07 14:10:54.285 1016 WARNING keystone.server.flask.application [None req-c384f767-2328-43a9-8dbe-3cc69c4a09bc e29f7111ee5241a2b5dc8d151507fbbc 91c343c9091f4b8d9aacd4262f2560ae - - default default] Authorization failed. The request you have made requires authentication. from 10.99.5.12: keystone.exception.Unauthorized: The request you have made requires authentication.
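One way to narrow this down could be to check whether the RGW service user can authenticate against Keystone at all, using the same values that appear in ceph.conf (a sketch; the /v3 suffix and the default domain names are assumptions based on the config below):

openstack --os-auth-url https://<api domain>:5000/v3 \
  --os-username ceph_rgw --os-password <password> \
  --os-project-name service \
  --os-user-domain-name default --os-project-domain-name default \
  token issue   # succeeds only if the service user credentials themselves are valid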
The config section [client.rgw.storage01.rgw0] on the storage nodes is duplicated. Could this be a problem? But then the question is why it is duplicated in the first place...
[client.rgw.storage01.rgw0]
host = storage01
keyring = /var/lib/ceph/radosgw/ceph-rgw.storage01.rgw0/keyring
log file = /var/log/ceph/ceph-rgw-storage01.rgw0.log
rgw frontends = beast endpoint=X.X.X.X:8081
rgw thread pool size = 512
[mon]
mon allow pool delete = True
[client.rgw.storage01.rgw0]
rgw content length compat = true
rgw enable apis = swift, s3, admin
rgw keystone accepted roles = member, admin
rgw keystone accepted admin roles = admin
rgw keystone admin domain = default
rgw keystone admin password = <password>
rgw keystone admin project = service
rgw keystone admin tenant = service
rgw keystone admin user = ceph_rgw
rgw keystone api version = 3
rgw keystone url = https://<api domain>:5000
rgw keystone verify ssl = false
rgw verify ssl = false
rgw keystone implicit tenants = true
rgw s3 auth use keystone = true
rgw swift account in url = true
rgw swift versioning enabled = true
It is also duplicated for us, but I believe this is just due to overlay files and should not cause issues.
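To double-check which values the running RGW actually picked up from the duplicated sections, the effective settings of the daemon could be inspected; a sketch, assuming the RGW admin socket is available on storage01:

ceph daemon client.rgw.storage01.rgw0 config show | grep -i keystone   # keystone-related values actually in effect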
Stupid mistake. Everything was configured correctly. EC2 credentials from the admin are not valid for all projects, only for projects where the admin explicitly has the member or admin role.
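For completeness, the sequence that ends up working is roughly the following (a sketch; <project name> is a placeholder, and member could also be admin):

openstack role add --user admin --project <project name> member   # admin needs an explicit role in the target project
openstack ec2 credentials create --project <project name>         # the resulting access/secret key is then accepted by RGW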
I am trying to communicate with the S3 API using s3cmd and this config (access and secret key from openstack ec2 credentials create --project <project name>):
I always get an AccessDenied, whereas in Horizon I can create containers without problems:
The ceph_conf_overrides contains:
Ceph RGW log says:
Keystone log says:
Any idea what I am doing wrong? I think this worked in the past, but we did not really need it until now. The only thing I know is that we used the user swift in the past, but the new user ceph_rgw exists and looks good.
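In case it helps others, a quick command-line test of the S3 endpoint with s3cmd could look like the following (a sketch; <rgw endpoint> and the --no-ssl/path-style choices are assumptions, not the original configuration):

s3cmd --access_key=<ec2 access key> --secret_key=<ec2 secret key> \
      --host=<rgw endpoint> --host-bucket=<rgw endpoint> \
      --no-ssl ls   # should list the project's buckets/containers once the credentials are valid for that project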