Closed killermoehre closed 11 months ago
Are you using self-signed SSL certificates? I reproduced this issue in the Cloud in a Box environment and fixed it by adding the following 2 parameters to the Ceph RGW configuration:
"rgw keystone verify ssl": "false"
"rgw verify ssl": "false"
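If it helps to locate where these belong: in a ceph-ansible/OSISM setup they typically live under ceph_conf_overrides in the RGW client section; a sketch following the section naming used later in this thread:

```yaml
# Sketch: both parameters in the RGW client section of ceph_conf_overrides
ceph_conf_overrides:
  "client.rgw.{{ hostvars[inventory_hostname]['ansible_hostname'] }}.rgw0":
    "rgw keystone verify ssl": "false"
    "rgw verify ssl": "false"
```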
The logs of the Ceph RGW service are part of /var/log/syslog.
At the moment there is no better way to define ceph_rgw_hosts.
I added the option to ignore SSL and redeployed with osism apply ceph-rgws, but still the same error. I don't find the request ID with

osism console --type ansible ceph-rgw
become true
grep -r tx000008182f752f46d05bb /var/log/

either.
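If the deployment logs to journald instead of flat files (common with containerized Ceph services), the request ID can be searched there as well; a minimal sketch:

```shell
# Search journald for an RGW request ID
# (journalctl access may require sudo, depending on the host setup)
find_request() { journalctl --no-pager | grep -F "$1"; }
# Example (not run here):
# find_request tx000008182f752f46d05bb
```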
Can you verify that the ceph_rgw user is usable?
Yes, the user is usable, I was able to log in with it via horizon.
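Besides Horizon, a quick CLI check is to request a Keystone token as that user; a sketch, assuming the Keystone URL, domain, and service project from the configuration in this thread, with the password supplied via the environment:

```shell
# Try to obtain a Keystone token as the ceph_rgw service user.
# Auth URL, domain, and project names are taken from the RGW config
# shared in this thread; the password variable is an assumption.
check_rgw_user() {
  openstack token issue \
    --os-auth-url https://api-intern.internal.domain.tld:5000/v3 \
    --os-identity-api-version 3 \
    --os-username ceph_rgw \
    --os-user-domain-name Default \
    --os-project-name service \
    --os-project-domain-name Default \
    --os-password "$CEPH_RGW_KEYSTONE_PASSWORD"
}
# Run manually (not executed here):
# check_rgw_user
```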
Can you share the Ceph RGW configuration? That's all I can think of at the moment what it could be. And in both osism/testbed and osism/cloud-in-a-box it works as documented.
##########################
# custom
ceph_conf_overrides:
  global:
    osd pool default size: 3
  mon:
    mon allow pool delete: true
  "client.rgw.{{ hostvars[inventory_hostname]['ansible_hostname'] }}.rgw0":
    "rgw content length compat": "true"
    "rgw enable apis": "swift, s3, swift_auth, admin"
    "rgw keystone accepted roles": "_member_, member, admin"
    "rgw keystone accepted admin roles": "admin"
    "rgw keystone admin domain": "default"
    "rgw keystone admin password": "{{ ceph_rgw_keystone_password }}"
    "rgw keystone admin project": "service"
    "rgw keystone admin tenant": "service"
    "rgw keystone admin user": "ceph_rgw"
    "rgw keystone api version": "3"
    "rgw keystone url": "https://api-intern.internal.domain.tld:5000"
    "rgw keystone verify ssl": "false"
    "rgw keystone implicit tenants": "true"
    "rgw s3 auth use keystone": "true"
    "rgw swift account in url": "true"
    "rgw swift versioning enabled": "true"
    "rgw verify ssl": "false"
    "rgw enforce swift acls": "true"
Can you disable the swift_auth API and remove the member role?
"rgw enable apis": "swift, s3, admin"
API disabled and _member_ removed, but still no luck.
Next, I would check whether requests from Ceph RGW arrive at the Keystone service at all, and whether there are any errors in the logs. It is best to set Keystone to debug. Other than that, I can't think of anything else at the moment; the config itself looks good.
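For the Keystone debug step, in a Kolla-based deployment such as OSISM this is typically done with an INI override; the exact overlay file path depends on the environment, so treat this as a sketch:

```ini
# keystone.conf override fragment: enable debug logging
[DEFAULT]
debug = True
```

After changing it, Keystone needs to be reconfigured/redeployed for the setting to take effect.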
Ha, found the issue. You see, our internal endpoint is not TLS secured, so the connection failed in a non-obvious way. My bad.
Now the issue is that after creating a container via Horizon or the CLI, you can't toggle its visibility (public or private) in Horizon. Also, in the default OSISM distribution there is no swift client included to check this via the CLI (openstack container set can't do this).
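For reference, the public/private toggle maps to the container read ACL, which the swift client can set; a sketch with an illustrative container name:

```shell
# Toggle container visibility via the Swift read ACL
# (assumes the swift CLI and cloud credentials are available)
make_public()  { swift post --read-acl '.r:*,.rlistings' "$1"; }
make_private() { swift post --read-acl '' "$1"; }
# Example (not run here):
# make_public my-container
```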
Doh :)
Horizon: this is a well-known issue in Horizon that has been there for a long time. OpenStack Client: we use the OpenStack Client + SDK (which tries to get rid of most of the python-*client packages). I will install it.
In most use cases we do not work with the Swift API but with the S3 API. Probably this makes the most sense on your side as well, especially when integrating with Kubernetes and services like Velero.
You can simply use e.g. the MinIO CLI to modify the visibility of containers via the S3 API. The required credentials can be created with the openstack ec2 credentials commands.
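A sketch of that workflow with the MinIO client (mc); the endpoint variable, alias, and bucket names are illustrative, and the access/secret keys come from the openstack ec2 credentials create output:

```shell
# Point the MinIO client at the RGW S3 endpoint using EC2-style credentials.
# RGW_ENDPOINT/ACCESS_KEY/SECRET_KEY are assumptions; the keys come from:
#   openstack ec2 credentials create
s3_alias_setup() {
  mc alias set rgw "$RGW_ENDPOINT" "$ACCESS_KEY" "$SECRET_KEY"
}
# Toggle bucket visibility via the anonymous access policy
bucket_public()  { mc anonymous set download "rgw/$1"; }
bucket_private() { mc anonymous set none "rgw/$1"; }
```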
Looks like swift is already available in the openstackclient container image:
dragon@testbed-manager:~$ docker exec -it openstackclient swift
usage: swift [--version] [--help] [--os-help] [--snet] [--verbose]
[--debug] [--info] [--quiet] [--auth <auth_url>]
[--auth-version <auth_version> |
--os-identity-api-version <auth_version> ]
[--user <username>]
[--key <api_key>] [--retries <num_retries>]
[--os-username <auth-user-name>]
[--os-password <auth-password>]
[--os-user-id <auth-user-id>]
[--os-user-domain-id <auth-user-domain-id>]
[...]
But it doesn't work with --os-cloud admin. Also, a wrapper in /usr/local/bin would be really nice.
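Such a wrapper could look like this; the sketch writes to a temporary path so it stays side-effect free, but the idea would be to install it as e.g. /usr/local/bin/swift:

```shell
# Sketch of a host-side wrapper that forwards to the containerized swift CLI
cat > /tmp/swift-wrapper <<'EOF'
#!/usr/bin/env bash
# Forward all arguments to swift inside the openstackclient container
exec docker exec -it openstackclient swift "$@"
EOF
chmod +x /tmp/swift-wrapper
bash -n /tmp/swift-wrapper  # quick syntax check
```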
For applications and such, using the S3 API in this case is totally fine, no worries. It's more for the local admin to do a quick check before announcing: "Yep, feature ready, dear customer. Please use!"
So, as I told you earlier, I want to deploy Ceph RADOS Gateways and integrate them into OpenStack as the container backend instead of Swift. I'm following this documentation. (Btw, is there a smarter way to get the ceph_rgw_hosts instead of listing them?)

Doing a nifty

openstack --os-cloud admin container list --debug

to check if the setup is right, I get the following error message. I'm not sure which log I have to check for this access denied error. I assume it's something in ceph, right? Where would I search for the Request-ID?

Side effect in Horizon: you get logged out immediately when opening the container view.