Closed ehsan310 closed 4 months ago
I can't see whether the bucket policy feature is implemented yet: https://github.com/rook/rook/pull/5800. @thotz @travisn any idea?
OBCs provision a bucket with security settings that are predefined, and every reconcile will reset those settings. Neither Rook nor the library that provides OBC reconciliation supports policy modifications.
If you want to set your own policies on buckets, you will have to create those buckets yourself rather than use OBCs.
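For reference, managing a bucket outside of OBCs means applying the policy yourself against RGW's S3 API. A minimal sketch of building a public-read policy document; the endpoint URL, credentials, and bucket name in the comments are hypothetical, and the boto3 call is shown only as an illustration of how such a policy would typically be applied:

```python
import json

def public_read_policy(bucket: str) -> str:
    """Build a standard public-read bucket policy document for `bucket`."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": {"AWS": ["*"]},
            "Action": ["s3:GetObject"],
            "Resource": [f"arn:aws:s3:::{bucket}/*"],
        }],
    })

# Applying it with boto3 against an RGW endpoint would look roughly like
# (endpoint and credentials below are placeholders):
#
#   import boto3
#   s3 = boto3.client("s3", endpoint_url="http://rgw.example.com:7480",
#                     aws_access_key_id="...", aws_secret_access_key="...")
#   s3.put_bucket_policy(Bucket="my-bucket",
#                        Policy=public_read_policy("my-bucket"))

if __name__ == "__main__":
    print(public_read_policy("my-bucket"))
```

Since such a bucket is not reconciled by Rook, the policy you set this way will not be overwritten.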
I have had some conversations with RGW developers to see if we can allow for policy modifications with the Container Object Storage Interface (COSI) work (which will replace OBCs in the future), but likely this also requires some new RGW features as well.
I am closing this bug report as wontfix, because we don't plan to support this for OBCs.
Is this a bug report or feature request?
Deviation from expected behavior:
I have a Rook cluster configured with an external Ceph cluster, and it works as expected. I also have RGW configured with some existing buckets, and I configured a storage class to work around the limitation on adding an existing bucket. When the existing-bucket manifest is applied, I expected Rook to create a new OBC user, attach it, and update the bucket policy, which it does. The problem is that the bucket policy it applies overwrites any existing bucket policy, so if the bucket was set to serve public traffic, it suddenly becomes private. Furthermore, every time the manifest is applied, even though no new user is created, the bucket policy is still applied and removes the existing policies.

Expected behavior:
Create the OBC user and append to the bucket policy instead of overwriting the existing one.
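The appending behavior requested here amounts to merging the OBC-generated statements into the bucket's existing policy document rather than replacing it. A rough sketch of such a merge over plain JSON policy documents (the statement contents and user ARN are illustrative, not what Rook actually generates):

```python
import json

def merge_bucket_policies(existing: str, new: str) -> str:
    """Append statements from `new` to `existing`, skipping any statement
    whose Sid already exists, instead of overwriting the whole document."""
    existing_doc = json.loads(existing)
    new_doc = json.loads(new)
    merged = list(existing_doc.get("Statement", []))
    seen = {s.get("Sid") for s in merged if s.get("Sid")}
    for stmt in new_doc.get("Statement", []):
        if stmt.get("Sid") in seen:
            continue  # keep the pre-existing statement with the same Sid
        merged.append(stmt)
    return json.dumps({
        "Version": existing_doc.get("Version", "2012-10-17"),
        "Statement": merged,
    })

# Example: a public-read policy stays intact while the OBC user's grant
# is appended alongside it.
public = json.dumps({"Version": "2012-10-17", "Statement": [
    {"Sid": "PublicRead", "Effect": "Allow", "Principal": {"AWS": ["*"]},
     "Action": ["s3:GetObject"], "Resource": ["arn:aws:s3:::my-bucket/*"]}]})
obc = json.dumps({"Version": "2012-10-17", "Statement": [
    {"Sid": "ObcUser", "Effect": "Allow",
     "Principal": {"AWS": ["arn:aws:iam:::user/obc-user"]},
     "Action": ["s3:*"],
     "Resource": ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"]}]})

merged = json.loads(merge_bucket_policies(public, obc))
print([s["Sid"] for s in merged["Statement"]])  # both PublicRead and ObcUser survive
```

Re-running the merge with the same input is idempotent (duplicate Sids are skipped), which is the property a reconcile loop would need.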
How to reproduce it (minimal and precise):
Create a bucket in an existing external cluster with its bucket policy set to public read, configure Rook with the external cluster and external RGW, add a storage class with the existing bucket name, and then create the OBC. The bucket suddenly becomes private.
File(s) to submit:
- cluster.yaml, if necessary

Logs to submit:
- Crashing pod(s) logs, if necessary
- To get logs, use `kubectl -n <namespace> logs <pod name>`
- When pasting logs, always surround them with backticks or use the "insert code" button from the GitHub UI. Read GitHub documentation if you need help.

Cluster Status to submit:
- Output of kubectl commands, if necessary
- To get the health of the cluster, use `kubectl rook-ceph health`
- To get the status of the cluster, use `kubectl rook-ceph ceph status`
- For more details, see the Rook kubectl Plugin

Environment:
- Kernel (e.g. `uname -a`):
- Rook version (use `rook version` inside of a Rook Pod):
- Storage backend version (e.g. for Ceph do `ceph -v`):
- Kubernetes version (use `kubectl version`):
- Storage backend status (e.g. for Ceph use `ceph health` in the Rook Ceph toolbox):