jhoblitt opened 1 month ago
To be pedantic: as `userID` is ignored, a new rgw user is being created as the owner of each greenfield bucket.

I believe this is important functionality to have as part of Rook, as it doesn't appear that the COSI CRDs have a mechanism for managing bucket quotas. E.g., `bucketclaim.spec` doesn't have a field for passing down backend-specific parameters.
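For comparison, a minimal COSI `BucketClaim` sketch (v1alpha1 at the time of writing); its spec offers only fields like `bucketClassName` and `protocols`, with no passthrough for backend-specific parameters such as quotas. Names here are illustrative:

```yaml
# Minimal COSI BucketClaim sketch; name and class are illustrative.
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClaim
metadata:
  name: my-bucket-claim
spec:
  bucketClassName: my-bucket-class
  protocols:
    - S3
  # No field here for backend-specific parameters (e.g. quota sizes).
```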
I was able to reproduce this issue on minikube.
I want to have a discussion about whether this behavior should realistically be allowed/enabled. In COSI discussions, we have explicitly disallowed exposing an opaque API to users. A user-facing opaque field API makes for a leaky abstraction that limits workload portability. It is also a potential concern for admins where security and/or resource constraints are at play. Allowing a user to modify backend params isn't something that many admins would want to allow: as a user, I could potentially give myself elevated permissions or an unrealistic space quota.
I also understand that there might be existing users of this feature whom we need to keep working, for legacy reasons. If that's the case, I think our hands are tied.

But if this feature never worked, or if no existing users are complaining, then I would be more inclined to take the approach that exposing it to users is a bug, and it should be un-exposed.
As an administrator, I need to pre-create buckets for end users and set quota and policy on them: policy in terms of preventing orphaned multi-part uploads, read/write access, etc. "End users" do not have access to the k8s cluster running rgw except via s3. I would like to use Rook CRs as the administrative vehicle for configuration management.

Ceph makes this difficult, as there is no `radosgw-admin bucket create` command, requiring usage of both the admin and s3 APIs to create a bucket and set a quota/policy on it. There are other options, such as using terraform for setting quotas, but that requires manual triggering. As everything else is managed as a k8s CR, it makes a lot of sense for policy/quota to be handled via the same mechanism (and Rook is internally already creating rgw users to be able to create buckets and set policy on them).
> As a user, I could potentially give myself elevated permissions or an unrealistic space quota

If an end user can create `CephObjectStoreUser` CRs, they are already able to grant themselves administrative privileges.
@travisn @BlaineEXE As discussed at the community meeting this morning, I looked into trying to set a bucket quota via an `sc` and was unable to make it work. I wasn't able to find any code that looks up `sc` parameters for quota sizes.

This is where the quota size is read from the `obc`: https://github.com/rook/rook/blob/master/pkg/operator/ceph/object/bucket/provisioner.go#L586

Then that value is used to set a quota on the user: https://github.com/rook/rook/blob/master/pkg/operator/ceph/object/bucket/provisioner.go#L621-L622
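For reference, a minimal sketch of what that user-scoped quota call boils down to, assuming go-ceph's rgw admin API. This is illustrative rather than Rook's actual code; the endpoint and credentials are placeholders:

```go
// Sketch: set a user-scoped quota via go-ceph's rgw admin API.
package main

import (
	"context"
	"net/http"

	"github.com/ceph/go-ceph/rgw/admin"
)

func main() {
	// Endpoint and credentials are placeholders.
	api, err := admin.New("http://rgw.example.com", "ACCESS_KEY", "SECRET_KEY", &http.Client{})
	if err != nil {
		panic(err)
	}

	enabled := true
	maxSize := int64(1) << 40 // 1Ti, matching the OBC's additionalConfig maxSize
	// quota-type "user" updates the user_quota block shown in the output below.
	err = api.SetUserQuota(context.TODO(), admin.QuotaSpec{
		UID:       "obc-rook-ceph-test2-bf1699bf-5b42-4275-81cf-f7c4965e4522",
		QuotaType: "user",
		Enabled:   &enabled,
		MaxSize:   &maxSize,
	})
	if err != nil {
		panic(err)
	}
}
```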
This shows the 1Ti quota is applied as a user quota:
```
bash-5.1$ radosgw-admin user info --uid obc-rook-ceph-test2-bf1699bf-5b42-4275-81cf-f7c4965e4522
{
    "user_id": "obc-rook-ceph-test2-bf1699bf-5b42-4275-81cf-f7c4965e4522",
    "display_name": "obc-rook-ceph-test2-bf1699bf-5b42-4275-81cf-f7c4965e4522",
    "email": "",
    "suspended": 0,
    "max_buckets": 1,
    "subusers": [],
    "keys": [
        {
            "user": "obc-rook-ceph-test2-bf1699bf-5b42-4275-81cf-f7c4965e4522",
            "access_key": "Y3SVXQ9DZ9K23WINND03",
            "secret_key": "q1AoNZ3hv3l6IO3TPlXUQCcjdPj1uSOsUCvp8fNK"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": true,
        "check_on_raw": false,
        "max_size": 1099511627776,
        "max_size_kb": 1073741824,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}
```
And there is no quota set on the bucket itself:
```
bash-5.1$ radosgw-admin bucket stats --bucket test2
{
    "bucket": "test2",
    "num_shards": 401,
    "tenant": "",
    "zonegroup": "c61cc46b-669a-476c-abd0-cf176acd0d90",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "",
        "data_extra_pool": "",
        "index_pool": ""
    },
    "id": "7f65d17d-4802-4399-9a2e-4aefc8400b5f.1041172.1",
    "marker": "7f65d17d-4802-4399-9a2e-4aefc8400b5f.1041172.1",
    "index_type": "Normal",
    "versioned": false,
    "versioning_enabled": false,
    "object_lock_enabled": false,
    "mfa_enabled": false,
    "owner": "obc-rook-ceph-test2-bf1699bf-5b42-4275-81cf-f7c4965e4522",
    "ver": "0#1,1#1,2#1,...,399#1,400#1",
    "master_ver": "0#0,1#0,2#0,...,399#0,400#0",
    "mtime": "2024-10-08T17:46:26.866532Z",
    "creation_time": "2024-10-08T17:46:26.860370Z",
    "max_marker": "0#,1#,2#,...,399#,400#",
    "usage": {},
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    }
}
```
What I am looking for would ultimately be a call to https://github.com/ceph/go-ceph/blob/master/rgw/admin/bucket_quota.go#L10, as I need to allow access from a single rgw user to multiple buckets and would not be using the `obc`-generated user.
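A sketch of what that desired bucket-scoped call might look like, again assuming go-ceph's rgw admin API (the `bucket_quota.go` linked above); endpoint and credentials are placeholders:

```go
// Sketch: set a quota on one individual bucket via go-ceph's rgw admin API.
package main

import (
	"context"
	"net/http"

	"github.com/ceph/go-ceph/rgw/admin"
)

func main() {
	// Endpoint and credentials are placeholders.
	api, err := admin.New("http://rgw.example.com", "ACCESS_KEY", "SECRET_KEY", &http.Client{})
	if err != nil {
		panic(err)
	}

	enabled := true
	maxSize := int64(1) << 40 // 1Ti
	// Unlike SetUserQuota, this call takes a Bucket and scopes the quota to
	// that single bucket rather than to everything the user owns.
	err = api.SetIndividualBucketQuota(context.TODO(), admin.QuotaSpec{
		UID:     "obc-rook-ceph-test2-bf1699bf-5b42-4275-81cf-f7c4965e4522",
		Bucket:  "test2",
		Enabled: &enabled,
		MaxSize: &maxSize,
	})
	if err != nil {
		panic(err)
	}
}
```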
We have a CI test that checks `additionalConfig` quota. The way that test works is that it sets the additional config on the OBC itself, rather than on the StorageClass as I erroneously believed before. Sorry about that confusion.

This design and implementation have been part of Rook since before I took ownership of this area, so we can't back out of the design decision on the OBC side. I most definitely take it to be a poor choice, from an admin security perspective, to allow `additionalConfig` on OBCs themselves. Since CI is validating this use case, I presume it does continue to work (unless I've misread some detail of your report).

I guess the thing that is not working is setting quotas via StorageClass, which from a security perspective should be the preferred usage. Does that track with what you see, @jhoblitt?
The thing that isn't working is setting an individual bucket-level quota. This is different from configuring the `bucket_quota` on the user in that it is scoped to the individual bucket (and it also isn't configurable from an `sc`/`obc`). The API call is documented, but doing this with `radosgw-admin` is undocumented. It is also confusing in that an individual bucket quota is set using the same command as a user-level bucket quota (it even requires the `--uid` flag), but with the addition of an extra `--bucket <foo>` flag.

Setting an individual bucket quota with `radosgw-admin`, e.g.:
```
bash-5.1$ radosgw-admin quota set --uid=obc-rook-ceph-test2-bf1699bf-5b42-4275-81cf-f7c4965e4522 --quota-scope=bucket --max-size=1Ti --bucket test2
bash-5.1$ radosgw-admin quota enable --quota-scope=bucket --uid=obc-rook-ceph-test2-bf1699bf-5b42-4275-81cf-f7c4965e4522 --bucket test2
bash-5.1$ radosgw-admin bucket stats --bucket test2 | jq .bucket_quota
{
  "enabled": true,
  "check_on_raw": false,
  "max_size": 1099511627776,
  "max_size_kb": 1073741824,
  "max_objects": -1
}
bash-5.1$ radosgw-admin user info --uid=obc-rook-ceph-test2-bf1699bf-5b42-4275-81cf-f7c4965e4522 | jq .bucket_quota
{
  "enabled": false,
  "check_on_raw": false,
  "max_size": -1,
  "max_size_kb": 0,
  "max_objects": -1
}
```
Setting that on either the OBC or SC would work. My preference would be to configure bucket-level quotas on the OBC, as I need to set a different quota size for each bucket. Rather than having to create an `sc` dedicated to every `obc`, it seems conceptually simpler to reuse the same `sc`.

I don't think forcing a fixed quota size via an `sc` is a very effective administrative control. If the desire is to restrict a user to consuming a total of 1Ti, that can't be enforced via an `sc`: the user could still create multiple `obc`s using the same `sc` and be able to consume multiples of any quota value set on the `sc`. If limiting the total storage footprint were my goal, I would provide the user with a `CephObjectStoreUser` with a user-level quota set.

I have tried and failed in the past to manage PVC usage with `ResourceQuota`, and the lesson was that quotas which aren't aggregated across resources and namespaces are ineffective.
Oh, I see what is happening here. Yeah, OBCs don't set bucket-level quotas. OBCs set quotas on the user instead. So it's not that there's no effect. The effect just isn't what you are expecting.
Rook's OBC controller sets `user_quota`, not `bucket_quota`, which I can see here in the unit tests: https://github.com/rook/rook/blob/a84daf9bf0b0d47e8cc886e9f017d6c528021426/pkg/operator/ceph/object/bucket/provisioner_test.go#L230-L245
I have some concerns that if Rook changes the implementation for OBCs, there could be some unintended side effects. At any rate, before we change it, we will have to do some research to make sure existing OBC users won't be affected by a switch from one quota type to another.
Indeed. `.spec.additionalConfig` does have an effect, but not the one that I expected or was looking for when I opened this issue. I'll update the issue title, since this has turned out to be a feature request rather than a bug.

I too am hesitant to change the existing user-level quota behavior. I think it would probably be best to leave it as-is and to add a new field for bucket-scope quota(s).
Yeah. I think a reasonable path forward would be to add `bucket*` equivalents of the current `maxObjects`/`maxSize` additionalConfig options, e.g. `bucketMaxSize`. That isn't ideal from an API/naming standpoint, but it will be straightforward and will also ensure that the many, many existing OBC users won't be negatively affected by adding the support.
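For illustration only, an OBC using such a key might look like the following; `bucketMaxSize` is the hypothetical new key and does not exist today, and the other names/values are examples:

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: test2
spec:
  generateBucketName: test2
  storageClassName: rook-ceph-bucket
  additionalConfig:
    maxSize: "1Ti"       # existing key: user-scoped quota on the generated user
    bucketMaxSize: "1Ti" # proposed key: quota scoped to this bucket (hypothetical)
```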
@BlaineEXE Do you have any guidance on how the new `.additionalConfig` keys should be tested? It appears that almost all of the existing OBC coverage is in https://github.com/rook/rook/blob/master/tests/integration/ceph_object_test.go. Should the integration tests be extended, or should I create a new canary test for OBC?
Unit tests should be sufficient. The object test already verifies that configuration of quotas takes effect within Ceph and controls end-users as desired.
**Is this a bug report or feature request?**

**Deviation from expected behavior:**

Update: `.spec.additionalConfig` sets a quota on the auto-generated user for the bucket. This was not the behavior I expected or was looking for, nor does it work for my use case of using multiple buckets with a single rgw user. The original text of the issue is left intact below to provide context for the discussion that followed.

Setting keys in `objectbucketclaim.spec.additionalConfig` seems to have no effect. There is a test that checks if `additionalConfig` values are passed through to the `objectbucket`, but there don't appear to be any tests that check for parameters on the rgw bucket.

**Expected behavior:**

Setting `maxObjects` and `maxSize` would result in `bucket_quota` values being set and quotas being enabled. Setting `userID` would result in the bucket `owner` being set to the uid of the rgw user.

**How to reproduce it (minimal and precise):**
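E.g., create an OBC along these lines (illustrative, reconstructed rather than taken verbatim from the original report) and then inspect the results with `radosgw-admin bucket stats` and `radosgw-admin user info` as shown above:

```yaml
# Illustrative OBC; names and values are examples.
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: test2
spec:
  generateBucketName: test2
  storageClassName: rook-ceph-bucket
  additionalConfig:
    maxObjects: "10000"
    maxSize: "1Ti"
    userID: "my-existing-user"
```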
**File(s) to submit:**

**Logs to submit:**

**Cluster Status to submit:**

* Output of `kubectl` commands, if necessary
* Rook version (use `rook version` inside of a Rook Pod): 1.15.2
* Storage backend version (e.g. for ceph do `ceph -v`):