Description of your changes

Previously, min-replicas applied to the number of backends on which the bucket should be updated before it can be considered Synced. With this change, min-replicas now applies to the number of backends on which the bucket must be created before it can be considered Ready. A bucket is only considered Synced when it has been successfully updated on all backends.
The reason for the change is that with the introduction of more sub-resources such as Versioning (and planned Locking), it is problematic to consider a Bucket Synced when sub-resources such as Versioning have not been enforced on all backends. We need to avoid a situation where, for example, a Bucket is considered Synced, but of its three backends Versioning has been enabled on backends 1 and 2, while Locking has been enabled on backends 2 and 3. If the Bucket is then paused, it is stuck in this inconsistent state yet still appears Synced to the user.
The easiest remedy is to simplify this process and only consider a bucket Synced once it is available on all backends, and only then pause the bucket (should auto-pause be enabled).
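To make the new semantics concrete, here is a minimal Go sketch of the Ready/Synced distinction described above. The `backendStatus` type and the `ready`/`synced` helpers are illustrative assumptions for this example, not the provider's actual implementation.

```go
package main

import "fmt"

// backendStatus is a hypothetical per-backend view of a bucket;
// the field names are illustrative only.
type backendStatus struct {
	created bool // the bucket exists on the backend
	updated bool // the bucket and its sub-resources (e.g. Versioning) are enforced
}

// ready reports whether the bucket is Ready: it must have been
// created on at least minReplicas backends.
func ready(backends []backendStatus, minReplicas int) bool {
	n := 0
	for _, b := range backends {
		if b.created {
			n++
		}
	}
	return n >= minReplicas
}

// synced reports whether the bucket is Synced: it must have been
// successfully updated on ALL backends, so sub-resources are
// enforced everywhere before any auto-pause takes effect.
func synced(backends []backendStatus) bool {
	for _, b := range backends {
		if !b.updated {
			return false
		}
	}
	return true
}

func main() {
	backends := []backendStatus{
		{created: true, updated: true},
		{created: true, updated: false}, // e.g. Versioning not yet enforced here
		{created: false, updated: false},
	}
	fmt.Println(ready(backends, 2)) // true: created on 2 of 3 backends
	fmt.Println(synced(backends))   // false: not yet updated on all backends
}
```

Under this scheme the bucket in the example would report Ready (min-replicas of 2 satisfied) but not Synced, and so would not be auto-paused.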
I have:
[x] Run make reviewable to ensure this PR is ready for review.
[x] Run make ceph-chainsaw to validate these changes against Ceph. This step is not always necessary. However, for changes related to S3 calls it is sensible to validate against an actual Ceph cluster. Localstack is used in our CI Chainsaw suite for convenience and there can be disparity in S3 behaviours between it and Ceph. See docs/TESTING.md for information on how to run tests against a Ceph cluster.
[ ] Added backport release-x.y labels to auto-backport this PR if necessary.
How has this code been tested
Updated the Chainsaw suite and a large number of unit tests.