gecube opened this issue 1 year ago
I ran several tests. It looks like

```yaml
existingObjectReplication:
  status: "Enabled"
```

breaks the XML. The following configurations work:
```yaml
replication:
  role: "arn:aws:iam::966321756598:role/s3-cross-region-replication"
  rules:
    - deleteMarkerReplication:
        status: "Disabled"
      status: "Enabled"
      priority: 1
      filter:
        prefix: "logs/"
      destination:
        bucket: "arn:aws:s3:::organization-test-bucket-1-replica"
      id: "Rule-1"
```
```yaml
replication:
  role: "arn:aws:iam::966321756598:role/s3-cross-region-replication"
  rules:
    - deleteMarkerReplication:
        status: "Disabled"
      status: "Enabled"
      priority: 1
      filter:
        prefix: "/"
      destination:
        bucket: "arn:aws:s3:::organisation-test-bucket-1-replica"
      id: "Rule-1"
```
But as soon as I add the `existingObjectReplication` key, I get a Malformed XML error in the logs and the replication rules stop updating.
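For reference, this is my reconstruction of the failing variant (not controller output); it differs from the working rule above only in the added key:

```yaml
replication:
  role: "arn:aws:iam::966321756598:role/s3-cross-region-replication"
  rules:
    - deleteMarkerReplication:
        status: "Disabled"
      # Adding this key triggers the Malformed XML error:
      existingObjectReplication:
        status: "Enabled"
      status: "Enabled"
      priority: 1
      filter:
        prefix: "logs/"
      destination:
        bucket: "arn:aws:s3:::organization-test-bucket-1-replica"
      id: "Rule-1"
```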
I also tried to create a bucket with Object Lock enabled and replication rules in one go, which was unsuccessful:
```
2023-09-01T09:59:16.520Z ERROR Reconciler error {"controller": "bucket", "controllerGroup": "s3.services.k8s.aws", "controllerKind": "Bucket", "Bucket": {"name":"org-test-bucket-4","namespace":"infra-production"}, "namespace": "infra-production", "name": "org-test-bucket-4", "reconcileID": "a94cb2c5-a581-40d9-ad80-9a3bedd038aa", "error": "Error syncing property 'Replication': InvalidRequest: Replication configuration cannot be applied to an Object Lock enabled bucket\n\tstatus code: 400, request id: A25F04YEASX24NT2, host id: Nx7daGVfHMQmDhNQpYqO7BKS+v8m3cHRpnEalXVYKY9AJbXZVemnLycUm3vTQVhvvvu+9zZ2J7U=", "errorVerbose": "InvalidRequest: Replication configuration cannot be applied to an Object Lock enabled bucket\n\tstatus code: 400, request id: A25F04YEASX24NT2, host id: Nx7daGVfHMQmDhNQpYqO7BKS+v8m3cHRpnEalXVYKY9AJbXZVemnLycUm3vTQVhvvvu+9zZ2J7U=\nError syncing property 'Replication'\ngithub.com/aws-controllers-k8s/s3-controller/pkg/resource/bucket.(*resourceManager).customUpdateBucket\n\t/github.com/aws-controllers-k8s/s3-controller/pkg/resource/bucket/hook.go:286\ngithub.com/aws-controllers-k8s/s3-controller/pkg/resource/bucket.(*resourceManager).sdkUpdate\n\t/github.com/aws-controllers-k8s/s3-controller/pkg/resource/bucket/sdk.go:229\ngithub.com/aws-controllers-k8s/s3-controller/pkg/resource/bucket.(*resourceManager).Update\n\t/github.com/aws-controllers-k8s/s3-controller/pkg/resource/bucket/manager.go:157\ngithub.com/aws-controllers-k8s/runtime/pkg/runtime.(*resourceReconciler).updateResource\n\t/go/pkg/mod/github.com/aws-controllers-k8s/runtime@v0.26.0/pkg/runtime/reconciler.go:536\ngithub.com/aws-controllers-k8s/runtime/pkg/runtime.(*resourceReconciler).Sync\n\t/go/pkg/mod/github.com/aws-controllers-k8s/runtime@v0.26.0/pkg/runtime/reconciler.go:279\ngithub.com/aws-controllers-k8s/runtime/pkg/runtime.(*resourceReconciler).reconcile\n\t/go/pkg/mod/github.com/aws-controllers-k8s/runtime@v0.26.0/pkg/runtime/reconciler.go:215\ngithub.com/aws-controllers-k8s/runtime/pkg/runtime.(*resourceReconciler).Reconcile\n\t/go/pkg/mod/github.com/aws-controllers-k8s/runtime@v0.26.0/pkg/runtime/reconciler.go:186\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.5/pkg/internal/controller/controller.go:122\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.5/pkg/internal/controller/controller.go:323\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.5/pkg/internal/controller/controller.go:274\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.5/pkg/internal/controller/controller.go:235\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1594"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.5/pkg/internal/controller/controller.go:329
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.5/pkg/internal/controller/controller.go:274
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.5/pkg/internal/controller/controller.go:235
```
I used the following manifest, so this was not a case of creating the bucket first and then adding the replication options:
```yaml
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: organisation-test-bucket-4
spec:
  name: organisation-test-bucket-4
  publicAccessBlock:
    blockPublicACLs: true
    blockPublicPolicy: true
    ignorePublicACLs: true
    restrictPublicBuckets: true
  logging:
    loggingEnabled:
      targetBucket: "organisation-rec-access"
      targetPrefix: ""
  objectLockEnabledForBucket: true
  versioning:
    status: Enabled
  policy: >
    {
      "Id": "DenyNonSSL",
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowSSLRequestsOnly",
          "Action": "s3:*",
          "Effect": "Deny",
          "Resource": [
            "arn:aws:s3:::organisation-test-bucket-4",
            "arn:aws:s3:::organisation-test-bucket-4/*"
          ],
          "Condition": {
            "Bool": {
              "aws:SecureTransport": "false"
            }
          },
          "Principal": "*"
        }
      ]
    }
  replication:
    role: "arn:aws:iam::966321756598:role/s3-cross-region-replication"
    rules:
      - deleteMarkerReplication:
          status: "Disabled"
        status: "Enabled"
        priority: 1
        filter:
          prefix: "/"
        destination:
          bucket: "arn:aws:s3:::organisation-test-bucket-objectlock-replica"
        id: "Rule-1"
```
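A possible workaround, which I have not verified: create the Object Lock bucket without the replication block first, and apply replication separately. The S3 API's PutBucketReplication appears to accept an Object Lock token (the `--token` option in the AWS CLI) for such buckets, which may be why the combined create is rejected with a 400. A sketch of the first step:

```yaml
# Step 1 (sketch, unverified): the same Bucket resource without the
# replication block, so the Object Lock bucket is created cleanly.
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: organisation-test-bucket-4
spec:
  name: organisation-test-bucket-4
  objectLockEnabledForBucket: true
  versioning:
    status: Enabled
# Step 2 would add the replication block afterwards, but per the 400 error
# above, S3 may still reject it without the Object Lock token, which the
# controller does not appear to expose.
```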
Issues go stale after 180d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 60d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle stale
/remove-lifecycle stale
Describe the bug
Hi! I am trying to create a test setup with two buckets, with replication enabled from bucket A to bucket B (the buckets are in different regions). I prepared the following manifests:
The buckets were created, but replication was not set up (I verified this in the AWS console).
I also noticed the following error message in the S3 controller logs:
The error message gives no useful information about what is wrong with the replication settings. I'd prefer more detailed output or, better, to disallow changes to the Bucket object that would lead to invalid replication settings (this could be implemented with an admission controller or similar). It would also be very helpful to provide working example YAML files for basic usage. Like:
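In the meantime, a client-side pre-check could catch some mistakes before they reach S3. The sketch below is hypothetical (it is not part of the controller); it encodes the documented S3 requirement that a replication rule using `Filter` must also set `Priority` and `DeleteMarkerReplication`, matching the shape of the rules above:

```python
# Hypothetical pre-check for replication rules (not part of the controller).
# Encodes the documented S3 rule: when "filter" is set, "priority" and
# "deleteMarkerReplication" must be set too.
REQUIRED_WITH_FILTER = ("priority", "deleteMarkerReplication")

def check_rule(rule: dict) -> list:
    """Return a list of human-readable problems for one replication rule."""
    problems = []
    if rule.get("status") not in ("Enabled", "Disabled"):
        problems.append("'status' must be 'Enabled' or 'Disabled'")
    if "destination" not in rule:
        problems.append("'destination' is required")
    if "filter" in rule:
        for key in REQUIRED_WITH_FILTER:
            if key not in rule:
                problems.append("'filter' requires '%s'" % key)
    return problems

# The working rule from this report passes the check:
rule = {
    "id": "Rule-1",
    "status": "Enabled",
    "priority": 1,
    "filter": {"prefix": "logs/"},
    "deleteMarkerReplication": {"status": "Disabled"},
    "destination": {"bucket": "arn:aws:s3:::organization-test-bucket-1-replica"},
}
print(check_rule(rule))  # → []
```

Something like this could run in an admission webhook so invalid specs are rejected before the controller ever calls S3.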