Open · joerocklin opened this issue 6 years ago
I am getting the same error as well; I'm unable to delete the S3 bucket. Here is the s3.yaml file that I used to create it:
apiVersion: service-operator.aws/v1alpha1
kind: S3Bucket
metadata:
  name: prashant-operator
spec:
  versioning: true
  accessControl: PublicRead
  website:
    enabled: true
    indexPage: index.html
    errorPage: 500.html
  logging:
    enabled: false
    prefix: "archive"
time="2018-10-19T22:17:13Z" level=info msg="deleted s3bucket 'prashant-operator'" hostname=aws-service-operator-69c869846d-j69k2
time="2018-10-19T22:17:31Z" level=error msg="error getting s3buckets" error="s3buckets.service-operator.aws \"prashant-operator\" not found" hostname=aws-service-operator-69c869846d-j69k2
time="2018-10-19T22:17:31Z" level=error msg="error processing message" error="s3buckets.service-operator.aws \"prashant-operator\" not found" hostname=aws-service-operator-69c869846d-j69k2
I still see that the bucket exists in S3, and the ConfigMap and Service still exist as well.
There are two things going on here. First, as of right now the S3 template doesn't tear down the bucket on deletion. Honestly, I'm not sure whether that is something I set up in the CFN template or whether that is standard practice for buckets created by CFN. Something to look into for sure.
Second, you are hitting #41, which I've been a little undecided about from an implementation perspective. I have a second, related issue that would give the dependency tree a different way of handling this: #84. It builds on a standard Kubernetes component and might help this implementation in the long run.
@joerocklin and @prashantchitta, can you put a note on #41 with your thoughts on whether the operator should handle this or leave it to you to clean up? My thinking is purely around not removing a potential dependency for a running application; i.e., if you delete the ConfigMap and an app uses that ConfigMap, the app would then be in an undesirable state.
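To make that concern concrete: the standard Kubernetes mechanism for holding off deletion until it's safe is a finalizer. A minimal sketch of what the operator could set on resources it creates (the finalizer name here is made up, not something the operator does today):

apiVersion: v1
kind: ConfigMap
metadata:
  name: prashant-operator
  finalizers:
    # Hypothetical finalizer: a delete would set deletionTimestamp, but
    # the ConfigMap would stay around until the operator decides cleanup
    # is safe and removes this entry.
    - service-operator.aws/in-use-check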
Also changing the title to reflect the first issue in the above comment.
@christopherhein I just checked: the S3 CFN template that you created has DeletionPolicy: Retain:
  Tags:
    - Key: Namespace
      Value: !Ref Namespace
    - Key: ResourceVersion
      Value: !Ref ResourceVersion
    - Key: ResourceName
      Value: !Ref ResourceName
    - Key: ClusterName
      Value: !Ref ClusterName
    - Key: Heritage
      Value: operator.aws
  VersioningConfiguration: !If
    - UseVersioning
    - Status: Enabled
    - !Ref 'AWS::NoValue'
DeletionPolicy: Retain
Any reason you did that? If we delete the S3Bucket CRD, I expect it to delete the S3 bucket as well. Maybe you can introduce an option in the CRD to say whether the user wants to delete or retain the S3 bucket when the CRD is deleted, but the default should be Delete unless the user says to Retain. That is how dynamic volumes work in Kubernetes via StorageClasses, where you can specify whether to Retain or Delete the volume when the PVC is deleted.
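For comparison, here is how a StorageClass expresses this today, plus a sketch of what an equivalent field on the S3Bucket CRD could look like (the deletionPolicy key below is hypothetical, just illustrating the proposed option):

# Real Kubernetes API: reclaimPolicy on a StorageClass decides what
# happens to a dynamically provisioned volume when its PVC is deleted.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete   # or Retain
---
# Hypothetical analogous field on the S3Bucket CRD.
apiVersion: service-operator.aws/v1alpha1
kind: S3Bucket
metadata:
  name: prashant-operator
spec:
  versioning: true
  accessControl: PublicRead
  deletionPolicy: Delete   # proposed; default Delete, opt-in Retain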
Also, anything created along with the S3 bucket should be deleted when it is deleted, so the ConfigMaps and Services should be removed as well. This is how Kubernetes is designed: you can delete a ConfigMap even if it is mounted in a pod, and the pod will continue to work since the ConfigMap is already mounted; but if the pod dies and tries to come back up, it will fail. The design here should be similar and in line with Kubernetes' implementation. Thoughts?
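A rough sketch of how that could work using Kubernetes ownerReferences, assuming the operator set one on each ConfigMap and Service it creates; the garbage collector would then cascade the delete automatically (the uid below is a placeholder):

apiVersion: v1
kind: ConfigMap
metadata:
  name: prashant-operator
  ownerReferences:
    # If the operator set this, deleting the S3Bucket custom resource
    # would cause the garbage collector to delete this ConfigMap too.
    - apiVersion: service-operator.aws/v1alpha1
      kind: S3Bucket
      name: prashant-operator
      uid: 00000000-0000-0000-0000-000000000000  # placeholder UID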
There it is… honestly, that template was pulled from https://github.com/awslabs/aws-servicebroker and then modified to support the static website attributes. We should make that something configurable via the CRD; that's the best option to give some flexibility to both use cases.
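One wrinkle to keep in mind: DeletionPolicy is a resource attribute rather than a property, and as far as I know CloudFormation won't let us set it from a Parameter via !Ref, so the operator would probably have to render it into the template before upload. A rough sketch, assuming Go-style template rendering (the placeholder name is made up):

# Sketch only: substitute the policy when rendering the template,
# since DeletionPolicy cannot reference a CFN Parameter directly.
Resources:
  S3Bucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: {{ .DeletionPolicy }}  # rendered to Delete or Retain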
For the additional resources we can track that in #41
Thanks @prashantchitta
I'm not sure if this is an issue with the service operator or with something in CloudFormation, but here's what I'm seeing.
I'm happy to provide any information which might be of interest in solving this problem. I'm still looking into details myself but want to see if others are experiencing the same problem.
Here is a partial log of the output with some values altered to protect some information: