aws-controllers-k8s / community

AWS Controllers for Kubernetes (ACK) is a project enabling you to manage AWS services from Kubernetes
https://aws-controllers-k8s.github.io/community/
Apache License 2.0

OpenSearch CRDs #1166

Open oleg-yudovich opened 2 years ago

oleg-yudovich commented 2 years ago

Describe the bug

During the chart installation, CRDs under two different API groups are added to the cluster:

  1. opensearchservice.services.k8s.aws
  2. opensearch.services.k8s.aws

When I apply my Domain YAML under API version opensearchservice.services.k8s.aws/v1alpha1 (as described in the docs) and then try to fetch the resource with:

kubectl get Domain <my-domain-name>

I get an error: Error from server (NotFound): domains.opensearch.services.k8s.aws "" not found

This is because the resource is not under domains.opensearch.services.k8s.aws but under domains.opensearchservice.services.k8s.aws.
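As a workaround sketch (not from the docs, and the domain name is a placeholder), the resource can be fetched unambiguously by using its fully-qualified name, which pins kubectl to the correct API group:

```shell
# Fully-qualified resource name disambiguates the two API groups
kubectl get domains.opensearchservice.services.k8s.aws my-domain-name
```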

This is very confusing, and it means reading the status of our Domain resource does not work as documented.
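For reference, a minimal Domain manifest of the kind described above might look like the following (the metadata name and spec fields here are illustrative placeholders, not taken from this thread):

```yaml
# Hypothetical minimal ACK Domain manifest; name and engine version are placeholders
apiVersion: opensearchservice.services.k8s.aws/v1alpha1
kind: Domain
metadata:
  name: my-domain-name
spec:
  name: my-domain-name
  engineVersion: "OpenSearch_1.1"
```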

Why do we have both CRDs opensearchservice.services.k8s.aws and opensearch.services.k8s.aws?

Expected outcome

As in any other AWS ACK operator, I think we should have only one CRD per Kind.

Environment

narcosis commented 2 years ago

.

jaypipes commented 2 years ago

@oleg-yudovich Hi! Thank you for this issue! I've noticed the same in building out and debugging the Opensearchservice controller. It's super annoying behaviour and I suspect this is because the name of the service package in aws-sdk-go is different from the name of the model for the service in aws-sdk-go. In this case, the name of the service package is opensearchservice and the name of the model is opensearch. Yes, I know, it's unnecessarily inconsistent and annoying.

Somewhere in our generation of the CRD manifests, controller-gen crds must be getting confused as to which is the API group. I will try to track down specifically what is going on and fix ASAP.
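One way to confirm which API groups the chart actually installed (a diagnostic sketch, not a command from this thread) is to list the CRD names alongside their spec.group:

```shell
# List OpenSearch-related CRDs and their API groups to spot the duplicate
kubectl get crds -o custom-columns='NAME:.metadata.name,GROUP:.spec.group' | grep opensearch
```

If both opensearch.services.k8s.aws and opensearchservice.services.k8s.aws appear in the GROUP column, the generated manifests contain the duplication described above.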

ack-bot commented 2 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Provide feedback via https://github.com/aws-controllers-k8s/community. /lifecycle stale

a-hilaly commented 1 year ago

/remove-lifecycle stale

eks-bot commented 1 year ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Provide feedback via https://github.com/aws-controllers-k8s/community. /lifecycle stale

a-hilaly commented 1 year ago

/remove-lifecycle stale

ack-bot commented 1 year ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Provide feedback via https://github.com/aws-controllers-k8s/community. /lifecycle stale

a-hilaly commented 1 year ago

/remove-lifecycle stale

ack-bot commented 1 year ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Provide feedback via https://github.com/aws-controllers-k8s/community. /lifecycle stale

ack-bot commented 1 year ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close. Provide feedback via https://github.com/aws-controllers-k8s/community. /lifecycle rotten

ack-bot commented 7 months ago

Issues go stale after 180d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 60d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Provide feedback via https://github.com/aws-controllers-k8s/community. /lifecycle stale

ack-bot commented 5 months ago

Stale issues rot after 60d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 60d of inactivity. If this issue is safe to close now please do so with /close. Provide feedback via https://github.com/aws-controllers-k8s/community. /lifecycle rotten