jaegertracing / jaeger-operator

Jaeger Operator for Kubernetes simplifies deploying and running Jaeger on Kubernetes.
https://www.jaegertracing.io/docs/latest/operator/
Apache License 2.0

Too long CRD definition #873

Closed haf closed 4 years ago

haf commented 4 years ago

Error from server (Invalid): error when creating "STDIN": CustomResourceDefinition.apiextensions.k8s.io "jaegers.jaegertracing.io" is invalid: metadata.annotations: Too long: must have at most 262144 characters

This is from master, right now, applied to docker-desktop on macOS.

Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:49Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:07:57Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}

Repro:

curl --silent -L https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/crds/jaegertracing.io_jaegers_crd.yaml \
                > k8s/base/jaegertracing.io_jaegers_crd.yaml
kubectl apply -f k8s/base/jaegertracing.io_jaegers_crd.yaml
jpkrohling commented 4 years ago

See #854. In short, use kubectl create -f ..., or remove the validation node if you absolutely have to use kubectl apply.
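The effect of that second workaround can be sketched in a few lines of Python. This is a minimal illustration with a synthetic CRD; the real generated file carries a huge OpenAPI schema under spec.validation (the apiextensions.k8s.io/v1beta1 layout), which is what pushes the stored annotation over the limit:

```python
# Minimal sketch, with a synthetic CRD: kubectl apply stores the whole object
# in the kubectl.kubernetes.io/last-applied-configuration annotation, which is
# capped at 262144 characters. Dropping the spec.validation node (the OpenAPI
# schema, in the apiextensions.k8s.io/v1beta1 layout) brings it under the cap.
import json

crd = {
    "apiVersion": "apiextensions.k8s.io/v1beta1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "jaegers.jaegertracing.io"},
    "spec": {
        "group": "jaegertracing.io",
        "names": {"kind": "Jaeger", "plural": "jaegers"},
        # Stand-in for the ~1.3 MB generated OpenAPI schema:
        "validation": {"openAPIV3Schema": {"description": "x" * 1_300_000}},
    },
}

assert len(json.dumps(crd)) > 262144   # too big to fit in the annotation
crd["spec"].pop("validation", None)    # the workaround: drop the schema node
assert len(json.dumps(crd)) < 262144   # now small enough for kubectl apply
```

kubectl create sidesteps the problem entirely because it never writes that annotation.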

haf commented 4 years ago

This is not a good enough solution if kustomize uses apply and I'm expected to continuously re-run apply from a scheduled job (as I am). Can you at least ship a CRD definition without the validation node, so that the kubectl tooling works as expected?

jpkrohling commented 4 years ago

If there are more people in need of this, we can certainly consider providing a CRD that shouldn't be used for production purposes.

haf commented 4 years ago

Could you give a suggestion on how to do gitops with your solution then?

jpkrohling commented 4 years ago

What prevents you from storing your own modified version of the CRD? Or from piping the result of cURL through a JSON converter and jq? I can think of quite a few possibilities, but as you own your environment, you'd be the best person to come up with solutions ;-)

haf commented 4 years ago

Anecdotally: I'm already storing modified versions of tooling output, e.g. for the Istio mesh we have configured, because they have a bug (https://github.com/istio/istio/issues/20082) that seems to be extra problematic when its output is applied to already-applied k8s state, triggering a much worse bug, a grey failure (https://github.com/istio/istio/issues/20454). Every bug report in that repo is preceded by a 1-3 week triage phase where they ensure it's not "operator error"; sometimes it is, but as in the bugs above, it also makes the DX suck.

But why am I stupid enough to apply the output of the tooling (istioctl) straight to the cluster? Because the documentation says that's how it's done, and it doesn't explain the link: that the Helm chart is inlined in the tool and can actually be used to generate k8s manifests. They've made what was previously explicit implicit, and this makes people make mistakes. Your suggestion is exactly this: let's work around a bug in an upstream project with a hack, and it will bite someone in the ass sooner or later ;)

jpkrohling commented 4 years ago

By suggesting that we should keep an alternate version of the CRD for your use case, you are putting the burden of maintenance on us. If there are enough users with the same use case as you, I don't have a problem keeping the special version in our repository, but this is the first case that we're receiving saying that "kubectl create" can't be used instead of "kubectl apply".

haf commented 4 years ago

this is the first case that we're receiving

In fact, the people I've worked with in enterprises (presumably your market/audience) don't write issues on GitHub.

If you think that "being able to deploy with kustomize" is only my use case, I think you need to take a look around you ;) It's built into kubectl nowadays!

haf commented 4 years ago

Also, I think this is the first such issue only because I'm pulling from master and not your latest release tag. Canaries, and whatnot ;)

haf commented 4 years ago

Thirdly, I count this as the second such issue, right? A duplicate.

jpkrohling commented 4 years ago

The issue has been reported a few times in different channels, but everyone else seems to be happy with kubectl create being an alternative to kubectl apply.

In any case, could you provide the kustomize commands you are using?

jpkrohling commented 4 years ago

For reference, here are the upstream issues relevant to this: https://github.com/operator-framework/operator-sdk/issues/2285#issuecomment-560479367 https://github.com/kubernetes-sigs/kubebuilder/issues/1140

haf commented 4 years ago

.PHONY: view
view:
    @kustomize build k8s/dev

.PHONY: view_test
view_test:
    @kustomize build k8s/test

.PHONY: crds
crds:
    sh -c '[ -f k8s/base/crd-full.yaml ]' || curl --output k8s/base/crd-full.yaml https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/crds/jaegertracing.io_jaegers_crd.yaml
    kubectl get crd | grep jaegers.jaegertracing.io || kubectl create -f k8s/base/crd-full.yaml

.PHONY: template
template:
    curl --output k8s/base/crd.yaml $(BASE_URI)/crds/jaegertracing_v1_jaegers_crd.yaml
    curl --output k8s/base/sa.yaml $(BASE_URI)/service_account.yaml
    curl --output k8s/base/role.yaml $(BASE_URI)/role.yaml
    curl --output k8s/base/role-binding.yaml $(BASE_URI)/role_binding.yaml
    curl --output k8s/base/operator.yaml $(BASE_URI)/operator.yaml

.PHONY: deploy_dev
deploy_dev: crds
    kustomize build k8s/dev | kubectl apply -f -
    kubectl get deployment jaeger-operator -n jaeger-system

.PHONY: deploy_test
deploy_test: crds
    kustomize build k8s/test | kubectl apply -f -
    kubectl get deployment jaeger-operator -n jaeger-system

pavolloffay commented 4 years ago

One idea I have tried is to remove repetitive nodes and use YAML anchors and aliases: https://medium.com/@kinghuang/docker-compose-anchors-aliases-extensions-a1e4105d70bd

However, even after deduplicating the volumes nodes, the file is still too large.

(Attachment: crd-yaml.txt)

nouney commented 4 years ago

The issue has been reported a few times in different channels, but everyone else seems to be happy with kubectl create being an alternative to kubectl apply.

kubectl create and kubectl apply are not the same. I only use kubectl apply, and now my workflow is broken because of the CRD.
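The difference matters here: apply records the entire object in the kubectl.kubernetes.io/last-applied-configuration annotation (so it can compute three-way diffs later), while create sends the object once and stores no such annotation, and the API server caps total annotation size at 262144 characters. A rough Python simulation of the failing check, using a hypothetical stand-in manifest:

```python
# Rough simulation of why apply fails where create succeeds: apply stores the
# full object JSON in the last-applied-configuration annotation, and the API
# server rejects annotations whose total size exceeds 262144 characters. The
# manifest below is a hypothetical stand-in for the generated Jaeger CRD.
import json

TOTAL_ANNOTATION_SIZE_LIMIT = 262144  # enforced server-side

manifest = {"kind": "CustomResourceDefinition",
            "spec": {"validation": {"description": "x" * 300_000}}}

def create(obj):
    """Like kubectl create: the object is sent as-is; size is not a problem."""
    return dict(obj)

def apply(obj):
    """Like kubectl apply: also records the full object as an annotation."""
    out = dict(obj)
    last_applied = json.dumps(obj)
    if len(last_applied) > TOTAL_ANNOTATION_SIZE_LIMIT:
        raise ValueError("metadata.annotations: Too long: "
                         "must have at most 262144 characters")
    out["metadata"] = {"annotations": {
        "kubectl.kubernetes.io/last-applied-configuration": last_applied}}
    return out

create(manifest)        # succeeds: no annotation is written
try:
    apply(manifest)     # fails: the annotation exceeds the limit
    raised = False
except ValueError:
    raised = True
assert raised
```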

haf commented 4 years ago

@pavolloffay I don't think that will help, since the k8s API server writes the annotation (the problem area here) as JSON, which doesn't have pointers.
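This is easy to demonstrate: even if the YAML on disk uses anchors and aliases, the parsed object is serialized to JSON for the annotation, and JSON has no aliasing, so a shared node is written out in full at every point of use. A small pure-Python illustration, sharing one dict between two keys:

```python
# JSON has no references: a node shared in memory (as a YAML alias would
# produce after parsing) is expanded in full at every point of use when the
# object is serialized, so anchors cannot shrink the stored annotation.
import json

volumes = {"name": "data", "emptyDir": {}}        # one shared node
doc = {"deploymentA": volumes, "deploymentB": volumes}

serialized = json.dumps(doc)
# Both references are written out completely:
assert serialized.count('"emptyDir"') == 2
```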

jpkrohling commented 4 years ago

I only use kubectl apply and now my workflow is broken because of the CRD.

What does your workflow look like? Are you also using kustomize?

jpkrohling commented 4 years ago

I'm reopening, but marking this as "needs-info" (see my last comment). I've also yet to test kustomize myself, as I'm really not familiar with the tool.

adamhosier commented 4 years ago

We've worked around this by removing all "description" fields from the provided CRDs, based on a suggestion in the upstream Kubebuilder issue: kubernetes-sigs/kubebuilder#1140

Would you consider publishing the CRDs without the description fields to allow them to be compatible with kubectl apply out of the box?
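For reference, a hedged sketch of that workaround in Python: recursively delete every "description" field from the already-parsed CRD. Nothing here is specific to Jaeger; with jq, an equivalent filter would be del(.. | .description?).

```python
# Minimal sketch: recursively strip "description" keys from a parsed manifest,
# mirroring the workaround of shipping the CRD without description fields.
# The sample schema below is synthetic, for illustration only.
import json

def strip_descriptions(node):
    """Remove every 'description' key, at any depth, from dicts and lists."""
    if isinstance(node, dict):
        return {k: strip_descriptions(v)
                for k, v in node.items() if k != "description"}
    if isinstance(node, list):
        return [strip_descriptions(v) for v in node]
    return node

schema = {"description": "top-level", "properties": {
    "strategy": {"type": "string", "description": "Deployment strategy"},
    "agent": {"type": "object", "description": "Agent options",
              "properties": {"image": {"type": "string",
                                       "description": "Agent image"}}}}}

cleaned = strip_descriptions(schema)
assert "description" not in json.dumps(cleaned)  # every occurrence removed
```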

jpkrohling commented 4 years ago

Absolutely. Is it required only for fields in the JaegerCommonSpec, or should it be disabled for all types?

adamhosier commented 4 years ago

We have it disabled for all fields, but we're happy to work with a subset if that drops the size down enough.

jpkrohling commented 4 years ago

I agree in principle, especially because we can build a document elsewhere with the doc for those fields. Would you like to send in a PR with the required changes?

jpkrohling commented 4 years ago

Another related issue: https://github.com/kubernetes/kubernetes/issues/82292

Given the number of people that seem to be affected by this one, I'll try the suggested workaround of disabling the description fields.

jpkrohling commented 4 years ago

@haf, @adamhosier, @nouney, and others in this issue: would you be able to test the CRD that is part of #932? It's now about 400 KB, down from 1.3 MB. Locally, I'm able to issue kubectl apply with this CRD, but I need your help confirming that it works fine with your workflow/kustomize as well.

adamhosier commented 4 years ago

@jpkrohling thanks for this! I've tested the updated CRD and it works fine with kustomize + apply.