See #854. In short, use kubectl create -f ..., or remove the validation node if you absolutely have to use kubectl apply.
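For concreteness, a minimal sketch of why the two commands behave differently here (file path taken from the reporter's Makefile further down; the exact error text varies slightly between Kubernetes versions):

```sh
# kubectl create sends the object once and records no extra state,
# so the large validation schema is not a problem:
kubectl create -f k8s/base/crd-full.yaml

# kubectl apply additionally stores the whole object as JSON in the
# kubectl.kubernetes.io/last-applied-configuration annotation; annotations
# are capped at 262144 bytes in total, so the full CRD fails with
# "metadata.annotations: Too long":
kubectl apply -f k8s/base/crd-full.yaml
```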
This is not a good enough solution if kustomize uses apply and I'm expected to continuously re-run apply from a scheduled job (as I am). Can you at least ship a CRD definition without the validation node, so that the kubectl tooling works as expected?
If there are more people in need of this, we can certainly consider providing a CRD that shouldn't be used for production purposes.
Could you give a suggestion on how to do GitOps with your solution, then?
What prevents you from storing your own modified version of the CRD? Or from piping the result of cURL to a JSON converter + jq? I can think of quite a few possibilities, but as you own your environment, you'd be the best person to come up with solutions ;-)
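One possible shape of that pipeline, sketched under the assumption that the jq-wrapper flavour of yq (kislyuk/yq) is available so the filter is plain jq; the URL is the same one used in the Makefile below:

```sh
# Fetch the CRD, drop the validation node (apiextensions v1beta1 keeps the
# schema under .spec.validation), and write the slimmed copy back out as YAML (-y).
curl -sL https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/crds/jaegertracing.io_jaegers_crd.yaml \
  | yq -y 'del(.spec.validation)' \
  > k8s/base/crd-slim.yaml

kubectl apply -f k8s/base/crd-slim.yaml
```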
You mean yq or python, which are commonly used to transform YAML to JSON and then filter. Your expectations of our environment don't match the real world; it has to run on Linux, Windows and macOS. And why ship validation in the schema at all, if it's so utterly useless that you make it sound like it can just be removed?

Anecdotally: I'm storing modified versions of tooling output, e.g. for the Istio mesh that we got configured, because they have a bug: https://github.com/istio/istio/issues/20082, which seems to be extra problematic when its output is applied to already-applied k8s state, triggering a much worse bug (grey failure): https://github.com/istio/istio/issues/20454. Every bug report in that repo is preceded by a 1-3 week triage phase where they ensure it's not "operator error"; which it is sometimes, but like in the above bugs, it also makes the DX suck.
But why am I stupid enough to apply the output of the tooling (istioctl) straight to the cluster? Because the documentation says that's how it's done, and it doesn't explain the link: that the Helm chart is inlined in the tool and can actually be used to generate k8s manifests. As such, they've made what was previously explicit implicit, and this makes people make mistakes. Your suggestion is exactly this: let's work around a bug in an upstream project with a hack, and it will bite someone in the ass sooner or later ;)
By suggesting that we should keep an alternate version of the CRD for your use case, you are putting the burden of maintenance on us. If there are enough users with the same use case as you, I don't have a problem with keeping a special version in our repository, but this is the first case we're receiving saying that "kubectl create" can't be used instead of "kubectl apply".
> this is the first case that we're receiving
In fact, the people I've worked with in enterprises (presumably your market/audience) don't write issues on GitHub.
If you think that "being able to deploy with kustomize" is only my use-case, I think you need to take a look around you ;) It's built into kubectl nowadays!
Also, I think this is the first issue since I'm pulling from master and not your latest deploy tag. Canaries, and what not ;)
Thirdly, I count this as the second issue, right? # duplicate
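A quick illustration of what "built into kubectl" means in practice (directory name borrowed from the Makefile below; kubectl ships an embedded kustomize since 1.14):

```sh
# Standalone kustomize piped into kubectl, as in the Makefile below:
kustomize build k8s/dev | kubectl apply -f -

# Equivalent invocation using the kustomize support built into kubectl:
kubectl apply -k k8s/dev
```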
The issue has been reported a few times in different channels, but everyone else seems to be happy with kubectl create being an alternative to kubectl apply.
In any case, could you provide the kustomize commands you are using?
For reference, here are the upstream issues relevant to this:
https://github.com/operator-framework/operator-sdk/issues/2285#issuecomment-560479367
https://github.com/kubernetes-sigs/kubebuilder/issues/1140
```makefile
.PHONY: view
view:
	@kustomize build k8s/dev

.PHONY: view_test
view_test:
	@kustomize build k8s/test

.PHONY: crds
crds:
	sh -c '[ -f k8s/base/crd-full.yaml ]' || curl --output k8s/base/crd-full.yaml https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/crds/jaegertracing.io_jaegers_crd.yaml
	kubectl get crd | grep jaegers.jaegertracing.io || kubectl create -f k8s/base/crd-full.yaml

.PHONY: template
template:
	curl --output k8s/base/crd.yaml $(BASE_URI)/crds/jaegertracing_v1_jaegers_crd.yaml
	curl --output k8s/base/sa.yaml $(BASE_URI)/service_account.yaml
	curl --output k8s/base/role.yaml $(BASE_URI)/role.yaml
	curl --output k8s/base/role-binding.yaml $(BASE_URI)/role_binding.yaml
	curl --output k8s/base/operator.yaml $(BASE_URI)/operator.yaml

.PHONY: deploy_dev
deploy_dev: crds
	kustomize build k8s/dev | kubectl apply -f -
	kubectl get deployment jaeger-operator -n jaeger-system

.PHONY: deploy_test
deploy_test: crds
	kustomize build k8s/test | kubectl apply -f -
	kubectl get deployment jaeger-operator -n jaeger-system
```
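For readers trying to follow the layout, a hypothetical k8s/base/kustomization.yaml that would match the files the template target downloads (this is an assumption about the setup, not something posted in the thread; the dev and test overlays would then reference ../base):

```yaml
# k8s/base/kustomization.yaml (illustrative only)
resources:
  - crd.yaml
  - sa.yaml
  - role.yaml
  - role-binding.yaml
  - operator.yaml
```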
One idea I have tried is to remove repetitive nodes and use pointers (YAML anchors and aliases): https://medium.com/@kinghuang/docker-compose-anchors-aliases-extensions-a1e4105d70bd
However, even after handling the volumes nodes, the file is still too large.
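For anyone unfamiliar with the trick being referenced: YAML anchors (&) and aliases (*) let a node be written once and reused, for example (field names are illustrative only):

```yaml
# The anchored list is written once in the YAML source, but every alias is
# expanded again when the document is converted to JSON.
volumes: &shared-volumes
  - name: certs
    secret:
      secretName: operator-certs
first:
  volumes: *shared-volumes
second:
  volumes: *shared-volumes
```

As the next reply points out, this only shrinks the YAML source; the size that matters is the JSON form, where the aliases are expanded again.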
> The issue has been reported a few times in different channels, but everyone else seems to be happy with kubectl create being an alternative to kubectl apply.
kubectl create and kubectl apply are not the same. I only use kubectl apply, and now my workflow is broken because of the CRD.
@pavolloffay I don't think that will help since the k8s API server writes the annotation (the problem area here) as JSON, which doesn't have pointers.
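To make that concrete, here is one way to see what apply stores (the CRD name comes from the Makefile above; the annotation only exists if the object was created via apply):

```sh
# kubectl apply records the full previous object, serialized as JSON, in this
# annotation; the total size of all annotations is capped at 262144 bytes.
kubectl get crd jaegers.jaegertracing.io \
  -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}' \
  | wc -c
```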
> I only use kubectl apply and now my workflow is broken because of the CRD.
What does your workflow look like? Are you also using kustomize?
I'm reopening, but marking this as "needs-info" (see my last comment). I'm also yet to test kustomize myself, as I'm really not familiar with the tool.
We've worked around this by removing all "description" fields from the provided CRDs, based on a suggestion by the upstream Kubebuilder issue: kubernetes-sigs/kubebuilder#1140
Would you consider publishing the CRDs without the description fields, to allow them to be compatible with kubectl apply out of the box?
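A hedged sketch of that workaround, assuming the jq-wrapper yq (kislyuk/yq, with jq >= 1.6 for walk) and the file names from the Makefile above; note it would also delete any property that happens to be literally named description:

```sh
# Recursively remove every "description" key from the CRD and apply the result.
yq -y 'walk(if type == "object" then del(.description) else . end)' \
  k8s/base/crd-full.yaml > k8s/base/crd-no-desc.yaml
kubectl apply -f k8s/base/crd-no-desc.yaml
```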
Absolutely. Is it required only for fields in the JaegerCommonSpec, or should it be disabled for all types?
We have it disabled for all fields, but happy to work with a subset of this if it can drop the size down enough.
I agree in principle, especially because we can build a document elsewhere with the doc for those fields. Would you like to send in a PR with the required changes?
Another related issue: https://github.com/kubernetes/kubernetes/issues/82292
Given the number of people that seem affected by this one, I'll try to apply the suggested workaround by disabling the description fields.
@haf, @adamhosier, @nouney, and others in this issue: would you be able to test the CRD that is part of #932? It is now about 400 KB, down from 1.3 MB. Locally, I'm able to issue kubectl apply with this CRD, but I need your help in confirming that it works fine with your workflow/kustomize as well.
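One way readers can verify this locally (URL as used earlier in the thread; a kustomize-based workflow would pipe kustomize build output instead):

```sh
# Fetch the slimmed CRD and confirm that plain kubectl apply now succeeds.
curl -sL -o crd.yaml https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/crds/jaegertracing.io_jaegers_crd.yaml
kubectl apply -f crd.yaml
kubectl get crd jaegers.jaegertracing.io
```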
@jpkrohling thanks for this! I've tested the updated CRD and it works fine with kustomize + apply.
From master, right now, onto docker-desktop on macOS (repro as above).