vmware-archive / helm-crd

Experimental CRD controller for managing Helm releases
Apache License 2.0

Deployments break after deploying a json as values #19

Open sozercan opened 6 years ago

sozercan commented 6 years ago

After deploying the following example, which passes values as JSON, any new deployment registers in helmreleases but is never deployed to the cluster. Using YAML instead of JSON resolved the issue, but sending JSON shouldn't break deployments.

```shell
curl 'http://localhost:65146/api/kube/apis/helm.bitnami.com/v1/namespaces/default/helmreleases' \
  -H 'Pragma: no-cache' \
  -H 'Origin: http://localhost:65146' \
  -H 'Accept-Encoding: gzip, deflate, br' \
  -H 'Accept-Language: en-US,en;q=0.9,tr;q=0.8' \
  -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.18 Safari/537.36' \
  -H 'Content-Type: application/json' \
  -H 'Accept: */*' \
  -H 'Cache-Control: no-cache' \
  -H 'Referer: http://localhost:65146/charts/incubator/goldfish' \
  -H 'Connection: keep-alive' \
  -H 'DNT: 1' \
  --data-binary '{"apiVersion":"helm.bitnami.com/v1","kind":"HelmRelease","metadata":{"name":"fish"},"spec":{"chartName":"goldfish","repoUrl":"https://kubernetes-charts-incubator.storage.googleapis.com","values":{"config":{"disable_mlock":1,"listener":{"tcp":{"address":"0.0.0.0:8000","tls_cert_file":"","tls_disable":1,"tls_key_file":""}},"vault":{"address":"http://vault:8200","approle_id":"goldfish","approle_login":"auth/approle/login","runtime_config":"secret/goldfish","tls_skip_verify":1}},"image":{"pullPolicy":"IfNotPresent","repository":"quay.io/tuannvm/goldfish:sadsaf","tag":"v0.7.3"},"ingress":{"annotations":null,"enabled":false,"hosts":["chart-example.local"],"tls":null},"replicaCount":1,"resources":{},"secrets":{"vault-addr":"http://vault:8200","vault-token":"token"},"service":{"externalPort":80,"internalPort":8000,"name":"nginx","type":"ClusterIP"}},"version":"0.2.4"}}' \
  --compressed
```

prydonius commented 6 years ago

cc @anguslees

anguslees commented 6 years ago

Thanks for such a detailed report!

The error in the above request is that the "values" value is a json object, whereas it should be a string (which contains valid yaml/json). I fear this is going to be a common error :(
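To illustrate, under the current string-valued schema the fix on the client side is to pass `values` as one string containing YAML (or JSON). A trimmed sketch of the request's spec, written the way the CRD expects today (fields taken from the example above, values abbreviated):

```yaml
apiVersion: helm.bitnami.com/v1
kind: HelmRelease
metadata:
  name: fish
spec:
  chartName: goldfish
  repoUrl: https://kubernetes-charts-incubator.storage.googleapis.com
  version: "0.2.4"
  # values is a single string (note the |), not a nested object
  values: |
    replicaCount: 1
    image:
      repository: quay.io/tuannvm/goldfish:sadsaf
      tag: v0.7.3
    service:
      internalPort: 8000
```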

So the bugs arising from this are:

Separately, I'm ok with changing the schema into a json object (and not a string), if we think that's better. I considered this at the time, but I didn't want to force the value to be json since that loses yaml comments, etc. Opinions sought.

devdattakulkarni commented 6 years ago

I think values should be json/yaml as it is meant to be used in lieu of a chart's values.yaml, right?

prydonius commented 6 years ago

@devdattakulkarni why JSON? Helm just expects YAML right?

devdattakulkarni commented 6 years ago

@prydonius True, YAML is the preferred option, especially since helm values are YAML. JSON could also be an option, since Kubernetes supports it. Until JSON support is available in helm, the controller could internally convert JSON values into YAML before preparing and installing a chart. However, this is still a v2; YAML should come first.

anguslees commented 6 years ago
  1. json is yaml too (a json expression is also valid yaml)
  2. helm values.yaml is yaml. As in: helm-crd passes a string blob (that just happens to be yaml) all the way to tiller, and it isn't actually parsed until there. If the HelmRelease CRD is changed to take a yaml/json object instead of a string, then helm-crd needs to turn that object into a yaml string before passing to tiller (simple to do, but just highlighting the way the abstraction layers interact here).
  3. k8s supports json (only, not yaml). It only looks like k8s supports yaml because kubectl reads the yaml and converts it to json before passing it to the k8s api.
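Points (1) and (2) combine neatly in practice: since every JSON document is valid YAML, a controller holding a parsed `values` object could serialize it with a plain JSON encoder and hand the result to tiller as the `values.yaml` string. A rough stdlib-only sketch of that idea (this is not helm-crd's actual code; the function name and sample data are illustrative):

```python
import json

def values_to_yaml_string(values: dict) -> str:
    """Serialize a parsed `values` object into a string tiller can
    treat as values.yaml. JSON output is itself valid YAML, so a
    plain JSON encoder suffices; note that any comments or key
    ordering from the user's original file are already gone by
    the time we hold a parsed object."""
    return json.dumps(values, indent=2)

spec_values = {"replicaCount": 1, "service": {"internalPort": 8000}}
blob = values_to_yaml_string(spec_values)
print(blob)
```

Round-tripping `blob` through any YAML (or JSON) parser recovers the same object, which is all tiller needs.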

In particular, if we decide to change the HelmRelease CRD to take an object instead of a big string blob, then (2)+(3) combine to mean user-input -> json -> yaml -> tiller, and we will always lose comments and human-driven ordering, etc in the values.yaml that reaches tiller. This is probably fine, but just highlighting that this is the way this has to work if we move away from an opaque string blob.
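To make the loss concrete, a hypothetical user-authored snippet like:

```yaml
# keep at 1 until HA is tested
replicaCount: 1
service:
  internalPort: 8000  # must match goldfish's tcp listener address
```

would reach tiller re-serialized from the k8s API's JSON representation, with the comments and the author's ordering stripped; only the data survives.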

In particular, the json vs yaml discussion above is mostly irrelevant. If we move to an object it will always be converted from yaml/json by kubectl, to json at the k8s API, to a yaml string at the tiller grpc interface by helm-crd, to a parsed yaml document inside tiller somewhere.