peterbroadhurst opened 1 year ago
Hmmn, well actually this is a more significant change than I thought.
We had been relying on a Kustomize configMapGenerator, as per the below, in order to create a nested set of values that could be supplied by a user installing the chart.
In v0.3.30 and below this worked nicely, generating a section in the values.yaml with multiple nested values that could be set (as long as we didn't use the friendlyname=config.yaml syntax, and accepted the odd configYaml name in the values).
With https://github.com/arttor/helmify/pull/94, it seems that, in order to allow multi-line individual sub-values, the functionality we were relying on has been removed.
configMapGenerator:
- files:
  - config.yaml
  name: myname
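To sketch what that gave us: the generated values.yaml then contained a nested section along the lines of the below (the leaf settings are illustrative placeholders for the contents of our config.yaml, and note the configYaml key mentioned above):

myname:
  configYaml:
    someSection:
      someSetting: "some-value"
      anotherSetting: true

Each of those leaves could then be overridden individually via --set or a user-supplied values file.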
It will take a bit of work to try and form a PR proposal that meets both requirements 🤔
Hi, thank you for raising the issue. Just want to confirm that the old version was handling YAML data in the ConfigMap incorrectly, because k8s expects string key-value pairs in data. I get the correct template from the example in the latest version:
# Source: operator/templates/manager-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-operator-manager-config
  labels:
    helm.sh/chart: operator-0.1.0
    app.kubernetes.io/name: operator
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "0.1.0"
    app.kubernetes.io/managed-by: Helm
data:
  controller_manager_config.yaml: |-
    apiVersion: controller-runtime.sigs.k8s.io/v1alpha1
    kind: ControllerManagerConfig
    health:
      healthProbeBindAddress: :8081
    metrics:
      bindAddress: 127.0.0.1:8080
    webhook:
      port: 9443
    leaderElection:
      leaderElect: true
      resourceName: 3a2e09e9.example.com
    rook:
      namespace: rook-ceph
      toolboxPodLabel: rook-ceph-tools
  dummyconfigmapkey: "dummyconfigmapvalue"
That is indeed the change in behavior we've seen.
Before, we were able to generate a sample values.yaml that contained a detailed set of individual config settings, which a user installing the operator could set using normal Helm semantics, just by using a configMapGenerator in a kustomization.yaml.
Now, instead, the whole section is converted into a single string field, where all of those settings have to be constructed into a YAML payload. Not a very friendly or professional user experience for someone consuming the operator.
The best illustration is in the samples in the code.
In v0.3.30 the user experience of installing an operator with a sophisticated YAML config could be as below:
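Roughly along these lines, using the manager config from the template above as the example (the exact top-level key names helmify derived may differ):

managerConfig:
  controllerManagerConfigYaml:
    health:
      healthProbeBindAddress: ":8081"
    leaderElection:
      leaderElect: true
      resourceName: 3a2e09e9.example.com
    rook:
      namespace: rook-ceph
      toolboxPodLabel: rook-ceph-tools

so an individual setting could be overridden at install time with e.g. --set managerConfig.controllerManagerConfigYaml.rook.namespace=my-namespace.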
Now the same experience becomes:
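Roughly the below instead (same caveat on the key names): a single multi-line string value that the user has to construct as a complete YAML document:

managerConfig:
  controllerManagerConfigYaml: |
    apiVersion: controller-runtime.sigs.k8s.io/v1alpha1
    kind: ControllerManagerConfig
    health:
      healthProbeBindAddress: :8081
    leaderElection:
      leaderElect: true
      resourceName: 3a2e09e9.example.com
    rook:
      namespace: rook-ceph
      toolboxPodLabel: rook-ceph-tools

There is no way to override just one of those settings with --set; the whole document has to be replaced.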
We've had to just pin the version back for now, as I don't immediately have the cycles to make a proposal to reinstate the very helpful functionality that was removed in v0.3.31 and later.
It's possible there's another way of doing what I was trying to achieve.
I note that one previous approach to generating such end-user config we'd found was to bind an environment variable with a valueFrom pointing at a ConfigMap resource. helmify generated a unique Helm variable for each of those, but that meant we had to define all the config for the operator runtime as environment variables - which is much less desirable from a code complexity perspective than having a single YAML config mounted directly from the ConfigMap.
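For reference, that earlier pattern looked something like the below in the container spec (the names here are placeholders, not our actual manifests):

env:
  - name: ROOK_NAMESPACE
    valueFrom:
      configMapKeyRef:
        name: my-operator-config
        key: rookNamespace

helmify turned each key referenced that way into its own Helm value, which is what made every setting individually overridable - at the cost of expressing all of the operator's config as environment variables.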
I absolutely agree that separate values for YAML were more user-friendly, but the resulting ConfigMap was incorrect because k8s expects string values according to the spec.
Please describe how it should look in a Helm chart, keeping in mind that it should be represented as a string in the resulting k8s manifest.
Hello @arttor. I believe this regression was unnecessary, because helmify was never generating an invalid ConfigMap as you suggested:

I absolutely agree that separate values for YAML were more user-friendly, but the resulting ConfigMap was incorrect because k8s expects string values according to the spec.

I looked into the original issue (https://github.com/arttor/helmify/issues/30) which motivated the fixes in https://github.com/arttor/helmify/pull/94. I've created a branch on my fork based on v0.3.30 where I copied over the sample app output, re-generated the example app chart, and then copied over the E2E tests that were added to the CI workflow in later releases, which validate that the resulting helm template output creates correct K8s resources: https://github.com/arttor/helmify/compare/v0.3.30...hfuss:helmify:0.3.30-backport?expand=1
Running helm template manually, I get:
# Source: app/templates/my-config-props.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-app-my-config-props
  labels:
    helm.sh/chart: app-0.1.0
    app.kubernetes.io/name: app
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "0.1.0"
    app.kubernetes.io/managed-by: Helm
data:
  my.prop1: "1"
  my.prop2: "val 1"
  my.prop3: "true"
  my.services: |
    - docker
    - helm
    - k8s
  myval.yaml: |
    apiVersion: clickhouse.altinity.com/v1
    kind: ClickHouseInstallationTemplate
    metadata:
      name: "default-oneperhost-pod-template"
    spec:
      templates:
        podTemplates:
          - distribution: OnePerHost
            name: default-oneperhost-pod-template
And the GitHub Action run confirms it was always valid: https://github.com/hfuss/helmify/actions/runs/4638714352/jobs/8208790960
So now we're in a tough situation. Folks using v0.3.31+ may have come to rely on the new resulting string values, just like we relied on the resulting YAML values in v0.3.30.
We could either:
Understood. Can you please elaborate on the first option? What change do we need to introduce, or are you speaking about your project?
About the second option: having an extra flag for backward compatibility is OK, but it should not work exactly like before, because the ConfigMap was invalid.
We can make an exception only for kubebuilder: check if the property name is controller_manager_config.yaml, or parse the underlying YAML and check for kind: ControllerManagerConfig. But operator developers can break it by introducing other kinds of configs or using different property names.
Alternatively, we can just check whether a property is in YAML format and move everything to values.yaml in YAML format, as sketched below. But in this case we will have apiVersion, kind, etc. in values.yaml.
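A minimal sketch of what that alternative would mean for the generated values.yaml, reusing the manager config example from earlier (key names illustrative):

managerConfig:
  controllerManagerConfigYaml:
    apiVersion: controller-runtime.sigs.k8s.io/v1alpha1
    kind: ControllerManagerConfig
    health:
      healthProbeBindAddress: ":8081"
    rook:
      namespace: rook-ceph
      toolboxPodLabel: rook-ceph-tools

i.e. the apiVersion and kind of the embedded config document become overridable Helm values alongside the real settings.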
Due to PR https://github.com/arttor/helmify/pull/94, the use of a configMapGenerator with a YAML file in a Kustomize setup is generating an invalid ConfigMap. I get YAML as below, where you can see the line rendered with indent 1, which does not work with a complex set of values. You can see the same issue in the example provided in the PR: https://github.com/arttor/helmify/blob/main/examples/operator/templates/manager-config.yaml
Here's an example of what helm template --debug shows me, showing how the YAML is incorrectly inserted: