helm / charts

⚠️ (OBSOLETE) Curated applications for Kubernetes
Apache License 2.0

Create Best Practices doc #44

Closed - viglesiasce closed this issue 5 years ago

viglesiasce commented 8 years ago

We need a place where we can establish and document best practices for chart creation. This should then be referenced from the README in order to point users to this repository's conventions.

viglesiasce commented 8 years ago

Things that need to be discussed:

prydonius commented 8 years ago

Thanks for capturing this @viglesiasce. @sameersbn put together a great set of Kubernetes manifest and Helm chart authoring guides (https://github.com/bitnami/charts/tree/master/_docs) that could provide a base for something.

chrislovecnm commented 7 years ago

The one thing that has been bugging me is Docker standards. @viglesiasce we probably need to get together with a couple of people to discuss, or we can hijack a SIG Apps meeting.

chrislovecnm commented 7 years ago

I know I am beating this dead horse just a bit, but we need to go through the existing charts and get http://kubernetes.io/docs/user-guide/compute-resources/ laid out. We should probably also update any standards docs/guidelines that we have.

michelleN commented 7 years ago

A doc for us to collaborate on: https://docs.google.com/document/d/120IobQc4Q1__wSL5XZ7cSJEm2LDePoamUbX6N9SVuBA/edit?usp=sharing

chrislovecnm commented 7 years ago

@michelleN should we set 15 min in the helm sig or sig-apps to knock this out? I am knee deep in writing the HA upgrade guide, so I am not sure how we can knock this out.

viglesiasce commented 7 years ago

@chrislovecnm have a look at the technical requirements that I bolstered today. I think it will be best to just build out that section in the docs: https://github.com/kubernetes/charts/blob/master/CONTRIBUTING.md#technical-requirements

chrislovecnm commented 7 years ago

Nice! Added a bunch of notes to the doc.

chrislovecnm commented 7 years ago

We need a linter to check for that stuff btw

chrislovecnm commented 7 years ago

For ease of reading

Also, upgrades are not supported by PetSets... just a note for that section.

gtaylor commented 7 years ago

My pet list of things to consistify, after reading through a good number of these charts:

mgoodness commented 7 years ago

My best practice recs:

I explained my reasoning for the first two items in #168 (specifically here, here, and here).

The last item is a pattern I use at work, explained here.

sputnick commented 7 years ago

I would like to add a best practice item request... I would love to see the recommendation and practice of using a global property

persistence:
  enabled: true/false

in all official stable charts that completely disables PVs/PVCs in the main chart and its dependent charts.

Why? On k8s clusters with no PVs, you have to go figure out which non-standard parameters disable the PVC for each snowflake chart's values, and then also for its dependencies. This is true e.g. for existing charts like WordPress in K8s Stable (you need to turn off PVs for MariaDB specifically).

In my view, this is a UX problem for newbies using the official stable charts: on clusters like "kube-solo" or "minikube", which many people start on, there are no PVs, and in those cases there will never be a mixed situation where only part of the chart should have PVs. If this became a standard (you should also be able to turn individual chart PVCs on or off), it would undoubtedly mean less grief for users and chart maintainers...

Also, if this became a best practice standard then perhaps this could be added as a Helm install option (further improving UX).
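For illustration, a minimal sketch of what such a toggle could look like, assuming a parent chart with a mariadb dependency that honors the same flag (all names hypothetical, not an agreed standard):

# values.yaml (hypothetical) - one flag for the main chart, mirrored per dependency
persistence:
  enabled: false

mariadb:
  persistence:
    enabled: false

# templates/pvc.yaml (sketch) - the PVC is only rendered when persistence is enabled
{{- if .Values.persistence.enabled }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ template "fullname" . }}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
{{- end }}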

gtaylor commented 7 years ago

I would like to add a best practice item request... I would love to see the recommendation and practice of using a global property

I'm -1 on this, as outlined in the matching discussion on the gitlab-ce PR:

https://github.com/kubernetes/charts/pull/202#issuecomment-260711737

tl;dr: Some charts like factorio, gitlab, and even Cassandra store various bits of data in different places. There are cases when some of your users might want persistence for one, but not the other. For example, we'd be actively wasting money/quota/resources for many of our factorio users, since the default install doesn't come with any mods to persist.

I think our policy of sane defaults covers the split, in that users with special needs will read the chart README/values and adjust as needed.

On k8s clusters with no PVs, you have to go figure out which non-standard parameters disable the PVC for each snowflake chart's values, and then also for its dependencies

What percentage of our userbase do we think will be put in a situation like this? Is it enough to warrant the loss of flexibility?

Part of this is requiring 1.4+, which it looks like we may be leaning towards (based on @prydonius's last comment and the thumbs-ups): https://github.com/kubernetes/helm/issues/1528

sputnick commented 7 years ago

@gtaylor did you read my answer on #202? The use case is clusters with no persistent storage, which right now is pretty much every out-of-the-box do-it-yourself dev cluster like minikube, kube-solo, or kube-cluster. And I'm suggesting having both per-dependency flags and a global one, not either/or. So it's more flexible.

viglesiasce commented 7 years ago

Another one for this list:

Ensure repository (not just tag) is configurable. https://github.com/kubernetes/charts/pull/276/files

blakebarnett commented 7 years ago

Echoing what @mgoodness said above, app labels are problematic (and long) when they include the release. I've also noticed that many charts don't have an app label at all. We've needed to add component labels to various charts so that we can select pieces to use with kubectl port-forward, etc. I also add the release label to selectors for specificity.

Example changes for the prometheus server service:

@@ -2,7 +2,7 @@
 kind: Service
 metadata:
   labels:
-    app: {{ template "fullname" . }}
+    app: "{{ .Chart.Name }}"
     chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
     component: "{{ .Values.server.name }}"
     heritage: "{{ .Release.Service }}"
@@ -15,6 +15,7 @@
       protocol: TCP
       targetPort: 9090
   selector:
-    app: {{ template "fullname" . }}
+    app: "{{ .Chart.Name }}"
+    release: "{{ .Release.Name }}"
     component: "{{ .Values.server.name }}"
   type: "{{ .Values.server.serviceType }}"

chancez commented 7 years ago

I'd like to know what the best practices are for indicating whether values are required or optional.

mgoodness commented 7 years ago

In addition to my previous suggestions, I've taken to changing the fullname template in my internal charts. Rather than having it generate RELEASE_NAME-CHART_NAME, it's flipped to CHART_NAME-RELEASE_NAME. If a chart handles multiple components, I'll do something like CHART_NAME-COMPONENT-RELEASE_NAME. I find it's easier to view multiple releases of the same chart in kubectl when using that pattern.
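For example, a sketch of an overridden fullname partial in _helpers.tpl that yields CHART_NAME-RELEASE_NAME (approximate, assuming the usual 63-character name limit):

# _helpers.tpl (sketch) - flipped ordering
{{- define "fullname" -}}
{{- printf "%s-%s" .Chart.Name .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}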

viglesiasce commented 7 years ago

Pod names should include both the release name and chart name. When using a chart as a dependency, it's good to be able to distinguish pods that belong to the same release.

@prydonius,

Do we have a macro for this yet, or is the "fullname" template in _helpers.tpl the best we have right now?

prydonius commented 7 years ago

@viglesiasce the fullname template is the best we have for now, see https://github.com/kubernetes/helm/issues/1186 - looks like it's slated for 2.3.0
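For reference, the fullname partial as it commonly appeared in chart _helpers.tpl files around this time (approximate; some charts truncate differently):

# _helpers.tpl (typical form)
{{- define "fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}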

so0k commented 7 years ago

would it make sense to say:

it is preferred to include defaults in a packaged values.yaml over using the default function in the manifest templates?

it seems having default everywhere in a template makes it less readable, and a secure default values.yaml needs to be provided according to the technical requirements anyway?

(plus, you can see a lot of documentation is added to most stable charts' values.yaml, which gives a single place to look instead of that information being spread across the files in the templates folder)
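As a rough illustration of that preference (the field name is hypothetical):

# values.yaml - the default lives here, documented in one place
## image to deploy, including tag
image: "nginx:1.11"

# manifest template fragment - no inline default needed
image: {{ .Values.image }}

# versus the less readable inline form this replaces
image: {{ .Values.image | default "nginx:1.11" }}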

I've tried to take the comments in this thread, as well as Bitnami's documentation, into account in the conventions for my company - https://gist.github.com/so0k/f927a4b60003cedd101a0911757c605a

viglesiasce commented 7 years ago

Checksum your configs in an annotation on your deployments so that config changes can automatically update your pods:

annotations:
  checksum/config: {{ include (print $.Chart.Name "/templates/configmap/spinnaker-config.yaml") . | sha256sum }}

so0k commented 7 years ago

@viglesiasce k8s ConfigMaps mounted in volumes are updated and fire inotify... I think the above is more applicable for Secrets (I'm not sure about ConfigMaps referenced in env vars)

mgoodness commented 7 years ago

@so0k I think exceptions can (should?) be made for applications that monitor filesystem events and reload their configuration. But for the majority of cases, it's helpful to force a re-deploy when ConfigMaps and/or Secrets change.

mgoodness commented 7 years ago

Standardize (to the extent possible) how Ingresses are created. It's the wild west out there!
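One possible common shape for Ingress configuration in values.yaml, sketched purely as an illustration (these field names were not an agreed standard at the time):

# values.yaml (illustrative)
ingress:
  enabled: false
  annotations: {}
  hosts:
    - chart-example.local
  tls: []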

kfox1111 commented 7 years ago

some I've hit recently:

Need a way to set nodeSelectors in templates. Otherwise, placement is completely random, which fails on non-uniform clusters.

For svc's, I would like a way to set the external IP on them so I can access them from outside the cluster without having to set up an Ingress.
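Both could plausibly be wired through values; a rough sketch (names illustrative):

# values.yaml (sketch)
nodeSelector: {}
service:
  externalIPs: []

# deployment template fragment - pass the node selector through when set
{{- if .Values.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}

# service template fragment - expose external IPs without an Ingress
{{- if .Values.service.externalIPs }}
  externalIPs:
{{ toYaml .Values.service.externalIPs | indent 4 }}
{{- end }}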

gladiatr72 commented 7 years ago


For Helm Charts to really take off as a standard way of managing k8s deployments, I think it is important for chart consumers to understand the usage patterns any Best Practices are designed to work best with.


[...] Pod names should include both the release name and chart name. When using a chart as a dependency, it's good to be able to distinguish pods that belong to the same release.

I sort of understand using this pattern with deployment/rs/rc/pod/etc. names, though I must say that I have gotten used to using label filters rather than relying on kubectl's default get output.

Specifically, with services, names such as chartname-flinging-monkey don't work. (I have also taken to putting the chart name before the release name.) Even with specifically named releases, names end up similar to `component-dev-feature-YYZ420`.

e.g.: I have a telegraf daemonset that exposes its statsd service to the cluster. One of the modifications I made to the telegraf chart was to set .metadata.name to match the exposed service port.

Before:

dig +search +short _statsd._udp.telegraf-ds-prod-monitor.monitors SRV

After:

dig +search +short _statsd._udp.statsd.monitors SRV

It gets even uglier when dealing with service configuration data via the container environment. (not terribly useful for inter-namespace work, but still)

With that said, I'm also interested in understanding the production context that this pattern comes from. The pattern seems to run counter to maintaining loose coupling between resources and components. I'm fairly certain my perception is due to my own lack of understanding rather than any lack of wizardry on the part of the implementors.

technosophos commented 7 years ago

Best practices for naming of template partials is discussed here: https://github.com/kubernetes/helm/issues/1688

technosophos commented 7 years ago

Exposing the resource-policy annotation on PV/PVC is covered here: https://github.com/kubernetes/helm/issues/1875

ghost commented 7 years ago

The summary provided in @so0k's gist actually summarises most of the things we've talked about here, plus what I have seen in multiple charts in the kubernetes repo.

In our startup, we are currently converting our Kubernetes YAML files by writing a lot of charts. We are planning to use @so0k's gist for now.

ghost commented 7 years ago

Also, we follow "if you want best practices followed by most people, you should provide practical implementation & common use case examples" in our company.

For that, we are thinking about using "includes" or "blocks" for common label and name patterns such as app, heritage, release, chart, and component, and maybe templates for container-name, service-name, etc., so that we don't need to remember or copy-paste each time to follow the best practices.

so0k commented 7 years ago

I'd like to make small adjustments (haven't applied these back to the gist):

ghost commented 7 years ago

For helm,

Is everyone (@viglesiasce @mgoodness @technosophos) on the same page about improving upon @so0k's gist and adding it to the docs as best_practices.md?

@so0k What do you think about providing includes, blocks, or templates for the best practices you summarised?

Sorry for bugging you all with so many messages; I want helm charts to move forward before we move everything to helm charts.

so0k commented 7 years ago

includes / blocks / templates for best practices such as the labels sounds good - there must already be charts out there that follow this practice?

ps, I personally also like the following snippet I saw in one of @ohaiwalt's charts:

This converts hyphenated keys in values.yaml maps to environment variables:

# values.yaml
config:
  kelsey-rating: "pretty dope"
  env: gcpug
  asset-pipeline: false

# manifest template fragment
env:
{{- range $key, $value := .Values.config }}
- name: {{ $key | upper | replace "-" "_" }}
  value: {{ $value | quote }}
{{- end }}

# rendered template fragment:
env:
- name: ASSET_PIPELINE
  value: "false"
- name: ENV
  value: "gcpug"
- name: KELSEY_RATING
  value: "pretty dope"

Example chart from a workshop I gave last week.

ghost commented 7 years ago

...to make a more re-usable chart skeleton... apps with a single component should have an agreed prefix (none or app- ?)

None, IMHO, because multi-component charts are an app as a whole too.

I do find the default fullname template to be useful sometimes... I think my understanding of the way the subchart is referenced was a bit wrong and you still need to re-define the scoped fullname template in the parent chart anyway

I've seen multi-component charts use scoped fullnames to name each component's YAML with its own name, like here.

For subcharts, I haven't tried it, but a subchart's fullname template will use the parent chart's .Chart and .Release, so it should produce the same fullname in parent and subcharts, as I remember from the docs.

My argumentation for nested hierarchies kind of falls away when you always have to document all options in values.yaml (and this is preferred over sprinkling | default in the templates). ...

I don't like default in the templates either; defaults should be given in values.yaml so they are all in one place.

ghost commented 7 years ago

@so0k

I am just writing the following for demonstration purposes.

For deploy.yaml:

...
metadata:
  name: {{ template "deployment-name" . }}
  labels:
{{ include "labels" . | indent 4 }}
spec:
  template:
    metadata:
      labels:
{{ include "labels" . | indent 8 }}
    spec:
      containers:
      - name: {{ template "container-name" . }}
...
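The fragment above assumes partials along these lines in _helpers.tpl; a minimal sketch using the label set discussed earlier in this thread (the deployment-name helper shown here is hypothetical):

# _helpers.tpl (sketch)
{{- define "labels" -}}
app: {{ .Chart.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
{{- end -}}

{{- define "deployment-name" -}}
{{- printf "%s-%s" .Chart.Name .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}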

ghost commented 7 years ago

@so0k Also, the implication here is that we should have the labels (app, heritage, ...) and names (name, fullname, and the different patterns) from the best practices defined in _helpers.tpl.

Chart developers can add their own labels directly to the YAML templates just above the {{ include "labels" ... }}. We don't limit them; we just make it easy to follow the conventions.

ghost commented 7 years ago

@technosophos added a best practices guide at kubernetes/helm#2007 for Helm 2.3.

so0k commented 7 years ago

I hadn't seen it before - thanks for the link!

So we can PR on those documents to add sample templates for default labels?

mgoodness commented 7 years ago

That's also news to me. Give us maintainers a chance to review and decide if we want to adopt those as the best practices for this repo. We'll also want to consider all the suggestions/requests made in this thread.

ghost commented 7 years ago

Regarding @mgoodness's first point: I invite everyone to kubernetes/helm#2007 to review the current state of the guide. I believe we still have time for small changes to that PR before 2.3.

Then we should shape and move the suggestions here by discussing them in a new PR at kubernetes/helm.

Unsurprisingly, @so0k's gist mostly overlaps with the best practices guide, but it also includes additional suggestions that could be added as new sections of the guide. I believe it is a good starting point for our new PR.

@so0k if you are willing to work on this, we can work together on that new PR and everyone is welcome.

Our plan is maybe:

If anyone (esp. @so0k) wants to join me (actively), I'd like to work on this together.

ghost commented 7 years ago

After the best practices are shaped more clearly, we can add suggested comments and best practices to the default starter pack in another PR.

IMHO, the default starter pack should closely reflect what is written in the best practices guide.

@so0k That is a better place to discuss including the one template for recommended labels that I mentioned before.

scottrigby commented 7 years ago

Hey everyone, where is the best place to address yaml indentation inconsistencies across the stable charts?

For example, is there a standard for YAML array indentation?

- One
- Two

vs

  - One
  - Two

Both are legit, but most docs I see across projects (e.g. Symfony) use the second. Do the charts have a standard?

I was talking with @prydonius about YAML linting - he noted https://yamllint.readthedocs.io/en/latest/rules.html#module-yamllint.rules.hyphens (personally, I am an advocate for indentation: {spaces: 2, indent-sequences: consistent}, but really, whatever the standard, as long as it's consistent). Perhaps we could add this as an automated PR check in the charts repo? But in either case, it seems this would be a good helper command in helm.
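For what it's worth, that preference maps to a small yamllint configuration, e.g.:

# .yamllint (sketch)
extends: default
rules:
  indentation:
    spaces: 2
    indent-sequences: consistent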

I noticed this was raised in this comment https://github.com/kubernetes/charts/issues/44#issuecomment-256472424 and also in https://github.com/kubernetes/helm/issues/1663. I sought this out in response to https://github.com/kubernetes/charts/pull/920. I'll be happy to follow up wherever is best, if anyone has a good idea of where to go with this.

bacongobbler commented 7 years ago

Should this be closed with the introduction of https://docs.helm.sh/chart_best_practices/#the-chart-best-practices-guide?

scottrigby commented 7 years ago

@bacongobbler That makes sense. And we can continue to discuss these topics in separate issues/PRs against https://github.com/kubernetes/helm/tree/master/docs/chart_best_practices?

fejta-bot commented 6 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta. /lifecycle stale

fejta-bot commented 6 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten /remove-lifecycle stale

kfox1111 commented 6 years ago

/remove-lifecycle rotten /remove-lifecycle stale

fejta-bot commented 6 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale