norwoodj / helm-docs

A tool for automatically generating markdown documentation for helm charts
GNU General Public License v3.0

Templates not defined error with complicated README #48

Closed sc250024 closed 4 years ago

sc250024 commented 4 years ago

Description

The title of the issue is admittedly a bit weird, but I'm not sure what's going on with the underlying README.md.gotmpl file I'm using.

Long story short, I am helping to move the cluster-autoscaler Helm chart from the helm/charts repository to the kubernetes/autoscaler repository because of the eventual deprecation of the stable channel.

I am hoping to make the new README enabled with helm-docs, but I noticed that when trying to template the README.md file from that project, I receive errors that some of the templates are "not defined."

Expected Behavior

The template should render successfully, just like the full-template example.

Actual behavior

I get error messages for some of the templates, but not all of them:

All of the non-working templates receive an error similar to this:

WARN[2020-07-22T17:08:07+02:00] Error generating documentation for chart cluster-autoscaler: template: cluster-autoscaler:9:12: executing "cluster-autoscaler" at <{{template "chart.typeBadge" .}}>: template "chart.typeBadge" not defined

Additional

Here's the information about my underlying environment:

Here are the files I used. Note that the README.md.gotmpl contains only the templates that work. Adding any of the non-working templates listed above results in an error, and the file does not render.

README.md.gotmpl
{{ template "chart.header" . }}

{{ template "chart.description" . }}

{{ template "chart.type" . }}

{{ template "chart.typeLine" . }}

## TL;DR:

```console
$ helm repo add autoscaler https://kubernetes.github.io/autoscaler

$ helm install autoscaler/cluster-autoscaler --name my-release --set "autoscalingGroups[0].name=your-asg-name,autoscalingGroups[0].maxSize=10,autoscalingGroups[0].minSize=1"
```

## Introduction

This chart bootstraps a cluster-autoscaler deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.

## Prerequisites

- Helm 3+
- Kubernetes 1.8+
  - [Older versions](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#releases) may work by overriding the `image`. Cluster autoscaler internally simulates the scheduler and bugs between mismatched versions may be subtle.
- Azure AKS specific Prerequisites:
  - Kubernetes 1.10+ with RBAC-enabled.

## Previous Helm Chart

The previous `cluster-autoscaler` Helm chart hosted at [helm/charts](https://github.com/helm/charts) has been moved to this repository in accordance with the [Deprecation timeline](https://github.com/helm/charts#deprecation-timeline). Note that a few things have changed between this version and the old version:

- This repository **only** supports Helm chart installations using Helm 3+ since the `apiVersion` on the charts has been marked as `v2`.
- Previous versions of the Helm chart have not been migrated, and the version was reset to `1.0.0` at the outset. If you are looking for old versions of the chart, it's best to run `helm pull stable/cluster-autoscaler --version ` until you are ready to move to this repository's version.

## Installing the Chart

**By default, no deployment is created and nothing will autoscale**.

You must provide some minimal configuration, either to specify instance groups or enable auto-discovery. It is not recommended to do both.

Either:

- Set `autoDiscovery.clusterName` and tag your autoscaling groups appropriately (`--cloud-provider=aws` only) **or**
- Set at least one ASG as an element in the `autoscalingGroups` array with its three values: `name`, `minSize` and `maxSize`.

To install the chart with the release name `my-release`:

### AWS - Using auto-discovery of tagged instance groups

Auto-discovery finds ASGs with tags as below and automatically manages them based on the min and max size specified in the ASG. `cloudProvider=aws` only.

- Tag the ASGs with keys to match `.Values.autoDiscovery.tags`, by default: `k8s.io/cluster-autoscaler/enabled` and `k8s.io/cluster-autoscaler/`
- Verify the [IAM Permissions](#iam)
- Set `autoDiscovery.clusterName=`
- Set `awsRegion=`
- Set `awsAccessKeyID=` and `awsSecretAccessKey=` if you want to [use AWS credentials directly instead of an instance role](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#using-aws-credentials)

```console
$ helm install autoscaler/cluster-autoscaler --name my-release --set autoDiscovery.clusterName=
```

#### Specifying groups manually

Without auto-discovery, specify an array of elements, each containing an ASG name, min size, and max size. The sizes specified here will be applied to the ASG, assuming IAM permissions are correctly configured.

- Verify the [IAM Permissions](#iam)
- Either provide a yaml file setting `autoscalingGroups` (see values.yaml) or use `--set` e.g.:

```console
$ helm install autoscaler/cluster-autoscaler --name my-release --set "autoscalingGroups[0].name=your-asg-name,autoscalingGroups[0].maxSize=10,autoscalingGroups[0].minSize=1"
```
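The same configuration can live in a values file instead of `--set` flags. A minimal sketch (the file name `my-values.yaml` is hypothetical):

```yaml
# my-values.yaml -- equivalent to the --set flags above (hypothetical file name)
autoscalingGroups:
  - name: your-asg-name
    maxSize: 10
    minSize: 1
```

Install with `helm install autoscaler/cluster-autoscaler --name my-release --values my-values.yaml`.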

#### Auto-discovery

For auto-discovery of instances to work, they must be tagged with the keys in `.Values.autoDiscovery.tags`, which by default are
`k8s.io/cluster-autoscaler/enabled` and `k8s.io/cluster-autoscaler/`

The value of the tag does not matter, only the key.

An example kops spec excerpt:

```yaml
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: my.cluster.internal
spec:
  additionalPolicies:
    node: |
      [
        {"Effect":"Allow","Action":["autoscaling:DescribeAutoScalingGroups","autoscaling:DescribeAutoScalingInstances","autoscaling:DescribeLaunchConfigurations","autoscaling:DescribeTags","autoscaling:SetDesiredCapacity","autoscaling:TerminateInstanceInAutoScalingGroup"],"Resource":"*"}
      ]
      ...
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: my.cluster.internal
  name: my-instances
spec:
  cloudLabels:
    k8s.io/cluster-autoscaler/enabled: ""
    k8s.io/cluster-autoscaler/my.cluster.internal: ""
  image: kops.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2018-01-14
  machineType: r4.large
  maxSize: 4
  minSize: 0
```

In this example you would need to `--set autoDiscovery.clusterName=my.cluster.internal` when installing.

It is not recommended to try to mix this with setting `autoscalingGroups`

See the [autoscaler AWS documentation](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#auto-discovery-setup) for further discussion of the setup.

### GCE

The following parameters are required:

- `autoDiscovery.clusterName=any-name`
- `cloudProvider=gce`
- `autoscalingGroupsnamePrefix[0].name=your-ig-prefix,autoscalingGroupsnamePrefix[0].maxSize=10,autoscalingGroupsnamePrefix[0].minSize=1`

To use Managed Instance Group (MIG) auto-discovery, provide a YAML file setting `autoscalingGroupsnamePrefix` (see values.yaml) or use `--set` when installing the Chart - e.g.

```console
$ helm install autoscaler/cluster-autoscaler \
--name my-release \
--set autoDiscovery.clusterName= \
--set cloudProvider=gce \
--set "autoscalingGroupsnamePrefix[0].name=your-ig-prefix,autoscalingGroupsnamePrefix[0].maxSize=10,autoscalingGroupsnamePrefix[0].minSize=1"
```
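As a sketch, the equivalent MIG prefix configuration in a values file would be:

```yaml
# equivalent to the --set flags above
autoscalingGroupsnamePrefix:
  - name: your-ig-prefix
    maxSize: 10
    minSize: 1
```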

Note that `your-ig-prefix` should be a _prefix_ matching one or more MIGs, and _not_ the full name of the MIG. For example, to match multiple instance groups - `k8s-node-group-a-standard`, `k8s-node-group-b-gpu`, you would use a prefix of `k8s-node-group-`.

In the event you want to explicitly specify MIGs instead of using auto-discovery, set members of the `autoscalingGroups` array directly - e.g.

```console
# where 'n' is the index, starting at 0
--set autoscalingGroups[n].name=https://content.googleapis.com/compute/v1/projects/$PROJECTID/zones/$ZONENAME/instanceGroupManagers/$FULL-MIG-NAME,autoscalingGroups[n].maxSize=$MAXSIZE,autoscalingGroups[n].minSize=$MINSIZE
```

### Azure AKS

The following parameters are required:

- `cloudProvider=azure`
- `autoscalingGroups[0].name=your-agent-pool,autoscalingGroups[0].maxSize=10,autoscalingGroups[0].minSize=1`
- `azureClientID: "your-service-principal-app-id"`
- `azureClientSecret: "your-service-principal-client-secret"`
- `azureSubscriptionID: "your-azure-subscription-id"`
- `azureTenantID: "your-azure-tenant-id"`
- `azureClusterName: "your-aks-cluster-name"`
- `azureResourceGroup: "your-aks-cluster-resource-group-name"`
- `azureVMType: "AKS"`
- `azureNodeResourceGroup: "your-aks-cluster-node-resource-group"`
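Collected into a values file, these parameters might look like the following (all values are placeholders):

```yaml
cloudProvider: azure

autoscalingGroups:
  - name: your-agent-pool
    maxSize: 10
    minSize: 1

azureClientID: "your-service-principal-app-id"
azureClientSecret: "your-service-principal-client-secret"
azureSubscriptionID: "your-azure-subscription-id"
azureTenantID: "your-azure-tenant-id"
azureClusterName: "your-aks-cluster-name"
azureResourceGroup: "your-aks-cluster-resource-group-name"
azureVMType: "AKS"
azureNodeResourceGroup: "your-aks-cluster-node-resource-group"
```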

## Uninstalling the Chart

To uninstall `my-release`:

```console
$ helm uninstall my-release
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

> **Tip**: List all releases using `helm list` or start clean with `helm uninstall my-release`

## Additional Configuration

### AWS - IAM

The worker running the cluster autoscaler will need access to certain resources and actions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeTags",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup"
      ],
      "Resource": "*"
    }
  ]
}
```

- `DescribeTags` is required for autodiscovery.
- `DescribeLaunchConfigurations` is required to scale up an ASG from 0.

Unfortunately, AWS does not yet support ARNs for autoscaling groups, so you must use `"*"` as the resource. More information [here](http://docs.aws.amazon.com/autoscaling/latest/userguide/IAM.html#UsingWithAutoScaling_Actions).

### AWS - IAM Roles for Service Accounts (IRSA)

For Kubernetes clusters that use Amazon EKS, the service account can be configured with an IAM role using [IAM Roles for Service Accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) to avoid needing to grant access to the worker nodes for AWS resources.

In order to accomplish this, you will first need to create a new IAM role with the above-mentioned policies. Take care in [configuring the trust relationship](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts-technical-overview.html#iam-role-configuration) to restrict access to just the service account used by the cluster autoscaler.

Once you have the IAM role configured, you would then need to `--set rbac.serviceAccountAnnotations."eks\.amazonaws\.com/role-arn"=arn:aws:iam::123456789012:role/MyRoleName` when installing.
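In a values file the same annotation needs no dot-escaping (key name per the command above; the role ARN is a placeholder):

```yaml
rbac:
  serviceAccountAnnotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/MyRoleName
```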

## Troubleshooting

The chart will deploy successfully even if the container arguments are incorrect. A few minutes after starting,
`kubectl logs -l "app=aws-cluster-autoscaler" --tail=50` should loop through something like

```
polling_autoscaler.go:111] Poll finished
static_autoscaler.go:97] Starting main loop
utils.go:435] No pod using affinity / antiaffinity found in cluster, disabling affinity predicate for this loop
static_autoscaler.go:230] Filtering out schedulables
```

If not, find a pod that the deployment created and `describe` it, paying close attention to the arguments under `Command`, e.g.:

```
Containers:
  cluster-autoscaler:
    Command:
      ./cluster-autoscaler
      --cloud-provider=aws
# if specifying ASGs manually
      --nodes=1:10:your-scaling-group-name
# if using autodiscovery
      --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/
      --v=4
```
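For example, assuming the chart's default `app=aws-cluster-autoscaler` label (the pod name below is a placeholder):

```console
$ kubectl get pods -l "app=aws-cluster-autoscaler"
$ kubectl describe pod my-release-aws-cluster-autoscaler-xxxxx
```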

### PodSecurityPolicy

Though sufficient for the majority of installations, the default PodSecurityPolicy _could_ be too restrictive depending on the specifics of your release. Please make sure to check that the template fits any customizations you have made, or disable it by setting `rbac.pspEnabled` to `false`.
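For instance, disabling the PodSecurityPolicy via a values file is a one-liner:

```yaml
rbac:
  pspEnabled: false
```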

{{ template "chart.valuesSection" . }}
Chart.yaml

```yaml
apiVersion: v2
appVersion: 1.18.1
description: Scales Kubernetes worker nodes within autoscaling groups.
engine: gotpl
home: https://github.com/kubernetes/autoscaler
icon: https://github.com/kubernetes/kubernetes/blob/master/logo/logo.png
maintainers:
  - email: guyjtempleton@googlemail.com
    name: gjtempleton
  - email: mgoodness@gmail.com
    name: mgoodness
  - email: scott.crooks@gmail.com
    name: sc250024
  - email: e.bailey@sportradar.com
    name: yurrriq
name: cluster-autoscaler
sources:
  - https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler
  - https://github.com/spotinst/kubernetes-autoscaler/tree/master/cluster-autoscaler
type: application
version: 1.0.0
```
values.yaml

```yaml
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
# affinity -- Affinity for pod assignment
affinity: {}

autoDiscovery:
  # Only cloudProvider `aws` and `gce` are supported by auto-discovery at this time
  # AWS: Set tags as described in https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#auto-discovery-setup

  # autoDiscovery.clusterName -- Enable autodiscovery for name in ASG tag (only `cloudProvider=aws`). Must be set for `cloudProvider=gce`, but no MIG tagging required.
  clusterName: # cluster.local

  # autoDiscovery.tags -- ASG tags to match, run through `tpl`.
  tags:
    - k8s.io/cluster-autoscaler/enabled
    - k8s.io/cluster-autoscaler/{{ .Values.autoDiscovery.clusterName }}
    # - kubernetes.io/cluster/{{ .Values.autoDiscovery.clusterName }}

# autoscalingGroups -- For AWS. At least one element is required if not using `autoDiscovery`. For example:
# - name: asg1
#   maxSize: 2
#   minSize: 1
autoscalingGroups: []
# - name: asg1
#   maxSize: 2
#   minSize: 1
# - name: asg2
#   maxSize: 2
#   minSize: 1

# autoscalingGroupsnamePrefix -- For GCE. At least one element is required if not using `autoDiscovery`. For example:
# - name: ig01
#   maxSize: 10
#   minSize: 0
autoscalingGroupsnamePrefix: []
# - name: ig01
#   maxSize: 10
#   minSize: 0
# - name: ig02
#   maxSize: 10
#   minSize: 0

# awsAccessKeyID -- AWS access key ID ([if AWS user keys used](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#using-aws-credentials))
awsAccessKeyID: ""

# awsRegion -- AWS region (required if `cloudProvider=aws`)
awsRegion: us-east-1

# awsSecretAccessKey -- AWS access secret key ([if AWS user keys used](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#using-aws-credentials))
awsSecretAccessKey: ""

# azureClientID -- Service Principal ClientID with contributor permission to Cluster and Node ResourceGroup.
# Required if `cloudProvider=azure`
azureClientID: ""

# azureClientSecret -- Service Principal ClientSecret with contributor permission to Cluster and Node ResourceGroup.
# Required if `cloudProvider=azure`
azureClientSecret: ""

# azureResourceGroup -- Azure resource group that the cluster is located.
# Required if `cloudProvider=azure`
azureResourceGroup: ""

# azureSubscriptionID -- Azure subscription where the resources are located.
# Required if `cloudProvider=azure`
azureSubscriptionID: ""

# azureTenantID -- Azure tenant where the resources are located.
# Required if `cloudProvider=azure`
azureTenantID: ""

# azureVMType -- Azure VM type.
azureVMType: "AKS"

# azureClusterName -- Azure AKS cluster name.
# Required if `cloudProvider=azure`
azureClusterName: ""

# azureNodeResourceGroup -- Azure resource group where the cluster's nodes are located, typically set as `MC___`.
# Required if `cloudProvider=azure`
azureNodeResourceGroup: ""

# azureUseManagedIdentityExtension -- Whether to use Azure's managed identity extension for credentials. If using MSI, ensure subscription ID and resource group are set.
azureUseManagedIdentityExtension: false

# cloudConfigPath -- Configuration file for cloud provider.
cloudConfigPath: /etc/gce.conf

# cloudProvider -- The cloud provider where the autoscaler runs.
# Currently only `gce`, `aws`, and `azure` are supported.
# `aws` supported for AWS. `gce` for GCE. `azure` for Azure AKS.
cloudProvider: aws

# containerSecurityContext -- [Security context for container](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/)
containerSecurityContext: {}
# capabilities:
#   drop:
#     - ALL

# dnsPolicy -- Defaults to `ClusterFirst`. Valid values are:
# `ClusterFirstWithHostNet`, `ClusterFirst`, `Default` or `None`.
# If autoscaler does not depend on cluster DNS, recommended to set this to `Default`.
dnsPolicy: ClusterFirst

## Priorities Expander
# expanderPriorities -- The expanderPriorities is used if `extraArgs.expander` is set to `priority` and expanderPriorities is also set with the priorities.
# If `extraArgs.expander` is set to `priority`, then expanderPriorities is used to define cluster-autoscaler-priority-expander priorities.
# See: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/expander/priority/readme.md
expanderPriorities: {}

# extraArgs -- Additional container arguments.
extraArgs:
  logtostderr: true
  stderrthreshold: info
  v: 4
  # write-status-configmap: true
  # leader-elect: true
  # skip-nodes-with-local-storage: false
  # expander: least-waste
  # scale-down-enabled: true
  # balance-similar-node-groups: true
  # min-replica-count: 2
  # scale-down-utilization-threshold: 0.5
  # scale-down-non-empty-candidates-count: 5
  # max-node-provision-time: 15m0s
  # scan-interval: 10s
  # scale-down-delay-after-add: 10m
  # scale-down-delay-after-delete: 0s
  # scale-down-delay-after-failure: 3m
  # scale-down-unneeded-time: 10m
  # skip-nodes-with-local-storage: false
  # skip-nodes-with-system-pods: true

# extraEnv -- Additional container environment variables.
extraEnv: {}

# fullnameOverride -- String to fully override `cluster-autoscaler.fullname` template.
fullnameOverride: ""

image:
  # image.repository -- Image repository
  repository: us.gcr.io/k8s-artifacts-prod/autoscaling/cluster-autoscaler
  # image.tag -- Image tag
  tag: v1.18.1
  # image.pullPolicy -- Image pull policy
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # image.pullSecrets -- Image pull secrets
  pullSecrets: []
  # - myRegistrKeySecretName

# kubeTargetVersionOverride -- Allow overridding the `.Capabilities.KubeVersion.GitVersion` check. Useful for `helm template` commands.
kubeTargetVersionOverride: ""

# nameOverride -- String to partially override `cluster-autoscaler.fullname` template (will maintain the release name)
nameOverride: ""

# nodeSelector -- Node labels for pod assignment. Ref: https://kubernetes.io/docs/user-guide/node-selection/.
nodeSelector: {}

# podAnnotations -- Annotations to add to each pod.
podAnnotations: {}

# podDisruptionBudget -- Pod disruption budget.
podDisruptionBudget:
  maxUnavailable: 1
  # minAvailable: 2

# podLabels -- Labels to add to each pod.
podLabels: {}

# priorityClassName -- priorityClassName
priorityClassName: ""

rbac:
  # rbac.create -- If `true`, create and use RBAC resources.
  create: true
  # rbac.pspEnabled -- If `true`, creates and uses RBAC resources required in the cluster with [Pod Security Policies](https://kubernetes.io/docs/concepts/policy/pod-security-policy/) enabled.
  # Must be used with `rbac.create` set to `true`.
  pspEnabled: false
  serviceAccount:
    # rbac.serviceAccount.annotations -- Additional Service Account annotations.
    annotations: {}
    # rbac.serviceAccount.create -- If `true` and `rbac.create` is also true, a Service Account will be created.
    create: true
    # rbac.serviceAccount.name -- The name of the ServiceAccount to use. If not set and create is `true`, a name is generated using the fullname template.
    name: ""

# replicaCount -- Desired number of pods
replicaCount: 1

# resources -- Pod resource requests and limits.
resources: {}
# limits:
#   cpu: 100m
#   memory: 300Mi
# requests:
#   cpu: 100m
#   memory: 300Mi

# securityContext -- [Security context for pod](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/)
securityContext: {}
# runAsNonRoot: true
# runAsUser: 1001
# runAsGroup: 1001

service:
  # service.annotations -- Annotations to add to service
  annotations: {}
  # service.externalIPs -- List of IP addresses at which the service is available. Ref: https://kubernetes.io/docs/user-guide/services/#external-ips.
  externalIPs: []
  # service.loadBalancerIP -- IP address to assign to load balancer (if supported).
  loadBalancerIP: ""
  # service.loadBalancerSourceRanges -- List of IP CIDRs allowed access to load balancer (if supported).
  loadBalancerSourceRanges: []
  # service.servicePort -- Service port to expose.
  servicePort: 8085
  # service.portName -- Name for service port.
  portName: http
  # service.type -- Type of service to create.
  type: ClusterIP

## Are you using Prometheus Operator?
serviceMonitor:
  # serviceMonitor.enabled -- If true, creates a Prometheus Operator ServiceMonitor.
  enabled: false
  # serviceMonitor.interval -- Interval that Prometheus scrapes Cluster Autoscaler metrics.
  interval: 10s
  # serviceMonitor.namespace -- Namespace which Prometheus is running in.
  namespace: monitoring
  ## [Prometheus Selector Label](https://github.com/helm/charts/tree/master/stable/prometheus-operator#prometheus-operator-1)
  ## [Kube Prometheus Selector Label](https://github.com/helm/charts/tree/master/stable/prometheus-operator#exporters)
  # serviceMonitor.selector -- Default to kube-prometheus install (CoreOS recommended), but should be set according to Prometheus install.
  selector:
    release: prometheus-operator
  # serviceMonitor.path -- The path to scrape for metrics; autoscaler exposes `/metrics` (this is standard)
  path: /metrics

# tolerations -- List of node taints to tolerate (requires Kubernetes >= 1.6).
tolerations: []

# updateStrategy -- [Deployment update strategy](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy)
updateStrategy: {}
# rollingUpdate:
#   maxSurge: 1
#   maxUnavailable: 0
# type: RollingUpdate
```
HelderGualberto commented 4 years ago

Templates like these only work with the master branch version. Try cloning the git repo and building it from the master branch. That worked for me.

For macOS: `export CGO_ENABLED=0 GOOS=darwin GOARCH=amd64`
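A sketch of building from master (assumes a Go toolchain is installed; the `cmd/helm-docs` path reflects the repository layout at the time):

```console
$ git clone https://github.com/norwoodj/helm-docs.git
$ cd helm-docs
$ # on macOS: export CGO_ENABLED=0 GOOS=darwin GOARCH=amd64
$ go build -o helm-docs ./cmd/helm-docs
$ ./helm-docs --help
```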

sc250024 commented 4 years ago

> This kind of templates just work with the master branch version. Try to clone the git repo and build it from master branch. It worked for me.
>
> For MacOS export CGO_ENABLED=0 GOOS=darwin GOARCH=amd64

Wow, it was that simple. Thanks @HelderGualberto! Indeed, that's the case: https://github.com/norwoodj/helm-docs/compare/v0.13.0...master

So this feature hasn't been tagged and released yet? @norwoodj is there a reason for the delay in cutting a release?

norwoodj commented 4 years ago

No reason, no. Sorry, I've been dealing with carpal tunnel issues the last few months, and have had to completely ignore this repository. I will be better going forward.

sc250024 commented 4 years ago

@norwoodj Do you need help with the repo? I'm a pretty avid user, and can start picking up some issues.