Issue reported by framled, closed 5 years ago
Is this a BUG REPORT: yes
Version of Helm and Kubernetes: Helm v2.14.0, Kubernetes v1.15.0
Which chart in which version: fluentd-elasticsearch
What happened: When trying to render the template, I got:

```
Error: render error in "fluentd-elasticsearch/templates/service.yaml": template: fluentd-elasticsearch/templates/service.yaml:10:3: executing "fluentd-elasticsearch/templates/service.yaml" at <include "fluentd-elasticsearch.labels" .>: error calling include: template: fluentd-elasticsearch/templates/_helpers.tpl:48:27: executing "fluentd-elasticsearch.labels" at <include "fluentd-elasticsearch.name" .>: error calling include: template: fluentd-elasticsearch/templates/_helpers.tpl:6:18: executing "fluentd-elasticsearch.name" at <.Chart.Name>: nil pointer evaluating interface {}.Name
```
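For context, a `nil pointer evaluating interface {}.Name` at `<.Chart.Name>` usually means the named template was invoked with a scope that is not the chart's root context, so `.Chart` is undefined. A minimal sketch of how this can happen (hypothetical illustration, not the chart's actual `service.yaml`): inside `range`, the dot is rebound to the loop element, and passing it to `include` breaks helpers that expect the root context. Passing `$` (the root context) instead avoids the nil pointer.

```yaml
# Broken: inside range, "." is the current port entry, so .Chart is nil
{{- range .Values.service.ports }}
  labels:
{{ include "fluentd-elasticsearch.labels" . | indent 4 }}
{{- end }}

# Fixed: "$" always refers to the root context, where .Chart is defined
{{- range .Values.service.ports }}
  labels:
{{ include "fluentd-elasticsearch.labels" $ | indent 4 }}
{{- end }}
```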
What you expected to happen: render fluentd manifest
How to reproduce it: with the following values.yaml:
```yaml
image:
  pullPolicy: IfNotPresent
  repository: quay.io/fluentd_elasticsearch/fluentd
  tag: v2.7.0

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  limits:
    memory: 500Mi
  requests:
    cpu: 100m
    memory: 200Mi

awsSigningSidecar:
  enabled: false

priorityClassName: ""

hostLogDir:
  varLog: /var/log
  dockerContainers: /var/lib/docker/containers
  libSystemdDir: /usr/lib64

elasticsearch:
  auth:
    enabled: false
  host: elasticsearch-client
  port: 9200
  buffer_chunk_limit: 2M
  buffer_queue_limit: 8
  logstash_prefix: 'logstash'

fluentdArgs: "--no-supervisor -q"

rbac:
  create: true

serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
  # The name of the ServiceAccount to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

updateStrategy:
  type: RollingUpdate

livenessProbe:
  enabled: true

serviceMonitor:
  enabled: false

prometheusRule:
  enabled: false

annotations: {}
  # prometheus.io/scrape: "true"
  # prometheus.io/port: "24231"

podSecurityPolicy:
  enabled: false

ingress:
  enabled: false

configMaps:
  useDefaults:
    systemConf: true
    containersInputConf: true
    systemInputConf: true
    forwardInputConf: true
    monitoringConf: false
    outputConf: true

extraConfigMaps:
  containers.input.conf: |-
    <match fluent.**>
      @type null
    </match>
    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/containers.log.pos
      tag raw.kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>
    # Detect exceptions in the log output and forward them as one log entry.
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>
    # Concatenate multi-line logs
    <filter **>
      @id filter_concat
      @type concat
      key message
      multiline_end_regexp /\n$/
      separator ""
    </filter>
    # Enriches records with Kubernetes metadata
    <filter kubernetes.**>
      @id filter_kubernetes_metadata
      @type kubernetes_metadata
    </filter>
    # Fixes json fields in Elasticsearch
    <filter kubernetes.**>
      @id filter_parser
      @type parser
      key_name log
      reserve_data true
      remove_key_name_field true
      <parse>
        @type multi_format
        <pattern>
          format json
        </pattern>
        <pattern>
          format none
        </pattern>
      </parse>
    </filter>
  output.conf: |-
    # Enriches records with Kubernetes metadata
    <filter **>
      @type record_transformer
      enable_ruby
      <record>
        enviroment "${environment}"
        global_log $${ record["log"] || record["MESSAGE"] }
      </record>
    </filter>
    <filter kubernetes.**>
      @type kubernetes_metadata
    </filter>
    <match **>
      @id elasticsearch
      @type elasticsearch
      @log_level info
      include_tag_key true
      type_name _doc
      host "#{ENV['OUTPUT_HOST']}"
      port "#{ENV['OUTPUT_PORT']}"
      scheme "#{ENV['OUTPUT_SCHEME']}"
      ssl_version "#{ENV['OUTPUT_SSL_VERSION']}"
      ssl_verify true
      logstash_format true
      logstash_prefix "#{ENV['LOGSTASH_PREFIX']}"
      reconnect_on_error true
      <buffer>
        @type file
        path /var/log/fluentd-buffers/kubernetes.system.buffer
        flush_mode interval
        retry_type exponential_backoff
        flush_thread_count 2
        flush_interval 5s
        retry_forever
        retry_max_interval 30
        chunk_limit_size "#{ENV['OUTPUT_BUFFER_CHUNK_LIMIT']}"
        queue_limit_length "#{ENV['OUTPUT_BUFFER_QUEUE_LIMIT']}"
        overflow_action block
      </buffer>
    </match>

## Persist data to a persistent volume
persistence:
  enabled: false

service:
  ports:
    - name: fluentd-tcp
      port: 24224
      protocol: TCP
      targetPort: 24224
      type: ClusterIp
    - name: fluentd-udp
      port: 24224
      protocol: UDP
      targetPort: 24224
      type: ClusterIp

nodeSelector: {}
tolerations: []
affinity: {}
```
Then render the chart:

```shell
helm fetch kiwigrid/fluentd-elasticsearch --untar
helm template fluentd-elasticsearch --values values.yaml > fluentd.yaml
```
Fixed with https://github.com/kiwigrid/helm-charts/pull/163.