Open gallowaystorm opened 2 years ago
The Helm TF provider is not ideal when it comes to values. I got rid of most of these issues using this method (I don't use it for Datadog, but I was using it for Splunk):
locals {
  splunk_connect_for_kubernetes_values = <<EOL
---
global:
  logLevel: ${var.log_level}
  splunk:
    hec:
      protocol: https
      token: ${local.splunk_hec_token}
      host: ${local.splunk_hec_host}
      port: 443
      indexName: ${local.splunk_index}
splunk-kubernetes-logging:
  enabled: true
  containers:
    logFormatType: json
    logFormat: "%Y-%m-%dT%H:%M:%S.%N%:z"
  kubernetes:
    clusterName: ${var.cluster_id}
    securityContext: true
  journalLogPath: /var/log/journal
  podSecurityPolicy:
    create: false
    apparmor_security: false
  k8sMetadata:
    # Pod labels to collect
    podLabels:
      - app
      - k8s-app
      - release
  customMetadata:
    - name: "sdlc"
      value: "${var.common["sdlcenv"]}"
    - name: "aws_account_id"
      value: "${module.gatherer.account_id}"
    - name: "aws_account_name"
      value: "${data.aws_iam_account_alias.current.id}"
    - name: "aws_region"
      value: "${module.gatherer.region_name}"
    - name: "segment_business_unit"
      value: "${module.gatherer.common["segment_business_unit"]}"
    - name: "application_name"
      value: "${module.gatherer.common["application_name"]}"
    - name: "short_name"
      value: "${var.common["product"]}"
splunk-kubernetes-objects:
  enabled: false
splunk-kubernetes-metrics:
  enabled: false
EOL
}
resource "helm_release" "splunk_connect_for_kubernetes" {
  <removed>
  values = [
    yamlencode(yamldecode(local.splunk_connect_for_kubernetes_values))
  ]
}
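A note on why the yamldecode/yamlencode round-trip above helps (a minimal sketch, names hypothetical): decoding validates the interpolated heredoc as YAML during plan, so a bad interpolation fails fast instead of producing a broken release.

```hcl
locals {
  # Hypothetical minimal values document with an interpolation in it.
  raw_values = <<-EOT
    global:
      logLevel: ${var.log_level}
  EOT
}

locals {
  # yamldecode() errors at plan time if raw_values is not valid YAML;
  # yamlencode() then hands the provider a canonically formatted document.
  normalized_values = yamlencode(yamldecode(local.raw_values))
}
```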
I used this approach:

{
  name  = "agents.containers.agent.env[0].name"
  value = "DD_SECRET_BACKEND_COMMAND"
},
{
  name  = "agents.containers.agent.env[0].value"
  value = "/readsecret_multiple_providers.sh"
},
Hmm, none of those seem to have worked for me. Has there been any update on this?
I use the following approach, where the values.yaml file is the same as in the examples folder:
resource "helm_release" "datadog" {
  name             = "datadog"
  namespace        = local.k8s_datadog_namespace
  create_namespace = true
  repository       = "https://helm.datadoghq.com"
  version          = var.datadog_version
  chart            = "datadog"
  values           = [file("datadog-chart-values.yaml")]
}
Alternatively, you could use templatefile to pass in values as well.
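A minimal sketch of that templatefile approach (the template path and variable name are hypothetical, not from this thread):

```hcl
# Sketch: render chart values from a template file, substituting
# Terraform variables into the YAML before Helm sees it.
resource "helm_release" "datadog" {
  name       = "datadog"
  chart      = "datadog"
  repository = "https://helm.datadoghq.com"

  # templates/values.yaml.tpl (hypothetical) might contain:
  #   datadog:
  #     env:
  #       - name: DD_LOGS_CONFIG_USE_HTTP
  #         value: "${use_http}"
  values = [templatefile("${path.module}/templates/values.yaml.tpl", {
    use_http = "true"
  })]
}
```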
I got this working using jsonencode and values.
values = [jsonencode(
  {
    "datadog" = {
      "env" = [
        {
          "name"  = "DD_LOGS_CONFIG_USE_HTTP"
          "value" = "true"
        }
      ]
    }
  }
)]
The tricky part was figuring out that a `.` used as a separator gets misinterpreted by Terraform, and that I actually had to construct a proper nested map instead.
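To illustrate the pitfall (a sketch, reusing the key names from the snippet above): inside jsonencode a dotted string is one literal key; it is never split into nesting the way a set block's name is.

```hcl
# WRONG: Helm receives a single literal top-level key named "datadog.env";
# jsonencode does not expand dotted keys the way `set { name = "..." }` does.
values = [jsonencode({
  "datadog.env" = [{ name = "DD_LOGS_CONFIG_USE_HTTP", value = "true" }]
})]

# RIGHT: build the nesting explicitly.
values = [jsonencode({
  datadog = {
    env = [{ name = "DD_LOGS_CONFIG_USE_HTTP", value = "true" }]
  }
})]
```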
This works for me:
values = [<<YAML
datadog:
  env:
    - name: DD_LOGS_CONFIG_USE_HTTP
      value: "true"
YAML
]
This has been changed to {{- include "additional-env-entries" .Values.clusterAgent.env | indent 10 }}
As mentioned by @mohsiur, using templatefile:
resource "helm_release" "datadog_agent" {
  name       = "datadog-agent"
  chart      = "datadog"
  repository = "https://helm.datadoghq.com"
  version    = "3.7.3"
  namespace  = "default"

  set_sensitive {
    name  = "datadog.apiKey"
    value = var.datadog_api_key
  }

  # The full list of configuration options for the datadog values.yaml file
  # is in the DataDog/helm-charts GitHub repository:
  # https://github.com/DataDog/helm-charts/tree/main/charts/datadog#all-configuration-options
  set {
    name  = "datadog.site"
    value = "datadoghq.com"
  }

  values = [templatefile("${path.module}/templates/datadog_env_vars.tpl", {
    datadog_log_collection_only = var.datadog_log_collection_only ? "false" : "true"
  })]
}
The template:
datadog:
  env:
    - name: DD_CLOUD_PROVIDER_METADATA
      value: "aws"
    - name: DD_ENABLE_PAYLOADS_EVENTS
      value: "${datadog_log_collection_only}"
Describe what happened: Errored when adding an environment variable to the helm deployment via Terraform.
Describe what you expected: I want the environment variable DD_LOGS_CONFIG_USE_HTTP set to true.
Steps to reproduce the issue:
Additional environment details (Operating System, Cloud provider, etc): This is the terraform code:
This is the error:
The documentation is not very clear on this.