askoriy closed this issue 3 years ago
You can set sensitive_environment on the provider for this very reason. Also, if you have an update command set in lifecycle_commands, then environment variable changes wouldn't force a new resource.
That part of the resource configuration must be sensitive, so I used sensitive_environment to pass values to the script while hiding them in the terraform plan output and pipeline logs. It's resource-specific data, not global, so the provider environment can't be used for it.
Here is my use case: I use the shell provider with different key-value input for different resources, and some of the input values must be sensitive. https://github.com/extenda/tf-module-kafka/tree/master/kafka-connect-ccloud
I'm just not sure why you can't declare sensitive variables in the provider block. It's there for exactly situations like this. You can have multiple shell provider declarations if you don't want to share them globally across all shell resources. Also, you can still use the update method to ignore changes if the username and password change.
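For illustration, a sketch of what that could look like (the alias and variable names here are hypothetical, not from the linked module):

```hcl
# Sketch only: provider-scoped secrets shared by every shell_script
# resource that selects this provider alias.
provider "shell" {
  alias = "deploy"
  sensitive_environment = {
    DEPLOY_USER     = var.deploy_user     # illustrative variable names
    DEPLOY_PASSWORD = var.deploy_password
  }
}

resource "shell_script" "example" {
  provider = shell.deploy
  # lifecycle_commands, environment, etc.
}
```

A second aliased provider block with different credentials can then be attached to other shell_script resources.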
I'd gladly move the username and password parameters to the provider block, as they are used only during deployment. But other sensitive parameters (login/password, token, service account, and many others, depending on the connection type) are part of the final resource configuration; they must be tracked for changes and applied by running the update script.
If it's an input parameter that must be tracked, then you are right: using a sensitive_environment on the resource is appropriate. If one of these inputs changes, it will trigger an update on the resource, as long as an update script is supplied. You can compare the previous state of the resource, which is passed on stdin, with the current configuration input and decide what you want your update script to do. It doesn't have to destroy the resource and recreate it. Does this not solve your problem? Sorry if I am not understanding exactly.
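As a minimal sketch of that comparison (hypothetical script, not from the provider's docs; it assumes the previous state is a flat JSON object with a "secret" key and that the current value arrives in the secret environment variable):

```shell
# update_state: sketch of a lifecycle_commands.update script body.
# The shell provider pipes the resource's previous state JSON on stdin;
# current inputs arrive as environment variables.
update_state() {
    previous=$(cat)   # previous state JSON from stdin
    # crude extraction of the old value (assumes a flat JSON shape)
    old_secret=$(printf '%s' "$previous" | sed -n 's/.*"secret" *: *"\([^"]*\)".*/\1/p')
    if [ "$old_secret" != "$secret" ]; then
        echo "secret changed; rotating in place instead of recreating" >&2
    fi
    # emit the new state for the provider to store
    printf '{"secret": "%s"}\n' "$secret"
}
```

The diagnostic goes to stderr so that stdout stays valid JSON for the provider to read back as state.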
I'm sorry, I should have shown an example before discussing further:
terraform {
  required_providers {
    shell = {
      source  = "scottwinkler/shell"
      version = "1.7.3"
    }
  }
}

resource "shell_script" "weather" {
  lifecycle_commands {
    create = <<-EOF
      echo "{\"secret\": \"$secret\", \"London\": \"$(curl wttr.in/London?format="%l:+%c+%t")\"}" > state.json
      echo 'resources is created' >> log.txt
      cat state.json
    EOF

    delete = <<-EOF
      rm state.json
      echo 'resource is destroyed' >> log.txt
    EOF

    read = "cat state.json"

    update = <<-EOF
      echo "{\"secret\": \"$secret\", \"London\": \"$(curl wttr.in/London?format="%l:+%c+%t")\"}" > state.json
      echo 'resources is gracefully updated' >> log.txt
      cat state.json
    EOF
  }

  sensitive_environment = {
    "secret" = "value1"
  }
}
If I run terraform apply, then change the secret value to value2 and run terraform apply again, I expect log.txt to contain:
resources is created
resources is gracefully updated
But instead I receive:
resources is created
resource is destroyed
resources is created
If it's an input parameter that must be tracked, then you are right: using a sensitive_environment on the resource is appropriate. If one of these inputs changes, it will trigger an update on the resource, as long as an update script is supplied.
That is what I expect too, but in the example above it triggers destroy and create when the secret value changes.
Oh, my bad, I was using an old version. In the recent version it works as expected.
The shell provider updates the object if the lifecycle_commands section is changed, but forcibly replaces the object if the environment or sensitive_environment section is changed. Thus the target object is destroyed and recreated if, for example, credentials passed via sensitive_environment are changed. That is unacceptable in most cases. Could you fix it, or at least add an option to the provider configuration?