Closed — ulucinar closed this issue 2 years ago
Thanks for reporting @ulucinar!
Unfortunately, the case where an indexed field goes into status, and hence into the connection details secret, was not something I had tested end to end; indeed, there is no such resource in the current subset of AWS resources:
Sensitive field "customer_key" in resource "S3ObjectCopy"
Sensitive field "kms_encryption_context" in resource "S3ObjectCopy"
Sensitive field "kms_key_id" in resource "S3ObjectCopy"
Sensitive field "source_customer_key" in resource "S3ObjectCopy"
Sensitive field "default_action[*].authenticate_oidc[*].client_secret" in resource "LbListener"
Sensitive field "action[*].authenticate_oidc[*].client_secret" in resource "LbListenerRule"
Sensitive field "master_password" in resource "RdsCluster"
Sensitive field "secret" in resource "IamAccessKey"
Sensitive field "ses_smtp_password_v4" in resource "IamAccessKey"
Sensitive field "private_key" in resource "IamServerCertificate"
I think we need some other unique representation that would still allow us to get back to the original path when restoring the state. The first solution that comes to mind is to convert kube_config[0].password to kube_config.0.password in the connection details secret. This would work unless there are keys with dots in them (I'm not sure whether such a case exists).
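A minimal sketch of the conversion described above, assuming Terraform attribute names are snake_case identifiers that never start with a digit (so a purely numeric path segment can only be a list index). The function names toSecretKey and toFieldPath are hypothetical, not the provider's actual API:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// indexRe matches a bracketed list index such as "[0]".
var indexRe = regexp.MustCompile(`\[(\d+)\]`)

// toSecretKey converts a fieldpath-style sensitive attribute path such as
// "kube_config[0].password" into a flat, secret-key-friendly form,
// e.g. "kube_config.0.password".
func toSecretKey(fieldPath string) string {
	return indexRe.ReplaceAllString(fieldPath, ".$1")
}

// isDigits reports whether s consists solely of ASCII digits.
func isDigits(s string) bool {
	if s == "" {
		return false
	}
	for _, r := range s {
		if r < '0' || r > '9' {
			return false
		}
	}
	return true
}

// toFieldPath restores the original indexed path from a secret key. Any
// purely numeric segment is re-attached to the preceding segment as an
// index, which relies on the assumption stated above.
func toFieldPath(secretKey string) string {
	parts := strings.Split(secretKey, ".")
	var out []string
	for _, p := range parts {
		if isDigits(p) && len(out) > 0 {
			out[len(out)-1] += "[" + p + "]"
			continue
		}
		out = append(out, p)
	}
	return strings.Join(out, ".")
}

func main() {
	fp := "default_action[0].authenticate_oidc[0].client_secret"
	key := toSecretKey(fp)
	fmt.Println(key)              // default_action.0.authenticate_oidc.0.client_secret
	fmt.Println(toFieldPath(key)) // round-trips back to the original path
}
```

Note that this round-trip breaks exactly in the case flagged above: a map key that is itself numeric, or that contains a dot, would be misinterpreted on the way back.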
I wouldn't expect too much trouble from Terraform attribute names; they are mostly alphanumeric snake_case keys. It seems we need to handle the Paved fieldpath syntax. We can also scan the remaining sensitive fields of the Azure resources in the planned release.
@ulucinar it would be great if you could verify the fix on your side 🙏
What happened?
While trying to store sensitive data from a
v1alpha1.KubernetesCluster
in provider-tf-azure, after setting the writeConnectionSecretToRef field, I observed the following logs:
How can we reproduce it?
Using provider-tf-azure (from a local build):