El-Carverino opened 2 years ago
This is still occurring with v0.40.0.
Still occurring in 2024.
Hey @aleenprd, we will soon rework the provider config as part of https://github.com/Snowflake-Labs/terraform-provider-snowflake/blob/main/ROADMAP.md#providers-configuration-rework. We will then address this issue.
@aleenprd are you using the newest version of the provider (v0.95.0)?
0.94.1 was stable for me but started randomly acting out yesterday. I run OpenTofu (version 1.8.1) with this provider locally, in a GitLab runner, and on Kubernetes, and the error appeared inconsistently across environments. I had to grant warehouse usage to USERADMIN and SECURITYADMIN, which is not actually needed; everything worked just fine without those grants before yesterday. Super weird. I also noticed that the setup eventually degrades if you keep `.terraform/` and `.terraform.lock.hcl` in the project, and that it is better to run `init` every time.
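For context, the workaround described above amounts to something like the following sketch, assuming the provider's `snowflake_grant_privileges_to_account_role` resource; the warehouse name is hypothetical:

```hcl
# Workaround sketch: grant warehouse USAGE to the admin roles,
# even though these grants should not be required.
resource "snowflake_grant_privileges_to_account_role" "useradmin_warehouse_usage" {
  account_role_name = "USERADMIN"
  privileges        = ["USAGE"]

  on_account_object {
    object_type = "WAREHOUSE"
    object_name = "COMPUTE_WH" # hypothetical warehouse name
  }
}

resource "snowflake_grant_privileges_to_account_role" "securityadmin_warehouse_usage" {
  account_role_name = "SECURITYADMIN"
  privileges        = ["USAGE"]

  on_account_object {
    object_type = "WAREHOUSE"
    object_name = "COMPUTE_WH" # hypothetical warehouse name
  }
}
```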
If it started acting out randomly, then it may be an issue with Snowflake rather than with the provider itself. Could you provide the config you are using and the logs (running with the `TF_LOG=DEBUG` environment variable)?
Also, please keep in mind that we do not currently support OpenTofu (it should work out of the box, but we do not test the provider against it).
I doubt it. I tried manually provisioning the same resources in Snowflake with those roles; they don't even need a warehouse in the first place. If it appears again, I may revert my fix and send you some logs. Thanks
Provider Version
Snowflake Provider v0.37.0 (and v0.37.1). Note: this was not an issue with v0.36.0 (or earlier).
Terraform Version
Terraform v1.0.9 (but also experienced with later versions).
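For anyone reproducing this, the affected version can be pinned with a standard `required_providers` block; the source address below is the one used on the Terraform Registry for this provider at the time:

```hcl
terraform {
  required_version = ">= 1.0.9"

  required_providers {
    snowflake = {
      source  = "Snowflake-Labs/snowflake"
      version = "= 0.37.0" # the release where the errors first appeared
    }
  }
}
```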
Describe the bug
After upgrading to v0.37.x, when running `terraform plan` (or `apply`) on an existing config with no changes, it successfully refreshes the state of all existing resources; however, upon inspecting the current Snowflake environment, it returns this same error (in some form) for every existing resource (and the command fails):

Expected behavior
When running `terraform plan` (or `apply`) on an existing config with no changes, after successfully refreshing the state of all existing resources, it should result in the following success message:

Code samples and commands
Additional context
The Snowflake provider in the Terraform config is configured only with a `role` attribute. All other connection properties come from ENV variables (which have been confirmed to be accurate):

SNOWFLAKE_ACCOUNT
SNOWFLAKE_REGION
SNOWFLAKE_USER
SNOWFLAKE_WAREHOUSE
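For illustration, the provider block described here would look roughly like this sketch (the role name is hypothetical; everything else is resolved from the ENV variables listed above):

```hcl
provider "snowflake" {
  # Only the role is set in config; account, region, user, and
  # warehouse come from the SNOWFLAKE_* environment variables.
  role = "SYSADMIN" # hypothetical role name
}
```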
Note: In the provider documentation (and code) in this release, it looks like there is new handling for a `warehouse` attribute and/or use of the `SNOWFLAKE_WAREHOUSE` ENV variable. However, I have been successfully using this ENV var for a long time, with the expected result, as I think the underlying Snowflake connector checks for it. (It's definitely utilized, because the intended user does not have a default warehouse defined.) Perhaps there is a conflict with the updated provider configuration/code?

I also attempted removing the `SNOWFLAKE_WAREHOUSE` ENV variable and using the new provider `warehouse` attribute instead, but that resulted in the same error(s).
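The variant attempted in the last paragraph, sketched with a hypothetical warehouse name and with `SNOWFLAKE_WAREHOUSE` unset in the environment:

```hcl
provider "snowflake" {
  role      = "SYSADMIN"   # hypothetical role name
  warehouse = "COMPUTE_WH" # hypothetical; replaces the SNOWFLAKE_WAREHOUSE ENV variable
}
```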