JESCHO99 closed this issue 1 month ago
Hey @JESCHO99. Thanks for reaching out to us.
The view resource was recently reworked, so we will analyze this with high priority. cc: @sfc-gh-jmichalak
Hi @JESCHO99 👋, We're missing a diff suppression for columns, so when the columns are determined by the referenced table, they are available in DESC and stored in the state. This will be fixed in the next release.
As a workaround, please specify columns in the config, like:
column {
  column_name = "ID"
}
column {
  column_name = "NAME"
}
Hey @sfc-gh-jmichalak, thanks for looking at this issue so fast. We have over 1000 views in our code base, and putting all those columns into the config (the code could surely be generated) means work for our team that we want/need to roll back once the problem is solved. Do you have an idea when the next release will be out, so we can decide whether we can wait, or whether we need to implement the hotfix and then roll back the code changes?
Best regards, Jens
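Since the reporter notes the workaround config could be generated, here is a minimal Python sketch of such a generator (a hypothetical helper, not part of the provider or this module) that emits `column` blocks for a known list of column names:

```python
def render_column_blocks(column_names):
    # Emit one HCL `column` block per name, matching the workaround
    # config suggested above. Quoting/case handling is left to the caller.
    blocks = []
    for name in column_names:
        blocks.append('column {\n  column_name = "%s"\n}' % name)
    return "\n".join(blocks)

print(render_column_blocks(["ID", "NAME"]))
```

The output could then be pasted (or templated) into each of the affected view resources until the fix lands.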
I think we can release the fix ~after Wednesday next week.
Hitting the same issue, also appreciate the fast fix. Trying to stay current as the provider nears 1.0 🤞 Thanks!
Also encountering this issue - any update on the release date for the fix? @sfc-gh-jmichalak
This will be released today or tomorrow.
Hi all, we've released v0.96.0, which includes the fix and acceptance test for the column list in views. Please upgrade with the migration guide.
Thanks for all your work on this, much appreciated.
v0.96.0 fixes the issue for me when no column blocks are specified, but I'm still hitting a plan loop on v0.96.0 when specifying columns along with comments. For example, the following resource creates the correct view in Snowflake:
resource "snowflake_view" "example" {
  name     = "EXAMPLE"
  database = snowflake_schema.this.database
  schema   = snowflake_schema.this.name
  comment  = snowflake_table.event_v0.comment

  ## The column identifiers in the SELECT list are unquoted
  statement = <<-SQL
    SELECT ${join(", ", [for c in local.view_columns_v0 : c.name])}
    FROM ${snowflake_table.source.name}
  SQL

  dynamic "column" {
    for_each = local.view_columns_v0
    content {
      column_name = column.value.name
      comment     = column.value.comment
    }
  }
}
... but on subsequent runs, I'm seeing the same plan loop as before, with the perma-diff being on the show_output attribute of the view in the plan.
This seems to only be an issue when explicitly specifying the columns in the SELECT of statement, and also specifying a comment along with a column_name in the column blocks. For instance, a view using SELECT * appears to be unaffected.
This feels like the same issue, with the v0.96.0 release containing a partial fix for the plan loop under some circumstances, but I can file a new issue if not (or maybe I'm missing something obvious?)
Hi @gws 👋
I tried to reproduce this, but I have no permadiff. Could you also provide the referenced table definition, the output of terraform plan, and logs with TF_LOG=DEBUG?
@sfc-gh-jmichalak Thanks, this is a weirder one than I initially thought.
I think I can do you one better and give you something that reproduces the issue consistently for me on v0.96.0. I think the issue is related to () (with anything or nothing in between) appearing in the text of the comment on the view.
Not as minimal as it could be, but it should be self-contained enough to repro:
locals {
  db_name     = "TEST"
  schema_name = "GH3073"
  columns = [
    {
      comment  = "COL1 comment"
      name     = "COL1"
      nullable = false
      type     = "VARCHAR"
    },
    {
      ## This appears to be the source of the issue
      comment  = "COL2 comment has issues when the following (parentheses) appear."
      name     = "COL2"
      nullable = false
      type     = "VARCHAR"
    },
  ]
  view_columns = [for c in local.columns : c if c.name != "COL1"]
}

resource "snowflake_schema" "this" {
  database = local.db_name
  name     = local.schema_name
}

resource "snowflake_table" "test" {
  name     = "T_TEST"
  database = snowflake_schema.this.database
  schema   = snowflake_schema.this.name

  dynamic "column" {
    for_each = local.columns
    content {
      comment  = column.value.comment
      name     = column.value.name
      nullable = column.value.nullable
      type     = column.value.type
    }
  }
}

resource "snowflake_view" "test" {
  name     = "V_TEST"
  database = snowflake_schema.this.database
  schema   = snowflake_schema.this.name

  statement = <<-SQL
    SELECT ${join(", ", [for c in local.view_columns : c.name])}
    FROM ${snowflake_table.test.name}
  SQL

  dynamic "column" {
    for_each = local.view_columns
    content {
      column_name = column.value.name
      comment     = column.value.comment
    }
  }
}
Terraform plan output:
Terraform will perform the following actions:
# module.gh3073.snowflake_view.test will be updated in-place
~ resource "snowflake_view" "test" {
id = "\"TEST\".\"GH3073\".\"V_TEST\""
name = "V_TEST"
~ show_output = [
- {
- change_tracking = "OFF"
- comment = ""
- created_on = "2024-09-20T22:32:09.199-07:00"
- database_name = "TEST"
- is_materialized = false
- is_secure = false
- kind = ""
- name = "V_TEST"
- owner = "SYSADMIN"
- owner_role_type = "ROLE"
- reserved = ""
- schema_name = "GH3073"
- text = <<-EOT
CREATE OR REPLACE VIEW "TEST"."GH3073"."V_TEST" ("COL2" COMMENT 'COL2 comment has issues when the following (parentheses) appear.') AS SELECT COL2
FROM T_TEST
EOT
},
] -> (known after apply)
~ statement = <<-EOT
- appear.') AS SELECT COL2
+ SELECT COL2
FROM T_TEST
EOT
# (9 unchanged attributes hidden)
# (1 unchanged block hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
If you still can't reproduce with the above, I'll get the logs over to you as well, but I suspect we won't need them and they're pretty verbose.
Thanks! We need to manually parse the text field to get the view statement. This seems to be an issue with the parser. As a workaround, please don't use () in the comment until this is fixed.
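To illustrate the failure mode (this is a sketch of the general pitfall, not the provider's actual parser): if the parser assumes the column definition list in the SHOW `text` ends at the first `)`, a `)` inside a column comment truncates the list early and leaves the comment's tail glued onto the extracted statement, exactly like the `appear.') AS SELECT COL2` diff above. A depth counter that also tracks single-quoted strings avoids this (ignoring escaped `''` quotes for brevity):

```python
def naive_statement(text):
    # Buggy sketch: treat the first ')' as the end of the column
    # definition list and everything after it as the view statement.
    return text.split(")", 1)[1].strip()

def balanced_statement(text):
    # Safer sketch: walk the column list with a paren depth counter,
    # skipping parens that appear inside single-quoted comment strings.
    depth, in_str = 0, False
    start = text.index("(")
    end = start
    for i in range(start, len(text)):
        c = text[i]
        if in_str:
            if c == "'":
                in_str = False
        elif c == "'":
            in_str = True
        elif c == "(":
            depth += 1
        elif c == ")":
            depth -= 1
            if depth == 0:
                end = i
                break
    # Take what follows the balanced column list, after the AS keyword.
    return text[end + 1:].split("AS", 1)[1].strip()

text = ("CREATE OR REPLACE VIEW V (\"COL2\" COMMENT "
        "'has (parens) inside.') AS SELECT COL2 FROM T")
print(naive_statement(text))     # truncated wrongly inside the comment
print(balanced_statement(text))  # SELECT COL2 FROM T
```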
We're seeing lots of SQL compilation error: Invalid column definition list errors now on 0.96, because the provider doesn't seem to be updating the column definition list correctly. Is that related to this issue, or separate?
For example, the PR screenshotted below added a column called "SNPS_GROUP_NUM" to a view, but it doesn't appear in the definition list, so the query fails.
Please make sure your column list matches the columns returned by the statement. I encountered many errors like this during testing, and usually the problem was a bad configuration. If it doesn't work, please provide the whole resource configuration.
Hi @sfc-gh-jmichalak, we aren't setting the column block list at all.
We have hundreds of views that use a module that lets people with little to no Terraform experience manage views by maintaining a single .sql file.
The module takes SQL files in a directory like database_name/schema_name/view_name.sql, derives the database name, schema name, and view name from the file path, uses the file contents as the definition, and parses comments out of the top of the file:
resource "snowflake_view" "views" {
  for_each = {
    for path in fileset(var.base_dir, "**/*.sql") : path => {
      database = upper(split("/", path)[0])
      schema   = upper(split("/", path)[1])
      name     = upper(trimsuffix(split("/", path)[2], ".sql"))
      comment  = format("Managed by Terraform: %s", try(trimspace(regex("//(.*)", file("${var.base_dir}/${path}"))[0]), ""))
    }
  }

  database  = each.value.database
  schema    = each.value.schema
  name      = each.value.name
  comment   = each.value.comment
  statement = file("${var.base_dir}/${each.key}")

  # or_replace = true
  is_secure       = "false"
  is_recursive    = "false"
  is_temporary    = "false"
  change_tracking = "false"
}

variable "base_dir" {
  type = string
}
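The path-to-identifier convention described above can be sketched in Python (a hypothetical mirror of the module's upper/split/trimsuffix calls, for readers less familiar with HCL; assumes the db/schema/view.sql layout):

```python
def view_address(path):
    # Mirror of the HCL: upper(split("/", path)[0..2]) with the
    # ".sql" suffix trimmed from the file name.
    database, schema, filename = path.split("/")
    return {
        "database": database.upper(),
        "schema": schema.upper(),
        "name": filename.removesuffix(".sql").upper(),
    }

print(view_address("analytics/omx/member_status_history_view.sql"))
```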
Hi, I just tested your configuration with a simple SELECT id FROM "DB"."SCHEMA"."TEST_2";, and it works on my side. My suggestion is to run Terraform with TF_LOG=DEBUG and verify that running the statement (SELECT ...) in Snowsight returns no errors. When the column definition is empty in the config (as in your case), the column list in CREATE should be omitted and inferred on the Snowflake side from the statement.
@sfc-gh-jmichalak Thanks, will do if it comes up again!
Hey @sfc-gh-jmichalak this just happened again.
Terraform changed the actual definition but not the column_list, which is a very confusing experience for the end users.
Here's the change:
Here's the TF Plan:
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# module.views.snowflake_view.views["analytics/omx/member_status_history_view.sql"] will be updated in-place
~ resource "snowflake_view" "views" {
id = "\"ANALYTICS\".\"OMX\".\"MEMBER_STATUS_HISTORY_VIEW\""
name = "MEMBER_STATUS_HISTORY_VIEW"
~ show_output = [
- {
- change_tracking = "OFF"
- comment = "Managed by Terraform: View for Materializing member_status_history"
- created_on = "2024-09-19T17:19:03.057Z"
- database_name = "ANALYTICS"
- is_materialized = false
- is_secure = false
- kind = ""
- name = "MEMBER_STATUS_HISTORY_VIEW"
- owner = "ANALYTICS__OMX__WRITER"
- owner_role_type = "ROLE"
- reserved = ""
- schema_name = "OMX"
- text = <<-EOT
CREATE OR REPLACE VIEW "ANALYTICS"."OMX"."MEMBER_STATUS_HISTORY_VIEW" ("ticket_id", "member_id", "CREATED_AT", "DATE_KEY", "LAST_DD_DT", "LAST_DD_SOURCE", "Member Status at interaction", "Member cancellation reason at interaction", "Member suspension reason at interaction", "Limited access tier", "Step up auth") COPY GRANTS comment = 'Managed by Terraform: View for Materializing member_status_history' AS //View for Materializing member_status_history
SELECT distinct
zendesk_ticket."ID"::string AS "ticket_id",
zendesk_ticket.member_id AS "member_id",
zendesk_ticket."CREATED_AT" ,
member.DATE_KEY,
member.LAST_DD_DT,
member.LAST_DD_SOURCE,
coalesce(member.user_status,member2.status) AS "Member Status at interaction",
coalesce(member.cancellation_reason, member2.cancellation_reason) as "Member cancellation reason at interaction",
coalesce(member.suspension_reason, member2.suspension_reason) AS "Member suspension reason at interaction",
iff(lat.user_id is not null , TRUE, false) as "Limited access tier",
iff(SUA.user_id is not null , TRUE, false) as "Step up auth"
FROM analytics.looker.zendesk_tickets_base AS zendesk_ticket
LEFT JOIN edw_db.core.member_details_history AS member ON zendesk_ticket.member_id= member.user_id AND replace(TO_CHAR(TO_DATE(zendesk_ticket."CREATED_AT" ), 'YYYY-MM-DD'),'-')=date_key
LEFT JOIN chime.finance.members AS member2 ON zendesk_ticket.member_id=member2.id AND zendesk_ticket.created_at>=member2.updated_at
left join analytics.omx.limited_access_tier as LAT on zendesk_ticket.member_id = lat.user_id and zendesk_ticket.created_at between LAT.lat_added_timestamp and LAT.lat_removed_timestamp
left join analytics.omx.step_up_auth as SUA on zendesk_ticket.member_id = SUA.user_id and zendesk_ticket.created_at between SUA.sua_status_created_at and coalesce(sua.login_success_timestamp,sua.canceled_by_agent_timestamp)
;
EOT
},
] -> (known after apply)
~ statement = <<-EOT
//View for Materializing member_status_history
SELECT distinct
- zendesk_ticket."ID"::string AS "ticket_id",
- zendesk_ticket.member_id AS "member_id",
- zendesk_ticket."CREATED_AT" ,
+ zendesk_ticket."ID"::string AS ticket_id,
+ zendesk_ticket.member_id AS member_id,
+ zendesk_ticket.CREATED_AT ,
member.DATE_KEY,
member.LAST_DD_DT,
member.LAST_DD_SOURCE,
- coalesce(member.user_status,member2.status) AS "Member Status at interaction",
- coalesce(member.cancellation_reason, member2.cancellation_reason) as "Member cancellation reason at interaction",
- coalesce(member.suspension_reason, member2.suspension_reason) AS "Member suspension reason at interaction",
- iff(lat.user_id is not null , TRUE, false) as "Limited access tier",
- iff(SUA.user_id is not null , TRUE, false) as "Step up auth"
-
+ coalesce(member.user_status,member2.status) AS member_status_at_interaction,
+ coalesce(member.cancellation_reason, member2.cancellation_reason) as member_cancellation_reason_at_interaction,
+ coalesce(member.suspension_reason, member2.suspension_reason) AS member_suspension_reason_at_interaction,
+ iff(lat.user_id is not null , TRUE, false) as limited_access_tier,
+ iff(SUA.user_id is not null , TRUE, false) as step_up_auth
FROM analytics.looker.zendesk_tickets_base AS zendesk_ticket
LEFT JOIN edw_db.core.member_details_history AS member ON zendesk_ticket.member_id= member.user_id AND replace(TO_CHAR(TO_DATE(zendesk_ticket."CREATED_AT" ), 'YYYY-MM-DD'),'-')=date_key
LEFT JOIN chime.finance.members AS member2 ON zendesk_ticket.member_id=member2.id AND zendesk_ticket.created_at>=member2.updated_at
left join analytics.omx.limited_access_tier as LAT on zendesk_ticket.member_id = lat.user_id and zendesk_ticket.created_at between LAT.lat_added_timestamp and LAT.lat_removed_timestamp
left join analytics.omx.step_up_auth as SUA on zendesk_ticket.member_id = SUA.user_id and zendesk_ticket.created_at between SUA.sua_status_created_at and coalesce(sua.login_success_timestamp,sua.canceled_by_agent_timestamp)
- ;
EOT
# (10 unchanged attributes hidden)
# (11 unchanged blocks hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
Here's the Query generated by Terraform:
CREATE OR REPLACE VIEW "ANALYTICS"."OMX"."MEMBER_STATUS_HISTORY_VIEW" ("ticket_id", "member_id", "CREATED_AT", "DATE_KEY", "LAST_DD_DT", "LAST_DD_SOURCE", "Member Status at interaction", "Member cancellation reason at interaction", "Member suspension reason at interaction", "Limited access tier", "Step up auth") COPY GRANTS COMMENT = 'Managed by Terraform: View for Materializing member_status_history' AS //View for Materializing member_status_history
SELECT distinct
zendesk_ticket."ID"::string AS ticket_id,
zendesk_ticket.member_id AS member_id,
zendesk_ticket.CREATED_AT ,
member.DATE_KEY,
member.LAST_DD_DT,
member.LAST_DD_SOURCE,
coalesce(member.user_status,member2.status) AS member_status_at_interaction,
coalesce(member.cancellation_reason, member2.cancellation_reason) as member_cancellation_reason_at_interaction,
coalesce(member.suspension_reason, member2.suspension_reason) AS member_suspension_reason_at_interaction,
iff(lat.user_id is not null , TRUE, false) as limited_access_tier,
iff(SUA.user_id is not null , TRUE, false) as step_up_auth
FROM analytics.looker.zendesk_tickets_base AS zendesk_ticket
LEFT JOIN edw_db.core.member_details_history AS member ON zendesk_ticket.member_id= member.user_id AND replace(TO_CHAR(TO_DATE(zendesk_ticket."CREATED_AT" ), 'YYYY-MM-DD'),'-')=date_key
LEFT JOIN chime.finance.members AS member2 ON zendesk_ticket.member_id=member2.id AND zendesk_ticket.created_at>=member2.updated_at
left join analytics.omx.limited_access_tier as LAT on zendesk_ticket.member_id = lat.user_id and zendesk_ticket.created_at between LAT.lat_added_timestamp and LAT.lat_removed_timestamp
left join analytics.omx.step_up_auth as SUA on zendesk_ticket.member_id = SUA.user_id and zendesk_ticket.created_at between SUA.sua_status_created_at and coalesce(sua.login_success_timestamp,sua.canceled_by_agent_timestamp)
Here's the current definition after the TF apply:
CREATE OR REPLACE VIEW "ANALYTICS"."OMX"."MEMBER_STATUS_HISTORY_VIEW" ("ticket_id", "member_id", "CREATED_AT", "DATE_KEY", "LAST_DD_DT", "LAST_DD_SOURCE", "Member Status at interaction", "Member cancellation reason at interaction", "Member suspension reason at interaction", "Limited access tier", "Step up auth") COPY GRANTS comment = 'Managed by Terraform: View for Materializing member_status_history' AS //View for Materializing member_status_history
SELECT distinct
zendesk_ticket."ID"::string AS ticket_id,
zendesk_ticket.member_id AS member_id,
zendesk_ticket.CREATED_AT ,
member.DATE_KEY,
member.LAST_DD_DT,
member.LAST_DD_SOURCE,
coalesce(member.user_status,member2.status) AS member_status_at_interaction,
coalesce(member.cancellation_reason, member2.cancellation_reason) as member_cancellation_reason_at_interaction,
coalesce(member.suspension_reason, member2.suspension_reason) AS member_suspension_reason_at_interaction,
iff(lat.user_id is not null , TRUE, false) as limited_access_tier,
iff(SUA.user_id is not null , TRUE, false) as step_up_auth
FROM analytics.looker.zendesk_tickets_base AS zendesk_ticket
LEFT JOIN edw_db.core.member_details_history AS member ON zendesk_ticket.member_id= member.user_id AND replace(TO_CHAR(TO_DATE(zendesk_ticket."CREATED_AT" ), 'YYYY-MM-DD'),'-')=date_key
LEFT JOIN chime.finance.members AS member2 ON zendesk_ticket.member_id=member2.id AND zendesk_ticket.created_at>=member2.updated_at
left join analytics.omx.limited_access_tier as LAT on zendesk_ticket.member_id = lat.user_id and zendesk_ticket.created_at between LAT.lat_added_timestamp and LAT.lat_removed_timestamp
left join analytics.omx.step_up_auth as SUA on zendesk_ticket.member_id = SUA.user_id and zendesk_ticket.created_at between SUA.sua_status_created_at and coalesce(sua.login_success_timestamp,sua.canceled_by_agent_timestamp)
Here's the DESCRIBE output after the terraform apply:
Here's another issue, but this time the apply fails.
Change:
Plan:
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# module.views.snowflake_view.views["hightouch_db/chime_data/conversion_signals_v2_view.sql"] will be updated in-place
~ resource "snowflake_view" "views" {
id = "\"HIGHTOUCH_DB\".\"CHIME_DATA\".\"CONVERSION_SIGNALS_V2_VIEW\""
name = "CONVERSION_SIGNALS_V2_VIEW"
~ show_output = [
- {
- change_tracking = "OFF"
- comment = "Managed by Terraform: "
- created_on = "2024-09-23T20:42:22.804Z"
- database_name = "HIGHTOUCH_DB"
- is_materialized = false
- is_secure = false
- kind = ""
- name = "CONVERSION_SIGNALS_V2_VIEW"
- owner = "HIGHTOUCH_DB__CHIME_DATA__WRITER"
- owner_role_type = "ROLE"
- reserved = ""
- schema_name = "CHIME_DATA"
- text = <<-EOT
CREATE OR REPLACE VIEW "HIGHTOUCH_DB"."CHIME_DATA"."CONVERSION_SIGNALS_V2_VIEW" ("COMP_KEY", "USER_ID", "DW_PROCESSING_TIMESTAMP", "GCLID", "WBRAID", "GBRAID", "EM", "PH", "FBC", "FBP", "CONTEXT_IP", "CONTEXT_USER_AGENT", "FN", "LN", "EXTERNAL_ID", "EVENT_ID", "CONVERSION_TIME", "CONVERSION_TYPE", "RAW_CONVERSION_VALUE", "CONVERSION_VALUE") comment = 'Managed by Terraform: ' AS select
comp_key,
user_id,
dw_processing_timestamp,
gclid,
wbraid,
gbraid,
em,
ph,
fbc,
fbp,
context_ip,
context_user_agent,
fn,
ln,
external_id,
event_id,
conversion_time,
conversion_type,
raw_conversion_value,
conversion_value
from edw_db.marketing.conversion_signals_v2
EOT
},
] -> (known after apply)
~ statement = <<-EOT
select
comp_key,
user_id,
dw_processing_timestamp,
gclid,
wbraid,
gbraid,
+ click_id,
em,
ph,
fbc,
fbp,
context_ip,
context_user_agent,
fn,
ln,
external_id,
event_id,
conversion_time,
conversion_type,
raw_conversion_value,
conversion_value
from edw_db.marketing.conversion_signals_v2
EOT
# (10 unchanged attributes hidden)
# (20 unchanged blocks hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
apply:
Terraform v1.7.1
on linux_amd64
module.views.snowflake_view.views["hightouch_db/chime_data/conversion_signals_v2_view.sql"]: Modifying... [id="HIGHTOUCH_DB"."CHIME_DATA"."CONVERSION_SIGNALS_V2_VIEW"]
╷
│ Error: error creating view CONVERSION_SIGNALS_V2_VIEW err = 002026 (42601): SQL compilation error:
│ Invalid column definition list
│
│ with module.views.snowflake_view.views["hightouch_db/chime_data/conversion_signals_v2_view.sql"],
│ on ../../stack/snowflake/views/main.tf line 1, in resource "snowflake_view" "views":
│ 1: resource "snowflake_view" "views" {
│
╵
Operation failed: failed running terraform apply (exit 1)
and the column in question, click_id, definitely exists in the source table.
Hi @liamjamesfoley, thanks for the detailed logs. This is indeed an error on our side (in both cases). It happens because the old column list is still populated during CREATE OR REPLACE, which gets called when the statement changes, resulting in an incorrect column definition list in the generated SQL. The column list should be empty here, because it's not specified in the config.
The workaround is to manually define the column list in the Terraform config so that the names in the config match the names in the statement (please take care of proper quoting: the statement uses unquoted columns, so I believe they should be upper case in the column config). The other option is to get rid of the old state: remove the resource from the state and import it back. This should ensure that the columns in the state are correct.
Currently, I'm working on a fix for this behavior.
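The quoting caveat above follows Snowflake's identifier resolution rules: unquoted identifiers resolve to upper case, while double-quoted identifiers keep their exact case. A small Python sketch (hypothetical helper, not provider code) of the mapping a column block name would need to match:

```python
def effective_name(identifier):
    # Snowflake resolves unquoted identifiers to upper case; quoted
    # identifiers keep their exact case. A `column` block should use
    # the resolved form so it matches what Snowflake reports back.
    if len(identifier) >= 2 and identifier[0] == '"' and identifier[-1] == '"':
        return identifier[1:-1]
    return identifier.upper()

print(effective_name("ticket_id"))    # TICKET_ID
print(effective_name('"ticket_id"'))  # ticket_id
```

So a statement aliasing `AS ticket_id` (unquoted) needs `column_name = "TICKET_ID"` in the config, while `AS "ticket_id"` needs the lower-case form.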
@sfc-gh-jmichalak Awesome, thanks for the confirmation! Do you think the fix will be part of 0.97?
Hey @liamjamesfoley, we have just released v0.97.0 of the provider (release notes, migration guide), containing the fix for this behavior.
Confirmed this fixed it for us - thank you!
Fixed for us as well - we appreciate the fast turn-around!
Thanks @sfc-gh-jmichalak !
Thanks for the quick feedback! As the fix was confirmed by a few users, I'm closing this issue. If this happens again, please create a new one.
Terraform CLI Version
1.9.1
Terraform Provider Version
v0.95.0
Terraform Configuration
Category
category:resource
Object type(s)
data_source:views
Expected Behavior
The following Terraform configuration should execute a terraform apply without any error, and after a successful apply there should not be any open changes when a new terraform plan is executed.
Actual Behavior
After a successful apply, every subsequent terraform plan shows a change to the view object's column list, even when the column list is not touched in the code. This behaviour leads to an update/replacement of all views in every terraform plan executed.
This is the change/update which shows up in the next terraform plan after every successful apply when using this provider version and view resources.
Steps to Reproduce
How much impact is this issue causing?
High
Logs
No response
Additional Information
We are not able to upgrade to version 0.95.0 because of this issue right now. Upgrading is important for us, because we need the hotfix for table data types that was implemented in the 0.95.0 release.
Would you like to implement a fix?