mpcarter opened 6 hours ago
I believe I found a fix. In the star macro, the `table_identifier` needs to be normalized, the same way the excluded column names already are:
```python
excluded_names = {
    normalize_identifiers(excluded, dialect=evaluator.dialect).name
    for excluded in exclude.expressions
}
quoted = quote_identifiers.this
table_identifier = normalize_identifiers(alias or relation, dialect=evaluator.dialect).name  # use this instead
# table_identifier = alias.name or relation.name
columns_to_types = {
    k: v for k, v in evaluator.columns_to_types(relation).items() if k not in excluded_names
}
```
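For context (this is my reading of sqlglot's behavior, not something specific to SQLMesh): `normalize_identifiers` uppercases unquoted identifiers for the Snowflake dialect, which is what brings the table qualifier's case in line with the already-normalized column names. A quick check:

```python
from sqlglot import exp
from sqlglot.optimizer.normalize_identifiers import normalize_identifiers

# Snowflake's default case for unquoted identifiers is uppercase.
print(normalize_identifiers(exp.to_identifier("incremental_model"), dialect="snowflake").name)
# -> INCREMENTAL_MODEL
```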
I copied the code for `@STAR` into a user-defined macro to test it, and it rendered my example like this for the Snowflake dialect:
```sql
SELECT
  CAST("INCREMENTAL_MODEL"."ID" AS INT) AS "ID",
  CAST("INCREMENTAL_MODEL"."ITEM_ID" AS INT) AS "ITEM_ID",
  CAST("INCREMENTAL_MODEL"."EVENT_DATE" AS DATE) AS "EVENT_DATE"
FROM "SQLMESH"."SQLMESH_EXAMPLE"."INCREMENTAL_MODEL" AS "INCREMENTAL_MODEL"
```
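For reference, a self-contained version of such a user-defined macro might look roughly like the sketch below. The name `star_normalized`, the imports, and the trimmed-down signature (no `prefix`/`suffix` handling) are my own choices for illustration; the only substantive difference from the built-in `@STAR` code quoted above is the normalized `table_identifier`, plus a small guard so the relation is still used when no alias is passed.

```python
# macros/star_normalized.py -- illustrative sketch, not the exact macro I ran.
import typing as t

from sqlglot import exp
from sqlglot.optimizer.normalize_identifiers import normalize_identifiers

from sqlmesh import macro
from sqlmesh.core.macros import MacroEvaluator


@macro()
def star_normalized(
    evaluator: MacroEvaluator,
    relation: exp.Table,
    alias: exp.Column = exp.column(""),
    exclude: exp.Tuple = exp.Tuple(expressions=[]),
    quote_identifiers: exp.Boolean = exp.true(),
) -> t.List[exp.Alias]:
    excluded_names = {
        normalize_identifiers(excluded, dialect=evaluator.dialect).name
        for excluded in exclude.expressions
    }
    quoted = quote_identifiers.this

    # The fix: normalize the table identifier for the target dialect so its case
    # matches the already-normalized column names (uppercase on Snowflake).
    table_identifier = normalize_identifiers(
        alias if alias.name else relation, dialect=evaluator.dialect
    ).name

    columns_to_types = {
        k: v
        for k, v in evaluator.columns_to_types(relation).items()
        if k not in excluded_names
    }
    return [
        exp.cast(exp.column(column, table=table_identifier, quoted=quoted), dtype).as_(
            column, quoted=quoted
        )
        for column, dtype in columns_to_types.items()
    ]
```

(The `alias if alias.name else relation` guard is only there because the default `alias` is a truthy empty column expression, so a plain `alias or relation` would never fall back to the relation name.)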
When using the `@STAR` macro with a Snowflake model, the case of the quoted alias used for the table (relation) does not match the case used in the expansion of the columns. Using the Snowflake dialect it renders as:

The table alias is `"INCREMENTAL_MODEL"`, but the table alias used with the columns is lowercase and quoted, `"incremental_model"`. So it fails to deploy. Using DuckDB it renders as: