Option X
Calling it for a known, explicit type would look like this:
{{ cast(expression, type_integer()) }}
Calling it for a variable data type would look like this:
{{ cast(expression, api.Column.translate_type(data_type)) }}
The default implementation might look like this:
-- core/dbt/include/global_project/macros/utils/cast.sql
{% macro cast(expression, data_type) %}
{{ adapter.dispatch('cast', 'dbt') (expression, data_type) }}
{% endmacro %}
{% macro default__cast(expression, data_type) -%}
cast({{ expression }} as {{ data_type }})
{%- endmacro %}
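For a database that prefers the :: shorthand (mentioned below in the notes on Column.literal()), the adapter could then override the dispatched macro. A hypothetical sketch, using an assumed postgres__cast name:

{% macro postgres__cast(expression, data_type) -%}
    {# assumed override: Postgres-style cast shorthand #}
    {{ expression }}::{{ data_type }}
{%- endmacro %}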
Option Y
Calling it for an explicit type would look like this:
{{ cast(expression, "integer") }}
Calling it for a variable data type would look like this:
{{ cast(expression, data_type) }}
The default implementation would look like:
-- core/dbt/include/global_project/macros/utils/cast.sql
{% macro cast(expression, data_type) %}
{{ adapter.dispatch('cast', 'dbt') (expression, data_type) }}
{% endmacro %}
{% macro default__cast(expression, data_type) -%}
cast({{ expression }} as {{ api.Column.translate_type(data_type) }})
{%- endmacro %}
@dbeatty10 Love the way you're thinking about this.
I like Option Y. It's elegant, and it should return the right result given all inputs. The only case where it will be incorrect is if the adapter's type translation code is wrong, i.e. it treats two types as aliases when they aren't actually equivalent.
The only downside I can think of is surprise for end users. Imagine the following scenario on BigQuery:
# user writes this code
{{ cast(some_column, "integer") }}
# expects to see
cast(some_column as integer)
# actually sees (in logs / compiled code)
cast(some_column as int64)
Which is ... actually correct! Just potentially surprising, in that there's an implicit type translation happening. (To be fair, BigQuery now supports integer as an alias for int64, so the original value would have worked too.)
I don't think we need a "back-out" to avoid that. It would be a bug in the adapter's Column implementation, and should be treated as such.
Documenting a little more research here.
dbt.safe_cast()
Column.literal()
dbt.safe_cast()
We already have a safe_cast macro:
https://github.com/dbt-labs/dbt-core/blob/a02db03f458aa19529fe3a28f62efa6f78ed3c87/core/dbt/include/global_project/macros/utils/safe_cast.sql#L5-L9
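Its default implementation is essentially a plain cast (paraphrasing the linked lines; adapters whose database offers a genuinely "safe" cast, such as dbt-bigquery, override it):

{% macro default__safe_cast(field, type) %}
    {# most databases don't have a "safe" cast, so fall back to a plain cast #}
    cast({{ field }} as {{ type }})
{% endmacro %}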
Column.literal()
Also, there is a literal() method on the Column class:
https://github.com/dbt-labs/dbt-core/blob/0ef9931d197c996ccd4e3caf641729176eceda99/core/dbt/adapters/base/column.py#L104-L105
At least a couple adapters override this method, namely dbt-bigquery and dbt-spark.
It seems like the safer default implementation would have been cast({} as {}). Then adapters like dbt-postgres, etc. could override with {}::{} if desired.
It appears to have supported dbt snapshots back when they were called "archives", by casting the string 'null' into a NULL value of the applicable timestamp data type:
{{ timestamp_column.literal('null') }} as tmp_valid_to
(NULL is used by dbt snapshots to indicate "not yet" / "the distant future" / "never" / "infinity".)
Based on its provenance, it's possible that Column.literal() is no longer in use and could be a candidate for deprecation.
See also:
@dbeatty10 I'm working on implementing this functionality in my dbt project right now, across Redshift and SQL Server, for a few very specific examples.
Could you please explain this a bit more: "Calling it for a variable data type would look like this:"? Why/when would I use a variable data type that isn't something like type_integer()? Why would I ever want to use an API call like api.Column.translate_type to get a type? Which also makes me wonder: what are the risks of that API being changed over time?
One thing I would highlight for you and @jtcohen6 is that strings and integers are easy, while anything involving timestamps and timezones is HARD, which makes them a better concept to test against for anything cross-SQL-dialect. I was in charge of scheduling software for two years of my life; please believe me when I say timezones may be the most bug-prone, infuriating area of programming. Knowing that, I did this analysis last year for Redshift, because each column below does something different. 😭 😭 😭
Ideally, we want it to align to the output of current_timestamp() in Redshift (which should be using UTC in each cluster). That would save us a lot of logic in diff functions between created_s3_at and created_redshift_at over time. That said, as long as we can identify what the difference is and it is consistent, then we can handle it. I evaluated the subtleties of time in Redshift the other day. For some insight into the considerations:
select
current_setting('timezone')
, current_date --current_date() with parentheses errors out
--No milliseconds and no timezone
, getdate() --bare getdate without parentheses does not exist
--milliseconds no timezone (because UTC is the currently set timezone)
, sysdate --timestamp format
, current_timestamp at time zone 'UTC' as ct_utc
, current_timestamp::timestamptz at time zone 'UTC' as ct_tz_utc
, sysdate::timestamptz at time zone 'UTC' as tz_utc
, sysdate::timestamptz at time zone 'utc' as tz_utc_lowercase
--With milliseconds and timezone
, current_timestamp at time zone 'UTC'::timestamptz as ct_utc_tz
, now()
, sysdate at time zone 'UTC' as utc
, current_timestamp as ct
, current_timestamp::timestamptz as ct_tz
, sysdate::timestamptz --timestamptz format
-- UTC to CDT
-- In CDT with timezone in the value
, sysdate::timestamptz at time zone 'CDT'::timestamptz as tz_cdt_tz
-- In CDT no timezone in the value
, sysdate::timestamptz at time zone 'cdt' as tz_cdt_lowercase
-- CDT to UTC
-- In CDT time(the next day) with timezone in the value
, sysdate at time zone 'cdt' as cdt
, sysdate at time zone 'CDT'::timestamptz as cdt_tz
From this evaluation, I picked the current_timestamp() generated by Redshift, with the default timezone set to UTC, as the thing to standardize on.
(Now, to throw a wrench in the whole thing, Redshift is deprecating current_timestamp() in favor of getdate(). Because of course when you want a timestamp you'd call a date function... :facepalm: )
For your two options above, my first impression was that Option X was better, because type_timestamp() and current_timestamp and convert_timezone and now (note this isn't the main branch but is the tsql-utils pinned version) already have overrides in other database adapters. But maybe Option Y is better, because putting "integer" in the model file is more declarative, more control, fewer keystrokes, and more accessible to dbt beginners. I think the answer to my question above would help me and further clarify the ticket.
My very specific cases at the moment are:
A) Cast a null value to a timestamp to make sure the table creates that field with the right data type. (I seem to remember BigQuery being the worst when having to deal with this.) On Redshift that's null::timestamp. However, SQL Server doesn't support ::, so it needs cast(null as timestamp), and even then its type is datetime, not timestamp.
B) Cast the current_timestamp (or a field with a value that's already a timestamp) to a timestamp. Think coalesce() and concat() cases, or in the future timestamp to timestamptz. On Redshift: current_timestamp::timestamp or cast(current_timestamp as timestamp) or cast(getdate() as timestamp); on SQL Server: cast(getdate() as datetime). (A sketch covering cases like A and B follows after this list.)
C) Cast a timestamp to a timestamp that either maintains or adds its timezone.
D) Cast a timestamp without timezone to a timestamp with a different timezone than the original undeclared one.
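A sketch of how cases A and B might look with the proposed cast() macro combined with dbt's existing type_timestamp() and current_timestamp() cross-database macros, assuming the adapter's type translation (or an override) maps the type to timestamp on Redshift and datetime on SQL Server:

-- hypothetical model snippet; cast() is the macro proposed in this issue
select
    {{ cast("null", dbt.type_timestamp()) }} as valid_to -- case A
    , {{ cast(dbt.current_timestamp(), dbt.type_timestamp()) }} as loaded_at -- case B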
Note that:
{{ dbt_date.date_part() }} failed as a cast hack for me, at least at one point, so I'm falling back to cast() in the model itself for now.
{{ dbt_date.now('UTC') }} doesn't work because it assumes current_timestamp and doesn't have a dispatch, hence no override. There's also a 'dict object' has no attribute 'now' error.
Is this your first time submitting a feature request?
Describe the feature
Per conversation with @jtcohen6, we are interested in creating a cast() macro meant to behave like the CAST operator described in the SQL standard. Usage likely to be something like:
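{{ cast(expression, "integer") }}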
Which some databases might render as:
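cast(expression as integer)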
Describe alternatives you've considered
Who will this benefit?
Like all cross-database macros, we anticipate this being most relevant to dbt package maintainers.
It would also be useful in implementing this:
Are you interested in contributing this feature?
Yep!
Anything else?
N/A