emmyoop opened 7 months ago
@graciegoheen Can you take a look at spike outcome here and see whether you think this ticket is worth pursuing?
If we are not moving forward on this ticket, we should document that this does not work.
I think:

```yaml
expect:
  rows:
    - {payment_id: 3, dollar: .333} # expected
    - {payment_id: 4, dollar: 0.333323, _dbt_skip_casting: ["dollar"]}
```

is the best place to put this configuration.
The configuration would live under `expect` (?). I'll sync with the unit testing squad on what interface would be best here.
What would be the consequences if we cast the expected output to the widest scale/precision/character size for the relevant data type, i.e., `numeric` without precision and scale rather than `numeric(10, 2)`? It might give us the failing unit test behavior we are after.
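As a sketch of the idea (Postgres syntax; the literal values are illustrative, not from a real test):

```sql
-- Casting to the parameterized type rounds the expected value away,
-- which is how the precision mismatch currently gets masked...
select cast(0.333 as numeric(10, 2)) as amount;  -- 0.33

-- ...while the unparameterized "widest" type keeps the full value,
-- so an over-precise expected row would fail the comparison as desired.
select cast(0.333 as numeric) as amount;         -- 0.333 on Postgres
```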
My understanding is that we use "wide" data types by default when enforcing model contracts. I'm not sure if there's prior art we could re-use here.
I tried changing this line to be `column.dtype` instead of `column.data_type`. It worked when I tried it, but only if I quoted the `"0.50"` portion:
```yaml
expect:
  rows:
    - {payment_id: 2, amount: "0.50"}
    - {payment_id: 3, amount: 0.33}
```
Otherwise, I got this error:
I didn't try any adapters other than dbt-postgres. The behavior might depend on the way each Python driver renders the value to a string 🤷.
Using the `sql` format like this also worked, along with the `data_type` -> `dtype` change:

```yaml
expect:
  format: sql
  rows: |
    select 2 as payment_id, cast(0.50 as numeric(10, 2)) as amount union all
    select 3 as payment_id, cast(0.33 as numeric(10, 2)) as amount
```
We ran into this and had a related discussion on the dbt Slack here.
> What would be the consequences if we cast the expected output to the widest scale/precision/character size for the relevant data type?
For Redshift at least, casting to just `numeric` results in `numeric(18, 0)`.
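If the Redshift default above holds, a "widest type" cast would behave very differently across adapters. A hypothetical illustration (values are illustrative; I have not verified this on a live cluster):

```sql
-- numeric without precision/scale is numeric(18, 0) on Redshift per the
-- comment above, so the fractional part would be rounded away entirely:
select cast(0.333 as numeric) as amount;  -- 0 on Redshift, 0.333 on Postgres
```

This suggests the "widest type" approach would need per-adapter handling rather than a single generic cast.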
An option to use the types defined in our contract for the model would have been a nice way to solve this from our perspective. Not being too familiar with the implementation, I thought the contract was the source of truth for the expected data types. But that may not be possible if the contract uses "wide" data types.
Housekeeping
Short description
We cannot currently unit test precision. The expected outputs are cast to the expected data types, which rounds them, so they are not parsed correctly.
from @MichelleArk in #9627
Acceptance criteria
Suggested Tests
model.sql
failing unit test yaml
passing unit test yaml
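A minimal sketch of what such a test pair might look like (model, ref, and column names are hypothetical, not from the issue; only the `numeric(10, 2)` cast mirrors the examples above):

```yaml
# Hypothetical unit tests for a model that casts amount to numeric(10, 2).
unit_tests:
  - name: test_amount_precision_failing
    model: stg_payments
    given:
      - input: ref('raw_payments')
        rows:
          - {payment_id: 3, amount: 0.333}
    expect:
      rows:
        # Should FAIL once precision is respected: the model yields 0.33,
        # which does not equal the over-precise expected value 0.333.
        - {payment_id: 3, amount: 0.333}

  - name: test_amount_precision_passing
    model: stg_payments
    given:
      - input: ref('raw_payments')
        rows:
          - {payment_id: 3, amount: 0.333}
    expect:
      rows:
        # Should PASS: expected value matches the model's rounded output.
        - {payment_id: 3, amount: 0.33}
```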
Impact to Other Teams
none
Will backports be required?
no
Context
Outcome of spike in https://github.com/dbt-labs/dbt-core/issues/9627
This will require a change to the macros in `dbt-adapters` and to tests in `dbt-core`. The tests in `dbt-core` will not pass until a new version of `dbt-adapters` is released.