Not sure if this belongs here or in a separate issue, but it'd be great if the test output were printed in DAG order (and grouped by the model they reference), so that tests for the base models appear at the top and tests for the most downstream (and more abstract) models appear at the bottom.
This might not be applicable to all use cases, but it could also be useful to output a dbt model (with appropriate refs) into a "Test Results" folder; I can see this being helpful for a number of downstream use cases.
Some great thoughts on this Discourse post that I wanted to expand on.
To me, the fundamental issue is that the query used to test is often different than the query needed to debug (I think "debug" is a more accurate term for this than "audit").
This means the dbt user has to take a lot of steps to get to the bottom of a test failure:
1. Find the compiled SQL for the failing test in the target/compiled directory.
2. Copy it into a query console.
3. Convert it from a count of failures into a query that returns the failing rows themselves (swap select count(*) for select *, remove grouping, add a limit).
4. Run that debug query against the warehouse to inspect the offending records.
This is especially problematic when debugging a CI failure, where the compiled test queries aren't directly available to the user unless they have saved them as artifacts of the build.
It would be great if dbt did all of this and presented the test failures clearly!
For example:
@drewbanin, you suggested implementing a separate debug query from the test query, either as a CTE or as a comment. I think this is a great idea. However, this only eliminates step 3 above. Why not set this up so dbt runs the debug query on test failure, returns the first 2-3 results to the command line, and logs the full result to an audit table (re: #903)?
Something like this?
dbt test --models orders
Failure in test unique_orders_order_id (models/orders/schema.yml)
Got 2 results, expected 0.
order_id count
------------------------
1745291294829100 2
1384819819499118 10
compiled test SQL at target/compiled/.../unique_orders_order_id.sql
compiled debug SQL at target/compiled/.../unique_orders_order_id.sql
Complete test results logged to table dbt.test_failures
I second @joshtemple's idea of having a debug query that gets kicked off when a test fails. It would be especially useful with time-based data tests. I have several tests that only look at the last 24 hours' data, so it's a pain to convert the test SQL into debug SQL AND fix the dates so they cover the same 24-hour period as the failed test (which might've been a few days ago). A debug query kicking off right after a failed test would be able to look at the exact same time period and retain the results.
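For illustration, here is a hedged sketch of the kind of time-boxed data test being described; the model, column names, and Postgres-style interval syntax are assumptions, not from this thread:

```sql
-- Hypothetical data test that only inspects the trailing 24 hours of data.
-- If it fails and you only get to debug it days later, a hand-edited copy of
-- this query no longer covers the same window the failing run saw.
select order_id, count(*) as n
from {{ ref('orders') }}
where ordered_at >= current_timestamp - interval '24 hours'
group by order_id
having count(*) > 1
```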
@joshtemple I'm relatively happy to have dbt run a query here and print the results to stdout, though I do think it could get kind of messy for some types of (mostly custom) tests. The example shown here (uniqueness) makes a ton of sense, but it's less clear to me what the output should be for not_null tests! Maybe that particular test is the odd one out; I just want to make the claim that it won't always be obvious what the "results" of a failed test should be.
I still love the idea of persisting test failures in tables, but I'm unsure about the mechanisms dbt should employ to persist & manage those failure tables. Databases like BQ support table expiration, but on other databases dbt will need to clean up these test tables itself to avoid making a really big mess of things.
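For reference, the BigQuery table expiration mentioned above is a per-table option, which is roughly the behaviour dbt would have to emulate elsewhere with explicit cleanup; a sketch with a hypothetical project, dataset, and table name:

```sql
-- BigQuery DDL: let a persisted test-failure table age out on its own after
-- 7 days (the project, dataset, and table names here are hypothetical).
alter table `analytics.dbt_test_failures.unique_orders_order_id`
set options (
  expiration_timestamp = timestamp_add(current_timestamp(), interval 7 day)
);
```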
Some very practical (and tractable) questions for us to consider:
- Where should dbt persist these test failures? Should they go into a dedicated schema (e.g. {{ model.schema }}__tests), and should dbt drop that schema before invoking tests?
- How should users configure this (in schema.yml?), and how do we make this environment aware (do it in prod but not in dev, or similar)?

Curious about your collective thoughts on these questions, or any other ideas/questions that are conjured up as you think about this one!
So I started in on a branch to do a very basic first pass at this, the idea being to remove the count wrapper and dump the test results into the node's agate_table at test run, and then print the first 3-5 rows in the failure summary. To make this work for schema tests I was planning on expanding the wrapped_sql to get a full count(*) on the test query (so the status still displays the total number of failures), then getting a few full records to union in so we don't end up ETL'ing the whole warehouse during test runs... But now I'm scratching my head on how to cleanly do an ANSI-safe limit that will play nicely with all the adapters (see the sketch below). I really want to keep this as part of the canonical test run output if possible... just not 100% clear on how yet.
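As a strawman for that ANSI-safe limit question, here is a minimal sketch of a dispatching macro; the macro name sample_limit and the per-target.type branching are hypothetical and not part of dbt:

```sql
{# Hypothetical helper, not part of dbt: emit an adapter-appropriate row
   limit so a test wrapper can sample a handful of failing records. #}
{% macro sample_limit(n) %}
    {% if target.type in ('postgres', 'redshift', 'snowflake', 'bigquery') %}
        limit {{ n }}
    {% else %}
        {# ANSI SQL spelling for adapters that lack LIMIT #}
        fetch first {{ n }} rows only
    {% endif %}
{% endmacro %}
```

A test wrapper could then append {{ sample_limit(5) }} to the sampled-rows branch of the query while leaving the count(*) branch untouched.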
On the subject of materialising test results in a folder or table, can I suggest that this is probably better served with a set of audit models? It seems like a good philosophical line to draw: the sole responsibility of tests is to support the development and deployment cycle, and anything that creates consumable data (especially audit data) belongs in a model, where it can be correctly governed with post-hooks, docs, positioning in the DAG, etc. It is generally easy to write a macro that gets called in both the model and the test to keep things DRY (see the sketch below), but we don't want to slow down testing with data governance concerns, or compromise data governance for testing. In the sample failing rows branch I have them printing directly to stdout with a debug log line ("sample data displayed in console") so we aren't dumping actual records into log files, slippery slope and whatnot.
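To make that DRY-macro idea concrete, here is a minimal sketch under assumed names (the duplicate_order_ids macro, the orders model, and the file path are all illustrative) of one piece of failure logic that could back both an audit model and a data test:

```sql
{# macros/duplicate_order_ids.sql (hypothetical): the shared failure logic #}
{% macro duplicate_order_ids() %}
    select order_id, count(*) as n
    from {{ ref('orders') }}
    group by order_id
    having count(*) > 1
{% endmacro %}
```

The audit model would then be select * from ({{ duplicate_order_ids() }}) as dupes, governed like any other model, while a data test with the same body simply fails the run whenever it returns rows.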
We did this log layer a while back: https://github.com/norton120/kimball_dbt. We're doing a fresh implementation this winter at my new gig, which we can try to release as a package; that may help?
Consider compiling a simple select statement (w/ a limit) that can be directly run to find test failures. Right now, you need to change a select count(*) to a select *, and sometimes remove groups, add limits, etc.
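For context, here is a hedged before/after of what that would save; the table and column names are illustrative, and the exact compiled SQL varies by dbt version:

```sql
-- Roughly what a uniqueness test compiles to today: a single scalar count.
select count(*)
from (
    select order_id
    from analytics.orders
    where order_id is not null
    group by order_id
    having count(*) > 1
) validation_errors;

-- The hand-edited debug query you end up writing: the failing rows themselves.
select order_id, count(*) as n
from analytics.orders
where order_id is not null
group by order_id
having count(*) > 1
limit 10;
```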