anna-geller opened 1 year ago
Yes, could be nice, but I think for a project with quite a lot of models (800+ in our case) it would start to be overwhelming.
Would be very useful to focus on warnings and errors for models & tests.
Displaying logs for the successful models built is enough IMO.
dbt Cloud displays metrics like this, and it's super useful for analysts etc.:
Adding a quick video showing our current needs in terms of UI: https://www.loom.com/share/0556984bd9564b328ea4cf413954a514
Based on this feedback, this issue is deprioritized in favor of the more generic log-level representation in https://github.com/kestra-io/kestra/issues/2045
Extend the `dbtRun`, `dbtBuild`, and `dbtCLI` tasks with 2 new properties, `parse` and `outputRunResults`:

- `parse` (new property) is set to `true`
- `outputRunResults` indicates whether to output dbt models with custom metadata like duration, which will allow later to expose those as events. Ideally, this would be an ION file with one row per dbt model or test (the data can be extracted from the `run_results.json` generated by dbt after `dbt build`/`dbt run` completion).

### Context
dbt models and tests are difficult to read and inspect in the Gantt view:
We would like to see those in the Execution Topology view:
Simple reproducer to inspect/iterate on the design (you can run it directly with no setup):
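To make the `outputRunResults` idea above concrete, here is a minimal sketch of how the `results` array in dbt's `run_results.json` artifact could be flattened into one row per model or test. The field names `unique_id`, `status`, and `execution_time` come from the dbt run-results artifact schema; the output format here is newline-delimited JSON as a stand-in for the ION file the issue proposes, and the sample payload is hypothetical.

```python
import json

def extract_run_results(run_results: dict) -> list[dict]:
    """Flatten dbt's run_results.json into one row per model/test.

    Only a few representative fields are kept; a real implementation
    could carry over timing details, messages, etc.
    """
    rows = []
    for result in run_results.get("results", []):
        rows.append({
            "unique_id": result.get("unique_id"),          # e.g. model.my_project.orders
            "status": result.get("status"),                # success / error / pass / fail ...
            "execution_time": result.get("execution_time") # duration in seconds
        })
    return rows

# Hypothetical payload mimicking the shape of run_results.json
sample = {
    "results": [
        {"unique_id": "model.my_project.orders",
         "status": "success", "execution_time": 1.42},
        {"unique_id": "test.my_project.not_null_orders_id",
         "status": "pass", "execution_time": 0.08},
    ]
}

rows = extract_run_results(sample)
# One row per line (NDJSON here, standing in for the proposed ION output)
for row in rows:
    print(json.dumps(row))
```

With rows shaped like this, each dbt model or test becomes an individually addressable record, which is what would let Kestra surface them as events in the Execution Topology view rather than as one opaque task log.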