guyr-ziprecruiter opened 1 month ago
Hey @guyr-ziprecruiter!

First: are you regularly running `dbt compile` for all the models in your project? If so, what's the reason you're doing this? I wonder if you're running `compile` when a simple `parse` might do, or if you instead want to just do something like `dbt compile --select specific_model`.

As far as your proposal: you might try `dbt compile --no-introspect`.
Hello again!

I need the SQL query created by the model, so `dbt parse` is not applicable in my use case, if I'm not mistaken.

I switched to `--no-introspect`:

```
$ dbt compile --no-introspect --profiles-dir=profiles/ --profile=athena --models foo
```

But it still took a couple of minutes, so it seems like that's not what I was looking for.

Ideally I would like to work with no connector: just get the jinja templates to render and get my query back. Is that possible?

Thanks!
@guyr-ziprecruiter Thanks for giving it a try! In that case, I'm not sure what the source of slowness would be. A few questions:

1. How many `--threads` are you running `dbt compile` with?
2. How many models are selected by `--models foo`?
3. Within the debug logs (`logs/dbt.log`), what does it look like dbt is spending most of the time doing?

I'm not using `--threads`.
My `profiles.yml` says 4 threads. `python -c 'from multiprocessing import cpu_count as cc;print(cc())'` says 12. This did seem to help a bit - thanks! It might be nice if `dbt compile` would suggest a better number (if there is such a way to tell it).

`grep -c '{{ ref(' foo.sql` gives 3. But how many models do these models refer to, recursively? I can't tell. That might actually be a nice thing for the tool to suggest.
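For what it's worth, the recursive count can be derived from the `parent_map` in `target/manifest.json`. A minimal sketch, assuming a manifest produced by a recent `dbt parse`/`dbt compile` run; `model.my_project.foo` is a placeholder node id:

```python
# Count the transitive upstream models of `foo` from target/manifest.json.
import json

with open("target/manifest.json") as f:
    parent_map = json.load(f)["parent_map"]  # node id -> direct parent node ids

def upstream(node_id, seen=None):
    seen = set() if seen is None else seen
    for parent in parent_map.get(node_id, []):
        if parent not in seen:
            seen.add(parent)
            upstream(parent, seen)
    return seen

deps = upstream("model.my_project.foo")  # placeholder node id
print(sum(1 for d in deps if d.startswith("model.")), "upstream models")
```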
I executed:
```
$ dbt --debug --log-cache-events --log-path ~/dbtlogs/ compile \
    --threads 12 \
    --profiles-dir=profiles/ --profile=athena \
    --models itm_applies_revenue_attribution > /dev/null
```
which got me 55MB worth of logs (8.1MB gzipped). So let me get back to you about it :)
Again, a built-in profiler seems like a worthy addition if these are the questions that arise. I'll post my results soon. Thanks!
I wrote a dbt logs parser in Python (dbtlogs.py.txt) and came up with this:
| action | estimated_duration | percent |
|---|---|---|
| dump after adding | 0 days 00:00:04.261488 | 49.64% |
| adding relation | 0 days 00:00:02.935445 | 34.19% |
| dump before adding | 0 days 00:00:01.375737 | 16.02% |
| Connection | 0 days 00:00:00.006651 | 0.08% |
| Timing info for model.main... | 0 days 00:00:00.001911 | 0.02% |
| Sending event | 0 days 00:00:00.001520 | 0.02% |
| Compiled node | 0 days 00:00:00.001466 | 0.02% |
| Command | 0 days 00:00:00.000737 | 0.01% |
| Concurrency | 0 days 00:00:00.000382 | 0.0% |
| total | 0 days 00:00:08.585337 | 100.0% |
I'm not sure whether this represents some kind of cached run where things happen faster. Also, due to the structure of the logs I had to take some liberties: pick the action names by a heuristic, and assume the delta between each two consecutive events is the earlier event's duration (I used 0 as the last event's duration).
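The heuristic boils down to something like the sketch below (not the attached parser itself; the timestamp slicing assumes a `YYYY-MM-DD HH:MM:SS.ffffff` prefix on each log line, which may differ in your setup):

```python
# Parse the timestamp at the start of each log line, bucket lines by a crude
# action label, and charge the gap between consecutive events to the earlier
# event (the last event implicitly gets a duration of 0).
from collections import defaultdict
from datetime import datetime, timedelta

events = []
with open("dbt.log") as f:
    for line in f:
        try:
            ts = datetime.strptime(line[:26], "%Y-%m-%d %H:%M:%S.%f")
        except ValueError:
            continue  # continuation line without a leading timestamp
        events.append((ts, line[27:57].strip()))  # crude action label

durations = defaultdict(timedelta)
for (ts, action), (next_ts, _) in zip(events, events[1:]):
    durations[action] += next_ts - ts

total = sum(durations.values(), timedelta())
for action, d in sorted(durations.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{action:35} {d}  {d / total:7.2%}")
```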
That was actually pretty dumb of me - I could just use `--log-format json`.

Or could I...?
Trying to run with `--log-format json` gives me:

```
Traceback (most recent call last):
  File "/Users/guyr/miniconda3/envs/sections/lib/python3.10/site-packages/dbt/events/functions.py", line 217, in msg_to_dict
    msg_dict = MessageToDict(
TypeError: MessageToDict() got an unexpected keyword argument 'including_default_value_fields'

During handling of the above exception, another exception occurred:
```

(this repeats until `RecursionError: maximum recursion depth exceeded`)

Attached is the output file: dbt-log-json-format-stderr.txt.gz.
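For what it's worth, that `TypeError` is consistent with a protobuf/dbt version mismatch: newer protobuf releases renamed the `including_default_value_fields` keyword that older dbt versions pass to `MessageToDict`. A quick way to check which variant is installed (this is an assumption about the cause, not a confirmed diagnosis):

```python
# Check whether the installed protobuf still accepts the keyword that this
# dbt version passes to MessageToDict. Assumption: a newer protobuf renamed
# `including_default_value_fields`, which would explain the TypeError above.
import inspect

from google.protobuf.json_format import MessageToDict

params = inspect.signature(MessageToDict).parameters
print("including_default_value_fields" in params)  # False on newer protobuf
```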
Since `locale.getlocale(locale.LC_TIME)` is mentioned, here's also the output of `locale` on my machine:

```
$ locale
LANG=""
LC_COLLATE="C"
LC_CTYPE="UTF-8"
LC_MESSAGES="C"
LC_MONETARY="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_ALL=
```
Thanks!
@guyr-ziprecruiter could you open a bug report for the "maximum recursion depth exceeded" you mentioned in https://github.com/dbt-labs/dbt-core/issues/10155#issuecomment-2127489585? That way we can handle that unexpected error separately from your feature request.
I gave `dbt compile --log-format json` a quick attempt using dbt-core==1.8.0 and dbt-postgres==1.8.0 and it worked fine, so it must be something specific to your versions and/or environment.
Will do. In the meantime, let me rephrase my question: during a `dbt compile` run on my laptop, the connectors are used, making every call slow and pretty demanding. Is there a way to cache whatever the connectors collect, so that a consecutive call (say, 1 second after the first one finished) ends much faster?
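Conceptually, the ask is something like the sketch below: memoize the connector's metadata lookups on disk between runs. Everything here is hypothetical; none of these names exist in dbt, and this only illustrates the idea:

```python
# Hypothetical sketch of the proposed behavior: cache expensive introspective
# lookups on disk so a second compile run moments later can reuse them.
import functools
import hashlib
import json
import pickle
from pathlib import Path

CACHE_DIR = Path(".introspection_cache")  # hypothetical location

def cached_introspection(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        CACHE_DIR.mkdir(exist_ok=True)
        key = hashlib.sha256(
            json.dumps([fn.__name__, args, kwargs], default=str).encode()
        ).hexdigest()
        path = CACHE_DIR / key
        if path.exists():  # reuse the earlier answer, at my own risk
            return pickle.loads(path.read_bytes())
        result = fn(*args, **kwargs)
        path.write_bytes(pickle.dumps(result))
        return result
    return wrapper

@cached_introspection
def get_columns(schema: str, table: str) -> list:
    ...  # stand-in for a slow warehouse metadata query
```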
@dbeatty10 my versions seem to be:

```
$ pip list | grep -i dbt
dbt                        1.0.0.35.1
dbt-artifacts-parser       0.5.1
dbt-athena-community       1.5.1
dbt-core                   1.5.0
dbt-duckdb                 1.5.1
dbt-extractor              0.4.1
dbt-semantic-interfaces    0.4.2
dbt-spark                  1.5.0
dbterd                     1.12.0
sqlfluff-templater-dbt     2.3.5
```
So I believe this bug has already been fixed - it's my bad for working with older versions 🤷
Even a simple expansion such as

```
$ dbt compile --profile athena --profiles-dir=profiles/ --inline '{{ source("foo", "bar") }}'
```

(with the real `source(...)` parameters replaced by foo, bar) takes 2 minutes :(
Is this your first time submitting a feature request?
Describe the feature
A simple `dbt compile` run may take several minutes. I gather this is because the connectors are applied and things are verified for correctness (e.g. that the referenced tables and columns really are there). That's wonderful! But if I'm running the same `dbt compile` multiple times, I'm willing to take the risk and assume nothing has changed. Hence I suggest a run mode in which whatever is collected by the connectors gets cached, so I can reuse it instead of waiting 3 minutes every time I try to compile my model to Athena. Thanks!

Describe alternatives you've considered
Writing a jinja2 template renderer myself, and using jinja2 without dbt. Both did not work.
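For illustration, the "jinja2 without dbt" attempt might look roughly like the following (hypothetical stub implementations of `ref` and `source`; it falls apart once a model uses adapter-backed macros or other dbt built-ins, which is presumably why it did not work):

```python
# Render a model with plain Jinja2 and stubbed-out ref()/source().
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("models"))

def ref(name: str) -> str:
    return f'"analytics"."{name}"'  # guess at how dbt would resolve it

def source(source_name: str, table_name: str) -> str:
    return f'"{source_name}"."{table_name}"'

print(env.get_template("foo.sql").render(ref=ref, source=source))
```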
Who will this benefit?
dbt users compiling their queries to test (e.g. in the Athena console) and wishing not to spend hours waiting (it adds up!).
Are you interested in contributing this feature?
Not impossible if no one better is willing to!
Anything else?
No response