dbeatty10 opened 5 months ago
Worth confirming: would this happen by itself, without unit tests?
This works fine for me if I do not have any dbt unit tests defined:

```shell
dbt build -s +model_f
```

But if I add a dbt unit test and re-run that same command, then it gives the error.
I can confirm that I'm experiencing this as well.
I'm using dbt-core 1.8.3 with dbt-snowflake 1.8.3 on macOS Ventura 13.6.
I'm developing a custom dbt package which defines a suite of macros. I'm experiencing the error in my package's `integration_tests` project. I have an example model which calls `my_package.my_macro(...)` in its logic. I've also written an example unit test for the model to verify its functionality. When running the project with `dbt run`, the model runs as expected and the compiled code in `target/run` and `target/compiled` reflects the expected macro output.
When running either `dbt build` or `dbt test` on the project, however, I get the "macro is not defined" error mentioned above.
Please let us know when you've root-caused this bug; it's a huge roadblock for me and my team as we pivot to native dbt unit tests in place of the EqualExperts unit-testing framework (which doesn't seem to suffer from this bug).
Thanks in advance!
Thanks for sharing some of the specifics of your scenario @dsillman2000 👍
If a constant return value is sufficient for the purposes of your unit test, a workaround is to use an `overrides` configuration like this:

```yaml
macros:
  # explicitly set star to the relevant list of columns
  dbt_utils.star: col_a,col_b,col_c
```
Otherwise, there isn't a known workaround.
You can stay subscribed to this issue in GitHub to get notified of updates regarding further root cause or resolution.
I am having the same issue. I have a macro named `test_macro.sql`:

```sql
{% macro test_macro() %} {{ return('This is a test macro') }} {% endmacro %}
```

In `dbt_project.yml` I have set up the path correctly and call the macro:

```yaml
macro-paths: ["macros"]
...
generate_schema_name: "{{ test_macro() }}"
```

But when I run `dbt debug` I get this error:

```
Could not render {{ test_macro() }}: 'test_macro' is undefined
```

Has anyone figured out what happened?
@Goal1803 it sounds like you're dealing with a different issue, unrelated to unit tests.
You cannot call a macro directly in `dbt_project.yml`, as these are parsed at different times. There's a discussion on this here -> https://github.com/dbt-labs/dbt-core/discussions/9172
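If the goal is custom schema naming, the supported pattern is to override the `generate_schema_name` macro in a file under `macros/`, rather than referencing a user-defined macro from `dbt_project.yml`. A minimal sketch (this mirrors dbt's default behavior; adjust the branching to your needs):

```sql
-- macros/generate_schema_name.sql
-- Overriding this built-in macro customizes schema names at compile time,
-- a context where user-defined macros ARE available (unlike dbt_project.yml).
{% macro generate_schema_name(custom_schema_name, node) -%}
    {%- if custom_schema_name is none -%}
        {{ target.schema }}
    {%- else -%}
        {{ target.schema }}_{{ custom_schema_name | trim }}
    {%- endif -%}
{%- endmacro %}
```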
I'm running into this issue again, in a new project, with a macro defined within the project (more similar to the minimum reproducible example above). Unfortunately, even setting an `overrides` configuration for the macro does not fix the issue: it still complains that it's unable to find the macro, even when I give it a direct, constant override value to return.
I don't know whether this should be filed as a separate bug against the `overrides` configuration, since I've been able to override macros in other projects, but not when encountering the "macro not found" bug. Even when defining an override for the "missing" macro, the parser is still unhappy.
"Overrides" section of the subject unit test case:

```yaml
unit_tests:
  - name: parse_legacy_tool_events
    description: |-
      Shall correctly parse tool events from legacy software versions into a consistent
      "SupportLogEvent" format.
    model: stg_support_log_tool_events_legacy
    overrides:
      macros:
        legacy_support_log_sw_versions: 'my_constant_value'
    given: ...
```
After running `dbt test`:

```
'legacy_support_log_sw_versions' is undefined. This can happen when calling a macro that does not exist. Check for typos and/or install package dependencies with "dbt deps".
```

Running the same model with `dbt run` works fine, since the macro is valid and defined within the project. But it cannot be found during unit testing, and apparently the override does not apply to it.
As with my testimony above, I'm using dbt-core 1.8.3 and dbt-snowflake 1.8.3 on macOS Ventura 13.6.
Moments after posting above, I had a breakthrough @dbeatty10 ! Please try it in your minimum reproducible example project to see if it works / helps you root cause the issue.
Work-around:

```sql
{% set legacy_versions %}{{ legacy_support_log_sw_versions() }}{% endset %}

select * from {{ source(...) }} where software_version_id in ({{ legacy_versions }})
```
Note that this also allows the `overrides` value to propagate correctly, so the bug must be upstream of the overriding stage of dbt's parser. It's also noteworthy that using the return value of the macro in a set tag (i.e. `{% set value = macro_name() %}`) does not work, but using the set block syntax does work.
This suggests to me that the broken link must be in how dbt resolves macros in a set-tag context as opposed to other contexts. Please let me know if you're able to reproduce this work-around in your example case above!
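To make the contrast above concrete, here are the two forms side by side (macro name taken from this thread; per the observations above, only the block form avoids the error during unit testing):

```sql
-- Set TAG: fails during unit tests with
-- "'legacy_support_log_sw_versions' is undefined"
{% set legacy_versions = legacy_support_log_sw_versions() %}

-- Set BLOCK: works during unit tests, and respects the
-- unit test's `overrides` value for the macro
{% set legacy_versions %}{{ legacy_support_log_sw_versions() }}{% endset %}
```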
Is this a new bug in dbt-core?
Current Behavior
Sometimes (but not always!), I get this compilation error when a model contains a macro:
Expected Behavior
This seems like it should work without needing to add an override for any macros.
Steps To Reproduce
macros/my_macros.sql
models/_unit_tests.yml
models/model_f.sql
Build and see that everything works just fine:
Update `models/model_f.sql` to add `{% set ab_values = a_values + b_values %}` anywhere within the model definition. Now re-build and see the error:
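The repro files above were elided in this copy of the issue; a hypothetical sketch of the triggering model, based only on the names referenced in this thread (`a_values`, `b_values`, and the contents of `macros/my_macros.sql` are illustrative guesses), might look like:

```sql
-- models/model_f.sql (hypothetical reconstruction; names and shapes
-- are illustrative, not the original repro code)
{% set a_values = [1, 2] %}
{% set b_values = [3, 4] %}
{% set ab_values = a_values + b_values %}  -- the added line that triggers the error

select {{ my_macro() }} as id  -- assumed call into macros/my_macros.sql
```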
Relevant log output
Environment
Which database adapter are you using with dbt?
postgres
Additional Context
Found while researching https://github.com/dbt-labs/dbt-core/issues/10139.