matanmichaely opened 1 month ago
@matanmichaely apologies for my delay in reviewing this; I've been very busy prepping 1.9.0 release and getting ready/going to Coalesce. This is an interesting proposal, and I think the primary thing holding me back from wholehearted approval is that I don't know how we would write a functional test that validates your actual use case, since presumably it would have to be invoked from a notebook. If you can include one or more functional tests to validate the behavior, we can move forward on this PR.
Problem
When executing a dbt Python model, users must choose between an all-purpose cluster or a job cluster (see docs). This requirement limits the ability to execute dbt models inline within an existing notebook, forcing model execution to be triggered outside of Databricks.
In contrast, SQL models can leverage the session connection method, allowing them to run as part of an existing session. Decoupling model logic from job cluster definitions in this way enables orchestration systems to define clusters based on other considerations.
Request
We propose introducing a similar session option for Python models. This feature would allow users to submit Python models to be executed within a given session, thereby decoupling model definitions from job cluster specifications.
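For illustration, a Python model opting into in-session execution might look like the sketch below. The `submission_method` config key and its `"session"` value are assumptions based on this proposal, not a released API, and the model name is hypothetical.

```python
# models/my_python_model.py -- hypothetical usage sketch, assuming the
# proposed "session" submission method is available.

def model(dbt, session):
    # Assumed config: run this model in the caller's existing Spark
    # session instead of submitting it to a job or all-purpose cluster.
    dbt.config(submission_method="session")
    upstream = dbt.ref("my_upstream_model")  # hypothetical upstream model
    return upstream
```

With this config in place, running dbt from inside a Databricks notebook would keep the model's execution in the notebook's own session rather than spinning up a separate cluster.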
Solution
This PR adds a new submission method, session. When selected, the compiled code of the dbt Python model is executed in the same process in which dbt itself is running, assuming a Spark session is available. This is the Python-model equivalent of the session connection method used for SQL models.
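The in-process execution described above can be sketched roughly as follows. This is a simplified illustration of the idea, not the PR's actual implementation; the function name and the `spark` variable exposed to the compiled code are assumptions.

```python
# Hypothetical sketch of in-session execution: run the compiled Python
# model code in the current process, reusing an already-active Spark
# session instead of submitting the code to a job cluster.

def run_in_session(compiled_code: str, spark_session) -> dict:
    """Execute compiled model code in-process.

    The active session is exposed to the model code as `spark`
    (an assumed convention for this sketch).
    """
    if spark_session is None:
        # The session method only makes sense when dbt runs where a
        # Spark session already exists, e.g. inside a notebook.
        raise RuntimeError(
            "The 'session' submission method requires an active Spark session"
        )
    namespace = {"spark": spark_session}
    exec(compiled_code, namespace)  # run the compiled dbt model code
    return namespace
```

Because the code runs in dbt's own process, no cluster specification is needed in the model or profile, which is what decouples model definitions from cluster configuration.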
Notes
Related issue in dbt-spark: link
Synced with @dkruh36