dkruh1 opened 2 months ago
Thanks for your pull request, and welcome to our community! We require contributors to sign our Contributor License Agreement, and we don't seem to have your signature on file. Check out this article for more information on why we have a CLA.
In order for us to review and merge your code, please submit the Individual Contributor License Agreement form linked above. If you have questions about the CLA, or if you believe you've received this message in error, please reach out through a comment on this PR.
CLA has not been signed by users: @dkruh36
resolves https://github.com/dbt-labs/dbt-spark/issues/1062
docs: dbt-labs/docs.getdbt.com/#
Problem
When executing a dbt Python model, users must choose between an all-purpose cluster and a job cluster (see docs). This requirement limits the ability to execute dbt models inline within an existing notebook, forcing model execution to be triggered outside of Databricks.
In contrast, SQL models in dbt can leverage the session connection method, allowing them to be executed as part of an existing session. This separation of model logic from job cluster definitions enables orchestration systems to define clusters based on different considerations.
Request: We propose introducing a similar session option for Python models. This feature would allow users to submit Python models to be executed within a given session, thereby decoupling model definitions from job cluster specifications.
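For illustration, opting into such a method from a model's config block might look like the sketch below. The option name `submission_method` and the value `session` follow this proposal; they are assumptions about the eventual API, not a released option.

```python
# models/my_python_model.py -- hypothetical usage under this proposal
def model(dbt, session):
    # "session" submission method assumed from this PR, not a released option:
    # run the model in-process against the already-available Spark session
    # instead of submitting it to an all-purpose or job cluster.
    dbt.config(submission_method="session")
    df = dbt.ref("upstream_model")
    return df
```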
Solution
This PR adds a new submission method: `session`. When this method is selected, the compiled code of the dbt Python model is executed in the same process in which dbt itself is running, assuming a Spark session is available there. This is the Python-model counterpart of the `session` connection method that SQL models already support.
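The core idea of in-process execution can be sketched as follows. This is a minimal illustration, not the PR's actual implementation: the helper name `run_model_in_process` is hypothetical, and the real submission method would also inject the active `SparkSession` into the execution context rather than an arbitrary dict.

```python
def run_model_in_process(compiled_code: str, context: dict) -> dict:
    """Execute compiled model source in the current process, sharing
    whatever objects (e.g. an active Spark session) `context` provides."""
    # exec() runs the compiled model in this process, so no cluster
    # submission is needed; results land in `context`.
    exec(compiled_code, context)
    return context

# The string below is a stand-in for dbt's compiled Python model output.
ctx = run_model_in_process("answer = 40 + 2", {})
print(ctx["answer"])  # prints 42
```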
Checklist