Originally I wanted to do the full data load in one session/transaction, but mixing ORM-style and metadata-style inserts causes blocking issues. The data sources still remain consistent, and the full process is wrapped in a try/except so it exits early if any source fails to load.
This could still use more granular error checking and consistent logging, but it automates the data pull in preparation for running the pipeline.
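The early-exit pattern described above can be sketched roughly as follows. This is a minimal illustration, not the actual implementation: the loader names and the `load_all` helper are hypothetical, and each loader is assumed to commit its own transaction so a failure leaves previously loaded sources consistent.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("data_load")


def load_all(loaders):
    """Run each source loader in sequence; stop at the first failure.

    `loaders` is a list of (name, callable) pairs. Each callable is
    assumed to manage its own transaction, so earlier sources stay
    committed and consistent even if a later one fails.
    """
    for name, loader in loaders:
        try:
            loader()
            log.info("loaded %s", name)
        except Exception:
            log.exception("failed loading %s; aborting data load", name)
            return False
    return True
```

With per-source transactions the early return plays the role of the single wrapping try/except: the first failing source stops the run, and the log record identifies which source broke.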