All ETL compute will be handled on top of persistent data files (`duck.db`). Much of the logic needed here will be similar to PartitionedDataStore. PersistentDataStore delivers the requirements for [Process 1 + Process 2] of the ETL pipeline. We'll then create similar middleware to abstract the common logic, so that our ETL pipelines can be updated to use it.
DoD:
[x] Implement a base class for DuckDB
[x] Implement a child class "PersistentDataStore" that wraps persistent DuckDB databases
[ ] All middleware commands have been implemented and tested