Open mildbyte opened 2 years ago
For the sake of documenting offline discussion, the approaches I see are:

1. Start a transaction in the `plan_query` function for any DDL/DML query, while using the basic PG/SQLite pool as the executor, as we do now for the rest. The PITA here is that we would then have to propagate the executor (tx or pool) down into all invocations of catalog function calls and any intermediary functions (and ultimately down to the repository function calls). This seems like an overly verbose solution to me.

We've agreed on going with #2, but one of the main obstacles there was making our repo code generic enough for both SQLite and PG, as well as supporting transactions, which are supposed to be passed via re-borrowing.
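The executor-propagation verbosity in approach 1 can be sketched with stdlib-only stand-ins (`Pool`, `Tx`, `create_table`, `repo_insert_table` and the `Executor` trait here are all hypothetical, not Seafowl's actual API; in real code this would be sqlx's `Executor` trait and its pool/transaction implementors):

```rust
// Every layer of the catalog -> repository call chain has to take a
// generic executor so the same code path runs either against the pool
// or inside a transaction. Hypothetical stand-in types throughout.

trait Executor {
    fn execute(&mut self, sql: &str) -> Result<(), String>;
}

struct Pool; // stand-in for the PG/SQLite connection pool
impl Executor for Pool {
    fn execute(&mut self, sql: &str) -> Result<(), String> {
        println!("pool: {sql}");
        Ok(())
    }
}

struct Tx {
    statements: Vec<String>, // stand-in for an open transaction
}
impl Executor for Tx {
    fn execute(&mut self, sql: &str) -> Result<(), String> {
        self.statements.push(sql.to_string());
        Ok(())
    }
}

// The executor must be threaded through every intermediary function —
// this is the verbosity complained about above.
fn create_table<E: Executor>(exec: &mut E, name: &str) -> Result<(), String> {
    repo_insert_table(exec, name)
}

fn repo_insert_table<E: Executor>(exec: &mut E, name: &str) -> Result<(), String> {
    exec.execute(&format!("INSERT INTO tables (name) VALUES ('{name}')"))
}

fn main() {
    let mut pool = Pool;
    create_table(&mut pool, "t1").unwrap();

    let mut tx = Tx { statements: vec![] };
    create_table(&mut tx, "t2").unwrap();
    assert_eq!(tx.statements.len(), 1);
}
```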
Furthermore, our `Repository` trait is `Sync`, but `SqliteConnection` is not, meaning that we won't be able to use SQLite transactions in our Catalog/Repo as is.
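A minimal illustration of that bound mismatch, using `std::cell::Cell` as a stand-in for the non-`Sync` interior of `SqliteConnection` (the `Repository`, `PgRepo` and `SqliteRepo` types here are hypothetical sketches, not Seafowl's actual definitions):

```rust
// If the trait requires Sync, any implementor holding a !Sync handle
// (modelled here with Cell) is rejected at compile time.

use std::cell::Cell;

trait Repository: Sync {}

struct PgRepo; // plain data: Sync for free
impl Repository for PgRepo {}

#[allow(dead_code)]
struct SqliteRepo {
    // Cell is !Sync, standing in for the raw handle inside SqliteConnection.
    conn: Cell<u32>,
}
// impl Repository for SqliteRepo {}
// ^ error[E0277]: `Cell<u32>` cannot be shared between threads safely

fn assert_sync<T: Sync>() {}

fn main() {
    assert_sync::<PgRepo>(); // compiles
    // assert_sync::<SqliteRepo>(); // would not compile: SqliteRepo is !Sync
    println!("PgRepo is Sync");
}
```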
So for the time being (given that this is not a critical issue) I'm putting this on the backlog until some of the circumstances change.
Alluded to in https://github.com/splitgraph/seafowl/issues/48.
Start a transaction before planning a batch of Seafowl statements, roll it back on error and commit on success (before returning a result): https://docs.rs/sqlx/latest/sqlx/struct.Transaction.html . Useful for:
- `CREATE` staging + `DELETE` current + `ALTER` staging `RENAME TO` current (other readers will either see the old or the new version of the table when querying `current`)
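The begin / roll-back-on-error / commit-on-success flow above can be sketched without sqlx, modelling its `Transaction` (which rolls back on drop unless explicitly committed) with a hypothetical in-memory stand-in:

```rust
// Sketch of planning a batch inside one transaction: any failed
// statement makes the early return drop the transaction, discarding
// the pending changes (rollback); only a fully successful batch commits.
// `Db`, `Transaction` and `plan_batch` are illustrative stand-ins.

struct Db {
    tables: Vec<String>,
}

struct Transaction<'a> {
    db: &'a mut Db,
    pending: Vec<String>,
}

impl<'a> Transaction<'a> {
    fn execute(&mut self, stmt: &str) -> Result<(), String> {
        if stmt.contains("FAIL") {
            return Err(format!("statement failed: {stmt}"));
        }
        self.pending.push(stmt.to_string());
        Ok(())
    }
    // Consuming commit applies the buffered statements; dropping the
    // transaction without calling it simply discards them (rollback).
    fn commit(self) {
        self.db.tables.extend(self.pending);
    }
}

fn plan_batch(db: &mut Db, stmts: &[&str]) -> Result<(), String> {
    let mut tx = Transaction { db, pending: vec![] };
    for s in stmts {
        tx.execute(s)?; // early return drops `tx` -> rollback
    }
    tx.commit();
    Ok(())
}

fn main() {
    let mut db = Db { tables: vec![] };
    plan_batch(&mut db, &["CREATE t_staging"]).unwrap();
    // Second batch fails mid-way; nothing from it is applied.
    assert!(plan_batch(&mut db, &["CREATE t2", "FAIL"]).is_err());
    assert_eq!(db.tables, vec!["CREATE t_staging".to_string()]);
}
```

With sqlx the same shape falls out of `Transaction`'s drop semantics: commit before returning the result, and any `?` on the error path rolls back automatically.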