only-cliches / Nano-SQL

Universal database layer for the client, server & mobile devices. It's like Lego for databases.
https://nanosql.io
MIT License

provide a way to pass around transaction object #154

Open alidcast opened 5 years ago

alidcast commented 5 years ago

Looking at the current transaction API, it looks like it only accepts an array of queries. It'd also be useful if there were a way to pass a transaction object to each query execution.

example API:

nSQL().transaction(async function (trx) {
   await nSQL("table").transacting(trx).query("upsert", {row: data}).exec()
   await nSQL("table2").transacting(trx).query("delete").where(["id","=","something"]).exec()
})

(the above API is inspired by Knex.js's transacting method)

alidcast commented 5 years ago

Being able to pass a transaction around is also useful for automatically cleaning up the database when testing: you can pass a transaction object in place of a regular connection and easily roll back changes after each test.

This might require making sure nested transactions can occur, in the sense that if the top-level database connection gets replaced with a transaction during testing, then transactions inside the application logic should also work.
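To make the test-cleanup idea concrete, here is a minimal in-memory sketch. The transacting/rollback shape is the *proposed* API, not something nanoSQL currently exposes, so a tiny hand-rolled store stands in for the real adapter: each test runs its writes against a transaction overlay, then rolls back, leaving committed state untouched.

```javascript
// Sketch only: all names here (makeStore, begin, upsert, remove, rollback)
// are invented for illustration; none of this is existing nanoSQL API.
function makeStore() {
  const rows = new Map(); // committed state: id -> row
  return {
    rows,
    // Begin a transaction whose writes stay in an overlay until commit.
    begin() {
      const overlay = new Map();
      const deleted = new Set();
      return {
        upsert(id, row) { overlay.set(id, row); deleted.delete(id); },
        remove(id) { overlay.delete(id); deleted.add(id); },
        get(id) {
          if (deleted.has(id)) return undefined;
          return overlay.has(id) ? overlay.get(id) : rows.get(id);
        },
        commit() {
          for (const id of deleted) rows.delete(id);
          for (const [id, row] of overlay) rows.set(id, row);
        },
        rollback() { overlay.clear(); deleted.clear(); } // discard everything
      };
    }
  };
}

// A "test" body runs against the transaction instead of the raw connection.
const store = makeStore();
store.rows.set(1, { id: 1, name: "seed" });

const trx = store.begin();
trx.upsert(2, { id: 2, name: "created in test" });
trx.remove(1);
console.log(trx.get(2).name); // "created in test" — visible inside the tx
trx.rollback();               // cleanup: committed state is untouched

console.log(store.rows.has(2));      // false
console.log(store.rows.get(1).name); // "seed"
```

Because the overlay never touches the committed Map until commit, nesting works the same way during testing: an inner transaction just layers another overlay on top.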

only-cliches commented 4 years ago

Maybe the transaction system needs to be rethought. I was initially setting it up to speed up writes & deletes and to batch them together atomically.

It looks like a majority of the time we're not going to be able to make the transactions atomic at the adapter level, especially across tables/indexes. I think a two-step write/commit system needs to be set up to accomplish this; it's the only way to provide actual data integrity with systems like Redis and RocksDB.

Basically we'd have a new hidden "transaction" table that stores each transaction and its commit state.

When a transaction is performed we'd have to do these steps:

  1. Add the transaction queries to the transaction table in a group.
  2. Read and save the current state of all affected rows, including recording rows that will be added and removed by the transaction.
  3. Perform the transaction queries on the database.
  4. Mark the transaction "complete" in the transaction table, possibly deleting its queries as well.
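The four steps above can be sketched with plain Maps standing in for a real adapter. Everything here (_txTable, runTransaction, the record shape) is invented for illustration, not existing nanoSQL internals:

```javascript
// Two-step write/commit sketch. Hypothetical structures:
const table = new Map();    // user data: id -> row
const _txTable = new Map(); // hidden transaction table: txId -> record

function runTransaction(txId, writes /* [{id, row|null}] */) {
  // 1. Add the transaction's queries to the transaction table as a group.
  const record = { writes, state: "pending", snapshot: [] };
  _txTable.set(txId, record);

  // 2. Save the prior state of every affected row (a missing row is
  //    recorded as `undefined` so recovery knows to delete it).
  record.snapshot = writes.map(w => ({ id: w.id, prev: table.get(w.id) }));

  // 3. Perform the transaction queries on the database.
  for (const w of writes) {
    if (w.row === null) table.delete(w.id);
    else table.set(w.id, w.row);
  }

  // 4. Mark the transaction "complete" (its queries could be deleted here).
  record.state = "complete";
}

runTransaction("tx1", [{ id: 1, row: { id: 1, name: "a" } }]);
console.log(table.get(1).name);         // "a"
console.log(_txTable.get("tx1").state); // "complete"
```

If the process dies between steps 3 and 4, the record is still "pending" and carries enough snapshot data to undo the partial writes.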

We'd also need to set up a check on database load to see if there are any partially committed transactions in the transaction table, and restore all affected rows to the state saved before the transaction started.
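The load-time check could look roughly like this, again using invented in-memory structures (a Map for the data table, a Map for the hidden transaction table) rather than real nanoSQL internals. Any transaction still marked "pending" is rolled back by restoring the snapshot taken before its writes:

```javascript
// Recovery sketch: undo any transaction that never reached "complete".
function recoverOnLoad(table, _txTable) {
  for (const [txId, record] of _txTable) {
    if (record.state !== "complete") {
      // Restore each affected row to its pre-transaction state.
      for (const { id, prev } of record.snapshot) {
        if (prev === undefined) table.delete(id); // row was added by the tx
        else table.set(id, prev);                 // row was changed/removed
      }
      _txTable.delete(txId); // drop the partial transaction
    }
  }
}

// Simulate a crash mid-transaction: snapshot taken and write applied,
// but the transaction was never marked complete.
const table = new Map([[1, { id: 1, v: "old" }]]);
const _txTable = new Map([
  ["tx9", { state: "pending", snapshot: [{ id: 1, prev: { id: 1, v: "old" } }] }]
]);
table.set(1, { id: 1, v: "new" }); // the partial write

recoverOnLoad(table, _txTable);
console.log(table.get(1).v); // "old" — restored
console.log(_txTable.size);  // 0
```

Completed transactions are left alone, so recovery is idempotent and safe to run on every load.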

This is a good deal of additional code, though; it might be better in a plugin. Also, the planned history plugin already does about 90% of these actions, so it might make more sense to add transactions to that plugin instead.

I don't see transactions making sense a majority of the time: they'll slow down writes and deletes dramatically while only making your queries marginally safer than before. This is almost certainly something people can opt into if needed but shouldn't be on otherwise.