JornWildt closed this issue 2 years ago
Personally, I think transactions require some thought, and at times there is a need to control the scope of what does and does not execute inside a transaction, because transactions have overhead, can lock resources, and can cause deadlocks. This is particularly pronounced if you are mixing database code with external web service calls, where a timeout can leave your transaction hanging for the full length of your timeout period.
I asked a similar question on Stack Overflow (almost 10 years ago!) and got a good answer from Marc Gravell, whom I'll quote here:
generally you want the transaction to span several operations, but also: you might not want one at all. I don't think too much boiler-plate helps here... transactions need thought
So, how do we work with transactions? Well, each command takes care of itself, opening a transaction when required and committing it when completed. You can use our transaction scope manager to co-ordinate multiple commands using an outer "parent" transaction, so any nested commands will use the ambient transaction if it is available. This is exactly how .NET TransactionScope works, and in fact, since it was re-introduced into .NET Core, that is what we use in our default implementation. This is explained in the transactions documentation, but perhaps we need to better explain the support for nested transactions, as it seems you might be unaware of that support here?
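To illustrate the ambient behavior described above, here is a minimal sketch using plain System.Transactions, the mechanism CoFoundry's default implementation builds on. The method names (PlaceOrder, SaveOrder, ReserveStock) are hypothetical stand-ins for commands, not CoFoundry APIs:

```csharp
using System.Transactions;

// Hypothetical outer operation: creates the ambient "parent" transaction.
void PlaceOrder()
{
    using (var outer = new TransactionScope(TransactionScopeOption.Required))
    {
        SaveOrder();      // hypothetical inner command
        ReserveStock();   // hypothetical inner command
        outer.Complete(); // only here is the work actually committed
    }
}

// Each inner command takes care of itself; with TransactionScopeOption.Required
// it joins the ambient transaction when one exists, or creates one when run alone.
void SaveOrder()
{
    using (var inner = new TransactionScope(TransactionScopeOption.Required))
    {
        // ... database work ...
        inner.Complete(); // votes to commit, but defers to the outer scope
    }
}

void ReserveStock()
{
    using (var inner = new TransactionScope(TransactionScopeOption.Required))
    {
        // ... database work ...
        inner.Complete();
    }
}
```

If any inner scope is disposed without Complete() being called, or an exception escapes, the whole ambient transaction rolls back.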
Seems like I have something to learn here too. Thanks :-)
This is explained in the transactions documentation but perhaps we need to better explain the support for nested transactions as it seems you might be unaware of that support here?
Yes, nested transactions are new to me (and so is the Entity Framework). Thanks for taking the time to point it out. It seems like it solves my issues.
And, yes, calling web services or sending mails directly without a queue, generating huge PDFs, or doing OCR scanning, just to mention a few, are problematic inside transactions. I've seen my share of those 😉 To share some background, I'm working on a system where these things happen - but the general approach has been to enforce transactions everywhere, making the default behavior safe - no half-committed related entities here - and then solve the performance bottlenecks when we stumble into them.
Docs updated with clarification and will be included in the v0.10 release.
The transaction example on https://www.cofoundry.org/docs/framework/data-access/transactions shows how to start a transaction inside a command:
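The linked sample isn't reproduced here. As a rough stand-in, a command handler that opens its own transaction using plain EF Core might look like the sketch below; the type and member names (AddProductCommandHandler, MyDbContext, Products) are illustrative, not the CoFoundry API:

```csharp
using System.Threading.Tasks;

// Hypothetical command handler that manages its own transaction.
public class AddProductCommandHandler
{
    private readonly MyDbContext _dbContext; // hypothetical EF Core DbContext

    public AddProductCommandHandler(MyDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public async Task ExecuteAsync(AddProductCommand command)
    {
        // Open an explicit transaction for the duration of this command only.
        using (var transaction = await _dbContext.Database.BeginTransactionAsync())
        {
            _dbContext.Products.Add(new Product { Name = command.Name });
            await _dbContext.SaveChangesAsync();

            // ... further related writes that must commit atomically ...

            await transaction.CommitAsync(); // rolled back on dispose if not reached
        }
    }
}
```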
But what happens if one command issues other commands itself - and these commands start transactions which are then committed or failed without noticing the parent transaction (if nested transactions are possible at all)?
And what if the programmer forgets to set up the transaction boilerplate code in all handlers?
And should the transaction be set up in the web API controller, ensuring that whatever internal code is executed, it always runs inside the transaction?
The systems I work with all have some sort of central middleware that ensures a global transaction is ready before application code is invoked, committed on success, or rolled back when an exception is thrown (which is also handled by global middleware). And we also have "post transaction handlers" which can be added on the fly when needed.
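The kind of central middleware described above can be sketched in ASP.NET Core as follows. This is an assumption about how such a component might be written, not code from the system in question; it wraps each request in an ambient TransactionScope that commits on success and rolls back when the pipeline throws:

```csharp
using System.Threading.Tasks;
using System.Transactions;
using Microsoft.AspNetCore.Http;

// Hypothetical middleware: one ambient transaction per request.
public class TransactionMiddleware
{
    private readonly RequestDelegate _next;

    public TransactionMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        // AsyncFlowOption is required so the ambient transaction flows
        // across awaits in downstream application code.
        using (var scope = new TransactionScope(
            TransactionScopeOption.Required,
            TransactionScopeAsyncFlowOption.Enabled))
        {
            await _next(context); // DB work downstream enlists in the ambient transaction
            scope.Complete();     // not reached if _next throws -> rollback on dispose
        }
    }
}
```

Registered once with `app.UseMiddleware<TransactionMiddleware>();`, this gives every request a transaction whether or not individual handlers remember to add one.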
So, question, what would be best practice - a centralized ambient transaction mechanism or specific transaction handling in all relevant places?
Personally, I prefer the ambient transaction, which ensures my application always uses transactions - not only when someone randomly remembers to add them in the appropriate places.