glmnet opened this issue 8 years ago
The purpose of "delaying" the SaveChanges-call in a nested scope is to allow the top level scope to control where the transaction ends and when the actual changes are committed to the database.
This is useful for example in a situation like this:
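For concreteness, here is a minimal sketch of what such a pair of methods might look like, assuming a DbContextScope-style API (IDbContextScopeFactory.Create(), scope.DbContexts.Get<T>(), scope.SaveChanges()); PlayerService, PlayerDbContext, Player and the Gold property are placeholder names for illustration, not code from this thread:

```csharp
public class PlayerService
{
    private readonly IDbContextScopeFactory _scopeFactory;

    public PlayerService(IDbContextScopeFactory scopeFactory)
    {
        _scopeFactory = scopeFactory;
    }

    public void AddGold(int playerId, int amount)
    {
        using (var scope = _scopeFactory.Create())
        {
            var context = scope.DbContexts.Get<PlayerDbContext>();
            var player = context.Players.Find(playerId);
            player.Gold += amount;

            // Commits only if this is the top-level scope; inside a nested
            // scope the call is deferred and the parent decides when to save.
            scope.SaveChanges();
        }
    }

    public void RemoveGold(int playerId, int amount)
    {
        using (var scope = _scopeFactory.Create())
        {
            var context = scope.DbContexts.Get<PlayerDbContext>();
            var player = context.Players.Find(playerId);
            if (player.Gold < amount)
                throw new InvalidOperationException("Not enough gold.");
            player.Gold -= amount;

            scope.SaveChanges();
        }
    }
}
```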
Both of these methods call SaveChanges, which commits the transaction to the database, but only if we're not in a nested scope.
Now we add a method TransferGold(Player from, Player to, int amount). This method creates a parent scope and then calls first AddGold and then RemoveGold for the respective players. If the RemoveGold call fails, the AddGold changes are also rolled back in the database, because those "partial saves" were denied.
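A sketch of that composing method, using the same assumed API and placeholder types as the example above:

```csharp
public void TransferGold(Player from, Player to, int amount)
{
    using (var scope = _scopeFactory.Create())
    {
        AddGold(to.Id, amount);       // nested scope: its SaveChanges is deferred
        RemoveGold(from.Id, amount);  // throws if "from" has too little gold

        // Only this top-level SaveChanges actually commits; if RemoveGold threw,
        // nothing was written to the database at all.
        scope.SaveChanges();
    }
}
```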
The benefit of this approach is that all three of those methods just work. You can call the AddGold method on its own, without a parent scope, and it commits the changes directly to the database. But you can also use that same method as part of a larger transaction (TransferGold) which either succeeds or fails as a whole. And TransferGold can, of course, be part of an even larger transaction as well...
The only problem with this approach is the autogenerated ids, as you have already noticed. You've already found a workaround, but just remember that you need to wrap your logic in a TransactionScope if you want the operation to succeed or fail as a whole.
I disagree a bit here. SaveChanges already runs in a transaction (as of EF6+), and EF does not limit you to calling SaveChanges only once per session; it is perfectly OK to call it several times. Also, if you really want such an operation to happen in a single transaction, I believe this is way too magical: I do not expect the ContextScope to create one big transaction, otherwise I would just use a TransactionScope, I guess. Maybe this is where I am confused; I expect the ContextScope to work the way a normal Context would. Anyway, I respect the design. Do you think there is any other harm in skipping the nested check? Thanks
@glmnet I'm not actually the author of this library so I cannot really say why it has been designed as it is. I think that managing the transactions automagically has both benefits (they are managed automatically) and drawbacks (you cannot easily have full control of them by yourself).
Calling the SaveChanges method of the underlying DbContext directly is a bit problematic from a design point of view. When you "hide" a direct SaveChanges call inside the AddGold method, the caller of that method has no way of knowing whether they need to wrap the method calls inside a transaction or not. The only way to check would be to read the source code. And even if you do read the source code and see that there are no direct calls to SaveChanges, it does not guarantee that someone won't add such a transaction-breaking direct call in the future.
The other option would, of course, be to always create a separate transaction whenever you call more than one service method.
I haven't actually decided which would be the better option. But the nested DbContexts and the shared transaction between them seem to be a key design philosophy of this library, so I think you should either use the library as it is intended or switch to some other way of managing ambient DbContexts.
PS. TransactionScope is no longer the recommended approach with EF6; Microsoft recommends using Database.BeginTransaction() instead (see: https://msdn.microsoft.com/en-us/data/dn456843.aspx).
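For example, a rough EF6 sketch of that pattern, reusing the gold-transfer scenario from above (MyDbContext, Players and Gold are again placeholder names):

```csharp
public void TransferGold(int fromId, int toId, int amount)
{
    using (var context = new MyDbContext())
    using (var transaction = context.Database.BeginTransaction())
    {
        try
        {
            var from = context.Players.Find(fromId);
            var to = context.Players.Find(toId);

            to.Gold += amount;
            from.Gold -= amount;

            // Any number of SaveChanges calls take part in the same explicit transaction;
            // nothing is visible outside it until Commit().
            context.SaveChanges();

            transaction.Commit();
        }
        catch
        {
            transaction.Rollback();
            throw;
        }
    }
}
```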
Why this?
I find this problematic in a scenario where, for example, I need the auto-generated id from one insert so that I can log it. I worked around the problem by calling the specific context's SaveChanges method directly; is there any problem with that?
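Roughly, that workaround presumably looks something like this (same assumed DbContextScope-style API as in the sketches above; the entity, method and logger names are invented):

```csharp
public void CreatePlayer(string name)
{
    using (var scope = _scopeFactory.Create())
    {
        var context = scope.DbContexts.Get<PlayerDbContext>();

        var player = new Player { Name = name };
        context.Players.Add(player);

        // Calling SaveChanges on the underlying DbContext (instead of scope.SaveChanges())
        // flushes the insert right away, so the database-generated id is available here.
        // It also writes to the database immediately even when this scope is nested inside
        // a larger one, which is what the nested-scope check is meant to prevent.
        context.SaveChanges();

        _logger.Info("Created player with id " + player.Id);  // _logger: assumed logging abstraction
    }
}
```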
I like how all of this works; however, I would like to better understand the pros and cons of preventing partial SaveChanges.
Thanks