Open ProffP opened 4 months ago
what is the scenario where this repros? the code as shown just runs happily
Hey, when I run this code and then stop the redis server, then I get "Unobserved exception" printed continuously.
ah, sorry; I missed the "stop the redis server" step! one moment
OK; so... in a way, it is correct: an exception occurred, and you didn't observe it. This is specifically referring to the `t3` line, and if we add `FireAndForget` to `t3`: we don't see the UTE.
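To make the distinction concrete, here is a minimal sketch of the pattern being discussed, assuming a local Redis instance (the connection string, keys, and values are illustrative; `t3` follows the naming in the original report):

```csharp
using StackExchange.Redis;

var muxer = await ConnectionMultiplexer.ConnectAsync("localhost:6379");
var db = muxer.GetDatabase();

var batch = db.CreateBatch();
var t1 = batch.StringSetAsync("key1", "a");
var t2 = batch.StringSetAsync("key2", "b");
// FireAndForget tells the library we never intend to observe the result,
// so no UnobservedTaskException is raised even if the operation faults:
var t3 = batch.StringSetAsync("key3", "c", flags: CommandFlags.FireAndForget);
batch.Execute();

await Task.WhenAll(t1, t2);
```

If the server goes away, `batch.Execute()` throws and the inner tasks fault; any faulted task that is never awaited (here, `t3` without the flag) surfaces later as an UnobservedTaskException when it is garbage-collected.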
But the better question, perhaps, is should this happen; it is a complex one - I can see merit in "not". In theory we can observe the exception automatically. We could also try marking the inner operations as "cancelled" rather than "faulted".
One for discussion next sync, @NickCraver @philon-msft ?
Hey, thanks for the prompt response. Yeah, for my purposes I am okay with any approach; this was just a bug I found in our solution that was a bit cryptic to locate, since I had an incorrect assumption about how to use batches in Redis. I personally like the idea of setting the tasks to cancelled because it would simplify the logic. Just to be 100% sure of my assumptions here: if I ignore the `batch.Execute()` exceptions, will all failure cases result in an exception being thrown by the tasks themselves? Because if not, I will need logic to catch the exception on `batch.Execute` and then await the tasks.
Note: in our solution we have unobserved exceptions rethrowing on the main thread, halting the application.
fundamentally, if you're running multiple things concurrently (which appears to be the case here) and you absolutely care that all are observed (which is more of a concern on .NET Framework than on later versions of .NET), then you need to be careful. Let's assume the batch succeeds: we could still have four inner tasks, A, B, C and D, and if you await them in turn, imagine that B throws - if C or D also throws, that exception will go unobserved.
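That failure mode can be avoided by observing all of the tasks together rather than awaiting them in turn; a sketch (the task names are illustrative):

```csharp
// Awaiting tasks one at a time can leave later faults unobserved:
// if B throws, we never reach the awaits for C and D.
// Task.WhenAll consumes the fault of every task and aggregates them.
try
{
    await Task.WhenAll(taskA, taskB, taskC, taskD);
}
catch (Exception ex)
{
    // await unwraps and throws the first exception, but the others
    // are still observed because WhenAll consumed each task's fault.
    Console.WriteLine($"one or more operations failed: {ex.Message}");
}
```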
Are you on .NET Framework here? Note also: batches often aren't as useful as you might expect - you could probably remove the batch completely here and just invoke the commands separately. If it was a transaction (MULTI/EXEC) it would be different, but honestly: I tend to advise people to use Lua in place of MULTI/EXEC.
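For reference, a sketch of the Lua approach: a script runs on the server as a single atomic unit, so a sequence of commands executes in order with one round trip (the script, keys, and values here are illustrative):

```csharp
// Run a sequence of commands atomically via a server-side Lua script,
// instead of MULTI/EXEC.
const string script = @"
    redis.call('SET', KEYS[1], ARGV[1])
    redis.call('EXPIRE', KEYS[1], ARGV[2])
    return redis.call('GET', KEYS[1])";

RedisResult result = await db.ScriptEvaluateAsync(
    script,
    new RedisKey[] { "mykey" },
    new RedisValue[] { "hello", 60 });
```

Because the whole script is one command from the client's perspective, there is only a single task to observe, which also sidesteps the unobserved-exception problem above.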
Interesting note on batches. I am interested in processing a sequence of commands in order, so I will look into using Lua for this purpose. Some of our services are .NET 4.8, yes.
I had a think about the batch `Execute` just now, and I think what I would have liked is an `ExecuteAsync` that would have guaranteed all inner tasks were complete, similar to `Task.WhenAll`; that way I could just read the results directly off the tasks with `.Result` synchronously.
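That guarantee can be approximated today with a small extension method; a sketch (the helper name `ExecuteAndAwaitAllAsync` is made up, not part of the library):

```csharp
using StackExchange.Redis;

public static class BatchExtensions
{
    // Hypothetical helper: run the batch, then wait for every inner task,
    // so all faults are observed in one place.
    public static async Task ExecuteAndAwaitAllAsync(this IBatch batch, params Task[] tasks)
    {
        try
        {
            batch.Execute(); // may throw if the connection is down
        }
        finally
        {
            // observe every inner task even if Execute itself threw, so
            // nothing surfaces later as an UnobservedTaskException
            await Task.WhenAll(tasks).ConfigureAwait(false);
        }
    }
}
```

One caveat of this sketch: if both `Execute` and an inner task throw, the exception from the `finally` block wins, so the `Execute` exception is masked; callers who care can capture it separately.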
Thanks for the input.
I have run into an issue where work started in a batch is not observable without a try/catch around `batch.Execute` followed by awaiting the tasks individually.
My test setup is running the following code and then stopping the Redis server.
I would have expected that `batch.Execute` would have stopped any work from continuing if it failed itself.
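For reference, the defensive pattern described above (catch on `Execute`, then await the inner tasks) looks roughly like this; the connection string, keys, and values are illustrative:

```csharp
using StackExchange.Redis;

var muxer = await ConnectionMultiplexer.ConnectAsync("localhost:6379");
var db = muxer.GetDatabase();

var batch = db.CreateBatch();
var tasks = new[]
{
    batch.StringSetAsync("a", 1),
    batch.StringSetAsync("b", 2),
};

try
{
    batch.Execute(); // throws if the server has gone away
}
catch (RedisConnectionException ex)
{
    Console.WriteLine($"batch failed: {ex.Message}");
}

// await the inner tasks regardless, so their faults are observed
try { await Task.WhenAll(tasks); }
catch (Exception ex) { Console.WriteLine($"inner task failed: {ex.Message}"); }
```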