matthid opened 4 years ago
I'm not sure what this might be, but reading through the actual evaluation loop:
https://github.com/haf/expecto/blob/master/Expecto/Async.fs#L44
the `Async.Sleep 10` is less than a default scheduling quantum (80 ms) and is effectively a spinlock. So if you only have a single thread, or it is too slow, you may livelock, since the Task never gets scheduled. This is my best guess.
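The failure mode described here can be sketched in a few lines. This is a Python model of the situation, not Expecto's actual code: a task spin-waits for the completion of work that is queued on the same starved pool, so the work never gets a thread to run on.

```python
import concurrent.futures
import time

# One worker stands in for a starved / single-threaded pool.
pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)

def work():
    return "completed"

def poller():
    # Queue the real work on the same pool we are running on...
    fut = pool.submit(work)
    deadline = time.monotonic() + 1.0
    # ...then spin-wait for it, Async.Sleep-10 style. The work item can
    # never run, because we occupy the pool's only thread while we wait.
    while not fut.done():
        if time.monotonic() > deadline:
            return "livelock"   # gave up; the work was never scheduled
        time.sleep(0.01)
    return fut.result()

outcome = pool.submit(poller).result()
print(outcome)  # -> livelock
```

With more than one worker the same code completes; the point is that the polling wait only cooperates with the scheduler if there is a spare thread to give the queued work to.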
Maybe you can reproduce it and get a dump if you run Docker with the `--cpus` option? https://docs.docker.com/config/containers/resource_constraints/#cpu This is probably what Travis does.
I will not have time to help you debug this right now, however. I can assist from the sidelines.
Yes, I assumed thread starvation because of https://github.com/haf/expecto/blob/48f07b340e4700a8e52fc643b8ab40075a059193/Expecto/Async.fs#L26, but increasing the thread pool didn't actually change anything. Now I'm doing one run with `--no-spinner`, because there seems to be quite a lot of magic in there.
I'm trying to get Docker running, but at the moment I don't have a working instance of it on my workstation :)
> I will not have time to help you debug this right now, however. I can assist from the sidelines.
Thanks. To be honest, I'm slowly running out of ideas and just taking some wild shots in the dark.
Since adding the `--debug` flag, it seems to fail on GitLab as well - very interesting.
Just because you increase the thread pool doesn't mean you'll get more concurrency: if one thread starves the others, and both are using green threads (`Async.Sleep` etc.), then they are failing at cooperation.
I think there are some possible edge cases here. Since it's reproducible for you, we're in a pretty good position.
If all of this fails, I have a new major version brewing that uses Hopac instead of Async + Task, which I can push if you want me to. Logary shows that Hopac is a saner model for concurrent programming -- I've actually run into this particular bug once or twice, and that's when I decided this code-base needed a refactoring. I just haven't had the time to complete it (it's at about 500 LoC right now, mostly API surface area), and it's far from done.
I have 'inlined' Expecto.fs and disabled some stuff (to be able to test fixes within Expecto more quickly), and changed to the silent printer. Now the build doesn't hang anymore... Let's see if we can make it hang again.
Adding the printer again makes it hang, so it is definitely that. Anything else you want me to try? I guess I can now work around it for FAKE and add my own simple printer, one which doesn't use complicated coloring and Logary.
It is not Logary that hangs; Logary is well tested. The progress pretty-printing, locking, and Tasks are not my creation. Hence the rewrite in progress.
I would like a dump of a hang if you can get it. Thanks! :) If you can help me fix the issue or contribute money towards a developer who fixes the issue, that would be helpful too.
Any idea how I can get one by scripting, within a reasonable amount of time, and then upload the dump somewhere from within Travis?
You can upload the dump in this thread.
I think you can exec into your Docker container once you've reproduced the problem and run `createdump`; docs: https://github.com/dotnet/coreclr/blob/master/Documentation/botr/xplat-minidump-generation.md#configurationpolicy
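For reference, that dance looks roughly like this (container name, paths, and runtime version are placeholders; the environment-variable configuration is described in the linked document):

```sh
# Exec into the container once the run is hanging (name is a placeholder)
docker exec -it my-test-container /bin/sh

# Inside the container: find the hung test process...
ps aux | grep dotnet

# ...and write a minidump with the createdump tool that ships with the
# runtime (the exact path varies with the runtime version)
/usr/share/dotnet/shared/Microsoft.NETCore.App/<version>/createdump <pid>
```

Alternatively, the linked document describes opting in to automatic dumps via environment variables (e.g. `COMPlus_DbgEnableMiniDump=1` and `COMPlus_DbgMiniDumpName=/tmp/dump.%p`), which may be easier to wire into a CI job than an interactive exec.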
I guess I'd have to hard-code my GitHub password/token and find some code that uploads files to GitHub issues, which is not something I want to do :D
One interesting observation: while it seems to be due to the logging code (i.e. it works with a custom printer), on my initial hangs I had no actual Expecto output (as I was running Expecto without `--debug`), so the hang must be due to some no-op/reset code.
For some reason I cannot reproduce it in #368; I'm not sure why.
I'm going to keep this issue open until we've refactored any logic that can trigger this. I have some plans, and the split of files just merged also helps.
@matthid, have you tried reproducing the bug with the latest version of Expecto and the .NET Core SDK/runtime?
My colleague encountered this hang on his Mac. A custom printf-based printer by @matthid helped (from this changeset). Thanks a lot, @matthid! Expecto version: 9.0.2
Currently observing hangs on Mac with 9.0.2... It works most of the time, but almost no test run survives if I use `--stress 1`.
See the related discussion in https://github.com/fsharp/FAKE/issues/2439. While FAKE indeed makes it harder to debug the issue, the underlying problem is that Expecto is not running my tests properly. I'm not sure if this is because of one of the tests or the Expecto runner itself (but I strongly assume the latter). This doesn't mean there is a bug here, as it could be in any of the updated components (like the .NET Core SDK 3.1), but I'm quite lost on how to debug this further and I hope to get some insights here. This is what I have tried:
This is what I noticed:
Run with `--sequenced --debug` and you'll see different results, never 4) `EXPECTO?! Summary...`
I can see that often it starts hanging after some test (i.e. the last bit of output is similar to `[20:41:15 DBG] Fake.Core.Target.Tests/Initial Targets Can Run Concurrently passed in 00:00:00.0110000. <Expecto>`), examples:

Any help on how to debug this further is welcome. Note: it seems to reproduce very easily, so in theory it should be "easy" to debug; the only downside is that it only happens on Travis.
One potential source of this issue is that I upgraded to the 3.x version of the .NET Core runtime and added the roll-forward policy for the test projects, but I'd assume that we want Expecto to be compatible with the latest stuff.
Let me know if you need additional information.