Havunen closed this issue 5 years ago.
Ok, I have reproduced this issue in v4.0.5. There is no error and no stack overflow, but resolution never finishes: it uses 32 GB of RAM on my machine and then starts swapping to disk until the computer is so stuck that nothing works.
On the master branch things look a little different: memory usage is not increasing, but the threads are asleep, waiting on each other.
I think this bug (the current state on the master branch) is caused by the new lock-scaling system. Shouldn't the id generated for a type (FactoryID) be the same single id for one full resolution chain, scaling from the smallest resolution up to the larger ones? What happens now is that all threads wait for some other thread to finish, because the lock id changes mid-chain and they conflict.
> Shouldn't the id generated for a type (FactoryID) be the same single id for one full resolution chain?
Not sure I am following. The FactoryID is assigned when the factory is created / registered and doesn't change after that.
Yes. If I understood the code correctly, a factory object needs other factories to complete the full resolve request. These factories depend on each other, and a factory's id is incremented when it is registered. The FactoryID is also used for lock scaling, which can result in a deadlock, because the threads wait for each other to finish.
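To illustrate the failure mode I mean (a toy sketch, not DryIoc's actual locking code; all names here are hypothetical): if each nested resolve takes the stripe lock of its dependency's id while still holding the lock for its own id, two chains that cross can acquire the same stripes in opposite order and deadlock.

```csharp
using System;
using System.Threading;

class LockStripingDeadlockSketch
{
    static readonly object[] Stripes = new object[16];

    static LockStripingDeadlockSketch()
    {
        for (var i = 0; i < Stripes.Length; i++) Stripes[i] = new object();
    }

    // Resolving `factoryId` also needs its dependency `dependsOnId`,
    // so a second stripe lock is taken while the first is still held.
    static void Resolve(int factoryId, int dependsOnId)
    {
        lock (Stripes[factoryId % Stripes.Length])
        {
            Thread.Sleep(50); // widen the race window for the demo
            lock (Stripes[dependsOnId % Stripes.Length])
            {
                Console.WriteLine($"resolved {factoryId} -> {dependsOnId}");
            }
        }
    }

    static void Main()
    {
        // Thread 1 holds stripe 1 and wants stripe 2;
        // Thread 2 holds stripe 2 and wants stripe 1 => classic deadlock.
        var t1 = new Thread(() => Resolve(1, 2));
        var t2 = new Thread(() => Resolve(2, 1));
        t1.Start(); t2.Start();
        t1.Join(); t2.Join(); // never returns once the deadlock hits
    }
}
```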
One idea to make this bullet-proof is to scale the locks based on the dependency tree of a service, so the lock is held until the service type has been fully resolved, and to assign lock ids per dependency tree. This way one thread can always finish its job, or the thread that is currently running can finish its job to free the one waiting (a sketch follows below).
Edit: However, I'm not sure whether this would work in the case of circular dependencies.
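A minimal sketch of that idea, under my own assumptions about the design (hypothetical names, not DryIoc's code): the whole chain locks once on the stripe of its resolution root, and nested resolves only re-enter that same lock, so a thread never blocks while already holding a stripe another chain needs.

```csharp
using System;
using System.Threading;

class RootScopedLockSketch
{
    static readonly object[] Stripes = new object[16];

    static RootScopedLockSketch()
    {
        for (var i = 0; i < Stripes.Length; i++) Stripes[i] = new object();
    }

    // The root resolve picks the stripe once; nested resolves reuse it.
    static void ResolveRoot(int rootFactoryId, int dependsOnId)
    {
        lock (Stripes[rootFactoryId % Stripes.Length])
        {
            ResolveNested(rootFactoryId, dependsOnId);
        }
    }

    static void ResolveNested(int rootFactoryId, int factoryId)
    {
        // Re-locking the same stripe is safe: Monitor locks are re-entrant.
        // Note a circular chain would not deadlock on the lock here, but it
        // would recurse in the resolver itself (the open question above).
        lock (Stripes[rootFactoryId % Stripes.Length])
        {
            Console.WriteLine($"resolved {factoryId} under root {rootFactoryId}");
        }
    }

    static void Main()
    {
        var t1 = new Thread(() => ResolveRoot(1, 2));
        var t2 = new Thread(() => ResolveRoot(2, 1));
        t1.Start(); t2.Start();
        t1.Join(); t2.Join(); // completes: each chain holds exactly one stripe
    }
}
```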
This issue seems to have disappeared; it's no longer reproducible on the master branch / release 4.1.0-preview-01. I think we can close this issue. Maybe the PR could be merged anyway, because it expands the load test.
Hi,
We have an issue with DryIoc where the development server sometimes gets stuck on application startup or shortly after it. We are now constructing the Container without FastExpressionCompiler, so that is not the cause here.
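Roughly how we construct the container, if I recall the rule name correctly (a sketch; the exact rule set in our app differs):

```csharp
using DryIoc;

// Opt out of FastExpressionCompiler so DryIoc falls back to plain
// System.Linq.Expressions compilation (to rule FEC out as the cause).
var container = new Container(rules => rules.WithoutFastExpressionCompiler());
```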
We have finally managed to create a memory dump from the w3wp process when it got stuck. 54% of the threads are blocked by thread 51, whose resolution never completes.
Thread 51: .NET call stack / full call stack (stack listings omitted here)
The current workaround to this problem is restarting the application pool, which then magically solves the problem. I will try to reproduce this in a load test.
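Something like the following is what I have in mind for the load test (a sketch with hypothetical service types; the real test would use our app's registrations):

```csharp
using System;
using System.Threading.Tasks;
using DryIoc;

public interface IServiceA { }
public class ServiceA : IServiceA
{
    public ServiceA(ServiceB b) { } // one nested dependency in the chain
}
public class ServiceB { }

public static class ResolveLoadTest
{
    public static void Main()
    {
        var container = new Container(rules => rules.WithoutFastExpressionCompiler());
        container.Register<IServiceA, ServiceA>();
        container.Register<ServiceB>();

        // Many concurrent first-time resolutions, like a busy app startup.
        Parallel.For(0, 10_000, _ => container.Resolve<IServiceA>());
        Console.WriteLine("done"); // never printed if resolution hangs
    }
}
```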