I'm running my tests on an AMD 24-core machine with the 6.6.51-1-lts kernel. My colleagues on Macs could never reproduce the cold-boot slowdown; one person on Intel Linux could.
This change consists of two minor improvements.

The first improvement uses `MapSet.new/1` rather than `Enum.into/2`. Apparently, on some architectures at least, this makes a significant difference on cold VM boot (a clean `iex` session is required each time to reproduce). The change is functionally equivalent, but the slower path causes occasional timeouts in code utilizing ConCache locks with `acquire` limited to 500 ms while running tests, especially if the seed happens to cause lock congestion very early after task startup.
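As a minimal sketch of the equivalence (the list here is illustrative, not the actual ConCache internals):

```elixir
# Illustrative only — not the actual ConCache diff. Both expressions
# build the same set; MapSet.new/1 is a direct constructor call, while
# Enum.into/2 dispatches through the Collectable protocol, which can be
# slower on a cold VM before the protocol modules are loaded.
keys = Enum.to_list(0..7)

via_into = Enum.into(keys, MapSet.new())
via_new  = MapSet.new(keys)

IO.inspect(MapSet.equal?(via_into, via_new))
# => true
```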
I can also mitigate the issue locally by warming up ConCache in `test_helper.exs` with a loop of enough isolated calls to cover all the partitions, but an upstream change feels much better. Even if the real issue is still elsewhere, the pure gain is still worth it, I hope.
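For reference, the local workaround looks roughly like this. The cache name and the partition count are placeholders for illustration, not real ConCache configuration:

```elixir
# test_helper.exs — hypothetical warm-up loop; :my_cache is a placeholder
# and must match a cache started elsewhere in the test setup.
partitions = System.schedulers_online()

for i <- 1..partitions do
  # Each isolated lock acquisition touches a lock partition, forcing the
  # modules involved to be loaded before the real tests start racing.
  ConCache.isolated(:my_cache, {:warmup, i}, fn -> :ok end)
end

ExUnit.start()
```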
The second improvement ensures that, when running tests with ConCache as a dependency, `ConCache.Lock.Resource` is loaded before the lock pool is started. In some cases, dynamic module resolution caused `Resource.new` to add up to 300 ms of latency 🫣. I'm not sure if there's a cleaner way of making sure that module is loaded when the application starts?

Thanks for looking.
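One possible answer to that open question is `Code.ensure_loaded/1` in a start callback. This is only a sketch under assumptions — the module name and the exact supervision hook where it belongs are made up for illustration, not taken from ConCache's actual source:

```elixir
# Hypothetical application module — eagerly load ConCache.Lock.Resource
# before the lock pool starts, so the first Resource.new call does not
# pay the dynamic code-loading cost.
defmodule MyApp.Application do
  use Application

  @impl true
  def start(_type, _args) do
    # Code.ensure_loaded/1 is a no-op if the module is already loaded
    # and returns {:module, mod} on success.
    {:module, _} = Code.ensure_loaded(ConCache.Lock.Resource)

    children = [
      # ... the lock pool and other supervised children ...
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```

`Code.ensure_loaded!/1` is the raising variant if a silent failure is undesirable.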