Some qubes-mirage-firewall users have reported AppVMs being stuck at their lowmem value (https://forum.qubes-os.org/t/new-usability-issues-dom0-processes-making-system-unusable/18301/2 and https://forum.qubes-os.org/t/memory-allocation-problem-remains-in-low-allocation-for-minutes/18787). This causes the affected AppVMs to use swap and become very slow.
After some investigation (the issue appears when the system is under heavy memory pressure: a new AppVM is started and Qubes has to find memory for it), it seems that when the `meminfo` key is set, the unikernel is considered by Qubes as taking part in the memory-balancing process. Some VMs have a `meminfo` entry, some don't.

I've tried, without any luck, to write our total_mem into `meminfo` (we always want to keep our memory; we're currently unable to shrink it), but it differs from the actual total memory (meminfo="27384" vs static-max="32768", still no idea why). I then tried never writing to `meminfo` (this commit), which solves the issue (at least on my laptop, and for rrn in the qubes-forum: https://forum.qubes-os.org/t/memory-allocation-problem-remains-in-low-allocation-for-minutes/18787/20) and doesn't seem to have any drawbacks.
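The behaviour described above can be sketched as a simplified model: qmemman treats a domain as a balancing participant if (and only if) it exposes a `meminfo` key in xenstore, so a unikernel that never writes the key is left with its full allocation. This is a hypothetical illustration of that decision, not the actual Qubes code (the function and domain data below are made up):

```python
# Hypothetical model of qmemman's participation check: domains that
# expose /local/domain/<id>/memory/meminfo are rebalanced, domains
# that never write the key are skipped and keep their memory.

def balancing_participants(domains):
    """Return the ids of domains that would take part in balancing."""
    return [d["id"] for d in domains if d.get("meminfo") is not None]

domains = [
    {"id": 1, "name": "work", "meminfo": "27384"},        # writes meminfo
    {"id": 2, "name": "mirage-fw", "meminfo": None},      # never writes it
]

print(balancing_participants(domains))  # only domain 1 participates
```

Under this model, dropping the `meminfo` write is enough for qmemman to stop trying to reclaim memory from the unikernel, which matches the observed fix.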