MilesBHuff opened 3 weeks ago
If #46 is implemented, then this will need to become priority-aware. This means that when deallocating the smallest swapfile, the swapfile it is replaced with will get a priority one lower than the largest swapfile's. There will also need to be a minimum priority, to avoid conflicting with a hibernation swapfile (which should have negative priority). There's probably no reason to make this configurable, so it can just be set to 0.
Because we will now be working around the default 32-swap-device limit, we will need a much higher starting priority than proposed in #46. Assuming we start from 32766 (one lower than the max), this results in 32767 possible swapfile allocations (with, of course, there never being more than 32 at any given time).
When priority 0 is reached, there are three options for the next allocation's priority:
1. Cease manually setting priorities on new swapfiles, thus allowing priorities to go negative (which will only defer, not fix, the possibility of running out of numbers; and which will break hibernation for everyone who is not cleverly managing it with a custom systemd hook).
2. Loop back around to the starting number (thus making it higher-priority than all pre-existing swapfiles (undesirable), but having any subsequent swapfiles continue the previous pattern (desirable)).
3. Make all subsequent swapfiles have the same priority (which reverts to the same situation we have today).
Because Swapspace will now be using almost the entirety of the explicitly-settable priority range, the starting priority will need to become configurable, in case a user has multiple pre-existing swap devices they want to use before Swapspace kicks in.
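For a user in that situation, pre-existing swap devices can be pinned above Swapspace's range with the standard `pri=` swap option in fstab; the device path and numbers here are only an example, with Swapspace's configurable starting priority then set just below them:

```
# /etc/fstab: give a pre-existing swap partition the top priority,
# so it fills before any Swapspace-managed swapfiles.
# (Swapspace's starting priority would then be configured below 32767.)
/dev/sda2  none  swap  sw,pri=32767  0  0
```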
The total number of swapfiles that Swapspace supports should be configurable. Users who hibernate will want to set the number to 31 (one less than the kernel's limit) so that they always have room to swap on a latent hiberfile.
When this algorithm wants to deallocate the smallest swapfile, it should first check whether it is at least as large as max_swapsize; if it is, there is no benefit to be had from deallocation, and Swapspace should not perform it.
So, updating what @jtv suggested earlier, the algorithm for this would now be:
After allocating a new swapfile: if (the number of active swap devices >= Swapspace's configurable limit for swapfiles, OR the number of active swap devices == the kernel's limit) AND the size of the smallest swapfile < max_swapsize, then immediately deallocate the smallest swapfile (thus causing its contents to transfer first to RAM, and thence to other swap files).
Regarding "presumably mostly to the new one": This presumption is generally false without #46, because the kernel swaps evenly across all same-priority swapfiles, and without prioritization it is unlikely that the only swapfiles with any room are the newest swapfiles.
Well I'm assuming that the earlier swap files will be pretty full in that situation, is all. And the latest swap file is likely to be larger as well.
One point of pedantry: don't compare for equality to the maximum swap file size. I think it's also possible that the smallest swap size is greater than the maximum, if the user changed settings and restarted.
> Well I'm assuming that the earlier swap files will be pretty full in that situation, is all. And the latest swap file is likely to be larger as well.
Ah, gotcha. Yeah, that'd be true: earlier swapfiles would be fairly full, but it's unlikely that any are totally so without prioritization, and as long as they have space, the kernel will continue swapping to them, even if a new one has been added that is mostly empty (hence the importance of #46). EDIT: Apparently they are prioritized, just automatically and with negative numbers, which causes issues for hibernation but which is afaik still a form of prioritization, so the kernel should already be filling up older Swapspace swapfiles before it gets to the new ones.
> One point of pedantry: don't compare for equality to the maximum swap file size. I think it's also possible that the smallest swap size is greater than the maximum, if the user changed settings and restarted.
Oh, good point. Wasn't thinking. I'll update the above — thanks!
Oh, I think I realised why I did not take the limit on the number of swap files into account before... At the time I think you'd hit a limit on the amount of swap you could usefully have before you hit the limit on the number of swap files.
That will have changed.
Thanks for raising a number of issues, @MilesBHuff! I haven't worked on this project in a while and have been rather busy with other responsibilities the past few weeks, so I have not had time to dig into the issues you've raised in detail until now.
The good news is that swapspace does have some awareness of the kernel limit for swap files, as it holds a constant that tracks the total number of swap files it supports, which happens to be 32: https://github.com/Tookmund/Swapspace/blob/1e7a2b2842bb926ba17770fad0e9870232467dfe/src/swaps.c#L278-L283
The bad news is that it uses this to size a metadata array that tracks all the swap files it creates: https://github.com/Tookmund/Swapspace/blob/1e7a2b2842bb926ba17770fad0e9870232467dfe/src/swaps.c#L285-L288
Because of this, I would be hesitant to provide this as a configurable option without significant testing around how existing installations that might already have 32 swap files handle being asked to load only, say, 31 of them. I don't want to leave random old swapfiles around on disk.
Happy to hear any clever solutions you might have to this, and PRs are always welcome!
Glad to hear I didn't totally forget to deal with the limit. :-) It's embarrassing that I know so little about my own code, though it was a while ago.
Maybe just have an oversized metadata array, and allow configuration of the maximum number of swap files - so long as it's no greater than that array size?
(Continuing from #31)
@MilesBHuff
@jtv
EDIT: Updated algorithm here.