Open xeniorn opened 1 year ago
Currently vmtouch is not NUMA-aware at all. It might be possible, but vmtouch hasn't done anything special to enable it.
I don't think I have easy access to any hardware that would work for this testing. If you get any more results from your experiments, please let us know! It's possible that vmtouch could gain some new flags that let us do NUMA stuff, I just don't have the time to look into it now.
I see, thank you. I was actually hoping the numactl wrapper would ensure this happens regardless of whether vmtouch is aware of it, i.e. that it would prevent the vmtouch process from handling/touching memory outside its allocated NUMA node, but that doesn't seem to be the case. Perhaps it only provides hints to NUMA-aware applications and doesn't enforce anything.
I will write here if I end up having any further insights.
One thing you might want to look into is how NUMA affects the page-cache. Your command is touching (what I assume is) a regular file, so it's locking memory backed by the page-cache.
Yes, it's a regular ~170G file. I was checking beforehand with vmtouch -v {file}, and it was saying 0/x pages are resident - would this have shown the "page cache" part too or it reports a different kind of "residence in RAM"?
vmtouch just mmap()s the file and calls mincore(), so for regular files it's always just reporting the residency status of the page-cache. You have to be doing something a bit special for it to report on any other type of memory.
Running
numactl --cpubind=0 --membind=0 -- vmtouch -ldw targetfile
shows memory locked across all available NUMA nodes. The expected behavior would be for all of the memory to be bound to NUMA node 0.
Possibly I'm doing it the wrong way.
Would such functionality be possible with the current binaries? If not, could it be considered as a feature request?