@rhuddleston UKSM counts pages filled with zeros in /proc/meminfo. Sometimes zero pages make up most of the saved pages.
Also, the precise meaning of pages_sharing for both KSM and UKSM is "the number of page table entries pointing to shared pages", so it does not always equal the number of pages actually saved. For forked processes, this number can be easily doubled.
So, to evaluate properly, you should measure the actual memory saved under the same workload over the same time frame, and observe the CPU consumption at the same time.
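A minimal sketch of that kind of measurement, written here in Python as an illustration: it samples the deduplication counters together with MemAvailable, so the MemAvailable figures can be diffed against an otherwise identical run with deduplication disabled. The /sys/kernel/mm/ksm path is the standard sysfs location for mainline KSM; the corresponding /sys/kernel/mm/uksm path for a UKSM kernel is an assumption here.

```python
#!/usr/bin/env python3
"""Sketch: sample dedup counters and overall memory under one workload.

Assumptions (not from the thread): mainline KSM exposes its counters under
/sys/kernel/mm/ksm; a UKSM kernel is assumed to use /sys/kernel/mm/uksm.
The real saving is the difference in measured memory use between this run
and an identical run with deduplication disabled, not pages_sharing alone.
"""
import os
import time

PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")   # typically 4096 bytes
SYSFS_DIR = "/sys/kernel/mm/ksm"         # assumed: "/sys/kernel/mm/uksm" on a UKSM kernel


def read_counter(name):
    """Read one integer counter file from the KSM/UKSM sysfs directory."""
    with open(os.path.join(SYSFS_DIR, name)) as f:
        return int(f.read().strip())


def meminfo_kb(field):
    """Return a /proc/meminfo field in kB, or 0 if the field is absent."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])
    return 0


def snapshot():
    pages_shared = read_counter("pages_shared")    # merged pages kept in memory
    pages_sharing = read_counter("pages_sharing")  # PTEs pointing at merged pages
    return {
        # Naive estimate; per the discussion, this over-counts (e.g. after fork).
        "pages_sharing_mb": pages_sharing * PAGE_SIZE / 2**20,
        "pages_shared_mb": pages_shared * PAGE_SIZE / 2**20,
        # Direct measurement to diff against a run with dedup disabled.
        "mem_available_mb": meminfo_kb("MemAvailable") / 1024,
    }


if __name__ == "__main__":
    # Sample a few times once the workload has reached steady state.
    for _ in range(5):
        print(snapshot())
        time.sleep(60)
```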
@naixia thanks for the update. I understand the CPU consumption part of the evaluation. Back on the memory savings end, it would be really nice to be able to calculate the savings more accurately.
"For forked processes, this number can be easily doubled."
"Uksm counts pages filled with zeros in /proc/meminfo"
@rhuddleston I mean, if a page is first shared through COW by two processes (one forked from the other), and that page is then merged by KSM/UKSM into another page, there are two PTEs (page table entries) sharing the merged page, so pages_sharing increases by 2. However, only one page is actually saved/freed.
The "KsmZeroPages" count is accurate: it tells you how many zero-filled pages were saved/freed. In your case, 7GB of zero pages are saved, which would cause swapping if UKSM were disabled.
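For reference, a small sketch (in Python) of pulling that figure out of /proc/meminfo. The KsmZeroPages field name comes from this discussion, but whether it is reported in kB like most /proc/meminfo fields or as a raw page count is an assumption, so the code handles both cases.

```python
#!/usr/bin/env python3
"""Sketch: report how much memory the merged zero-filled pages represent.

The field name "KsmZeroPages" is taken from the discussion above; whether it
is reported in kB or as a raw page count is assumed, so both are handled.
"""
import os

PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")


def ksm_zero_bytes():
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("KsmZeroPages:"):
                parts = line.split()
                value = int(parts[1])
                if len(parts) > 2 and parts[2] == "kB":
                    return value * 1024       # field reported in kB
                return value * PAGE_SIZE      # field reported as a page count
    return 0                                  # field missing (not a UKSM kernel)


if __name__ == "__main__":
    print("zero pages merged: %.2f GB" % (ksm_zero_bytes() / 2**30))
```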
You may want to take a look at the project benchmarks page: http://kerneldedup.org/en/projects/uksm/benchmarks/
I am closing this now. If you have any other questions, please open a new issue.
I have a fully containerized environment with dozens of Scala microservices. We have many different test clusters, so I tried deploying KSM with ksm_preload in one environment and installed a UKSM kernel in the other. Both environments were running the same applications and versions, but I noticed KSM's pages_sharing numbers were 3x higher than UKSM's.
Any idea why this might be the case?
ksm enabled cluster:
uksm enabled cluster: