gsingh93 opened this issue 1 year ago
This is a good optimization idea. Would you like to write a patch so that the range gets considered during parsing?

Note that I'm slowly working on a rewrite of the project's core in Rust: https://github.com/martinradev/pt-dump With it, parsing large page tables is significantly faster.
> Would you like to write a patch so that the range gets considered during parsing?
Unfortunately I don't have time at the moment, and I'm not sure when I'd be able to look into it.
> Note that I'm slowly working on an improvement of the project to write the core code in Rust: martinradev/pt-dump. With it, parsing large page tables is significantly faster.
Awesome! If you could either verify that the KASAN use case is reasonably fast in that version, or eventually implement this feature request there, that would be great.
My understanding of the code is that we first parse the entire page table and then apply the filters. That makes sense when the filters only limit the amount of information displayed, but I need to filter out page ranges for performance reasons.
I'm running a KASAN AArch64 image, which results in the following additional page table entries:
I don't really care about these, but gdb-pt-dump still tries to parse them, which takes a very long time. Instead of filtering addresses after the page table has been parsed, could we skip address ranges while it is being parsed? Then I could specify a range that completely skips this KASAN memory.
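For what it's worth, the idea can be sketched in a few lines of Python. This is a hypothetical toy model, not gdb-pt-dump's actual parser: `contained`, `walk`, and the table layout are all invented for illustration. The point is only that the exclusion check happens *before* descending into a subtree, so an excluded region costs nothing to parse.

```python
def contained(start, end, excluded):
    """True if the virtual range [start, end) lies entirely inside
    one of the excluded (lo, hi) ranges."""
    return any(lo <= start and end <= hi for lo, hi in excluded)

def walk(table, va_base, shift, excluded, leaves):
    """Walk a toy radix page table, pruning excluded subtrees early.

    `table` maps an index to either a child dict (next level) or a
    leaf value; each index at this level covers 2**shift bytes.
    """
    span = 1 << shift
    for idx, entry in sorted(table.items()):
        va = va_base + idx * span
        if contained(va, va + span, excluded):
            continue  # prune: this subtree is never parsed at all
        if isinstance(entry, dict):
            walk(entry, va, shift - 9, excluded, leaves)  # 512 entries/level
        else:
            leaves.append((va, span, entry))

# Toy two-level table: entry 0 holds two 2 MiB leaves; entry 256 stands
# in for the huge KASAN region we want to skip.
top = {
    0: {0: "rw", 1: "ro"},
    256: {i: "rw" for i in range(4)},
}
kasan = [(256 << 30, 257 << 30)]  # exclude that 1 GiB range up front
found = []
walk(top, 0, 30, kasan, found)
# found now holds only the two leaves under entry 0; the excluded
# subtree was skipped without reading a single child entry.
```

In the real parser the same check would sit wherever a table entry's virtual range is first known, before its children are read from guest memory, which is where the time goes for the enormous KASAN shadow mappings.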