Hi,

I am profiling ddisasm on "large" binaries (> 50 MB) using the ddisasm "--profile" flag. I noticed that ddisasm's profiler is based on Souffle's interpreter mode, but the interpreter can behave completely differently from the compiled Souffle C++ program. For example, when I profile a Lean 4 theorem prover binary, the profiling result reports that computing block_overlap takes almost all of the running time (> 40 min); however, if I extract this query and run it in Souffle's compiled mode, it takes less than 5 seconds.

Maybe we need some other way to do profiling; otherwise the current profile information will be misleading.
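For reference, this is roughly how I compare the two modes. This is a sketch, not ddisasm's actual invocation: `main.dl` stands in for ddisasm's Datalog entry point and `facts/` for a fact directory dumped from a run, and I am assuming Souffle's standard `-p` (profile), `-c` (compile-and-run), `-F` (fact dir), and `-D` (output dir) options.

```shell
# Sketch: profile the same Datalog program in both Souffle execution modes.
# Paths are illustrative placeholders.

# Interpreter mode with profiling (what ddisasm --profile relies on):
souffle -p interp.prof -F facts/ -D out-interp/ main.dl

# Compiled mode with profiling enabled, for comparison:
souffle -c -p compiled.prof -F facts/ -D out-compiled/ main.dl

# Browse either profile with Souffle's bundled profile viewer:
souffleprof interp.prof
```

Diffing the per-rule timings between the two profiles is what shows that the block_overlap cost is an interpreter artifact rather than a real hotspot.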
Binary and profiling result in here: https://drive.google.com/file/d/1izSSquNhkmbvPDT0DjcTKXS-UR858DTe/view?usp=drive_link
The rule: