Open · jkbonfield opened this issue 1 year ago
Hi James,
good idea! I was actually doing some profiling yesterday (on a physical machine) and looked at that exact line in the perf output, but didn’t know what to do about it. And thanks for confirming this is due to memory access; I had a suspicion but wasn’t quite sure. Which genome did you use for the profiling? I tested with D. melanogaster and didn’t get quite such a large percentage for that line. Also, the `find_nams` function was responsible for only ~10% of the total runtime. But that only means that mapping against Drosophila won’t benefit as much if we can optimize this.
> Perf record / report shows (I've no idea how to get this working with debug info and cmake):

You just need to run CMake with `-DCMAKE_BUILD_TYPE=RelWithDebInfo` and then it’ll work. (At the moment, this also enables assertions, but I may change that; IIRC it doesn’t slow things down anyway.)
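For the record, the full sequence could look like this (the build directory name and the binary path are just examples; the input file names are placeholders):

```shell
# Configure an optimized build that keeps debug info for perf
cmake -B build -DCMAKE_BUILD_TYPE=RelWithDebInfo
cmake --build build

# Profile and inspect the annotated hot spots (paths are illustrative)
perf record ./build/strobealign ref.fa reads.fq > /dev/null
perf report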
> [...]
>
> ```c++
> auto ref_hit = index.find(q.hash);
> if (ref_hit_last != index.end()) {
>     ...
> }
> ref_hit_last = ref_hit;
> ```
>
> So while it's fetching the next hit it's processing the previous one.
I’ll try that.
I’m not entirely sure, but I think most of the time for my test dataset is spent not in the hash table lookups, but in the code after them (the `for (auto &it : hits_per_ref)` loop). I need to look a bit closer, though, because `perf report` output of optimized code is hard to interpret even with debug symbols. And I should test with a larger genome.
I made a mistake in my first assessment. The loop with the hash table lookups is responsible for a large chunk of the runtime of `find_nams` after all.
Now the problem is that I need to figure out how to actually prefetch the next hash table entry. I tried your suggestion above, but it makes no difference. I think we would need to issue an actual prefetch instruction so that the memory access happens in the background while the CPU continues with its work.
I’ll need to come back to this later.
This is just an idea, which may go nowhere at all. :)
Profiling strobealign, I see the most CPU-hungry function is `find_nams`, and I think the most CPU-intensive bit of that is
https://github.com/ksahlin/strobealign/blob/main/src/aln.cpp#L282
Perf record / report shows (I've no idea how to get this working with debug info and cmake):
I.e. 39% of all CPU time for this function is spent waiting on one of those memory moves (due to pipelining, the cost is sometimes attributed to an instruction one or two before the one actually responsible). This is to be expected in any application that uses a large hash table and randomly jumps around main memory. I expect `perf stat` would show this as frontend or backend stall cycles, but sadly this machine is virtual and doesn't expose the individual hardware CPU counters.
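On a machine that does expose hardware counters, the stall breakdown could be checked with something like the following (event names vary by CPU and kernel, so check `perf list` first; the input file names are placeholders):

```shell
# Count total cycles/instructions and frontend/backend stall cycles
# for a strobealign run (event availability depends on the hardware).
perf stat -e cycles,instructions,stalled-cycles-frontend,stalled-cycles-backend \
    ./strobealign ref.fa reads.fq > /dev/null
```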
I've had experience elsewhere on speeding up memory fetches by computing the address that's going to be used in a couple loop iterations time and manually doing a hardware prefetch. It's then in cache by the time we get around to using it.
Here we may even be able to do this by doing something like:
So while it's fetching the next hit it's processing the previous one.
I'm a low-level C coder though and unfortunately know nothing about C++, so how to get something like the next value out of an implicit iterator is beyond me.
Has anyone looked at the possibility of improving instruction pipelining by prefetching memory addresses? I can't say how much difference it'll make without trying it, but as I say doing that in C++ is beyond my knowledge.