@divaykin There's unordered_map::reserve() that might help here. I'm open to using something else, but we first need to come up with a benchmark to measure it.
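For illustration, a minimal sketch of pre-sizing the map up front so insertions never trigger a rehash on the hot path; the order struct, key type, and capacity here are assumptions, not the project's actual types:

```cpp
#include <cstdint>
#include <unordered_map>

// Hypothetical order record; the real project's types will differ.
struct order {
    uint64_t price;
    uint64_t quantity;
};

int main() {
    std::unordered_map<uint64_t, order> orders;
    // Reserve buckets for the expected maximum number of live orders
    // (assumed ~4-10k here) so inserts never rehash on the hot path.
    orders.reserve(16 * 1024);
    orders.emplace(1, order{5000, 100});
    return orders.count(1) == 1 ? 0 : 1;
}
```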
See https://github.com/divaykin/hashmap-bench; 42ns is not negligible already.
I'd expect the maximum number of orders in the book to be around 4-10k (to be verified) and the IDs to be incremented sequentially by the venue (not completely sure, though).
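As a rough illustration of the kind of measurement involved (this is not the hashmap-bench code, just a sketch with an assumed key count and key distribution), timing random lookups in a pre-populated unordered_map looks roughly like this:

```cpp
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <random>
#include <unordered_map>
#include <vector>

int main() {
    constexpr size_t count = 8 * 1024;  // assumed book size (4-10k orders)
    std::unordered_map<uint64_t, uint64_t> map;
    map.reserve(count);
    for (uint64_t id = 0; id < count; id++)
        map.emplace(id, id);

    // Pre-generate random keys so the RNG is not part of the timed loop.
    std::mt19937_64 rng(42);
    std::vector<uint64_t> keys(1000000);
    for (auto& k : keys)
        k = rng() % count;

    uint64_t sum = 0;
    auto start = std::chrono::steady_clock::now();
    for (auto k : keys)
        sum += map.find(k)->second;  // lookup under test
    auto stop = std::chrono::steady_clock::now();

    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count();
    std::printf("avg lookup: %.1f ns (checksum %llu)\n",
                double(ns) / keys.size(), (unsigned long long)sum);
    return 0;
}
```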
Of course we'd need a better hash map, but I'm trying to show that unordered_map is expensive. We can experiment with a simpler hash function for unordered_map, but essentially, even with a plain modulo, it would still be slow.
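To make the "simpler hash function" idea concrete, here is a sketch of plugging a trivial identity hash into unordered_map; the order ID key type is an assumption. Even with this, every lookup still pays the bucket-index computation (the modulo) plus a pointer chase to the node, which is the cost being pointed out:

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_map>

// Trivial hasher: order IDs are assumed to be (nearly) sequential,
// so the raw value already distributes well across buckets.
struct identity_hash {
    size_t operator()(uint64_t key) const noexcept { return key; }
};

using order_map = std::unordered_map<uint64_t, uint64_t, identity_hash>;

int main() {
    order_map orders;
    orders.reserve(8 * 1024);
    orders.emplace(12345, 100);
    // Lookup still computes the bucket index (modulo bucket count) and
    // then chases a pointer to the node holding the key/value pair.
    auto it = orders.find(12345);
    return it != orders.end() ? 0 : 1;
}
```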
A while ago I calculated the maximum number of concurrent price levels and the maximum number of orders per price level for all instruments trading on NASDAQ on a particular day. That might be interesting information regarding this project as well.
@jvirtanen that's very interesting to know indeed, thanks
According to the perf report, this line https://github.com/penberg/helix/blob/master/src/nasdaq/itch50_session.cc#L132 takes a lot of time, and it is just a lookup by a uint16_t key. That said, the entire 14GB file was processed in about a minute.
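One possible direction, sketched under the assumption that the hot lookup is keyed by a 16-bit field such as an ITCH stock locate code: replace the hash lookup with direct indexing into a flat array covering the whole 65,536-value key space. The instrument_state type and all names here are hypothetical:

```cpp
#include <array>
#include <cstdint>

// Stand-in for whatever the existing map stores per instrument (assumption).
struct instrument_state {
    uint64_t last_price = 0;
};

// Direct-indexed table: the 16-bit key indexes a flat array, so a lookup
// is a single array access with no hashing and no bucket indirection.
class instrument_table {
public:
    void insert(uint16_t key, instrument_state* state) { slots_[key] = state; }
    instrument_state* find(uint16_t key) const { return slots_[key]; }
private:
    std::array<instrument_state*, 65536> slots_{};  // ~512 KB of pointers
};

int main() {
    static instrument_state aapl;
    static instrument_table table;  // a real implementation would likely heap-allocate
    table.insert(42, &aapl);
    return table.find(42) == &aapl ? 0 : 1;
}
```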
Fixed by 58546cde6ff59df2cf6f5761d6acb755fc7954db.
I think unordered_map is a rather slow thing. Can we consider something else, like our own fixed-size hash map? Also, you want to avoid resizing at all costs.
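A minimal sketch of that kind of fixed-size, never-resizing hash map: open addressing with linear probing over a power-of-two table sized once for the worst case. The fixed_hash_map name, capacity, and value type are assumptions:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Fixed-capacity open-addressing map from order ID to a value. The table is
// sized once at construction and never rehashed, so inserts and lookups never
// allocate or resize on the hot path.
template <typename Value>
class fixed_hash_map {
public:
    explicit fixed_hash_map(size_t capacity_pow2)
        : slots_(capacity_pow2), mask_(capacity_pow2 - 1) {
        assert((capacity_pow2 & mask_) == 0 && "capacity must be a power of two");
    }

    bool insert(uint64_t key, const Value& value) {
        // Linear probing from the home slot; sequential order IDs mapped by
        // key & mask_ spread naturally across a power-of-two table.
        for (size_t i = key & mask_, probes = 0; probes <= mask_; i = (i + 1) & mask_, probes++) {
            if (!slots_[i].used) {
                slots_[i] = {key, value, true};
                return true;
            }
            if (slots_[i].key == key) {
                slots_[i].value = value;
                return true;
            }
        }
        return false;  // table full: the caller must size it for the worst case
    }

    Value* find(uint64_t key) {
        for (size_t i = key & mask_, probes = 0; probes <= mask_; i = (i + 1) & mask_, probes++) {
            if (!slots_[i].used)
                return nullptr;
            if (slots_[i].key == key)
                return &slots_[i].value;
        }
        return nullptr;
    }

private:
    struct slot {
        uint64_t key;
        Value value;
        bool used;
    };
    std::vector<slot> slots_;
    size_t mask_;
};

int main() {
    fixed_hash_map<uint64_t> orders(16 * 1024);  // roughly 2x the expected 4-10k orders
    orders.insert(1001, 500);
    return orders.find(1001) && *orders.find(1001) == 500 ? 0 : 1;
}
```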