Closed zmstone closed 2 years ago
Please benchmark and see if it really helps. Note that the iolist_to_binary is going to be very expensive here.
I have updated the PR description with a simple test result.
That shows it's a possible improvement, but you need to compare Gun itself with/without the patch. The main bottleneck here is likely to be the network, not the concatenation. It is also a good idea to compare memory usage and GC (garbage_collection and garbage_collection_info).
It might be better to make Gun send fewer messages instead or in addition to this patch.
You should also probably not be using await_body for large bodies, if that's what you are doing.
Sorry, my mistake. I went back to verify the scenario in which it was 10,000 times slower; it turns out it's not the concatenation to blame but a bin_element match in a case clause.
Disclaimer: I do not have hard proof of how much this will gain, but in my experience, when large data arrives in many small pieces, repeated concatenation can be very expensive.
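To make the trade-off being discussed concrete, here is a minimal micro-benchmark sketch (not Gun code; the module and function names are hypothetical). It compares appending each incoming chunk onto a growing binary against collecting chunks in a list and calling iolist_to_binary once at the end:

```erlang
%% Hypothetical micro-benchmark: accumulate N small chunks either by
%% binary concatenation on every chunk, or by building an iolist and
%% flattening it once at the end. Not Gun code.
-module(concat_bench).
-export([run/1, accumulate_binary/1, accumulate_iolist/1]).

chunks(N) ->
    [<<I:32>> || I <- lists:seq(1, N)].

%% Appends each chunk onto the accumulator binary. In the worst case
%% each append copies the whole accumulator, although the VM can often
%% extend the binary in place when it is only used in append context.
accumulate_binary(Chunks) ->
    lists:foldl(fun(C, Acc) -> <<Acc/binary, C/binary>> end, <<>>, Chunks).

%% Prepends each chunk to a list (O(1) per chunk), then reverses and
%% flattens once at the end with a single iolist_to_binary call.
accumulate_iolist(Chunks) ->
    iolist_to_binary(
        lists:reverse(
            lists:foldl(fun(C, Acc) -> [C | Acc] end, [], Chunks))).

run(N) ->
    Chunks = chunks(N),
    {T1, Bin1} = timer:tc(fun() -> accumulate_binary(Chunks) end),
    {T2, Bin2} = timer:tc(fun() -> accumulate_iolist(Chunks) end),
    true = (Bin1 =:= Bin2),
    io:format("binary append: ~p us, iolist flatten: ~p us~n", [T1, T2]).
```

Note that the relative numbers depend heavily on whether the VM's binary-append optimization kicks in, which is one reason measuring Gun itself with and without the patch matters more than a standalone loop.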
[update] with a simple test:
code: