librasn / rasn

A Safe #[no_std] ASN.1 Codec Framework

Performance of PER codec #244

Open dudycz opened 2 months ago

dudycz commented 2 months ago

Hi. I have been playing with the two most popular PER codecs, rasn and asn1-codecs, and ran a benchmark comparing their performance. One thing I noticed is that rasn can be ~10x slower at encoding in some complex cases:

| Codec | Encoding (µs) | Decoding (µs) |
| --- | --- | --- |
| rasn | 1916.7 | 826.8 |
| asn1-codecs | 208 | 145.5 |

I looked into flamegraphs and callgrind output, but I couldn't figure out what contributes to this big difference. If you're interested, I could try to collect some and attach them here.

Link to repo with benchmark: https://github.com/dudycz/asn1_codecs_bench
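For anyone wanting a quick local comparison without pulling in a benchmark harness, here is a minimal timing sketch. The workload is a stand-in (the closure would be replaced with the actual rasn or asn1-codecs encode call); `avg_micros` and the payload are hypothetical names, not part of either crate:

```rust
use std::time::Instant;

/// Time `iters` runs of `f` and return the average microseconds per call.
/// Generic over the closure, so any codec's encode/decode can be plugged in.
fn avg_micros<F: FnMut()>(iters: u32, mut f: F) -> f64 {
    let start = Instant::now();
    for _ in 0..iters {
        f();
    }
    start.elapsed().as_micros() as f64 / iters as f64
}

fn main() {
    // Stand-in workload; in a real run this closure would call
    // e.g. the rasn UPER encoder on a sample message instead.
    let payload = vec![0u8; 1024];
    let us = avg_micros(256, || {
        let copy = payload.clone(); // simulates an encode allocating a buffer
        std::hint::black_box(copy); // keep the work from being optimised away
    });
    println!("avg: {us:.3} µs/iter");
}
```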

XAMPPRocky commented 2 months ago

Thank you for your issue! Would you be able to attach the flamegraphs? That would be very helpful. In general, the initial version I made prioritised correctness over performance, so there are likely a lot of places where it could be more efficient. I already know that I haven't ported the empty-struct optimisation that was added to BER over to PER.

dudycz commented 2 months ago

I had to encode sample.asn 256 times to get anything recorded in the flamegraph. Here it is: rasn

Nicceboy commented 2 months ago

Seems like your test bench uses mostly integers. The asn1-codecs crate seems to use i128 internally for the Integer type, while rasn uses BigInts. I would guess this is a big factor, at least.

I think there was a plan to make the internal implementation of the integer types a feature, so that a different internal type could be selected for performance reasons when very big numbers are not required.

XAMPPRocky commented 2 months ago

Yeah, that's still a todo, I should write up an issue giving details in case someone else who has more time is interested.

Nicceboy commented 2 months ago

> Yeah, that's still a todo, I should write up an issue giving details in case someone else who has more time is interested.

I think this is a rather important optimisation in general, and it should not take too much work to implement if we have, for example, just i128 and BigInt as the initial options. If you have time to write down what you had in mind, maybe I can try to do it.

Nicceboy commented 1 month ago

I have been reworking the integer type (using primitives (i128) by default, and switching to larger ones on overflow, or when a big integer is created manually). Hopefully I can open a draft PR by the end of the week. The resulting integer will be an enum, and the type of the big integers no longer matters that much.
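A minimal sketch of the enum shape described above, assuming a primitive fast path that promotes on overflow. All names here are hypothetical, and the `Big` variant is just a placeholder; a real implementation would hold an arbitrary-precision type such as `num_bigint::BigInt` and do proper wide arithmetic:

```rust
/// Hypothetical sketch: a primitive-first integer that only falls back
/// to an arbitrary-precision representation on overflow.
#[derive(Debug, Clone, PartialEq)]
enum Integer {
    /// Fast path: fits in a machine primitive, no heap allocation.
    Primitive(i128),
    /// Fallback: placeholder for a heap-allocated big integer
    /// (e.g. num_bigint::BigInt in a real implementation).
    Big(Vec<u64>),
}

impl Integer {
    fn checked_add(self, rhs: i128) -> Integer {
        match self {
            Integer::Primitive(a) => match a.checked_add(rhs) {
                Some(sum) => Integer::Primitive(sum),
                // On overflow, promote to the big representation.
                None => Integer::Big(promote(a, rhs)),
            },
            big => big, // big-int arithmetic elided in this sketch
        }
    }
}

/// Placeholder promotion; a real implementation would perform
/// the addition in the arbitrary-precision type.
fn promote(a: i128, _b: i128) -> Vec<u64> {
    vec![a as u64]
}
```

The point of the shape is that decoding typical protocol integers never leaves the `Primitive` arm, so the BigInt cost is only paid when a value genuinely needs it.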

Not sure if it is the best approach, but it is the one I chose after trying quite a few different things. Let's leave those comments for the PR; it can still change completely.

Seems like integers are not the only problem with UPER. The extensive use of new vector buffers and the moving of that data contributes more. Default low capacities in vectors, a lot of single pushes, and the overall creation of new buffers instead of sharing a pointer to one or reusing existing allocations slow things down quite a lot.
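The allocation pattern being described can be illustrated outside the codec. This is a generic sketch, not rasn's actual internals: one function allocates a fresh buffer per message and grows it through single pushes, the other clears and reuses a caller-owned, pre-sized buffer:

```rust
/// Slow pattern: a new buffer per message, growing from capacity 0
/// through repeated reallocations as bytes are pushed one at a time.
fn encode_fresh(msg: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    for &b in msg {
        out.push(b);
    }
    out
}

/// Faster pattern: the caller owns one buffer that is cleared and
/// reused, so steady-state encoding performs no allocations at all.
fn encode_into(msg: &[u8], out: &mut Vec<u8>) {
    out.clear(); // keeps the existing capacity
    out.extend_from_slice(msg); // bulk copy instead of single pushes
}

fn main() {
    let msg = [1u8, 2, 3, 4];
    let mut buf = Vec::with_capacity(64);
    encode_into(&msg, &mut buf);
    assert_eq!(buf, encode_fresh(&msg));
    // Re-encoding into the same buffer reuses its capacity.
    encode_into(&msg, &mut buf);
    assert!(buf.capacity() >= 64);
}
```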

Some initial differences on M2 Pro from the integer change:

UPER:

[benchmark screenshot: UPER results]

COER (the difference was much more impactful):

[benchmark screenshot: COER results]

I have also made an initial rework to optimise allocations in COER (the results below are based on the integer remake). Maybe UPER will follow if I have time.

[benchmark screenshot: COER results with the allocation rework]

So by changing the integer type and reducing allocations, it was already possible to get at least a 3x speedup for COER, based on @dudycz's benchmark.

Allocations could be improved further, but I am having painful issues with lifetimes.

repnop commented 2 weeks ago

@dudycz since #256 got merged a little while ago, are you able to rerun your benchmark, at least for the decoding side of things? It should be quite a bit better now.

dudycz commented 2 weeks ago

Yes, I can confirm a big improvement in decoding time (on my "bench" setup I observed a decrease from 457 µs to 341 µs!). Good job!