This change speeds up EncodeVarint by testing whether the input writer implements io.ByteWriter; if so, it uses our hand-rolled varint encoder instead of the awkward standard-library encoding/binary.PutVarint, which requires a byte slice (and which we also retrofitted with a byte-array pool for the fallback path).
While here, added parity tests to ensure that we produce exactly the same results as the Go standard library's encoding/binary package, with caution from https://cyber.orijtech.com/advisory/varint-decode-limitless, and added benchmarks whose results reflect the change (initially just in the benchmark):
$ benchstat before.txt after.txt
name            old time/op    new time/op    delta
EncodeVarint-8    360ns ± 3%     245ns ± 3%   -31.80%  (p=0.000 n=10+10)

name            old alloc/op   new alloc/op   delta
EncodeVarint-8     0.00B          0.00B           ~     (all equal)

name            old allocs/op  new allocs/op  delta
EncodeVarint-8      0.00           0.00           ~     (all equal)
Fixes #891
This is an automatic backport of pull request #917 done by Mergify.