kcalvinalvin closed this 6 months ago
| Files with Coverage Reduction | New Missed Lines | % |
|---|---|---|
| peer/peer.go | 1 | 74.52% |

| Totals | |
|---|---|
| Change from base Build 7168733027: | 0.0% |
| Covered Lines: | 27977 |
| Relevant Lines: | 49897 |
The last 2 force pushes change the function signature of `DoubleHashRaw` to accept a closure that returns a hash. This is done because I found that the

```go
buf := make([]byte, 0, 32)
```

escapes to the heap with how it was implemented in the previous commits. It ended up allocating less memory than the current master but did allocate 2 objects per hash instead of 1. This change allows the compiler to keep `buf` on the stack and do only 1 allocation instead of 2.
I think this just needs a rebase!
Rebased
For ibd and block verification, I think https://github.com/btcsuite/btcd/pull/2023 is going to be better but that one's still in the works.
Removed commits that were changing the go.mod files
Will tag the new version, then make a new PR updating the relevant `go.mod`s, before a final PR that swaps this in everywhere.
When profiling the ibd, I noticed that `TxHash()` is allocating quite a bit of memory. I also noticed this with utreexod in the past and resolved it by creating a new double hash function. I've benchmarked it and made it more efficient in this PR.

Benchstat for `BenchmarkTxHash()` showed a slight increase in sec/op, which is likely due to the overhead of serializing into a `hash.Hash` instead of the `bytes.Buffer` in the previous `TxHash()`. However, for real-life cases the new `TxHash()` will be faster, as we're saving the unnecessary serialization into a `bytes.Buffer`: `sha256.Sum256()` will call `Write()` into the hash anyway.

The memory allocation savings are ~37% compared to the old `TxHash()`. This matters a lot because `TxHash()` gets called a lot.

For bigger transactions, the memory savings are even greater. I performed the same test but with `multiTx` instead of `genesisCoinbaseTx`. Below is the benchstat for that test. We see less of a speed penalty and ~41% savings in memory allocated.

The benchmarks for `BenchmarkDoubleHash*` show a significant speedup for `DoubleHashRaw()`. This is likely because the other hashes have to call `digest.Write()` in `sha256.Sum256()` while `DoubleHashRaw()` gets to skip that.

I can also post some before-and-after ibd profiling if requested.