This approach avoids the issue of an ever-growing array underpinning the d.buf []byte slice and gives a modest 10% speed up.
The idea is to avoid copying into d.buf as much as possible by only using d.buf as a buffer which is strictly <=1 block, and breaking the Write method down into 3 distinct phases:
fill d.buf with beginning part of p - if we we still don't have a full block's worth then we're done (we'll need to wait for subsequent calls to Write to fill d.buf. Otherwise, if we can make d.buf up to a full block, do so and call hash with it, the reset d.buf to empty (since we've "consumed" those bytes).
for as many full blocks as now remain in p, process them in one large call to hash.
if there is anything left in p (which must be < 1 block in size now), then make that be the new value of d.buf since it will need "topping up" with subsequent calls to Write.
This approach avoids the issue of an ever-growing array underpinning the
d.buf []byte
slice and gives a modest 10% speed up.The idea is to avoid copying into
d.buf
as much as possible by only usingd.buf
as a buffer which is strictly <=1 block, and breaking theWrite
method down into 3 distinct phases:d.buf
with beginning part ofp
- if we we still don't have a full block's worth then we're done (we'll need to wait for subsequent calls toWrite
to filld.buf
. Otherwise, if we can maked.buf
up to a full block, do so and callhash
with it, the resetd.buf
to empty (since we've "consumed" those bytes).p
, process them in one large call tohash
.p
(which must be < 1 block in size now), then make that be the new value ofd.buf
since it will need "topping up" with subsequent calls toWrite
.