summary:
Generating a Merkle proof and processing a tx will take about 5/8 as much time.
Merkle proofs will be about half as long, so we can fit 2x more txs per block.
Hard drive requirements will shrink to about 1/14 of their current size.
Update the Merkle tree database to be an order-2 (binary) radix tree, and group elements into pages for speed and efficiency.
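A minimal sketch of the paging idea, under assumed sizes (the note does not fix a node format; PAGE_SIZE and HASH_SIZE are example values): a page groups a small subtree of the binary radix tree, so one disk read fetches several tree levels at once.

```python
import math

# Sketch under assumed sizes; the real on-disk node format is not
# specified in this note.
PAGE_SIZE = 4000                              # bytes per page (example value)
HASH_SIZE = 32                                # bytes per hash
hashes_per_page = PAGE_SIZE // HASH_SIZE      # 125 hashes fit in one page
levels_per_page = math.log2(hashes_per_page)  # ~7 binary levels per disk read
print(round(levels_per_page, 2))
```

Because each read pulls in roughly log2(page_size/32) levels of the binary tree, a page behaves like a single node with branching factor page_size/32, which is where the read-count formulas below come from.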
Old number of pages to read per lookup: log(#accounts)/log(16).
New number of pages to read: log(#accounts)/log(page_size/32), since a page of page_size bytes holds page_size/32 hashes.
With a 4000-byte page, the ratio is log(16)/log(4000/32), which is about 5/8 as many reads.
If the page size is 1,000,000 bytes, it takes about 2/7 as many reads.
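The read-count ratios above can be checked numerically; this is a sketch that just evaluates the formulas from the text (`read_ratio` is a name introduced here for illustration):

```python
import math

def read_ratio(page_size):
    # ratio of new page reads to old node reads per lookup:
    # log(16) / log(page_size / 32)
    return math.log(16) / math.log(page_size / 32)

print(round(read_ratio(4000), 3))       # ~0.574, roughly 5/8
print(round(read_ratio(1_000_000), 3))  # ~0.268, roughly 2/7
```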
We are currently using 2 * (108 + 32 * 16 * log(number of users)/log(16)) bytes of Merkle proof per spend tx, and more for other tx types.
With this update we could get it down to 2 * (108 + 32 * 2 * log(number of users)/log(2)).
The tree terms differ by a factor of 8 * log(2)/log(16), which is 2, so Merkle proofs will be about half as big.
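The factor of 2 can be sketched by evaluating both proof-size formulas at an example user count (the helper name and the 1,000,000-user figure are illustrative; the 108-byte overhead and 32-byte hashes come from the text):

```python
import math

def spend_tx_proof_bytes(num_users, hashes_per_level, branching):
    # 2 proofs per spend tx, 108 bytes of fixed overhead each,
    # plus 32 bytes per hash at every tree level
    depth = math.log(num_users) / math.log(branching)
    return 2 * (108 + 32 * hashes_per_level * depth)

old = spend_tx_proof_bytes(1_000_000, 16, 16)  # order-16 radix tree
new = spend_tx_proof_bytes(1_000_000, 2, 2)    # binary radix tree
print(old / new)  # close to 2; the fixed 108-byte term keeps it slightly under
```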
We are currently using 2 * (108 + 32 * 16 * log(number of users)/log(16)) bytes per spend tx on the hard drive, and more for other tx types.
With this update we could get it down to 2 * (108 + 32 * 2 * log(number of users)/log(4000/32)).
The improvement in the limit as the number of users goes to infinity is 8 * log(4000/32)/log(16), which works out to an improvement factor of about 14x.
At 1,000,000 users, it is around a 10x improvement.
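Both hard-drive figures can be checked the same way; this sketch evaluates the limit ratio of the tree-term coefficients and the full formulas at 1,000,000 users (helper name introduced here for illustration):

```python
import math

def spend_tx_disk_bytes(num_users, hashes_per_level, branching):
    # 2 * (108 + 32 * hashes_per_level * log(num_users)/log(branching))
    depth = math.log(num_users) / math.log(branching)
    return 2 * (108 + 32 * hashes_per_level * depth)

# limit as num_users -> infinity: ratio of the tree-term coefficients
limit = 8 * math.log(4000 / 32) / math.log(16)
print(round(limit, 1))  # 13.9, about 14x

old = spend_tx_disk_bytes(1_000_000, 16, 16)       # order-16 tree, one node per read
new = spend_tx_disk_bytes(1_000_000, 2, 4000 / 32)  # binary tree in 4000-byte pages
print(round(old / new, 1))  # 9.1, around 10x at a million users
```

The 14x figure is only reached asymptotically because the fixed 108-byte overhead per proof does not shrink.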