alexdovzhanyn / elixium_core

A privacy-preserving decentralized application network
https://www.elixiumnetwork.org/

Define concrete block attribute data sizes #76

Closed alexdovzhanyn closed 6 years ago

alexdovzhanyn commented 6 years ago

We need to define the exact size in bytes of each field so that we can more easily anticipate the size of blocks. In order for miners to formulate blocks that properly fit within our block size limit, we need to enforce a constant size for block headers; that way, the header size can be subtracted from the limit, and miners can determine exactly how many bytes they have available to fill with transaction data.
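For example, with a hypothetical 1 MB (1,048,576 byte) block size limit and the constant 230 byte header arrived at below, a miner would know it has exactly 1,048,576 - 230 = 1,048,346 bytes available to fill with transaction data.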

As of this writing, given the following block:

```elixir
%{
  difficulty: 18775743.58334009,
  hash: "0000000D0E89D9A13DE616FEC8B12A62B646B2B5D0E1419C1A460442474E2AF0",
  index: 2324,
  merkle_root: "9084BB7F7D5A722FC2075644E71C2A518545E33C27402376D19863721EF0063B",
  nonce: 32535345,
  previous_hash: "0000003987CE7A874B760D50B183232A8C910C29CC7342E858503656D687BEE2",
  timestamp: 1542634035,
  version: 1,
  transactions: []
}
```

The fields of this block can be represented as:

| Attribute | Size (bytes) | Varies? |
| --- | --- | --- |
| hash | 64 | No |
| previous_hash | 64 | No |
| merkle_root | 64 | No |
| difficulty | 10 | Yes |
| timestamp | 6 | No |
| nonce | 6 | Yes |
| index | 6 | Yes |
| version | 3 | Yes |

An empty transaction array is 2 bytes, but transaction data will vary widely and is not the focus of this issue, so it is not noted in the table above. Adding all of these fields together: 64 + 64 + 64 + 10 + 6 + 6 + 6 + 3 + 2 = 225 bytes. This is the absolute minimum number of bytes needed to represent a block (technically not quite, since a valid block needs a coinbase transaction).

This 225 byte representation is not achievable with our current process of storing blocks, nor is it the goal. When represented as an Elixir map, this block is 343 bytes in total, which is 118 bytes more than our absolute minimal representation. This is because we're storing key-value pairs to represent the block, which means we're storing the keys as encoded atoms, resulting in more disk space used per block. This 118 byte difference is arguably negligible: if we store 118 extra bytes per block, over 2,628,000 blocks (~10 years) we'll amass a difference of about 295.73 MB of extra data.

Still, we may be able to enforce a stricter, slimmer encoding of blocks by substituting the keys in our map with a single byte that we use to map values. This could work something like:

| Key | Byte |
| --- | --- |
| hash | 0 |
| previous_hash | 1 |
| merkle_root | 2 |
| difficulty | 3 |
| timestamp | 4 |
| nonce | 5 |
| version | 6 |
| index | 7 |

Using this mapping, we could encode the block above like so:

<<0, "0000000D0E89D9A13DE616FEC8B12A62B646B2B5D0E1419C1A460442474E2AF0", 
  1, "0000003987CE7A874B760D50B183232A8C910C29CC7342E858503656D687BEE2",
  2, "9084BB7F7D5A722FC2075644E71C2A518545E33C27402376D19863721EF0063B",
  3, 18775743.58334009, 4, 1542634035, 5, 32535345, 6, 1, 7, 2324>>

This is obviously not literally how it would look in binary -- the hashes would be binary rather than text in the real representation; I'm just showing it like this for readability. With this mapping, we'd no longer be using 118 extra bytes to represent the header; we'd only need 8 extra bytes (one key byte per field). Over the course of 2,628,000 blocks, we'd amass an 'extra' data mass of 20.05 MB, which is just 7% of the amount that we'd be storing with our current method.
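For concreteness, here's a rough sketch of what such an encoder could look like. Everything below is hypothetical -- the module name, `@key_bytes` attribute, and `encode_value/2` helper are illustrative, not existing code -- and a real decoder would rely on the fixed per-field widths worked out later in this issue:

```elixir
# Hypothetical sketch of a byte-keyed header encoder; not existing code.
defmodule HeaderCodec do
  # One key byte per field, per the mapping table above.
  @key_bytes [hash: 0, previous_hash: 1, merkle_root: 2, difficulty: 3,
              timestamp: 4, nonce: 5, version: 6, index: 7]

  # Emit <<key_byte, encoded_value>> for each field and concatenate.
  def encode(header) do
    Enum.map_join(@key_bytes, fn {field, byte} ->
      <<byte>> <> encode_value(field, Map.fetch!(header, field))
    end)
  end

  # Hashes are already fixed-width binaries; other values are
  # serialized via the Erlang term format for now.
  defp encode_value(field, value) when field in [:hash, :previous_hash, :merkle_root] do
    value
  end

  defp encode_value(_field, value), do: :erlang.term_to_binary(value)
end
```

(For reference, the 343 byte figure above is presumably what `byte_size(:erlang.term_to_binary(block))` reports for the example block; the exact number can drift by a few bytes across OTP releases, since atom encoding has become slightly more compact over time.)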

We can trim this even further by not storing the index on blocks at all, since blocks are implicitly indexed anyway: each one points to a previous block hash. This would save us roughly 7 bytes per block (the 6 byte value plus its key byte), but it may be an unnecessary optimization that makes the code harder to write, since there are parts of the chain that rely on the block index being readily available in order to function properly, and it may result in higher execution times if we need to calculate the index of a block every time we need it. A quick sketch of what that calculation would involve follows below.
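This is only illustrative; `fetch_block` is an assumed hash -> block lookup, not an existing function:

```elixir
defmodule ChainUtil do
  # Hypothetical sketch: derive a block's index by walking the
  # previous_hash chain back to genesis. Each step costs one store
  # lookup, so this is O(chain height) per derivation.
  def derive_index(block, fetch_block) do
    case fetch_block.(block.previous_hash) do
      nil -> 0                                    # genesis has no parent
      parent -> 1 + derive_index(parent, fetch_block)
    end
  end
end
```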

It seems clear to me that the easy parts to define byte sizes for are the non-variable fields (or at least the ones that won't vary in size). These fields are:

| Attribute | Size (bytes) |
| --- | --- |
| hash | 64 |
| previous_hash | 64 |
| merkle_root | 64 |
| timestamp | 6 |
| version | 3 |

Which leaves us with the fields:

| Attribute | Size (bytes) |
| --- | --- |
| difficulty | ? |
| nonce | ? |
| index | ? |

The maximum value for difficulty is 16^64, and difficulty cannot be negative, which means we need a number of bytes that can represent at maximum 16^64 but can still represent floating point numbers, since difficulty can be (and most often will be) a float. After looking at the Erlang External Term Format specification, it looks like the data type we want is the IEEE float, which stores our float as 8 bytes. There is a 2 byte overhead from the Erlang version and tag bytes, so this totals 10 bytes. We can reliably store our difficulty using 10 bytes.
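This is easy to verify in iex (assuming the default minor_version, where floats use the 8 byte NEW_FLOAT_EXT encoding plus a version byte and a tag byte):

```elixir
iex> byte_size(:erlang.term_to_binary(18775743.58334009))
10
```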

Nonce values will always be an integer, and we can realistically size nonces however we want. If we only want to use 8 bytes (values up to 18446744073709551615) to represent a nonce, this is fine -- if a nonce would overflow 8 bytes, a miner can let it wrap around and update the block's timestamp instead. We can represent the nonce literally as a binary rather than using an Erlang datatype, which would look like: `<<255, 255, 255, 255, 255, 255, 255, 255>>`
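Elixir's bitstring syntax makes this kind of fixed-width packing trivial; for example, using the nonce from the example block and the 8 byte maximum:

```elixir
iex> <<32535345::unsigned-big-integer-size(64)>>
<<0, 0, 0, 0, 1, 240, 115, 49>>
iex> <<18446744073709551615::unsigned-big-integer-size(64)>>
<<255, 255, 255, 255, 255, 255, 255, 255>>
```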

The block index can be represented as 4 bytes, which gives us a maximum possible index of 4294967295. At an average of 2 minutes per block, it would take roughly 16,343 years to reach this index, so I don't believe we'll ever need more than 4 bytes' worth of indices.

We don't need to use Erlang numbers to represent our block versions either; we can use bytes directly. 2 bytes should suffice for versioning.
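The same raw-byte approach covers both of these fields (values taken from the example block):

```elixir
iex> <<2324::unsigned-big-integer-size(32)>>  # index in 4 bytes
<<0, 0, 9, 20>>
iex> <<1::unsigned-big-integer-size(16)>>     # version in 2 bytes
<<0, 1>>
```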

From this logic, our final mapping table becomes:

| Attribute | Size (bytes) |
| --- | --- |
| hash | 64 |
| previous_hash | 64 |
| merkle_root | 64 |
| difficulty | 10 |
| timestamp | 6 |
| nonce | 8 |
| index | 4 |
| version | 2 |

This makes our minimum possible header size 64 + 64 + 64 + 10 + 6 + 8 + 4 + 2 = 222 bytes. If we combine this with our flag-based block representation, we can depict a header as 222 + 8 bytes (one key byte per attribute), bringing us to 230 byte block headers. Over the course of 2,628,000 blocks, block headers will accumulate to ~576.5 MB.
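Putting it all together, a packed header under this scheme could be built with bitstring syntax like so. This is a hypothetical sketch using the example block's values; the hashes are kept as 64 byte hex binaries per the table, and the difficulty's 10 bytes assume the default term_to_binary encoding:

```elixir
hash = "0000000D0E89D9A13DE616FEC8B12A62B646B2B5D0E1419C1A460442474E2AF0"
previous_hash = "0000003987CE7A874B760D50B183232A8C910C29CC7342E858503656D687BEE2"
merkle_root = "9084BB7F7D5A722FC2075644E71C2A518545E33C27402376D19863721EF0063B"

header =
  <<0>> <> hash <>
  <<1>> <> previous_hash <>
  <<2>> <> merkle_root <>
  <<3>> <> :erlang.term_to_binary(18775743.58334009) <>  # difficulty: 10 bytes
  <<4, 1542634035::unsigned-big-integer-size(48)>> <>    # timestamp: 6 bytes
  <<5, 32535345::unsigned-big-integer-size(64)>> <>      # nonce: 8 bytes
  <<6, 1::unsigned-big-integer-size(16)>> <>             # version: 2 bytes
  <<7, 2324::unsigned-big-integer-size(32)>>             # index: 4 bytes

byte_size(header)  # => 230
```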