mikepound / cubes

This code calculates all the variations of 3D polycubes for any size (time permitting!)
MIT License

[Discussion] I made a graph that allows me to estimate about how big a cubes_n.npy file will get (in bytes) when given n cubes. #14

Open · TheoCGaming opened this issue 1 year ago

TheoCGaming commented 1 year ago

https://www.desmos.com/calculator/fea4uymhix According to this graph, file sizes will get ridiculously large even by the 12th iteration. Perhaps the storage format should be optimized?
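For reference, a rough version of such an estimate in code (a sketch, not the exact formula from the Desmos graph; it assumes each polycube is cached uncompressed as roughly one byte per cell of an n×n×n grid):

def estimated_cache_bytes(n, n_polycubes):
    # Rough model: one byte per cell of an n*n*n grid per polycube (ignores the small .npy header).
    return n_polycubes * n**3

# e.g. with the roughly 2.5 million free polycubes at n=11:
print(estimated_cache_bytes(11, 2_500_000) / 1e9)  # ≈ 3.3 GB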

VladimirFokow commented 1 year ago

Here is how to optimize the current storage code, though this still doesn't address how quickly the sheer number of cubes grows:

np.save(cache_path, np.packbits(np.asarray(polycubes, dtype=np.int8), axis=-1), allow_pickle=False)
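For completeness, a minimal round-trip sketch of that idea (my own illustration, not code from this repo), assuming the polycubes have already been padded to a common shape so they form one regular array:

import numpy as np

def save_packed(cache_path, polycubes):
    # Pack 8 cells into each byte along the last axis before writing.
    arr = np.asarray(polycubes, dtype=np.uint8)
    np.save(cache_path, np.packbits(arr, axis=-1), allow_pickle=False)
    return arr.shape  # remember the unpacked shape for loading

def load_packed(cache_path, unpacked_shape):
    packed = np.load(cache_path, allow_pickle=False)
    # `count` trims the last axis back to its true length (it is rarely a multiple of 8).
    return np.unpackbits(packed, axis=-1, count=unpacked_shape[-1])

Bit-packing cuts the file to roughly 1/8 of one-byte-per-cell storage, but as said above, it doesn't change how fast the number of cubes itself grows.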

notes:

VladimirFokow commented 1 year ago

About storing the cubes

Note: Here I've ignored

For n=16 (the current record):

The theoretical minimum to store 50 billion DIFFERENT THINGS (not even polycubes, just their ids): to uniquely identify each of the 50 billion things, we need at least log2(50e9) ≈ 36 bits per id. Needed storage space: ~ 50e9 * 36 bits = 225 GB
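The same back-of-the-envelope in code (a quick sketch, mirroring the blocks below):

import numpy as np

n_cubes = 50e9                      # approx. number of polycubes at n=16
n_bits = np.ceil(np.log2(n_cubes))  # ≈ 36 bits to give each one a unique id
need_bytes = n_cubes * n_bits / 8

print(need_bytes / 1e9)  # ≈ 225 gigabytes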

For n=20:

Let's say we want to get up to n=20.

Assuming the number of polycubes grows by a factor of 7 with each n.

import numpy as np

n_cubes = 50e9 * 7**4  # approx. number of polycubes at n=20
n_bits = np.log2(n_cubes)
need_bytes = n_cubes * n_bits / 8

print(need_bytes / 1e12)  # terabytes

~ 700 TB

For n=30:

n_cubes = 50e9 * 7**14  # approx. number of polycubes at n=30
n_bits = np.log2(n_cubes)
need_bytes = n_cubes * n_bits / 8

print(need_bytes / 1e21)  # zettabytes

~ 317 ZB

This number is on the scale of the whole internet.

TheoCGaming commented 1 year ago

And not to mention that you have to store all of that in RAM before you actually write it, meaning that if it stays uncompressed, your computer (or program) will crash once the polycubes get too big. It's not a matter of "if", it's a matter of "when". Compressing it will only make it crash earlier and may make it run slower.

I'm not against the idea of compression; these are just things to consider.

VladimirFokow commented 1 year ago

"you have to store all of that in RAM before you actually write it"

You don't actually have to store it all in RAM at the same time.

Polycubes can be processed and counted separately from each other (it will just take longer), and the work can be distributed across multiple machines.
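As a toy illustration of that point (and not the algorithm from the PR linked below): candidates can be generated and flushed to disk in batches, so memory only ever holds one batch, and each shard could be produced on a different machine. Here hypothetical_batches is just a stand-in for whatever generator actually produces the cubes:

import numpy as np

def hypothetical_batches():
    # Stand-in generator: yields batches of (fake) 4x4x4 occupancy grids.
    rng = np.random.default_rng(0)
    for _ in range(3):
        yield rng.integers(0, 2, size=(1000, 4, 4, 4), dtype=np.uint8)

total = 0
for shard_id, batch in enumerate(hypothetical_batches()):
    total += len(batch)
    # Flush each batch (bit-packed) to its own shard as soon as it is produced,
    # so RAM never holds more than one batch at a time.
    np.save(f"cubes_shard_{shard_id}.npy", np.packbits(batch, axis=-1), allow_pickle=False)

print(total)  # running count, without keeping every polycube in memory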

See an example algorithm I wrote here: https://github.com/mikepound/opencubes/pull/7#issuecomment-1636539509 (maybe an even better approach exists). There is also a link to the paper which describes useful ideas for reaching n=16.