TheoCGaming opened this issue 1 year ago
Here is how to optimize the current storage code, but this still doesn't solve the fast growth of the sheer number of cubes:
```python
np.save(cache_path, np.packbits(np.asarray(polycubes, dtype=np.int8), axis=-1), allow_pickle=False)
```
Notes:

- `np.packbits` stores 8 cells per byte, so the cached array is roughly 8x smaller than the plain `int8` version.
- `allow_pickle` is `True` by default for `np.save`, but we don't have object arrays, so we don't need pickles. `np.load` should also get `allow_pickle=False`.
- The loaded output should then be processed with `np.unpackbits()` with `axis=-1`, and you'll also have to undo the effect of packing a size that is not a multiple of 8, possibly with the `count` parameter, or by manually cropping the trailing zeros.
- Note: Here I've ignored
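For completeness, a minimal sketch of the matching load path, assuming the original (unpacked) last-axis length is known or stored alongside the cache; `orig_len` is a hypothetical name for that value:

```python
import numpy as np

# Load the packed cache (no object arrays, so no pickle needed)
packed = np.load(cache_path, allow_pickle=False)

# Unpack along the last axis; `count` crops the padding bits that packbits
# added when the original size was not a multiple of 8.
polycubes = np.unpackbits(packed, axis=-1, count=orig_len).astype(np.int8)
```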
The theoretical minimum to store 50 billion DIFFERENT THINGS (not even polycubes, but at least their ids): to uniquely identify each of the 50 billion things we need at least ceil(log2(50e9)) = 36 bits per id. Needed storage space: ~50e9 * 36 bits ≈ 225 GB.
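That figure is easy to reproduce directly:

```python
import math

n_things = 50e9
bits_per_id = math.ceil(math.log2(n_things))   # 36 bits
total_bytes = n_things * bits_per_id / 8       # 2.25e11 bytes
print(bits_per_id, total_bytes / 1e9)          # 36, ~225 GB
```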
Let's say we want to make progress until n=20.
Assuming the number of polycubes grows by a factor of 7 with each n.
```python
import numpy as np

n_cubes = 50e9 * 7**4          # approx. count at n=20
n_bits = np.log2(n_cubes)      # bits per id
need_bytes = n_cubes * n_bits / 8
print(need_bytes / 1e12)       # terabytes
```
~ 700 TB
And the same estimate for n=30:

```python
n_cubes = 50e9 * 7**14         # approx. count at n=30
n_bits = np.log2(n_cubes)
need_bytes = n_cubes * n_bits / 8
print(need_bytes / 1e21)       # zettabytes
```
~ 317 ZB
This number is on the scale of the whole internet.
And not to mention that you have to store all of that in RAM before you actually write it, meaning that if it remains uncompressed, your computer (or program) will crash once the polycubes get too big. Even then it's not a matter of "if", it's a matter of "when". Compressing it will only make it crash earlier and may make it run slower.
I'm not against the idea of compression, these are just things to consider.
> you have to store all of that in RAM before you actually write it
You don't actually have to store it all in RAM at the same time.
Polycubes can be processed and counted separately from each other (it will just take longer), and the work can be distributed across multiple machines.
See an example algorithm I wrote here: https://github.com/mikepound/opencubes/pull/7#issuecomment-1636539509 (maybe an even better approach exists). There is also a link to a paper that describes useful ideas for reaching n=16.
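As a rough illustration of that idea (this is not the algorithm from the linked PR, just a hypothetical sketch: the candidate generator and the canonical byte encoding are assumed to exist elsewhere), the candidates can be partitioned by a hash of their canonical form so that only one partition's deduplication set has to live in RAM, or on one machine, at a time:

```python
import hashlib

N_BUCKETS = 256  # tune to available RAM / number of machines

def bucket_of(canonical: bytes) -> int:
    # A given canonical form always lands in the same bucket,
    # so per-bucket counts can simply be summed at the end.
    return int.from_bytes(hashlib.sha256(canonical).digest()[:4], "big") % N_BUCKETS

def count_bucket(candidates, bucket: int) -> int:
    # `candidates` yields canonical byte encodings of candidate polycubes
    # (generation/canonicalization helpers are assumed to exist elsewhere).
    seen = set()
    for c in candidates:
        if bucket_of(c) == bucket:
            seen.add(c)
    return len(seen)

# total = sum(count_bucket(generate_candidates(n), b) for b in range(N_BUCKETS))
```

Each bucket pass (or machine) only ever holds 1/N_BUCKETS of the shapes in memory, at the cost of regenerating or re-reading the candidate stream once per bucket.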
https://www.desmos.com/calculator/fea4uymhix
According to this graph, file sizes will get ridiculously large even by the 12th iteration. Perhaps the storage format should be optimized?