Closed: dzmitry-lahoda closed this issue 2 years ago
I like this idea. Haven't justified the effort yet though!
If I have a generator of u32 (uint), then holding 1B values in memory takes 4GB, and Fibonacci-coding them performs optimally.
If there is only a u64 (ulong) API, then I have to cast the 4GB into 8GB (which costs time and space: 12GB in memory at once), and Fibonacci-coding each number may turn out to be suboptimal.
So a u32 version is a must for a production API.
Ok, so the lookup table will always have just 92 entries (~736 bytes of RAM), but do you mean the values being encoded? Those could be cast as required to do the same, with a little more overhead. At this stage it's a bit much to optimize this project that far.
Each integer type would have its own lookup table inlined via ReadOnlySpan, each with a different number of entries. That avoids the cast from 64-bit down to 32, 16, and 8, and avoids spending time computing the table at startup. It looks like a sane optimization for game networking; not sure it matters for other cases. I.e. outside of games the overhead may be acceptable.
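To make the table sizes concrete, here is a sketch (in Python, not code from this library) that generates the per-width Fibonacci tables being proposed. The `fib_table` helper name is mine; the numbers fall out of the definition of Fibonacci coding, which uses the sequence 1, 2, 3, 5, 8, ...:

```python
# Sketch: per-width Fibonacci lookup tables for Fibonacci coding.
# Illustrates the table sizes under discussion; not library code.

def fib_table(max_value):
    """Fibonacci-coding terms (1, 2, 3, 5, ...) up to max_value."""
    table = []
    a, b = 1, 2
    while a <= max_value:
        table.append(a)
        a, b = b, a + b
    return table

# One precomputed table per unsigned integer width, as proposed above.
TABLES = {bits: fib_table(2**bits - 1) for bits in (8, 16, 32, 64)}

for bits, table in TABLES.items():
    print(f"u{bits}: {len(table)} entries, {len(table) * bits // 8} bytes")
```

The u64 table comes out to 92 entries (92 x 8 = 736 bytes), matching the figure above, while the u32 table needs only 46 entries (184 bytes), so the per-width tables are both smaller and cast-free.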
Having an encoded map is another issue. I will do that for 8-bit integers; I doubt I will do it for 16-bit integers. But that is another topic.
I'm going to close this issue in favour of https://github.com/invertedtomato/integer-compression/issues/11. I think that's a better approach that can work at scale. Thanks @dzmitry-lahoda
https://github.com/invertedtomato/integer-compression/blob/8b0b80f1e74340b90cc9726c1bf944dbca93b24d/Library/Compression/Integers/Wave3/FibonacciCodec.cs#L26
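For context on what the linked codec computes: Fibonacci coding writes a number's Zeckendorf representation (a sum of non-adjacent Fibonacci terms) least-significant term first, then appends a 1 bit, so every codeword ends in "11". A minimal sketch of that construction (in Python, not the library's C# implementation; `fib_encode` is a name I made up):

```python
def fib_encode(n):
    """Fibonacci-code a positive integer as a bit string:
    Zeckendorf representation over the terms 1, 2, 3, 5, ...,
    smallest term first, with a trailing '1' terminator."""
    assert n >= 1
    # Build the coding terms up to n (greedy subtraction uses them
    # largest-first, which yields the non-adjacent representation).
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    bits = []
    remainder = n
    for f in reversed(fibs[:-1]):
        if f <= remainder:
            bits.append("1")
            remainder -= f
        else:
            bits.append("0")
    # Drop leading zeros, put the smallest term first,
    # then append the terminating '1'.
    return "".join(bits).lstrip("0")[::-1] + "1"
```

For example, `fib_encode(4)` gives `"1011"` (4 = 1 + 3, terms 1 and 3 set, then the terminator); small values produce short codewords, which is what makes the scheme attractive for the mostly-small integers typical of game networking.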