jpbrown / webgl-loader

Automatically exported from code.google.com/p/webgl-loader

Lossless Float Compression: add discussion to Wiki #26

Open GoogleCodeExporter opened 9 years ago

GoogleCodeExporter commented 9 years ago
I was thinking about the quantization step encoding floats as 10-14 bits. I was looking at lossless float compression and came across:
http://www.cs.unc.edu/~isenburg/lcpfpv/

It might be interesting to try implementing something like this for encoding the float arrays, to avoid loss of precision. It might be possible to get nice savings at a small cost in decoding.
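
For context, the quantization step in question works roughly like the following sketch (illustrative only, not the actual webgl-loader code): each float is mapped onto a uniform N-bit grid over its range, so the error is bounded by one grid step.

```js
// Illustrative sketch of N-bit quantization of a Float32Array attribute.
// Not the actual webgl-loader code.
function quantize(floats, bits) {
  const levels = (1 << bits) - 1;          // e.g. 1023 for 10 bits, 16383 for 14
  let min = Infinity, max = -Infinity;
  for (const v of floats) {
    if (v < min) min = v;
    if (v > max) max = v;
  }
  const scale = max > min ? levels / (max - min) : 0;
  const out = new Uint16Array(floats.length);   // 16 bits is enough for <= 14-bit codes
  for (let i = 0; i < floats.length; i++) {
    out[i] = Math.round((floats[i] - min) * scale);
  }
  return { out, min, max };                // min/max are needed to dequantize
}
```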

Original issue reported on code.google.com by jterr...@gmail.com on 5 Dec 2011 at 9:27

GoogleCodeExporter commented 9 years ago
Thanks, but I think that this is a naive approach that misses some big-picture 
issues (no offense intended).

If you care this much about float precision, then you probably have a really large model, and so you need some smart chunking. Chunking also implies variable precision, so you want refinable sub-chunks. Each chunk gives you more implicit bits, so as you refine spatially, your precision naturally improves. So, for large models, I would start there.
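
To make the "implicit bits" point concrete, here is a toy illustration (the numbers are hypothetical): quantizing relative to a chunk's own bounding box, instead of the whole model's, gains one effective bit per axis for every halving of the chunk extent.

```js
// Hypothetical numbers: 12-bit quantization over the whole model vs. over a chunk.
const bits = 12;
const modelExtent = 100.0;                              // whole-model extent
const modelStep = modelExtent / ((1 << bits) - 1);      // ~0.024 per grid step
// After three levels of octree-style subdivision each chunk spans 1/8 the extent,
// so the same 12 explicit bits behave like 15 bits relative to the model:
const chunkExtent = modelExtent / 8;
const chunkStep = chunkExtent / ((1 << bits) - 1);      // ~0.003 per grid step
```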

Of course, with floating point you have to worry about that variable exponent. The other thing to do would be to support multi-pass attribute refinement: that is, send attributes in multiple planes, each carrying 10-14 bits of precision, so that precision can be controlled progressively and independently of chunk refinement.
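
A minimal sketch of what such bit-plane refinement could look like (hypothetical layout, not what webgl-loader implements): a high-precision quantized value is split into a coarse 14-bit plane plus a 14-bit refinement plane sent in a later pass.

```js
// Hypothetical sketch: split a 28-bit quantized value into two 14-bit planes.
function splitPlanes(value28) {
  const coarse = value28 >>> 14;        // first pass: top 14 bits
  const refine = value28 & 0x3FFF;      // second pass: bottom 14 bits
  return [coarse, refine];
}

function mergePlanes(coarse, refine) {
  return (coarse << 14) | refine;       // full precision once both planes arrive
}
```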

The advantage of this approach is that it looks similar to:
- using triangle faces to predict normals,
- morph animations.

And I already plan on doing this. I guess I could have a "lossless" mode that makes sure all the scales are powers of two and generates as many refinement planes as necessary, but would anyone care? Maybe! Do you?

Original comment by wonc...@google.com on 6 Dec 2011 at 3:56

GoogleCodeExporter commented 9 years ago
But couldn't you still get better compression with their lossy method? What's nice about the technique is that its default is lossless (32-bit), but they also have numbers (in the '04 paper) showing compression rates for 16-, 18-, 20-, 22-, and 24-bit quantization.

So, with this algorithm, couldn't you actually keep the 10-14 bit quantization 
and then additionally compress, achieving even better compression rates than 
just quantization alone? The question then is what the cost is at decode time.

Original comment by jterr...@gmail.com on 6 Dec 2011 at 4:02

GoogleCodeExporter commented 9 years ago
Or do you think 10-14 bit quantization + gzip probably achieves similar compression rates anyway?

Original comment by jterr...@gmail.com on 6 Dec 2011 at 4:03

GoogleCodeExporter commented 9 years ago
Do you mean the parallelogram prediction? Because that's something I do plan on adding, and it isn't the novel part of the paper. My skim suggests that the novel part is a fieldwise transposed encoding of the floating-point values. Please correct me if I'm wrong.
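
For concreteness, a fieldwise transposed encoding would look something like this sketch (my reading of the idea, not the paper's actual code): the sign, exponent, and mantissa of each float32 go into separate streams so a downstream entropy coder sees more homogeneous data.

```js
// Sketch: split a Float32Array into sign/exponent/mantissa streams.
function transposeFields(floats) {
  const bits = new Uint32Array(floats.buffer, floats.byteOffset, floats.length);
  const signs = new Uint8Array(floats.length);       // 1 bit each
  const exponents = new Uint8Array(floats.length);   // 8 bits each
  const mantissas = new Uint32Array(floats.length);  // 23 bits each
  for (let i = 0; i < floats.length; i++) {
    const b = bits[i];
    signs[i] = b >>> 31;
    exponents[i] = (b >>> 23) & 0xFF;
    mantissas[i] = b & 0x7FFFFF;
  }
  return { signs, exponents, mantissas };
}
```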

Also, arithmetic coding is definitely going to be better than gzip's Huffman coding, but I'm not eager to write an arithmetic decoder in JS. That said, segregating the mantissa coders by sign/exponent seems like a missed opportunity: the statistics are different, but hardly independent.

This is a good discussion that will probably lead to a wiki page.

Original comment by wonchun on 6 Dec 2011 at 4:43

GoogleCodeExporter commented 9 years ago
Yeah, I think you're right about that. I think the important takeaway is that splitting into sign, exponent, and mantissa and then using a parallelogram predictor can give a benefit. From there, it could probably just use gzip instead of arithmetic coding, so you don't have to write an arithmetic decoder in JS.
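
For reference, parallelogram prediction itself is simple; a sketch (assuming vertex positions are available as [x, y, z] arrays) is:

```js
// Sketch of parallelogram prediction: a new vertex d opposite the shared edge
// (b, c) of an already-decoded triangle (a, b, c) is predicted as b + c - a,
// and only the (typically small) residual d - prediction is encoded.
function predict(a, b, c) {
  return [b[0] + c[0] - a[0], b[1] + c[1] - a[1], b[2] + c[2] - a[2]];
}

function residual(d, prediction) {
  return [d[0] - prediction[0], d[1] - prediction[1], d[2] - prediction[2]];
}
```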

Original comment by jterr...@gmail.com on 6 Dec 2011 at 5:46

GoogleCodeExporter commented 9 years ago
You can either transpose the fields in a float, or treat the values as really large fixed-point numbers and transpose those. When converting to fixed point, you have to take a little care, like using Kahan-like error accumulators to avoid cancellation, but the resulting fixed-point number should be extremely sparse, even if it is wide.
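
As an illustration of the wide-but-sparse idea, a finite float32 can be converted exactly to a fixed-point integer; this is only a sketch (it uses BigInt, which postdates this thread, and 150 fractional bits is just a convenient choice that covers the full float32 range).

```js
// Sketch: exact conversion of a finite float32 to a wide fixed-point integer
// with 150 fractional bits. The result is wide but sparse: at most 24
// contiguous significant bits, shifted according to the exponent.
// (Infinity/NaN are not handled.)
const FRACTION_BITS = 150n;

function float32ToFixed(x) {
  const view = new DataView(new ArrayBuffer(4));
  view.setFloat32(0, x);
  const bits = view.getUint32(0);
  const sign = (bits >>> 31) ? -1n : 1n;
  const exponent = (bits >>> 23) & 0xFF;      // biased exponent
  const mantissa = bits & 0x7FFFFF;
  if (exponent === 0 && mantissa === 0) return 0n;
  // Normal numbers have an implicit leading 1 bit; denormals do not.
  const significand = BigInt(exponent === 0 ? mantissa : mantissa | 0x800000);
  const shift = FRACTION_BITS + BigInt(exponent === 0 ? 1 : exponent) - 127n - 23n;
  return sign * (significand << shift);       // shift >= 1 for all finite float32s
}
```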

Original comment by wonchun on 6 Dec 2011 at 6:00