-
In https://github.com/lancedb/lance/pull/2662 we added support for bitpacking signed integers in LanceV2. In https://github.com/lancedb/lance/pull/2696, an optimization was made to exclude the sign bi…
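To make the idea concrete, here is a minimal, hypothetical sketch of plain bitpacking (not Lance's actual encoder): once values are known to be non-negative, the sign bit carries no information, and each value needs only `bit_width` bits. The function names are illustrative.

```python
def pack_bits(values, bit_width):
    """Pack non-negative integers into a contiguous little-endian bit buffer."""
    buf = 0
    for i, v in enumerate(values):
        assert 0 <= v < (1 << bit_width), "value exceeds bit_width"
        buf |= v << (i * bit_width)
    nbytes = -(-len(values) * bit_width // 8)  # ceiling division
    return buf.to_bytes(nbytes, "little")

def unpack_bits(data, bit_width, count):
    """Inverse of pack_bits: extract `count` values of `bit_width` bits each."""
    buf = int.from_bytes(data, "little")
    mask = (1 << bit_width) - 1
    return [(buf >> (i * bit_width)) & mask for i in range(count)]
```

For example, three values that each fit in 2 bits pack into a single byte instead of twelve.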
-
### What happens?
```
The application panicked (crashed).
Message: called `Result::unwrap()` on an `Err` value:
0: Failed to INSERT INTO heighttxmapping: FATAL Error: Failed to create check…
```
-
For sorted data, a super-fast implementation is not necessary above `bitwidth = 8`; we can fall back to the scalar implementation.
For unsorted data, I think we can also restrict ourselves to…
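For reference, the scalar fallback mentioned above is simple to write and works for any bit width. A hedged Python sketch of the idea (the real implementations are SIMD-optimized; this only shows what the fallback does):

```python
def unpack_scalar(data, bit_width, count):
    """Scalar bit-unpacking: walk the buffer one bit at a time, LSB-first.

    Slow but correct for any bit_width, which is exactly what a fallback
    path needs to be.
    """
    out = []
    bitpos = 0
    for _ in range(count):
        v = 0
        for b in range(bit_width):
            bit = (data[bitpos >> 3] >> (bitpos & 7)) & 1
            v |= bit << b
            bitpos += 1
        out.append(v)
    return out
```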
-
Great work!
Any chance you could add support for 3-bit? I know bitpacking is a bit tricky with 3-bit, but it would be great to have a 3-bit kernel for linear quantization, since the only one availa…
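The trickiness comes from 3 not dividing 8: a 3-bit value can straddle a byte boundary. A hedged pure-Python sketch of boundary-crossing 3-bit packing (a real kernel would be vectorized; function names are illustrative):

```python
def pack3(values):
    """Pack 3-bit values (0..7) contiguously; values may straddle bytes."""
    buf = bytearray((len(values) * 3 + 7) // 8)
    bitpos = 0
    for v in values:
        assert 0 <= v < 8, "3-bit values must be in 0..7"
        for b in range(3):
            if (v >> b) & 1:
                buf[bitpos >> 3] |= 1 << (bitpos & 7)
            bitpos += 1
    return bytes(buf)

def unpack3(data, count):
    """Recover `count` 3-bit values from the packed buffer."""
    out = []
    bitpos = 0
    for _ in range(count):
        v = 0
        for b in range(3):
            v |= ((data[bitpos >> 3] >> (bitpos & 7)) & 1) << b
            bitpos += 1
        out.append(v)
    return out
```

Five 3-bit values occupy 15 bits, i.e. two bytes, with the third value split across the byte boundary.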
-
Two core data-structure elements, `Operand` and `Use`, are both designed to fit a relatively large amount of information in one `u32`. This is a performance optimization that we have found to be relat…
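The packing trick described above can be sketched with plain shifts and masks. The field layout below is entirely hypothetical (a 24-bit index, a 4-bit kind, and 4 bits of flags), chosen only to show how several fields share one 32-bit word:

```python
# Hypothetical layout, for illustration only -- not the actual Operand/Use
# encoding: | flags (4) | kind (4) | index (24) | packed into one u32.
INDEX_BITS, KIND_BITS, FLAG_BITS = 24, 4, 4

def encode(index, kind, flags):
    """Combine three fields into a single 32-bit word."""
    assert index < (1 << INDEX_BITS)
    assert kind < (1 << KIND_BITS)
    assert flags < (1 << FLAG_BITS)
    return index | (kind << INDEX_BITS) | (flags << (INDEX_BITS + KIND_BITS))

def decode(word):
    """Extract (index, kind, flags) back out of the packed word."""
    return (word & ((1 << INDEX_BITS) - 1),
            (word >> INDEX_BITS) & ((1 << KIND_BITS) - 1),
            (word >> (INDEX_BITS + KIND_BITS)) & ((1 << FLAG_BITS) - 1))
```

Keeping the whole record in one machine word keeps arrays of them dense and cache-friendly, which is where the performance win comes from.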
-
I don't like the way that this library chooses the compression scheme for you: https://docs.rs/compressed_vec/0.1.0/compressed_vec/
The following seems dank as well: https://docs.rs/bitpacking/0.8.…
-
In Lance format v2, while the byte array of a string field can benefit from general compression, the offsets array for the string field remains uncompressed (not even bit-packed). As a result, if the …
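To illustrate why uncompressed offsets are wasteful, here is a hedged sketch with made-up numbers (not Lance's actual encoding): since offsets are monotonically increasing, their deltas are just the string lengths, which for short strings fit in a few bits rather than a full 32- or 64-bit word.

```python
def delta_encode(offsets):
    """Monotone offsets delta-encode into per-string lengths."""
    return [b - a for a, b in zip(offsets, offsets[1:])]

# Illustrative offsets for 5 short strings (as stored, e.g., in int64).
offsets = [0, 5, 9, 9, 14, 20]
deltas = delta_encode(offsets)
bit_width = max(deltas).bit_length()  # bits needed per delta after packing
```

Here each delta needs only 3 bits instead of 64, so delta-encoding plus bitpacking shrinks the offsets array by more than an order of magnitude.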
-
What's the reason for using bitcode instead of [bincode](https://github.com/bincode-org/bincode), which is arguably more mature and has wider adoption in production scenarios?
-
In order to support sub-byte dtypes for quantization, I (and many others) believe that it is better to pack these smaller dtypes into existing pytorch dtypes in order to reduce memory bandwidth conten…
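A minimal pure-Python sketch of the nibble-packing idea (a PyTorch version would do the same with tensor ops on `torch.uint8`; function names here are illustrative): two 4-bit values share one byte, halving the memory traffic.

```python
def pack_int4(vals):
    """Pack pairs of 4-bit values (0..15) into single bytes, low nibble first."""
    assert len(vals) % 2 == 0, "pad to an even count before packing"
    return bytes(vals[i] | (vals[i + 1] << 4) for i in range(0, len(vals), 2))

def unpack_int4(data):
    """Split each byte back into its low and high nibbles."""
    out = []
    for byte in data:
        out.append(byte & 0xF)
        out.append(byte >> 4)
    return out
```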
-
Really solid work!
May I ask what the actual compressed model size is, considering that this is a partial binarization approach and some 8-bit parameters remain inside each weight matrix? Can we compre…
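A back-of-envelope way to reason about the question: every number below is an assumption (matrix shape and the 8-bit fraction are made up for illustration), but the arithmetic shows how the mixed precisions combine.

```python
# Hypothetical: one 4096x4096 weight matrix, mostly 1-bit weights plus a
# small assumed fraction of parameters kept at 8 bits.
n_params = 4096 * 4096
frac_8bit = 0.05  # assumed share of 8-bit "salient" parameters

bits = n_params * (1 - frac_8bit) * 1 + n_params * frac_8bit * 8
size_mb = bits / 8 / 2**20  # ~2.7 MB vs 32 MB at fp16
```

So even a 5% slice of 8-bit parameters adds meaningfully to the average bits-per-weight (1.35 here instead of 1.0), which is why the effective compressed size matters.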