-
Recognizing steps to control the size of the data going into storage
Currently the system has 4 TB of storage. It is important to control the size of the data that is used for the model when sor…
-
This is a causal inference technique for time series that employs complexity estimators based on lossless data compression algorithms. Relevant publications:
https://peerj.com/articles/cs-196/
https…
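The linked papers are not reproduced here, but the core idea -- using a compressor's output size as a proxy for complexity -- is commonly illustrated with the Normalized Compression Distance (NCD). A minimal sketch, using `bz2` as the compressor (the choice of codec is an assumption, not something the papers prescribe):

```python
import bz2

def c(data: bytes) -> int:
    """Compressed size, used as a stand-in for Kolmogorov complexity."""
    return len(bz2.compress(data))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance between two byte strings."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"abcabcabcabc" * 50
b_ = b"abcabcabcabc" * 50
d = b"xyzqrs" * 100
# Similar sequences compress well when concatenated, giving a smaller
# distance than dissimilar ones.
print(ncd(a, b_), ncd(a, d))
```

Causal-inference variants built on this idea then compare how much one series helps compress another, but the exact estimator used in the papers would need to be taken from the publications themselves.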
-
The current implementation of VectorBinding supports compression/decompression via GZIP and BZIP2. However, it might be useful to have more advanced compression methods as well, e.g. LZO, LZ4, or Snappy, s…
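As a sketch of what a pluggable codec table might look like (the registry and function names here are hypothetical, not VectorBinding's actual API), the two supported methods already fit a common compress/decompress shape that LZO, LZ4, or Snappy could slot into via their third-party bindings:

```python
import bz2
import gzip

# Hypothetical codec registry mirroring the two methods already supported;
# LZ4/Snappy/LZO would register here the same way (e.g. via the third-party
# `lz4.frame` or `snappy` modules).
CODECS = {
    "gzip": (gzip.compress, gzip.decompress),
    "bzip2": (bz2.compress, bz2.decompress),
}

def roundtrip(name: str, payload: bytes) -> bytes:
    """Compress then decompress with the named codec."""
    compress, decompress = CODECS[name]
    return decompress(compress(payload))

payload = b"vector payload " * 1000
for name in CODECS:
    assert roundtrip(name, payload) == payload
```

The faster codecs trade compression ratio for throughput, so exposing them behind the same interface lets callers pick per workload.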
-
I would suggest that the protocol specification somehow allow directory sends to use zstd compression instead of deflate. The speed and compression ratios are very impressive (https://facebo…
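For a rough sense of what directory sends gain from compression at all, here is a deflate round-trip over a synthetic file listing; a zstd variant would follow the same compress/decompress shape via the third-party `zstandard` package (an assumption -- the real protocol change would also need a negotiated capability flag):

```python
import zlib

def deflate(data: bytes, level: int = 6) -> bytes:
    """Compress with deflate, what the protocol uses today."""
    return zlib.compress(data, level)

def inflate(data: bytes) -> bytes:
    return zlib.decompress(data)

# A synthetic directory listing: highly repetitive, so it compresses well.
listing = b"\n".join(b"file-%04d.txt" % i for i in range(500))
packed = deflate(listing)
assert inflate(packed) == listing
print(len(listing), len(packed))  # compressed size is much smaller
```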
-
After spending 20-30 hours investigating, I'm disabling the `scan_hiberfile` scanner by default because I'm not convinced that it's actually doing anything. I've looked at feature files that find feat…
-
## Prerequisites
* [x] Put an X between the brackets on this line if you have done all of the following:
* Checked that your rule idea isn't already filed: [search](https://github.com/fireeye/…
-
Hello, this is such a cool project!
I was wondering if the compressed anndata objects could be shared on the website. For example, for the full dataset, saving it with `write_h5ad(path, compression="g…
-
The current implementation only supports compression for different algorithms in HDFS; in local or other `java.nio.Path` implementations it only checks for `GZIP` and `BZIP2`. In addition, this…
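In Python terms (purely illustrative -- the project in question is Java), the unified dispatch this snippet asks for amounts to selecting a decompressor by path suffix rather than hard-coding the two known codecs at the call site:

```python
import bz2
import gzip
import lzma
import tempfile
from pathlib import Path

# Map file suffixes to opener functions; adding a codec means adding one
# entry here instead of another hard-coded GZIP/BZIP2 check.
OPENERS = {".gz": gzip.open, ".bz2": bz2.open, ".xz": lzma.open}

def open_maybe_compressed(path: Path, mode: str = "rt"):
    """Open a file, transparently decompressing based on its suffix."""
    opener = OPENERS.get(path.suffix, open)
    return opener(path, mode)

# Round-trip check: a gzip file goes through the same call as a plain one.
tmp = Path(tempfile.mkdtemp())
with gzip.open(tmp / "data.txt.gz", "wt") as f:
    f.write("hello")
with open_maybe_compressed(tmp / "data.txt.gz") as f:
    content = f.read()
```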
-
`bsdiff` format uses two somewhat unrelated concepts that fit many, but not all use cases: **diffing algorithm** and **patch storage format**.
There are multiple diffing algorithms which could be …
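To make the distinction concrete, here is a toy delta that separates the two concerns: a trivial diffing algorithm (shared prefix/suffix plus a replaced middle) and an interchangeable patch storage format (JSON here). None of this is bsdiff's actual algorithm or container:

```python
import json

def diff(old: bytes, new: bytes) -> dict:
    """Toy diffing algorithm: keep shared prefix/suffix, store the middle."""
    p = 0
    while p < min(len(old), len(new)) and old[p] == new[p]:
        p += 1
    s = 0
    while (s < min(len(old), len(new)) - p
           and old[len(old) - 1 - s] == new[len(new) - 1 - s]):
        s += 1
    return {"prefix": p, "suffix": s, "middle": new[p:len(new) - s].hex()}

def patch(old: bytes, delta: dict) -> bytes:
    """Rebuild the new file from the old one and a delta."""
    p, s = delta["prefix"], delta["suffix"]
    tail = old[len(old) - s:] if s else b""
    return old[:p] + bytes.fromhex(delta["middle"]) + tail

old = b"hello brave world"
new = b"hello small world"
delta = diff(old, new)
assert patch(old, delta) == new
# The same delta can be serialized by a different *storage format*
# (here JSON; bsdiff uses its own bzip2-compressed container) without
# touching the diffing algorithm at all.
assert patch(old, json.loads(json.dumps(delta))) == new
```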
-
## Description
As per title.
I think the reason is that, annoyingly, not all `pl.DataFrame.write_*` methods are equivalent: some can take a buffer, but others can't.
Compare these two:
- …