timescale / timescaledb-parallel-copy

A binary for parallel copying of CSV data into a TimescaleDB hypertable
https://www.timescale.com/
Apache License 2.0

Allow providing compressed input #49

Open REASY opened 3 years ago

REASY commented 3 years ago

It would be great to have the ability to provide a compressed (GZIP/ZIP) input file.

binakot commented 3 years ago

It's a good issue.

Currently, timescaledb-parallel-copy just splits the input file into batches of rows: https://github.com/timescale/timescaledb-parallel-copy/blob/master/cmd/timescaledb-parallel-copy/main.go#L195. Implementing this feature would require partial decompression and a way to know where each batch of rows begins and ends.

Fully decompressing the file up front would not help, since the file may not fit into RAM. And without such a mechanism, parallelism would not work, because no worker would know which piece of data it needs to extract.
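One way around the "where does each batch begin" problem is to keep decompression as a single streaming stage that feeds the existing row batcher, so only the COPY workers run in parallel. A minimal sketch of that idea in Go, using a hypothetical openInput helper that is not part of the tool's actual code:

package main

import (
	"bufio"
	"compress/gzip"
	"fmt"
	"io"
	"os"
	"strings"
)

// openInput is a hypothetical helper: it wraps gzip files in a streaming
// gzip.Reader so downstream code sees plain decompressed bytes without
// the whole file ever being held in RAM.
func openInput(path string) (io.Reader, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	if strings.HasSuffix(path, ".gz") {
		return gzip.NewReader(f) // streaming decompression
	}
	return f, nil
}

func main() {
	r, err := openInput(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The single decompressed stream is read line by line; batches cut
	// from it would be handed to the parallel COPY workers, so only the
	// decompression itself stays serial.
	scanner := bufio.NewScanner(r)
	for scanner.Scan() {
		_ = scanner.Text() // each line goes to the row batcher
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

The trade-off is that plain gzip cannot be decompressed in parallel from arbitrary offsets, so throughput is bounded by the single decompression stage; the COPY work, which is usually the bottleneck, still parallelizes.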

jchampio commented 2 years ago

Is an unzip pipeline helpful enough? E.g.

$ gunzip -c my-data.csv.gz | timescaledb-parallel-copy ...

This decompresses only enough data to fill the OS pipe buffer and then waits for the utility to read more. Or is there a particular reason you'd like the utility to handle this internally?

leonardochen commented 1 week ago

For reference, the command that works is:

gunzip -c csv.gz | tail -n+2 | timescaledb-parallel-copy ...

Here, -c writes the decompressed output to stdout, and tail -n+2 skips the first (header) line.
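For completeness, a fuller version of the pipeline might look like the one below. The database and table names are illustrative placeholders, and the flags shown should be checked against timescaledb-parallel-copy --help for the version you have installed:

$ gunzip -c my-data.csv.gz \
    | tail -n+2 \
    | timescaledb-parallel-copy \
        --db-name mydb \
        --table my_hypertable \
        --workers 4 \
        --copy-options "CSV"

Because the decompressed rows arrive on stdin, no intermediate uncompressed file is ever written to disk.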