jeffbrennan opened 12 months ago
Parquet is a problem because it is not an append-friendly format, and we would lose the ability to generate data out of memory. We may add Avro as an option: it is append-friendly and, compared to CSV, it contains schema information.
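For illustration, a minimal sketch of append-style writing with the fastavro library; the schema and batch generator here are made-up placeholders, not part of this project:

```python
from fastavro import writer, parse_schema

# Hypothetical schema, just for illustration
schema = parse_schema({
    "name": "Row",
    "type": "record",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "value", "type": "double"},
    ],
})

def generate_batch(start, size):
    # Stand-in for the real out-of-memory generator
    return [{"id": i, "value": float(i)} for i in range(start, start + size)]

# First batch creates the file and embeds the schema in the header
with open("data.avro", "wb") as out:
    writer(out, schema, generate_batch(0, 1_000))

# Subsequent batches are appended without holding the whole dataset in memory
for batch_start in range(1_000, 10_000, 1_000):
    with open("data.avro", "a+b") as out:
        # On append, fastavro reads the schema back from the file header,
        # so None is passed instead of the schema
        writer(out, None, generate_batch(batch_start, 1_000))
```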
Yeah, that makes sense. One workaround I'm thinking of is writing the parquet file in multiple parts, so that when the full directory is read, the total row and row-group counts match the specification. Let me know if you think that's worth pursuing.
I'll work on the Avro implementation in the meantime.
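A rough sketch of that workaround with pyarrow (the directory name and chunk sizes are made up for illustration): each chunk is written as its own file under one directory, and reading the directory back yields the full dataset:

```python
import os

import pyarrow as pa
import pyarrow.parquet as pq

out_dir = "dataset.parquet"  # hypothetical output directory
os.makedirs(out_dir, exist_ok=True)

total_rows, rows_per_file = 10_000, 2_500

# Write each part as a separate file; only one chunk is in memory at a time
for part, start in enumerate(range(0, total_rows, rows_per_file)):
    table = pa.table({
        "id": pa.array(range(start, start + rows_per_file), type=pa.int64()),
    })
    pq.write_table(table, os.path.join(out_dir, f"part-{part}.parquet"))

# Reading the directory as a dataset recovers the specified row count
dataset = pq.read_table(out_dir)
assert dataset.num_rows == total_rows
```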
> writing the parquet file in multiple parts so that when the full directory is read
In this case it may affect benchmarking quite heavily, because the number of files and the metadata in parquet headers may be used by Spark and some other frameworks, while pandas, for example, cannot apply any such optimization. @MrPowers what do you think about it?
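To make the concern concrete, the footer metadata that engines like Spark can exploit (row-group boundaries, per-column statistics) is visible with pyarrow; a sketch, assuming a part file like the one above exists:

```python
import pyarrow.parquet as pq

# Inspect the footer metadata of one part file (path is hypothetical)
meta = pq.ParquetFile("dataset.parquet/part-0.parquet").metadata
print(meta.num_rows, meta.num_row_groups)

# Per-column min/max statistics that query engines can use for pruning;
# pandas-style whole-file reads get no benefit from these
stats = meta.row_group(0).column(0).statistics
print(stats.min, stats.max)
```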
I think it would be nice to have a way to generate a parquet file with the Rust utility. I want to get more familiar with Rust, so I can start working on this.