MrPowers / farsante

Fake Pandas / PySpark DataFrame creator

Add ability to generate a parquet groupby dataset from h2o-data-rust/main.rs #7

Open jeffbrennan opened 12 months ago

jeffbrennan commented 12 months ago

I think it would be nice to have a way to generate a Parquet file with the Rust utility. I want to get more familiar with Rust, so I can start working on this.

SemyonSinchenko commented 12 months ago

Parquet is a problem because it is not an append-friendly format, so we would lose the ability to generate data out of memory. We could add Avro as an option: it is append-friendly and, unlike CSV, it carries schema information.
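To illustrate why append-friendliness matters for out-of-memory generation, here is a minimal Python sketch (illustrative only, not the repository's Rust code; all names are made up) that streams fake groupby rows to a CSV file one chunk at a time, so only a single chunk is ever held in memory:

```python
import csv
import random

def generate_csv_in_chunks(path, total_rows, chunk_size, num_groups):
    """Stream fake groupby data to CSV, holding only one chunk in memory."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "group", "value"])  # CSV schema is just this header row
        written = 0
        while written < total_rows:
            n = min(chunk_size, total_rows - written)
            chunk = [
                (written + i, f"g{random.randrange(num_groups)}", random.random())
                for i in range(n)
            ]
            writer.writerows(chunk)  # appending is just writing more lines
            written += n

generate_csv_in_chunks("fake.csv", total_rows=10_000, chunk_size=1_000, num_groups=5)
```

Parquet has no equivalent of "just write more lines": its row groups and footer metadata must be finalized when the file is closed, which is why a naive chunked append loop like the one above does not translate directly.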

jeffbrennan commented 12 months ago

> Parquet is a problem because it is not an append-friendly format, so we would lose the ability to generate data out of memory. We could add Avro as an option: it is append-friendly and, unlike CSV, it carries schema information.

Yeah, that makes sense. One workaround I'm considering is writing the Parquet output as multiple part files, so that when the full directory is read, the total row count and group count match the specification. Let me know if you think that's worth pursuing.
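The multi-part idea can be sketched as follows. This is a hypothetical Python illustration (the real implementation would be in Rust and would emit Parquet via a library such as the `parquet` crate); CSV part files are used here only to show the partitioning arithmetic, and the deterministic `id % num_groups` group assignment guarantees every group appears:

```python
import csv
import os
import random

def write_partitioned(dir_path, total_rows, rows_per_part, num_groups):
    """Write the dataset as multiple part files; reading the whole directory
    yields exactly total_rows rows spread across num_groups groups."""
    os.makedirs(dir_path, exist_ok=True)
    part = 0
    written = 0
    while written < total_rows:
        n = min(rows_per_part, total_rows - written)
        with open(os.path.join(dir_path, f"part-{part:05d}.csv"), "w", newline="") as f:
            w = csv.writer(f)
            w.writerow(["id", "group", "value"])
            w.writerows(
                # modular group assignment: all num_groups groups are covered
                (written + i, f"g{(written + i) % num_groups}", random.random())
                for i in range(n)
            )
        written += n
        part += 1
    return part

parts = write_partitioned("fake_dataset", total_rows=2_500, rows_per_part=1_000, num_groups=5)
# -> 3 part files: 1000 + 1000 + 500 rows
```

Each part file is finalized independently, which sidesteps Parquet's lack of appendability: no single file ever needs to grow after it is closed.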

I'll work on the AVRO implementation in the meantime.

SemyonSinchenko commented 12 months ago

> is writing the parquet file in multiple parts so that when the full directory is read

In this case it may skew benchmarking quite a bit, because the number of files and the metadata in Parquet footers can be exploited by Spark and some other frameworks, while pandas, for example, cannot apply any such optimization. @MrPowers, what do you think about it?