ZJONSSON opened this issue 6 years ago
Ideally this would require pushing the shredding operation further down the stack, to the point where we are writing pages for each column chunk. Instead of pushing the fully shredded record onto rowBuffer, we could push a flattened (i.e. de-nested) record in the same order the shredder would produce them.
The encodeColumnChunk would receive the raw values and perform shredding of all records within each page (max/min can easily be collected at this point and added to the metadata for each page and each columnChunk). This would also make splitting into pages much easier, since we wouldn't have to reverse-engineer the rLevel and dLevel.
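For illustration, here is a minimal sketch of the per-page statistics bookkeeping this would enable. The function and field names are hypothetical (shredding and page encoding are elided), not the current parquetjs internals:

```js
// Compute min/max/null_count for one page worth of primitive values.
function computePageStatistics(values) {
  const stats = { min: undefined, max: undefined, null_count: 0 };
  for (const v of values) {
    if (v === null || v === undefined) {
      stats.null_count += 1;
      continue;
    }
    if (stats.min === undefined || v < stats.min) stats.min = v;
    if (stats.max === undefined || v > stats.max) stats.max = v;
  }
  return stats;
}

// Fold page statistics into the enclosing column chunk statistics.
function mergeStatistics(chunk, page) {
  chunk.null_count += page.null_count;
  if (page.min !== undefined && (chunk.min === undefined || page.min < chunk.min)) chunk.min = page.min;
  if (page.max !== undefined && (chunk.max === undefined || page.max > chunk.max)) chunk.max = page.max;
  return chunk;
}

// Split a column's raw (flattened) values into pages, collecting statistics
// as we go; the actual shredding into (value, rLevel, dLevel) and the page
// encoding would happen inside this loop.
function encodeColumnChunk(rawValues, pageSize) {
  const chunkStats = { min: undefined, max: undefined, null_count: 0 };
  const pages = [];
  for (let offset = 0; offset < rawValues.length; offset += pageSize) {
    const pageValues = rawValues.slice(offset, offset + pageSize);
    const pageStats = computePageStatistics(pageValues);
    mergeStatistics(chunkStats, pageStats);
    // pageStats would go straight into the DataPageHeader for this page.
    pages.push({ header: { statistics: pageStats }, values: pageValues });
  }
  return { pages, statistics: chunkStats };
}

// Example: two pages of four values each.
console.log(encodeColumnChunk([3, 1, null, 7, 2, null, 9, 4], 4).statistics);
// -> { min: 1, max: 9, null_count: 2 }
```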
Thoughts?
Here is a quick WIP of page-level writing - it passes the integration tests: https://github.com/ZJONSSON/parquetjs/commit/e643f7220a9f31cd97a0757b0dfdff8aa47c5138
Did a version of this library with support for statistics ever get out there? Maybe a fork somewhere? I'd like to have these available.
@dobesv ZJONSSON's fork, parquetjs-lite, has statistics implemented: https://github.com/ZJONSSON/parquetjs
I realize this post is quite old, but might be useful for others that stumble on it.
Statistics definition: https://github.com/ironSource/parquetjs/blob/master/parquet.thrift#L204-L212
DataPageHeader: https://github.com/ironSource/parquetjs/blob/master/parquet.thrift#L342-L356
DataPageHeaderV2: https://github.com/ironSource/parquetjs/blob/master/parquet.thrift#L379-L405
ColumnMetaData: https://github.com/ironSource/parquetjs/blob/master/parquet.thrift#L472-L508
This allows min/max to be seen immediately for a given page or row group, avoiding scanning data outside the area of interest for that column.
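As a rough sketch of the read side: given the decoded thrift FileMetaData (row_groups -> columns -> meta_data.statistics), a reader can prune row groups whose min/max range cannot contain the value of interest. The metadata shape and the already-decoded min/max values below are assumptions for illustration, not the parquetjs-lite API:

```js
// Return the indices of row groups whose [min, max] range for `columnPath`
// could contain `target`; every other row group can be skipped entirely.
function rowGroupsToScan(metadata, columnPath, target) {
  const keep = [];
  metadata.row_groups.forEach((rowGroup, i) => {
    const column = rowGroup.columns.find(
      (c) => c.meta_data.path_in_schema.join('.') === columnPath
    );
    const stats = column && column.meta_data.statistics;
    // No statistics written for this chunk: we have to scan it.
    if (!stats || stats.min === undefined || stats.max === undefined) {
      keep.push(i);
      return;
    }
    if (target >= stats.min && target <= stats.max) keep.push(i);
  });
  return keep;
}

// Example with a hand-built metadata object shaped like the thrift structs:
const metadata = {
  row_groups: [
    { columns: [{ meta_data: { path_in_schema: ['price'], statistics: { min: 1, max: 10 } } }] },
    { columns: [{ meta_data: { path_in_schema: ['price'], statistics: { min: 50, max: 99 } } }] },
  ],
};
console.log(rowGroupsToScan(metadata, 'price', 75)); // -> [1]
```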