pola-rs / polars

Dataframes powered by a multithreaded, vectorized query engine, written in Rust
https://docs.pola.rs

`read_parquet()` returns empty `list[i64]` or explicitly crashes if `use_pyarrow=True` #6428

Open oscar6echo opened 1 year ago

oscar6echo commented 1 year ago

Polars version checks

Issue description

I try to read a sample parquet file produced by another language (golang).
It works fine except for the list[int64] column, which is returned empty even though the data is present on disk.
If I force use_pyarrow=True then it explicitly crashes.

NOTE: It seems somewhat connected to issue #6289, though this one is about reading parquet files while the other is about writing them.

1/ version 1:

2/ version 2:

Reproducible example

The sample parquet file [sample.pqt](https://github.com/oscar6echo/parquet-go-explo/blob/main/sample.pqt)
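For reference, a minimal sketch of the two reads described above (the collapsed sections contain the actual output); the path to sample.pqt is assumed to be the working directory:

```python
import polars as pl

# 1/ native reader: succeeds, but the list[i64] column comes back empty
df = pl.read_parquet("sample.pqt")
print(df)

# 2/ pyarrow-backed reader: raises instead of returning empty lists
df = pl.read_parquet("sample.pqt", use_pyarrow=True)
print(df)
```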

Expected behavior

The last column in the parquet file (list[int64]) should be returned by pl.read_parquet():

{0 Warlockfir 22 true 50 2023-01-25 09:27:47.077523962 +0100 CET m=+0.001425235 [0 3]}
{1 Maskwood 23 false 50.6 2023-01-25 09:27:47.077562939 +0100 CET m=+0.001464212 [1 4]}
{2 Pixiecomet 24 false 51.2 2023-01-25 09:27:47.077569845 +0100 CET m=+0.001471119 [2 5]}
{3 Biterflame 25 true 51.8 2023-01-25 09:27:47.077575769 +0100 CET m=+0.001477042 [3 6]}
{4 Graspsalt 26 false 52.4 2023-01-25 09:27:47.077579569 +0100 CET m=+0.001480841 [4 7]}
{5 Scalewave 22 false 53 2023-01-25 09:27:47.077584464 +0100 CET m=+0.001485735 [5 8]}
{6 Singerorange 23 true 53.6 2023-01-25 09:27:47.077589038 +0100 CET m=+0.001490311 [6 9]}
{7 Takerfringe 24 false 54.2 2023-01-25 09:27:47.077593866 +0100 CET m=+0.001495142 [7 10]}
{8 Arrowcopper 25 false 54.8 2023-01-25 09:27:47.07759902 +0100 CET m=+0.001500296 [8 11]}
{9 Terrierrowan 26 true 55.4 2023-01-25 09:27:47.077604323 +0100 CET m=+0.001505596 [9 12]}

Repo oscar6echo/parquet-go-explo contains the code to produce this parquet file.

Installed versions

```
---Version info---
Polars: 0.15.16
Index type: UInt32
Platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.35
Python: 3.10.6 | packaged by conda-forge | (main, Aug 22 2022, 20:36:39) [GCC 10.4.0]
---Optional dependencies---
pyarrow: 10.0.1
pandas: 1.5.2
numpy: 1.24.0
fsspec: 2023.1.0
connectorx:
xlsx2csv:
deltalake:
matplotlib: 3.6.2
```
ritchie46 commented 1 year ago

> If I force use_pyarrow=True then it explicitly crashes.

What is the crash? Can you read the file with pandas?

oscar6echo commented 1 year ago

The crash when using use_pyarrow=True is described in the issue above. :point_up:

If I use pandas it crashes too:

See below.

1/ with pyarrow

2/ with fastparquet
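The two pandas reads were presumably invoked along these lines (a sketch, with the file path assumed; the actual crash output is in the collapsed sections above):

```python
import pandas as pd

# 1/ pyarrow engine
df = pd.read_parquet("sample.pqt", engine="pyarrow")

# 2/ fastparquet engine
df = pd.read_parquet("sample.pqt", engine="fastparquet")
```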

ritchie46 commented 1 year ago

Then your parquet file is likely incorrect.

oscar6echo commented 1 year ago

> Then your parquet file is likely incorrect.

Well that is what I thought too: the parquet file is corrupt or invalid in some way.
But it is not that clear cut. See below:

1/ If I read the file with pyarrow I do see the data, including the list[i64] data:

```
----read:
pyarrow.Table
name: large_string
age: int64
sex: bool
weight: double
time: timestamp[us]
array: large_list
  child 0, item: int64

name: [["Masterfog","Armspice"]]
age: [[22,23]]
sex: [[true,false]]
weight: [[51.2,65.3]]
time: [[2023-01-25 12:03:14.208962,2023-01-25 12:03:14.208962]]
array: [[[10,20],[11,22]]]
```



2/
There appears to be a subtle difference between the golang library I use, [segmentio/parquet-go](https://github.com/segmentio/parquet-go), and polars in the way they write nested fields to parquet.

In this example each lib can read its own nested list field but not the other's; it returns empty lists instead.

3/
So, contrary to what I thought, parquet compatibility can be partial :thinking:

So I think there must be subtle differences in how the files are written to parquet.
Where exactly in the polars source code do you write nested fields?
I'll try and compare with the golang lib.
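As a side note, one way to compare the two writers without digging into source code is to dump the low-level parquet schema of each file with pyarrow (file names below are hypothetical):

```python
import pyarrow.parquet as pq

# Compare how each writer declares the nested list column on disk.
for path in ("written_by_polars.parquet", "written_by_parquet_go.parquet"):
    pf = pq.ParquetFile(path)
    print(path)
    print(pf.schema)        # low-level parquet schema: shows the repetition structure of the LIST column
    print(pf.schema_arrow)  # the Arrow schema pyarrow reconstructs from it
```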
oscar6echo commented 1 year ago

I'll add that one (of the many) benefits of polars over pandas is the capability to hold lists and structs in cells. So this parquet issue is not negligible if you use python/polars in hybrid (multi-language) data pipelines.
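For illustration, a small frame with list and struct columns (plain polars API, not tied to this particular file):

```python
import polars as pl

# List and struct values held directly in DataFrame cells.
df = pl.DataFrame(
    {
        "id": [0, 1],
        "array": [[0, 3], [1, 4]],          # list[i64] column
        "meta": [
            {"age": 22, "sex": True},        # struct column
            {"age": 23, "sex": False},
        ],
    }
)
print(df.schema)
```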

oscar6echo commented 1 year ago

In fact, and contrary to what I wrote above, polars and parquet-go produce incompatible parquet formats - for nested fields only.
To be more precise, both produce valid parquet files, but the schemas are inconsistent.

However, the gap does not seem insurmountable.

See parquet-go/issues/468 for the discussion.

@ritchie46, any opinion on the subject ?

tustvold commented 1 year ago

This likely relates to https://github.com/apache/parquet-format/blob/master/LogicalTypes.md#lists, and to readers such as parquet2 not handling the backwards-compatibility rules correctly.

For reference, the logic to handle this in parquet can be found here and here.
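For illustration only, a rough sketch of those backwards-compatibility rules from the spec (this is not the actual reader code; the node representation here is made up):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Minimal, hypothetical representation of a parquet schema node."""
    name: str
    repetition: str                      # "required" | "optional" | "repeated"
    children: list = field(default_factory=list)

    @property
    def is_group(self) -> bool:
        return bool(self.children)

def list_element(list_group: Node) -> Node:
    """Resolve the element type of a LIST-annotated group, following the
    backwards-compatibility rules in parquet-format LogicalTypes.md."""
    repeated = list_group.children[0]
    # Rule 1: repeated field is not a group -> it is the element itself.
    if not repeated.is_group:
        return repeated
    # Rule 2: repeated group with more than one child -> the group is the element.
    if len(repeated.children) > 1:
        return repeated
    # Rule 3: single-child group named "array" or "<list-name>_tuple" -> the group is the element.
    if repeated.name in ("array", f"{list_group.name}_tuple"):
        return repeated
    # Otherwise: standard 3-level layout; the single child is the element.
    return repeated.children[0]
```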

oscar6echo commented 1 year ago

After some trial and error, I found a way to read a polars-produced parquet file in Go, then save it as parquet and load that file back into a polars dataframe.

See https://github.com/oscar6echo/parquet-polars-go

I'll copy my conclusion:

Conclusion:

  • fraugster/parquet-go is the only lib that produces a polars-compatible format for complex types, but it is the slowest and most verbose in doing so
  • xitongsys/parquet-go and segmentio/parquet-go are significantly faster but produce nested types that are not compatible with polars

It seems the parquet format is quite permissive, so different libs generally have little chance of being compatible beyond the most basic types. So it would be good if polars offered some flexibility in the parquet formatting of nested types to improve compatibility with other ecosystems.
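In the meantime, one possible workaround on the Python side (a sketch, not a fix; file names are assumptions, and whether it helps depends on which writer produced the original file) is to round-trip the foreign file through pyarrow, which handles the legacy list layouts, so the nested column is rewritten in the layout polars' native reader expects:

```python
import pyarrow.parquet as pq
import polars as pl

# Read the parquet-go produced file with pyarrow, then rewrite it so the
# nested column uses the standard 3-level LIST encoding before polars reads it.
table = pq.read_table("written_by_parquet_go.parquet")
pq.write_table(table, "normalized.parquet")

df = pl.read_parquet("normalized.parquet")
print(df)
```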