apache / arrow

Apache Arrow is the universal columnar format and multi-language toolbox for fast data interchange and in-memory analytics
https://arrow.apache.org/

[Python] read_row_group fails with Nested data conversions not implemented for chunked array outputs #21526

Open · asfimport opened this issue 5 years ago

asfimport commented 5 years ago

Hey, I'm trying to concatenate two Parquet files, and to avoid reading everything into memory at once I wanted to use read_row_group for my solution, but it fails.
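Roughly what I'm doing, for context (a sketch; the file paths here are made up):

import pyarrow.parquet as pq

# Concatenate input files into one output, one row group at a time,
# so only a single row group is held in memory.
writer = None
for path in ["part-0.parquet", "part-1.parquet"]:  # hypothetical inputs
    reader = pq.ParquetFile(path)
    for ix in range(reader.num_row_groups):
        table = reader.read_row_group(ix)  # <- this is the call that fails
        if writer is None:
            writer = pq.ParquetWriter("combined.parquet", table.schema)
        writer.write_table(table)
if writer is not None:
    writer.close()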


I think it's due to fields like these:

pyarrow.Field<to: list<item: string>>
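For reference, a field of that shape can be constructed directly:

import pyarrow as pa

# A list-of-strings field matching the one above.
field = pa.field("to", pa.list_(pa.string()))
# repr(field) == "pyarrow.Field<to: list<item: string>>"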


But I'm not sure. Is this a duplicate? The issue linked in the code is resolved: https://github.com/apache/arrow/blob/fd0b90a7f7e65fde32af04c4746004a1240914cf/cpp/src/parquet/arrow/reader.cc#L915


The stack trace is:


  File "/data/teftel/teftel-data/teftel_data/parquet_stream.py", line 163, in read_batches
    table = pf.read_row_group(ix, columns=self._columns)
  File "/home/kuba/.local/share/virtualenvs/teftel-o6G5iH_l/lib/python3.6/site-packages/pyarrow/parquet.py", line 186, in read_row_group
    use_threads=use_threads)
  File "pyarrow/_parquet.pyx", line 695, in pyarrow._parquet.ParquetReader.read_row_group
  File "pyarrow/error.pxi", line 89, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs

Reporter: Jakub Okoński

Note: This issue was originally created as ARROW-5030. Please see the migration documentation for further details.

asfimport commented 5 years ago

Wes McKinney / @wesm: I fixed some cases where this occurs in ARROW-4688, but it is still possible to hit this error for very large row groups (> 2GB of string data in a row group). I didn't see a follow-up JIRA for this or ARROW-3762, so we can use this one to track the issue.

https://github.com/apache/arrow/blob/master/cpp/src/parquet/arrow/reader.cc#L915
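One way to check whether a file is in that danger zone is to look at the row-group metadata (a sketch; the path is made up, and total_byte_size is the uncompressed size, which is the relevant number here):

import pyarrow.parquet as pq

# Inspect row-group sizes; row groups with multiple gigabytes of
# (uncompressed) data are the ones that trigger the chunked-output path.
md = pq.ParquetFile("big.parquet").metadata  # hypothetical path
for i in range(md.num_row_groups):
    rg = md.row_group(i)
    print(f"row group {i}: {rg.num_rows} rows, {rg.total_byte_size} bytes")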

asfimport commented 2 years ago

Judah: [~wesm_impala_7e40] I'm also running into this issue. Is this likely to be fixed, or easy to fix? I'd be happy to give it a go, but I'm not really sure where to start.

Maxl94 commented 5 months ago

In case someone wants to load a pandas DataFrame, I want to share my workaround.

For me, installing fastparquet and specifying the engine='fastparquet' argument in the read_parquet function worked.
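That is, something like (the path is just a placeholder):

import pandas as pd

# Workaround: read through fastparquet instead of pyarrow.
# Requires: pip install fastparquet
df = pd.read_parquet("data.parquet", engine="fastparquet")  # hypothetical path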

yuxi-liu-wired commented 2 months ago

> In case someone wants to load a pandas DataFrame, I want to share my workaround.
>
> For me, installing fastparquet and specifying the engine='fastparquet' argument in the read_parquet function worked.

Concurring. If the Parquet file contains a dictionary/list/struct column, then the following

import pandas as pd
df = pd.read_parquet(parquet_path)

throws an error "ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs"

But only if the Parquet file is over 1 GB; if it is under 1 GB, it loads with no problems.
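Given that, one defensive pattern is to try pyarrow first and fall back to fastparquet only when this error is raised (a sketch; read_parquet_nested is just a name I made up):

import pandas as pd
import pyarrow as pa

def read_parquet_nested(path):
    # Try the default (pyarrow) engine first; fall back to fastparquet
    # when pyarrow hits the unimplemented nested/chunked conversion.
    try:
        return pd.read_parquet(path)
    except pa.lib.ArrowNotImplementedError:
        return pd.read_parquet(path, engine="fastparquet")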