[Open] jonimatix opened this issue 1 month ago
You should edit the title to mention that the issue is about reading from cloud storage. Please run this and post the full output, maybe that'll help.

```python
import os
os.environ['RUST_BACKTRACE'] = 'full'  # the Rust env var is RUST_BACKTRACE (not RUST_TRACEBACK)

import polars as pl

parquet_file_name = ...
df = pl.read_parquet(parquet_file_name, columns=col_list, use_pyarrow=False)
```
The team treats all panic errors as bugs, but it's likely that you just need to set storage_options.

See here https://docs.pola.rs/api/python/stable/reference/api/polars.scan_parquet.html#polars-scan-parquet
and here https://docs.rs/object_store/latest/object_store/aws/enum.AmazonS3ConfigKey.html

The environment variables that pyarrow (via fsspec) looks for aren't 100% in sync with what polars (via object_store) looks for, so that's probably how to fix the issue.
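For example, a minimal sketch of passing credentials through storage_options using object_store's key names (the bucket path and region here are placeholders):

```python
import os
import polars as pl

# Placeholder path and region; the key names follow object_store's
# AmazonS3ConfigKey variants linked above.
df = pl.scan_parquet(
    "s3://my-bucket/file.parquet",
    storage_options={
        "aws_access_key_id": os.environ["AWS_ACCESS_KEY_ID"],
        "aws_secret_access_key": os.environ["AWS_SECRET_ACCESS_KEY"],
        "aws_region": "us-east-1",
    },
).collect()
```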
Updated as suggested
As per your suggestion to use storage_options:

```python
# WORKS
df = pl.read_parquet(
    parquet_file_name,
    columns=col_list,
    use_pyarrow=False,
    storage_options={
        "aws_access_key_id": os.environ.get("AWS_ACCESS_KEY_ID"),
        "aws_secret_access_key": os.environ.get("AWS_SECRET_ACCESS_KEY"),
        "aws_region": AWS_REGION,
    },
)

# DOES NOT WORK
df = pl.read_parquet(
    parquet_file_name,
    columns=col_list,
    use_pyarrow=True,
    storage_options={
        "aws_access_key_id": os.environ.get("AWS_ACCESS_KEY_ID"),
        "aws_secret_access_key": os.environ.get("AWS_SECRET_ACCESS_KEY"),
        "aws_region": AWS_REGION,
    },
)
```
The error message is:

```
TypeError: AioSession.__init__() got an unexpected keyword argument 'aws_access_key_id'
```
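For context: with use_pyarrow=True the storage_options are handed to fsspec/s3fs rather than object_store, and s3fs uses its own key names. A sketch of what s3fs-style options might look like, assuming s3fs is the filesystem being constructed here:

```python
# Sketch under the assumption that s3fs receives these options:
# s3fs expects "key"/"secret"/"client_kwargs" rather than the object_store-style
# "aws_access_key_id"/"aws_secret_access_key"/"aws_region", which is why the
# keyword argument above is rejected.
df = pl.read_parquet(
    parquet_file_name,
    columns=col_list,
    use_pyarrow=True,
    storage_options={
        "key": os.environ.get("AWS_ACCESS_KEY_ID"),
        "secret": os.environ.get("AWS_SECRET_ACCESS_KEY"),
        "client_kwargs": {"region_name": AWS_REGION},
    },
)
```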
The new error is because there isn't parity between the key names fsspec expects and the key names object_store expects. If you set AWS_REGION as an env var, does pl.read_parquet(parquet_file_name, columns=col_list, use_pyarrow=False) work? I don't use S3 so I'm just guessing at that.

From here it looks like you should set the environment variable AWS_DEFAULT_REGION to whatever it should be, and then I think pl.read_parquet(parquet_file_name, columns=col_list, use_pyarrow=False) would work.
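A minimal sketch of that suggestion (the region value is a placeholder; this assumes object_store picks up AWS_DEFAULT_REGION from the environment, as described above):

```python
import os

os.environ["AWS_DEFAULT_REGION"] = "us-east-1"  # placeholder region

import polars as pl

df = pl.read_parquet(parquet_file_name, columns=col_list, use_pyarrow=False)
```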
If you want a minimal reproduction of the panic:

```python
import polars as pl

pl.scan_parquet("s3://foobar/file.parquet").collect()
```
Checks
Reproducible example
Log output
Issue description
Running read_parquet with use_pyarrow=False raises a PanicException.

I noticed that the call below works OK when I add use_pyarrow=True, but it seems very slow:
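(The original snippet did not survive extraction; a sketch of the calls being described, reusing parquet_file_name and col_list from above:)

```python
import polars as pl

# Raises PanicException when reading from S3:
df = pl.read_parquet(parquet_file_name, columns=col_list, use_pyarrow=False)

# Works, but is very slow:
df = pl.read_parquet(parquet_file_name, columns=col_list, use_pyarrow=True)
```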
The file that I am reading is stored in S3. The S3 file path (parquet_file_name) has no spaces in it. If I download the parquet file locally and open it from local disk, Polars does not raise any issues.

Also note: before I upgraded polars to 1.2.1, I was using version 0.17 and read_parquet did not raise any issues!
Expected behavior
Dataframe read without errors
Installed versions
--------Version info---------
Polars:              1.2.1
Index type:          UInt32
Platform:            Windows-10-10.0.22621-SP0
Python:              3.11.9 | packaged by Anaconda, Inc. | (main, Apr 19 2024, 16:40:41) [MSC v.1916 64 bit (AMD64)]
----Optional dependencies----
adbc_driver_manager:
cloudpickle:
connectorx:
deltalake:
fastexcel:
fsspec: 2024.6.1
gevent:
great_tables:
hvplot:
matplotlib: 3.9.1
nest_asyncio: 1.6.0
numpy: 2.0.1
openpyxl:
pandas: 2.2.2
pyarrow: 17.0.0
pydantic:
pyiceberg:
sqlalchemy:
torch:
xlsx2csv:
xlsxwriter: