pola-rs / polars

Dataframes powered by a multithreaded, vectorized query engine, written in Rust
https://docs.pola.rs

ComputeError: unable to parse Hive partition value: "TRUE" #16381

Closed: proever closed this 2 months ago

proever commented 3 months ago


Reproducible example

from datetime import datetime
from pathlib import Path

import polars as pl
import pyarrow.parquet as pq

dataset_path = Path("./test_dataset")

# The "TRUE" ticker value is what later trips up the Hive partition parser.
test_df = (
    pl.DataFrame(
        {
            "timestamp": [datetime(2021, 1, 1), datetime(2021, 2, 1)],
            "data": [1, 2],
            "ticker": ["AAPL", "TRUE"],
        }
    )
    .with_columns(pl.col("ticker").cast(pl.Enum(["AAPL", "TRUE"])))
    .with_columns(
        pl.col("timestamp").dt.year().alias("year"), pl.col("timestamp").dt.month().alias("month")
    )
)

# Write a Hive-partitioned dataset via pyarrow.
pq.write_to_dataset(test_df.to_arrow(), dataset_path, partition_cols=["year", "month", "ticker"])

df_in = pl.scan_parquet(dataset_path.rglob("*.parquet"))

# Raises: ComputeError: unable to parse Hive partition value: "TRUE"
df_in.filter(pl.col("ticker") == "AAPL").collect()

Log output

No logs seem to be produced; the error is below:

Traceback (most recent call last):
  File "/home/proever/code/data_pipeline/mve.py", line 26, in <module>
    df_in.filter(pl.col("ticker") == "AAPL").collect()
  File "/home/proever/code/data_pipeline/.venv/lib/python3.12/site-packages/polars/lazyframe/frame.py", line 1816, in collect
    return wrap_df(ldf.collect(callback))
                   ^^^^^^^^^^^^^^^^^^^^^
polars.exceptions.ComputeError: unable to parse Hive partition value: "TRUE"

This error occurred with the following context stack:
        [1] 'parquet scan' failed
        [2] 'filter' input failed to resolve

Issue description

I have a large dataset of stock data that includes ticker names, one of which is "TRUE" (apparently it's the company TrueCar). I am trying to partition the dataset based on, among other things, the ticker. Writing the partitioned dataset works fine, but when I try to load it back in with scan_parquet, Polars fails to parse the "TRUE" partition (see above).

Changing the value "TRUE" to "TEST" (in both the data and the Enum) fixes the issue, while changing it to "FALSE" triggers it again. This leads me to believe the Hive partition parser is attempting to interpret boolean-like values, which causes the error.
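A possible workaround sketch (assuming a Polars version whose scan_parquet supports the hive_schema parameter, which my installed 0.20.27 does not): declare the partition dtypes explicitly so no boolean inference is attempted.

```python
from pathlib import Path

import polars as pl

dataset_path = Path("./test_dataset")

# Sketch only: pin the Hive partition dtypes so the value "TRUE" is kept as a
# string instead of being inferred as a boolean.
df_in = pl.scan_parquet(
    list(dataset_path.rglob("*.parquet")),
    hive_partitioning=True,
    hive_schema={"year": pl.Int32, "month": pl.Int32, "ticker": pl.String},
)
print(df_in.filter(pl.col("ticker") == "AAPL").collect())
```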

Expected behavior

The code above should run without errors, as it does when "TRUE" is replaced with, for example, "TEST".

Installed versions

```
--------Version info---------
Polars:               0.20.27
Index type:           UInt32
Platform:             Linux-6.1.0-20-amd64-x86_64-with-glibc2.36
Python:               3.12.3 (main, Apr 15 2024, 18:25:56) [Clang 17.0.6 ]

----Optional dependencies----
adbc_driver_manager:
cloudpickle:
connectorx:
deltalake:
fastexcel:
fsspec:               2024.5.0
gevent:
hvplot:
matplotlib:
nest_asyncio:         1.6.0
numpy:                1.26.4
openpyxl:
pandas:
pyarrow:              16.1.0
pydantic:             2.7.1
pyiceberg:
pyxlsb:
sqlalchemy:           2.0.30
torch:
xlsx2csv:
xlsxwriter:
```
proever commented 3 months ago

partitioning only on "ticker" causes the issue too, I just checked

proever commented 3 months ago

pq.ParquetDataset(dataset_path).read().to_pandas() works fine as well
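So reading through pyarrow is a usable stop-gap; a minimal sketch of handing the result back to Polars (the pl.from_arrow conversion is my addition, not something verified against the full dataset):

```python
from pathlib import Path

import polars as pl
import pyarrow.parquet as pq

dataset_path = Path("./test_dataset")

# Stop-gap sketch: let pyarrow parse the Hive partition values, then convert
# the Arrow table to a Polars DataFrame. Note this reads eagerly, unlike
# scan_parquet.
table = pq.ParquetDataset(dataset_path).read()
df = pl.from_arrow(table)
print(df.filter(pl.col("ticker") == "AAPL"))
```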

proever commented 2 months ago

this seems fixed in 1.0, at least with the following changes:

from datetime import datetime
from pathlib import Path

import polars as pl

dataset_path = Path("./test_dataset")
dataset_path.mkdir(exist_ok=True)

test_df = (
    pl.DataFrame(
        {
            "timestamp": [datetime(2021, 1, 1), datetime(2022, 2, 1)],
            "data": [1, 2],
            "ticker": ["AAPL", "TRUE"],
        }
    )
    .with_columns(pl.col("ticker").cast(pl.Enum(["AAPL", "TRUE"])))
    .with_columns(
        pl.col("timestamp").dt.year().alias("year"), pl.col("timestamp").dt.month().alias("month")
    )
)

# Write the Hive-partitioned dataset through Polars itself (pyarrow backend)
# instead of calling pyarrow.parquet.write_to_dataset directly.
test_df.write_parquet(
    dataset_path, use_pyarrow=True, pyarrow_options={"partition_cols": ["ticker", "year", "month"]}
)

# Scan the dataset directory with Hive partitioning enabled explicitly.
df_in = pl.scan_parquet(dataset_path, hive_partitioning=True)

# No longer raises the ComputeError in 1.0.
print(df_in.filter(pl.col("ticker") == "AAPL").collect())
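One caveat worth noting (my observation, not something tested in this thread): since Hive partition values live only in the directory names, the ticker column presumably comes back as a plain string rather than the original Enum, so it may need to be recast after collecting. A sketch continuing from df_in above:

```python
# Hypothetical follow-up: restore the Enum dtype on the partition column after
# the Hive scan, since the directory names carry no dtype information.
df_restored = df_in.collect().with_columns(
    pl.col("ticker").cast(pl.Enum(["AAPL", "TRUE"]))
)
print(df_restored.schema)
```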