pandas-dev / pandas

Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more
https://pandas.pydata.org
BSD 3-Clause "New" or "Revised" License

BUG: Empty object column writes to parquet as INT32 instead of BINARY L:STRING #37083

Open raginjason opened 3 years ago

raginjason commented 3 years ago

Code Sample, a copy-pastable example

import pandas as pd

pd.DataFrame({"nullable_col": ["some_val"]}, dtype="object").to_parquet("non-null-obj.parquet")

pd.DataFrame({"nullable_col": []}, dtype="object").to_parquet("null-obj.parquet")

Once that runs, parquet-tools illustrates the issue. I would expect a datatype such as OPTIONAL BINARY L:STRING, as in:

$ parquet-tools meta non-null-obj.parquet | grep nullable_col

extra:        pandas = {"index_columns": [{"kind": "range", "name": null, "start": 0, "stop": 1, "step": 1}], "column_indexes": [{"name": null, "field_name": null, "pandas_type": "unicode", "numpy_type": "object", "metadata": {"encoding": "UTF-8"}}], "columns": [{"name": "nullable_col", "field_name": "nullable_col", "pandas_type": "unicode", "numpy_type": "object", "metadata": null}], "creator": {"library": "pyarrow", "version": "1.0.1"}, "pandas_version": "1.1.3"} 
nullable_col: OPTIONAL BINARY L:STRING R:0 D:1
nullable_col:  BINARY SNAPPY DO:4 FPO:32 SZ:80/76/0.95 VC:1 ENC:PLAIN,PLAIN_DICTIONARY,RLE ST:[min: some_val, max: some_val, num_nulls: 0]

But instead I get a datatype of OPTIONAL INT32, as in:

$ parquet-tools meta null-obj.parquet | grep nullable_col

extra:        pandas = {"index_columns": [{"kind": "range", "name": null, "start": 0, "stop": 0, "step": 1}], "column_indexes": [{"name": null, "field_name": null, "pandas_type": "unicode", "numpy_type": "object", "metadata": {"encoding": "UTF-8"}}], "columns": [{"name": "nullable_col", "field_name": "nullable_col", "pandas_type": "empty", "numpy_type": "object", "metadata": null}], "creator": {"library": "pyarrow", "version": "1.0.1"}, "pandas_version": "1.1.3"} 
nullable_col: OPTIONAL INT32 R:0 D:1
nullable_col:  INT32 SNAPPY DO:4 FPO:0 SZ:15/14/0.93 VC:0 ENC:PLAIN,RLE,PLAIN_DICTIONARY ST:[no stats for this column]

Problem description

Writing out a column of pandas dtype object with no values appears to create a parquet type of INT32, when I would expect it to be BINARY L:STRING or similar. I have a daily process that writes a set of records to parquet, and on days when an object column contains no values, its parquet datatype changes to INT32. This breaks my process, since the schema has changed relative to previous days.

Expected Output

Output of pd.show_versions()

INSTALLED VERSIONS
------------------
commit : db08276bc116c438d3fdee492026f8223584c477
python : 3.7.2.final.0
python-bits : 64
OS : Darwin
OS-release : 19.6.0
Version : Darwin Kernel Version 19.6.0: Thu Jun 18 20:49:00 PDT 2020; root:xnu-6153.141.1~1/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : None
LOCALE : en_US.UTF-8

pandas : 1.1.3
numpy : 1.19.2
pytz : 2020.1
dateutil : 2.8.1
pip : 20.2.3
setuptools : 50.3.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader : None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 1.0.1
pytables : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None
raginjason commented 3 years ago

For other people running into a similar issue, I was able to work around this by mapping the object type columns in question to StringDtype(). Columns of this type appear to map to BINARY L:STRING in Parquet regardless of their contents.

This still seems like a bug to me, though. I understand there may be a need to default types, but I don't see how INT32 is a reasonable default for the catch-all pandas type of object.

jelther commented 3 years ago

This is happening in our processes as well. We have some Decimal values that we map to object, but when we try to read those on Spark 3.0 it breaks our pipelines.

BenjMaq commented 3 years ago

Hey @raginjason, I’m facing the same issue. Using StringDtype it writes correctly as BINARY, however my NULLs are being written as ‘None’ (i.e. the string representation), not a real NULL value. Did you face this as well? Wondering how you dealt with that. Thanks a lot!
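(Not from the original thread, but one likely cause of the literal 'None' strings, sketched here: casting via `astype(str)` stringifies missing values, whereas the nullable `string` dtype keeps them as `<NA>`, which pyarrow writes as real nulls.)

```python
import pandas as pd

s = pd.Series(["a", None], dtype="object")

bad = s.astype(str)        # None becomes the literal string "None"
good = s.astype("string")  # None becomes pd.NA, written as a real null

assert bad.iloc[1] == "None"
assert pd.isna(good.iloc[1])
```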

MaStFAU commented 1 year ago

I can confirm this is still happening using pandas 1.4.2

reyesjx7 commented 2 months ago

Confirming this is still happening in Pandas 2.2.2