Open maxfirman opened 2 months ago
To summarize: given a table created with a `string` type schema and written to with `string` type data, reading the table back returns a pyarrow table with `large_string` type.

Expected: a pyarrow table with `string` type, matching the table schema.

Confirmed the above issue on the main branch.
The issue above was mentioned here: https://github.com/apache/iceberg-python/pull/986#discussion_r1706662170
On read, pyarrow will use large types by default. This is controlled by this table property (courtesy of #986): https://github.com/apache/iceberg-python/blob/9857107561d2267813b7ce150b01b4e6ac4b3e34/pyiceberg/io/pyarrow.py#L1365-L1371
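For reference, a rough paraphrase of what that check does (a sketch, not the verbatim source; the helper name is made up):

```python
from pyiceberg.io import PYARROW_USE_LARGE_TYPES_ON_READ

def _use_large_types(table_properties: dict) -> bool:  # hypothetical helper
    # The scan looks up the table property and falls back to large types
    # when it is absent, which is why a table written with plain `string`
    # still comes back as `large_string`.
    value = table_properties.get(PYARROW_USE_LARGE_TYPES_ON_READ, "true")
    return str(value).lower() == "true"
```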
As a workaround, you can manually set the table property to force the read path to use the `string` type:

```python
from pyiceberg.io import PYARROW_USE_LARGE_TYPES_ON_READ

table = catalog.create_table(
    "default.test_table",
    schema=df.schema,
    # Iceberg table properties are strings, so pass "False" rather than a bool.
    properties={PYARROW_USE_LARGE_TYPES_ON_READ: "False"},
)
```
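If the table already exists, the same property can also be set after the fact through a transaction. A minimal sketch, assuming the same table name as above:

```python
# Set the property on an existing table via a transaction.
table = catalog.load_table("default.test_table")
with table.transaction() as tx:
    tx.set_properties({PYARROW_USE_LARGE_TYPES_ON_READ: "False"})
```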
Thanks @kevinjqliu. I can confirm that the workaround resolves the problem when using the latest main branch, but not v0.7.0 or v0.7.1.
Setting `PYARROW_USE_LARGE_TYPES_ON_READ=False` will cause the test to fail the other way around, i.e. a pyarrow table with a `large_string` column will be read back with a `string` column. I'm guessing this is just a fundamental limitation, in that Iceberg only has one string type.
I would be tempted to change the default value of `PYARROW_USE_LARGE_TYPES_ON_READ` to `False`, as I would consider pyarrow `string` to be the more commonly used type compared to `large_string`. This would also give backwards compatibility with pyiceberg <0.7.0.
A further improvement would be to write some kind of type hint into the Iceberg metadata that would tell pyiceberg whether the string column was supposed to be interpreted as a pyarrow `large_string`.
> I'm guessing this is just a fundamental limitation, in that Iceberg only has one string type.
Yea, there's a separation between the Iceberg type and the Arrow/Parquet/on-disk type. Iceberg has one string type; Arrow has two. On Iceberg table write, an Iceberg string type can be written to disk as either the Arrow large type or the normal type. On Iceberg table read, the Iceberg string type should read back as either the Arrow large type or the normal type, based on the on-disk schema.
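To make the "Arrow has two" point concrete, a tiny illustration in plain pyarrow (nothing pyiceberg-specific):

```python
import pyarrow as pa

# Arrow distinguishes 32-bit-offset strings from 64-bit-offset strings;
# Iceberg's single string type can be backed by either on disk.
assert pa.string() != pa.large_string()
print(pa.string())        # string
print(pa.large_string())  # large_string
```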
The problem here is that `PYARROW_USE_LARGE_TYPES_ON_READ` defaults to `True`, so in the scenario where an Iceberg string type is written as the normal string type on disk, it will read back as the large string type.
Perhaps, instead of setting `PYARROW_USE_LARGE_TYPES_ON_READ` to `True`, we can leave it unset by default, which will then use the on-disk representation.
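In other words, the proposal is a tri-state default. A minimal sketch of that behavior (hypothetical helper, not actual pyiceberg code):

```python
from typing import Optional

import pyarrow as pa

def resolve_string_type(on_disk_type: pa.DataType, use_large_types: Optional[bool]) -> pa.DataType:
    """Hypothetical tri-state resolution for the proposal above."""
    if use_large_types is None:
        # Property unset: keep whatever representation is on disk.
        return on_disk_type
    # Property explicitly set: force large or normal strings accordingly.
    return pa.large_string() if use_large_types else pa.string()
```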
cc @sungwy / @Fokko

I think #929 will help resolve this issue (based on this comment: https://github.com/apache/iceberg-python/pull/929#discussion_r1684298458)
Apache Iceberg version
0.7.0
Please describe the bug 🐞
There is a regression introduced in version 0.7.0 where arrow tables written with a `string` data type are cast to `large_string` when read back from Iceberg.
The code below reproduces the bug. The assertion succeeds in v0.6.1 but fails in v0.7.0 because the schema is changed from `string` to `large_string`.
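The original reproduction snippet isn't included in this excerpt; a minimal version along these lines should show the failure (assuming a configured catalog named "default"):

```python
import pyarrow as pa
from pyiceberg.catalog import load_catalog

catalog = load_catalog("default")  # assumes a catalog is already configured

df = pa.table({"col": pa.array(["a", "b"], type=pa.string())})

table = catalog.create_table("default.test_table", schema=df.schema)
table.append(df)

result = table.scan().to_arrow()
# Passes on v0.6.1; fails on v0.7.0 because "col" is read back as large_string.
assert result.schema == df.schema
```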