apache/iceberg-python

Regression in 0.7.0 due to type coercion from "string" to "large_string" #1128

Open maxfirman opened 2 months ago

maxfirman commented 2 months ago

Apache Iceberg version

0.7.0

Please describe the bug 🐞

There is a regression introduced in version 0.7.0 where arrow tables written with a "string" data type get cast to "large_string" when read back from Iceberg.

The code below reproduces the bug. The assertion succeeds in v0.6.1 but fails in 0.7.0 because the schema changes from "string" to "large_string".

from tempfile import TemporaryDirectory

import pyarrow
from pyiceberg.catalog.sql import SqlCatalog

def main():
    with TemporaryDirectory() as warehouse_path:
        catalog = SqlCatalog(
            "default",
            **{
                "uri": f"sqlite:///{warehouse_path}/pyiceberg_catalog.db",
                "warehouse": f"file://{warehouse_path}",
            },
        )

        catalog.create_namespace("default")

        schema = pyarrow.schema(
            [
                pyarrow.field("foo", pyarrow.string(), nullable=True),
            ]
        )

        df = pyarrow.table(data={"foo": ["bar"]}, schema=schema)

        table = catalog.create_table(
            "default.test_table",
            schema=df.schema,
        )

        table.append(df)

        # read the arrow table back from iceberg
        df2 = table.scan().to_arrow()

        # this assert succeeds with 0.6.1, but fails with 0.7.0 because the column type
        # has changed from "string" to "large_string"
        assert df.equals(df2)

if __name__ == "__main__":
    main()
kevinjqliu commented 2 months ago

To summarize: given a table created with a string type schema and written to with string type data, reading the table back returns a pyarrow table with the large_string type.

Expected: a pyarrow table with the string type, matching the table schema.

Confirmed the above issue on the main branch.

kevinjqliu commented 2 months ago

The issue above was mentioned here: https://github.com/apache/iceberg-python/pull/986#discussion_r1706662170

On read, pyarrow uses the large type by default. This is controlled by this table property (courtesy of #986): https://github.com/apache/iceberg-python/blob/9857107561d2267813b7ce150b01b4e6ac4b3e34/pyiceberg/io/pyarrow.py#L1365-L1371

kevinjqliu commented 2 months ago

As a workaround, you can manually set the table property to force the read path to use the string type:

from pyiceberg.io import PYARROW_USE_LARGE_TYPES_ON_READ

table = catalog.create_table(
    "default.test_table",
    schema=df.schema,
    properties={PYARROW_USE_LARGE_TYPES_ON_READ: False},
)
maxfirman commented 2 months ago

Thanks @kevinjqliu. I can confirm that the workaround resolves the problem when using the latest main branch, but not v0.7.0 or v0.7.1.

Setting PYARROW_USE_LARGE_TYPES_ON_READ=False will cause the test to fail the other way around, i.e. a pyarrow table with a large_string will be read back with a string. I'm guessing this is just a fundamental limitation in that Iceberg only has one string type.

I would be tempted to change the default value of PYARROW_USE_LARGE_TYPES_ON_READ to False, as I would consider pyarrow string to be the more commonly used type compared to large_string. This would also give backwards compatibility with pyiceberg <0.7.0.

A further improvement would be to write some kind of type hint into the Iceberg metadata that would tell pyiceberg whether a string column should be interpreted as a pyarrow large_string.
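Until something like that exists, a reader-side coercion works regardless of how the property is set. A minimal sketch using pyarrow's Table.cast, reusing the schema, table, and df names from the reproduction above:

# Coerce the scanned table back to the schema the writer used; casting
# between string and large_string is lossless.
df2 = table.scan().to_arrow().cast(schema)
assert df.equals(df2)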

kevinjqliu commented 2 months ago

I'm guessing this is just a fundamental limitation in that Iceberg only has one string type.

Yea, there's a separation between the Iceberg type and the Arrow/Parquet/on-disk type. Iceberg has one string type; Arrow has two. On Iceberg table write, an Iceberg string type can be written to disk as either the Arrow large type or the normal type. On Iceberg table read, the Iceberg string type should read back as either the Arrow large type or the normal type, based on the on-disk schema.
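
To make the distinction concrete, a small pyarrow-only illustration (no pyiceberg involved):

import pyarrow as pa

# Same logical data, two Arrow physical types: Table.equals fails on the
# schema difference even though the values round-trip losslessly.
a = pa.table({"foo": pa.array(["bar"], type=pa.string())})
b = pa.table({"foo": pa.array(["bar"], type=pa.large_string())})
assert not a.equals(b)             # schemas differ
assert a.cast(b.schema).equals(b)  # values match after a cast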

The problem here is that PYARROW_USE_LARGE_TYPES_ON_READ defaults to True, which means that when an Iceberg string type is written as the normal string type on disk, it will still read back as the large string type.

Perhaps, instead of defaulting PYARROW_USE_LARGE_TYPES_ON_READ to True, we can leave it unset by default, which will then use the on-disk representation.
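
A sketch of what that tri-state default could look like (the helper name and property lookup below are illustrative, not pyiceberg's actual code):

import pyarrow as pa

# Illustrative only: if the property is unset, fall back to whichever
# string type the on-disk (Parquet/Arrow) schema already uses.
def resolve_string_type(properties: dict, on_disk_type: pa.DataType) -> pa.DataType:
    raw = properties.get("pyarrow.use-large-types-on-read")
    if raw is None:
        return on_disk_type  # unset: respect the file schema
    return pa.large_string() if str(raw).lower() == "true" else pa.string()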

cc @sungwy / @Fokko. I think #929 will help resolve this issue (based on this comment: https://github.com/apache/iceberg-python/pull/929#discussion_r1684298458).