Open · ElenaKhaustova opened this issue 1 month ago
This is slightly unscientific, but I trust the vibes in the industry enough to say Iceberg will clearly be the winner in the long term.
Plus people saying things like this: https://www.linkedin.com/posts/michaelrosam_the-five-phases-of-a-successful-ai-data-strategy-activity-7252579389664587776-TgMo?utm_source=share&utm_medium=member_desktop
In my opinion this is a situation where we should really go all in on the technology rather than be super agnostic / one-size-fits-all. I'd love a future for Kedro where, without much configuration, persisted data defaults to this model.
@datajoely I actually took a stab at this a while ago. My experience is that Delta has more mature support than Iceberg in the Python ecosystem at the moment; for example, the integration of Ibis with Iceberg is suboptimal. So from there I think Delta is going to have better performance for anything database related; AFAIK with Iceberg it always loads things into memory first.
One thing to note is that this kind of "versioning" is not as effective as we'd want. For example, an incremental change of adding 1 row will result in a complete rewrite in the current Kedro dataset with Delta as well. For high-level versioning, it works very well with dataframe/table formats.
The main challenge I see here is how to unify "versioning" in Kedro: Kedro uses a customisable timestamp, while Delta uses an incremental version number (0, 1, ...) or a timestamp. Iceberg probably uses something similar, but I haven't checked.
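To make the contrast concrete, here is a minimal sketch of the two identifier schemes, assuming the `deltalake` (delta-rs) bindings and Kedro's `generate_timestamp` helper; the table path is hypothetical:

```python
# Sketch contrasting Kedro's and Delta's version identifiers.
# Assumes the `deltalake` (delta-rs) and `kedro` packages; the path is made up.
from deltalake import DeltaTable
from kedro.io.core import generate_timestamp

# Kedro: versions are sortable timestamp strings, e.g. "2024-11-18T10.30.00.000Z"
kedro_version = generate_timestamp()

# Delta: versions are monotonically increasing integers (0, 1, 2, ...),
# and time travel means asking for a specific version number
dt = DeltaTable("data/03_primary/orders", version=0)

print(kedro_version, dt.version())
```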
Delta is 100% more mature, Iceberg is the horse to back.
This is the thread I was trying to find earlier:
https://x.com/sean_lynch/status/1845500735842390276
I also don't think we should be wedded to that timestamp decision. It was made a long time ago and also has a non-trivial risk of collision. If we were doing that again we'd be better off using a ULID...
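For example, a quick sketch with the `python-ulid` package (other ULID libraries have slightly different APIs):

```python
# Hedged sketch: generating a ULID version identifier with `python-ulid`.
from ulid import ULID

version = str(ULID())  # 26-char string, e.g. "01JD0Z3F7Q8K9T2M4N6P8R0S1V"

# Like Kedro's timestamps, ULIDs sort lexicographically by creation time,
# but the 80 random bits make collisions between concurrent runs
# practically impossible.
print(version, ULID().datetime)  # the embedded timestamp is recoverable
```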
^ To be more specific, I was referring mainly to the Python bindings, i.e. PyIceberg and delta-rs (Python). Iceberg itself is fairly mature, especially with the catalog etc., but the Python bindings seem to be lagging behind a bit.
Any chance I can take this ticket or work together on this? I explored this a little bit a while ago and it would be a great opportunity to continue with it.
@merelcht @ankatiyar
I agree with @datajoely that Iceberg is the horse to back, at least from an API perspective. PyIceberg is maturing (it has moved significantly in the past couple of years).
Realistically, I don't think Kedro should dictate whether you use Iceberg or Delta (or Hudi); that is a user choice, just like whether to use Spark or Polars. This is where unified APIs will ideally make implementation easier.
So I'm actually being bullish and saying we should pick one of these when it comes to our idea of versioned data. We simply don't have the capacity to integrate everywhere properly.
Super cool application of these concepts
> Realistically, I don't think Kedro should dictate whether you use Iceberg or Delta (or Hudi); that is a user choice, just like whether to use Spark or Polars.
I'm with @deepyaman on this one. There should be a layer in Kedro that is format-agnostic. We can be more opinionated in a higher layer.
What's clear though is that Apache Iceberg's REST catalog API has won for sure: https://github.com/kedro-org/kedro-devrel/issues/141#issuecomment-2264794234
> I'm with @deepyaman on this one. There should be a layer in Kedro that is format-agnostic. We can be more opinionated in a higher layer.
I just want to warn against the noble pursuit of generalisation when there are times to pick a winner; I'd much rather pick a horse and back it well.
@ElenaKhaustova I have left some questions at the end since it's not a PR yet.
https://noklam.github.io/blog/posts/pyiceberg/2024-11-18-PyIcebergDataset.html
# Questions
- What does it mean when we say "if we can use Iceberg to map a single version number to code, parameters, and I/O data within Kedro and how it aligns with Kedro’s workflow"? Versioning code & parameters sounds more like versioning artifacts.
- How do we efficiently version data? `overwrite` is a complete rewrite. For SQL engines this is implemented by the engine, which utilises APIs like `append` and `replace`. With pandas/polars it is unclear whether this is possible. (It may be possible with something like `ibis`; see the sketch after this list.)
- Incremental pipeline (and incremental data)
- Versioning non-table types, i.e. parameters, code(?). Iceberg supports only these three formats out of the box: Apache Parquet, Apache ORC, and Apache Avro. Parquet is the first-class citizen and the only format people use in practice.
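On the second question, here is a minimal sketch of the `append` vs `overwrite` distinction in PyIceberg; the catalog and table names are hypothetical and assume an already-configured catalog:

```python
# Minimal sketch of append vs overwrite in PyIceberg.
# Assumes a catalog configured in ~/.pyiceberg.yaml; names are hypothetical.
import pyarrow as pa
from pyiceberg.catalog import load_catalog

catalog = load_catalog("default")
table = catalog.load_table("demo.orders")

new_rows = pa.table({"id": [101], "amount": [9.99]})

table.append(new_rows)       # incremental: only new data files are written
# table.overwrite(new_rows)  # full rewrite: replaces the table contents
```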
From the versioning research (https://miro.com/app/board/uXjVK9U8mVo=/?share_link_id=24726044039) pain points and summary, we concluded that users mention two major problems: versioning and experiment tracking. At first, we decided to focus on versioning. There, the main user pain point was not versioning a specific artifact, which current Kedro versioning already allows (though not in an optimal way), but being able to retrieve a whole experiment/run. That means being able to travel back in time with your code and data and check out a specific version of the whole Kedro project, not just of an individual artifact.
Please see Kedro + DVC example for better understanding: https://github.com/kedro-org/kedro/issues/4239#issuecomment-2479038489
It's clear we can easily version artifacts (tabular data), but what about versioning catalogs/projects—more high-level entities and non-tabular data?
So the main questions are:

- Does it make sense to use it for anything else but versioning tabular data?
- Can we map artifact snapshots with the git commit hash, for example, to retrieve the full project state, or is there any other mechanism for that?

My view:

> Does it make sense to use it for anything else but versioning tabular data?

I'm willing to bet >95% of use cases fall into this.

> Can we map artifact snapshots with the git commit hash, for example, to retrieve the full project state, or is there any other mechanism for that?

Now I've seen how elegant the DVC integration can be; maybe that's the right paradigm?
> It's clear we can easily version artifacts (tabular data), but what about versioning catalogs/projects—more high-level entities and non-tabular data?
>
> So the main questions are:

> Does it make sense to use it for anything else but versioning tabular data?

My short answer is no.

> Can we map artifact snapshots with the git commit hash, for example, to retrieve the full project state, or is there any other mechanism for that?

You can create "branches" and "tags" with an Iceberg table, but again, that's for tabular data only: https://py.iceberg.apache.org/api/#snapshot-management
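A short sketch of what that looks like with the snapshot-management API linked above; the table and tag names are hypothetical:

```python
# Sketch: tagging the current Iceberg snapshot, e.g. with a Kedro run id.
# Assumes a configured catalog; table/tag names are hypothetical.
from pyiceberg.catalog import load_catalog

table = load_catalog("default").load_table("demo.orders")
snapshot_id = table.current_snapshot().snapshot_id

# A tag (or branch) could carry a run id or git commit hash, but it only
# versions this one table, not the whole project.
table.manage_snapshots().create_tag(snapshot_id, "run-2024-11-18").commit()
```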
Many experiment tracking tools start with metric tracking + a git hash, then slowly add data versioning & lineage as well. In general, a fully reproducible experiment needs all three together: code, parameters, and data versions (whether or not this is important is a different story).

With Iceberg, the metadata needs to be handled externally, e.g. a SQLite DB that keeps track of each run's git hash + the load versions of all the data (or their snapshot_id from the Iceberg table). So when the user needs to do time travel, they specify the load version like this:
```sql
SELECT * FROM your_table_name TIMESTAMP AS OF 'YYYY-MM-DD HH:MM:SS'; -- can be a snapshot id as well
```
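To sketch what that external metadata store could look like (the schema and names below are illustrative, not an agreed design):

```python
# Hedged sketch of the external run-metadata store described above:
# a SQLite table mapping each run to a git hash and per-dataset snapshot ids.
import sqlite3

conn = sqlite3.connect("runs.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS runs (
           run_id      TEXT,
           git_hash    TEXT,
           dataset     TEXT,
           snapshot_id INTEGER,  -- Iceberg snapshot id (or Kedro load_version)
           PRIMARY KEY (run_id, dataset)
       )"""
)
conn.execute(
    "INSERT OR REPLACE INTO runs VALUES (?, ?, ?, ?)",
    ("2024-11-18T10.30.00", "a1b2c3d", "demo.orders", 123456789),
)
conn.commit()

# Time travel: look up the snapshot ids recorded for a run, then read each
# Iceberg table AS OF that snapshot (e.g. via the SQL above).
for dataset, snapshot_id in conn.execute(
    "SELECT dataset, snapshot_id FROM runs WHERE run_id = ?",
    ("2024-11-18T10.30.00",),
):
    print(dataset, snapshot_id)
```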
# Description
At the current stage, by versioning we mean mapping a single version number to the corresponding versions of parameters, I/O data, and code, so that one is able to retrieve the full project state, including data, at any point in time.
The goal is to check if we can use Iceberg to map a single version number to code, parameters, and I/O data within Kedro and how it aligns with Kedro’s workflow.
As a result, we expect a working example of a Kedro project used with Iceberg for versioning, and some assumptions on:
# Context
https://github.com/kedro-org/kedro/issues/4199
# Market research