Fokko opened this issue 2 months ago
@Fokko thanks for your effort here
@marvinlanhenke No problem, thank you for all the work on the project. While compiling this I realized how much work has been done 🚀
Thanks for putting this together @Fokko! It's great to have this clarity on where we're heading. Let's go! 🙌
Hi @Fokko, about the read projection part: currently we can convert Parquet files into Arrow streams, but there are some limitations: it only supports primitive types, and schema evolution is not supported yet. Our discussion is in this issue: https://github.com/apache/iceberg-rust/issues/244 And here is the first step of projection by @viirya: https://github.com/apache/iceberg-rust/pull/245
About the Glue, Hive, and REST catalogs, I think we already have integrations:
https://github.com/apache/iceberg-rust/blob/2018ffc87625bdff939aac791784d8eabc4eda38/crates/catalog/glue/tests/glue_catalog_test.rs
https://github.com/apache/iceberg-rust/blob/ffd76eb41594416b366a17cdbc85112c68c01a17/crates/catalog/hms/tests/hms_catalog_test.rs
https://github.com/apache/iceberg-rust/blob/d6703df40b24477d0a5a36939746bb1b36cc6933/crates/catalog/rest/tests/rest_catalog_test.rs
Also, as we discussed in this doc, would you mind adding the DataFusion integration, Python binding, and WASM binding to the future topics?
> Hi @Fokko, about the read projection part: currently we can convert Parquet files into Arrow streams, but there are some limitations: it only supports primitive types, and schema evolution is not supported yet. Our discussion is in this issue: https://github.com/apache/iceberg-rust/issues/244 And here is the first step of projection by @viirya: https://github.com/apache/iceberg-rust/pull/245
Thanks for the context, I've just added this to the list.
> About the Glue, Hive, and REST catalogs, I think we already have integrations:
Ah yes, I forgot to check those marks, thanks!
> Also, as we discussed in this doc, would you mind adding the DataFusion integration, Python binding, and WASM binding to the future topics?
Certainly! Great suggestions! I'm less familiar with some of these topics (like DataFusion), so feel free to edit the post if you feel something is missing.
> Certainly! Great suggestions! I'm less familiar with some of these topics (like DataFusion), so feel free to edit the post if you feel something is missing.
...for DataFusion I have provided a basic design proposal and an implementation of some of the DataFusion traits, like the catalog & schema providers. Perhaps we can also move forward on this: #324
> Certainly! Great suggestions! I'm less familiar with some of these topics (like DataFusion), so feel free to edit the post if you feel something is missing.
>
> ...for DataFusion I have provided a basic design proposal and an implementation of some of the DataFusion traits, like the catalog & schema providers. Perhaps we can also move forward on this: #324
Yeah, I'll take a look and review it later.
Iceberg-rust 0.3.0
The main objective of 0.3.0 is to have a working read path (non-exhaustive list :)
- `field_summary`: Skipping data on the highest level by pruning away manifests in the `TableScan`; in flight by @sdd in https://github.com/apache/iceberg-rust/pull/323
- #102: `partition` struct: applies the `partition-spec` schema to the `partition` struct and evaluates it in the `TableScan`.
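The field-summary pruning mentioned above can be sketched roughly like this. All types here are simplified, illustrative stand-ins, not the actual iceberg-rust API: a manifest-list entry carries lower/upper bounds per partition field, and a scan can discard manifests whose bounds cannot match the predicate.

```rust
// Illustrative sketch of manifest pruning via field summaries.
// These structs are NOT the iceberg-rust types; they only mirror the idea.

#[derive(Debug)]
struct FieldSummary {
    lower_bound: i64, // min value of a partition field across the manifest
    upper_bound: i64, // max value of the same field
}

#[derive(Debug)]
struct ManifestEntry {
    path: String,
    summary: FieldSummary,
}

/// Keep only manifests whose [lower, upper] range can contain `value`.
fn prune_manifests(manifests: Vec<ManifestEntry>, value: i64) -> Vec<ManifestEntry> {
    manifests
        .into_iter()
        .filter(|m| m.summary.lower_bound <= value && value <= m.summary.upper_bound)
        .collect()
}

fn main() {
    let manifests = vec![
        ManifestEntry {
            path: "m1.avro".into(),
            summary: FieldSummary { lower_bound: 0, upper_bound: 9 },
        },
        ManifestEntry {
            path: "m2.avro".into(),
            summary: FieldSummary { lower_bound: 10, upper_bound: 19 },
        },
    ];
    // A scan for value 12 only needs the second manifest.
    let kept = prune_manifests(manifests, 12);
    assert_eq!(kept.len(), 1);
    assert_eq!(kept[0].path, "m2.avro");
}
```

The payoff is that whole manifests (and every data file they reference) are skipped before any file-level statistics are read.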
Blocking issues:
Nice to have (related to the query plan optimizations above):
- `DELETE`: pruning away manifests that contain unrelated delete files.

State of catalog integration:
For the release after that, I think the commit path is going to be important.
Iceberg-rust 0.4.0 and beyond
Nice to have for the 0.3.0 release, but not required. Of course, open for debate.
Commit path
The commit path entails writing a new metadata JSON file and pointing the catalog at it as the table's current metadata.
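A hedged sketch of that flow, with simplified stand-in types rather than the real Iceberg metadata schema: a commit clones the base metadata, adds the new snapshot, and writes it under the next version; in a real catalog the pointer swap must be a compare-and-swap so concurrent commits are detected.

```rust
// Hedged sketch of the commit path: every commit produces a new,
// immutable metadata JSON and advances the table's version pointer.
// The structs and JSON shape here are simplified, not the Iceberg spec.

#[derive(Clone, Debug)]
struct TableMetadata {
    format_version: u8,
    snapshot_ids: Vec<i64>,
}

impl TableMetadata {
    /// Hand-rolled serialization to keep the sketch dependency-free
    /// (real code would serialize the full Iceberg metadata schema).
    fn to_json(&self) -> String {
        let ids: Vec<String> = self.snapshot_ids.iter().map(|id| id.to_string()).collect();
        format!(
            "{{\"format-version\":{},\"snapshots\":[{}]}}",
            self.format_version,
            ids.join(",")
        )
    }
}

/// Commit = build new metadata, write it under the next version,
/// then atomically swap the catalog pointer (simulated here).
fn commit(current_version: u64, base: &TableMetadata, new_snapshot_id: i64) -> (u64, String) {
    let mut next = base.clone();
    next.snapshot_ids.push(new_snapshot_id);
    let next_version = current_version + 1;
    let path = format!("metadata/v{}.metadata.json", next_version);
    let _json = next.to_json(); // would be written to `path` before the swap
    (next_version, path)
}

fn main() {
    let base = TableMetadata { format_version: 2, snapshot_ids: vec![1] };
    let (version, path) = commit(1, &base, 2);
    assert_eq!(version, 2);
    assert_eq!(path, "metadata/v2.metadata.json");
}
```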
Metadata tables
Metadata tables are used to inspect the table. Having these tables also allows easy implementation of the maintenance procedures since you can easily list all the snapshots, and expire the ones that are older than a certain threshold.
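As a rough illustration of the expiration use case, assuming a listing like the `snapshots` metadata table is available (types and field names below are illustrative, not the iceberg-rust API):

```rust
// Hedged sketch: given a listing of snapshots (as a "snapshots" metadata
// table would provide), expiring old ones is a simple filter.
// The struct is a simplified stand-in, not the iceberg-rust API.

#[derive(Debug, PartialEq)]
struct Snapshot {
    snapshot_id: i64,
    timestamp_ms: i64,
}

/// Return the ids of snapshots older than `keep_after_ms`, always
/// retaining the newest snapshot so the table stays readable.
fn expired_snapshots(snapshots: &[Snapshot], keep_after_ms: i64) -> Vec<i64> {
    let newest = snapshots.iter().map(|s| s.timestamp_ms).max();
    snapshots
        .iter()
        .filter(|s| s.timestamp_ms < keep_after_ms && Some(s.timestamp_ms) != newest)
        .map(|s| s.snapshot_id)
        .collect()
}

fn main() {
    let snaps = vec![
        Snapshot { snapshot_id: 1, timestamp_ms: 100 },
        Snapshot { snapshot_id: 2, timestamp_ms: 200 },
        Snapshot { snapshot_id: 3, timestamp_ms: 300 },
    ];
    // Expire everything before t=250; the newest snapshot is kept.
    assert_eq!(expired_snapshots(&snaps, 250), vec![1, 2]);
}
```

A real maintenance procedure would then also delete the data and manifest files that are no longer reachable from any retained snapshot.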
Write support
Most of the work in write support is around generating the correct Iceberg metadata. Some decisions can be made, for example first supporting only FastAppends, and only V2 metadata.
It is common to have multiple snapshots in a single commit to the catalog. For example, an overwrite operation of a partition can be a delete + append operation. This makes the implementation easier, since you can separate the problems and tackle them one by one. It also helps the roadmap, since these operations can be developed in parallel.
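The decomposition can be sketched like this, with illustrative types that are not the iceberg-rust API: an overwrite of a partition is expressed as a delete snapshot followed by an append snapshot, and the pair is committed to the catalog as one atomic transaction.

```rust
// Hedged sketch: an overwrite decomposes into delete + append snapshots
// that go to the catalog in one commit. Types are illustrative only.

#[derive(Debug, Clone, Copy, PartialEq)]
enum Operation {
    Delete,
    Append,
}

#[derive(Debug)]
struct Snapshot {
    operation: Operation,
    files: Vec<String>,
}

/// Overwriting a partition = delete its old files, append the new ones.
/// Both snapshots are then committed together as one transaction.
fn overwrite(old_files: Vec<String>, new_files: Vec<String>) -> Vec<Snapshot> {
    vec![
        Snapshot { operation: Operation::Delete, files: old_files },
        Snapshot { operation: Operation::Append, files: new_files },
    ]
}

fn main() {
    let txn = overwrite(
        vec!["part=1/old.parquet".into()],
        vec!["part=1/new.parquet".into()],
    );
    assert_eq!(txn.len(), 2);
    assert_eq!(txn[0].operation, Operation::Delete);
    assert_eq!(txn[1].operation, Operation::Append);
}
```

Keeping delete and append as separate snapshot-producing operations is what lets them be implemented and reviewed independently.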
Future topics
Contribute
If you want to contribute to the upcoming milestone, feel free to comment on this issue. If there is anything unclear or missing, feel free to reach out here as well 👍