apache / pinot

Apache Pinot - A realtime distributed OLAP datastore
https://pinot.apache.org/
Apache License 2.0

Request: Flink connector enhancements #12448

Open davecromberge opened 9 months ago

davecromberge commented 9 months ago

What needs to be done?

Nice to haves:

Other questions:

Why the feature is needed

Our particular use case involves pre-aggregating data with Apache DataSketches before ingestion into Pinot. The sketches are serialized as binary values that can be on the order of megabytes, and are appended to a Delta Lake table. The idea is to stream records continuously from the Delta Lake using the Flink Delta Connector, with fine-grained control over Pinot segment generation, and to upload those segments directly to Pinot. Our Pinot controllers are secured using Basic Authentication.
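A minimal sketch of the direct-upload step against a Basic-Auth-secured controller, assuming the controller's `/v2/segments` endpoint and hypothetical `admin`/`secret` credentials. Pinot's own tooling (`FileUploadDownloadClient`) sends a multipart request; this simplified body only illustrates where the Authorization header is wired in:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

public class BasicAuthSegmentPush {

    // Builds an RFC 7617 Basic Authentication header value.
    static String basicAuthHeader(String user, String password) {
        String token = Base64.getEncoder().encodeToString(
                (user + ":" + password).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }

    // Builds a POST of the segment tarball bytes to the controller.
    // Note: Pinot's real client uses multipart/form-data; this simplified
    // body is just to show the auth wiring, not the exact wire format.
    static HttpRequest buildUploadRequest(URI controller, byte[] segmentTarGz,
                                          String authHeader) {
        return HttpRequest.newBuilder(controller.resolve("/v2/segments"))
                .header("Authorization", authHeader)
                .header("Content-Type", "application/octet-stream")
                .POST(HttpRequest.BodyPublishers.ofByteArray(segmentTarGz))
                .build();
    }

    public static void main(String[] args) throws Exception {
        if (args.length == 0) {  // no tarball given: just demo the header
            System.out.println(basicAuthHeader("admin", "secret"));
            return;
        }
        byte[] segment = Files.readAllBytes(Path.of(args[0]));
        HttpRequest request = buildUploadRequest(
                URI.create("http://pinot-controller:9000"), segment,
                basicAuthHeader("admin", "secret"));
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Controller responded: " + response.statusCode());
    }
}
```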

It is possible to clone and modify the existing connector, but some of these enhancements might benefit other users, so discussing them here seems better.

Initial idea/proposal: discuss the points above and collaborate on an implementation.

snleee commented 9 months ago

@davecromberge Does the Flink Delta Connector emit a CDC (change data capture) stream? How is it going to handle record updates or deletes? In other words, what would be the strategy to sync data between the Delta Lake and Pinot using the Flink Delta Connector?

In order to sync with the Delta Lake using the Flink Delta Connector, there are two options:

  1. Full refresh: for each new version of the Delta table, re-ingest the entire dataset.
  2. Incremental updates using a Pinot upsert table: if the Flink Delta Connector can emit a CDC-like stream, we can ingest the data with upsert mode enabled.
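For option 2, a sketch of the relevant portion of a REALTIME table config with upsert enabled (table name is hypothetical; upsert additionally requires `primaryKeyColumns` in the table schema, and full-upsert tables need `strictReplicaGroup` routing):

```json
{
  "tableName": "sketches",
  "tableType": "REALTIME",
  "routing": {
    "instanceSelectorType": "strictReplicaGroup"
  },
  "upsertConfig": {
    "mode": "FULL"
  }
}
```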

davecromberge commented 9 months ago

@snleee we do not plan to support CDC directly through this interface. This issue is more about using Flink to build segments and upload them directly to the controller, with the intention of giving the user more control over the ingestion process.

While a Delta Lake might support the operations you mention, our connector currently supports only INSERT/append. In some sense we are tackling this problem in increments: eventually we will have to consider Delta Lake semantics and synchronise state between Pinot and the Delta Lake by one of the two methods you describe. However, that is out of scope for this issue, which is narrowed to Flink and Pinot.

It is entirely possible that this concern could fall away if there were enough segment / input-file lineage to make the sync a reality; that could even be done via a Spark job which optimises the Delta Lake and applies mutations. For our use case, having the building blocks in place allows us to replay the INSERT operations from the Delta Lake, from a given checkpoint/version, into Pinot should we need to rebuild a table from scratch.

sullis commented 7 months ago

This PR bumps the Flink version to 1.19.0:

https://github.com/apache/pinot/pull/12659

rohityadav1993 commented 6 months ago

There are a few more enhancements that can be considered:

  1. [Modernization] Refactor the Pinot sink onto the unified Flink connector API (ref)
  2. [Optimization] Upload segments via URI (e.g. an HDFS URI) instead of streaming segment bytes through the controller
  3. [New feature] External partition support for upsert realtime tables; see #13107 for more details
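For the URI-based upload idea, a hedged sketch of a metadata-only push, where the controller is given a deep-store location to pull from rather than the segment bytes. The `UPLOAD_TYPE`/`DOWNLOAD_URI` header names follow the conventions of Pinot's `FileUploadDownloadClient`, but should be verified against your Pinot release; host and paths are hypothetical:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class UriSegmentPush {

    // Builds a POST that carries only a pointer to the segment in deep store.
    // Header names are assumptions based on Pinot's FileUploadDownloadClient;
    // check them against the Pinot version you run.
    static HttpRequest buildUriUploadRequest(URI controller, String downloadUri) {
        return HttpRequest.newBuilder(controller.resolve("/v2/segments"))
                .header("UPLOAD_TYPE", "URI")
                .header("DOWNLOAD_URI", downloadUri)
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = buildUriUploadRequest(
                URI.create("http://pinot-controller:9000"),
                "hdfs://namenode/segments/mytable/mytable_0.tar.gz"); // hypothetical
        System.out.println(request.headers().map());
    }
}
```

The appeal of this path for large DataSketches segments is that multi-megabyte tarballs never transit the Flink task or the controller; only a small metadata request does.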