-
**What is the user interaction of your feature?**
A concise description of the user interactions or user stories for your feature request
**Is your feature request related to a problem? Please describe.…
-
We use Flink to write data to a Paimon catalog whose warehouse location is on COS; when we then use StarRocks to read the data, the CN aborts.
StarRocks is deployed in shared-data mode.
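For context, a minimal sketch of the StarRocks side of such a setup, assuming a filesystem-type Paimon catalog; the catalog name, COS path, and table names below are placeholders, not taken from the report:

```
-- Hypothetical reproduction sketch: register the Paimon catalog in StarRocks.
-- "paimon_catalog" and the cosn:// path are illustrative placeholders.
CREATE EXTERNAL CATALOG paimon_catalog
PROPERTIES
(
    "type" = "paimon",
    "paimon.catalog.type" = "filesystem",
    "paimon.catalog.warehouse" = "cosn://<bucket>/<warehouse-path>"
);

-- Reading a table that Flink wrote is what triggers the CN abort described above.
SELECT * FROM paimon_catalog.<db>.<table> LIMIT 10;
```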
### Steps to reproduc…
-
This change will amend the documentation to reflect that Flink 1.19 is now supported and will update the PR workflow YAML to ensure that fixes are built against 1.19.
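As a sketch, adding 1.19 to a PR workflow usually means extending a build matrix like the following; the file layout, job name, and version strings here are assumptions for illustration, not the actual diff:

```yaml
# Hypothetical excerpt of a PR build workflow; names and versions are illustrative.
jobs:
  build:
    strategy:
      matrix:
        flink-version: ["1.18.1", "1.19.0"]   # 1.19 added to the matrix
    steps:
      - uses: actions/checkout@v4
      - run: mvn -B verify -Dflink.version=${{ matrix.flink-version }}
```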
-
### Search before asking
- [X] I had searched in the [issues](https://github.com/DataLinkDC/dinky/issues?q=is%3Aissue) and found no similar issues.
### What happened
Using executjar to run a Doris sync of a MySQL table failed
…
-
### Query engine
I have a question.
```
set iceberg.catalog.hadoop.type = hadoop;
set iceberg.catalog.hadoop.warehouse = hdfs://192.168.221.140:8020/tmp/sebu;
CREATE database if not exists iceberg_ha…
```
-
**Describe the problem you faced**
1. **Configuration Conflict in Flink Hudi Job**: When modifying the configuration of an existing Flink Hudi Job, if there is a conflict with the Table Config (hoo…
-
Please read this to ensure this is really a bug:
https://yauaa.basjes.nl/developer/reportingissues/#these-are-not-bugs
**Describe the bug**
The Flink table UDF is not compatible with pyflink as i…
-
I want to use Flink and Spark to write to a MOR table with the CONSISTENT_HASHING bucket index, but I find that Spark writes the full load very quickly while Flink is very slow (flink write…
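For reference, a consistent-hashing bucket index on a Flink Hudi MOR table is typically configured with options like the following; the table name, path, schema, and bucket count are placeholders, not from the report:

```
-- Hypothetical Flink SQL DDL for a Hudi MOR table with a
-- consistent-hashing bucket index; 't1' and the values are illustrative.
CREATE TABLE t1 (
  id   BIGINT PRIMARY KEY NOT ENFORCED,
  name STRING
) WITH (
  'connector' = 'hudi',
  'path' = 'hdfs:///tmp/t1',
  'table.type' = 'MERGE_ON_READ',
  'index.type' = 'BUCKET',
  'hoodie.index.bucket.engine' = 'CONSISTENT_HASHING',
  'hoodie.bucket.index.num.buckets' = '4'
);
```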
-
Currently the build produces the tarball, but it also produces an empty JAR we don't need.
```
[INFO] --- jar:3.4.1:jar (default-jar) @ flink-sql-runner-dist ---
[WARNING] JAR will be empty - no c…
```
-
I am using Hudi 0.15.0 and Flink 1.17.1; the following are the steps to reproduce the problem:
From the Flink SQL CLI, run the following SQL statements:
```
CREATE CATALOG hudi_catalog WITH (
…
```