-
**Describe the problem you faced**
In a Flink job writing to Hudi, HDFS jitter caused the Flink task to fail over, and we see this error:
**To Reproduce**
Steps to reproduce the behavior:
*have ch…
-
**Describe the problem you faced**
Hi Folks. I'm trying to get some advice here on how to better deal with a large upsert dataset.
The data has a very wide key space and no great/obvious partiti…
-
Could this perhaps be caused by https://issues.apache.org/jira/browse/PARQUET-1866?
```log
2022-12-01 00:56:14,469 ERROR [kafka-sink-connector-gcs|task-0] WorkerSinkTask{id=kafka-sink-connect…
-
**Is your feature request related to a problem? Please describe.**
I have some parquet files that I query using DBeaver. The database just contains views a la `create or replace view mytable as selec…
-
We sometimes run into an exception when closing a ParquetWriter instance:
```java
2024-06-10 10:44:01.398 org.apache.parquet.util.AutoCloseables$ParquetCloseResourceException: Unable to clo…
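The truncated trace points at `AutoCloseables$ParquetCloseResourceException`, which parquet-java raises when `close()` itself fails. As a stdlib-only illustration of this failure mode (no Parquet dependency; `FlakyWriter` is a hypothetical stand-in, not the real ParquetWriter), a try-with-resources block will propagate a failure from `close()` even when every write succeeded:

```java
import java.io.Closeable;
import java.io.IOException;

public class CloseFailureDemo {
    // Hypothetical stand-in for a writer whose close() can fail,
    // e.g. because the final footer flush hits a bad block.
    static class FlakyWriter implements Closeable {
        void write(String record) { /* writes succeed */ }

        @Override
        public void close() throws IOException {
            throw new IOException("Unable to close writer");
        }
    }

    public static void main(String[] args) {
        try (FlakyWriter w = new FlakyWriter()) {
            w.write("row-1"); // the write itself is fine...
        } catch (IOException e) {
            // ...the failure only surfaces at close(), which is why the
            // job looks healthy right up until the writer is shut down.
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

This is why close-time errors often correlate with transient storage trouble: the data path already returned, and only the final flush inside `close()` observes the failure.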
-
We ingest Avro data from Kafka, produced by Debezium.
Debezium dropped an optional column, which was a backward-compatible change according to the schema registry.
This tool had not adjusted ice…
-
I have been running into a bug due to `parquet-format` and `parquet-format-structures` both defining the `org.apache.parquet.format.Util` class but doing so inconsistently.
Examples of this are sev…
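When two jars both define the same fully-qualified class, which definition wins depends on classpath order. A quick stdlib-only diagnostic (sketch; the class name to pass, e.g. `org.apache.parquet.format.Util`, is taken from the report above) is to ask the class loader where the loaded class actually came from:

```java
import java.security.CodeSource;

public class WhichJar {
    public static void main(String[] args) throws ClassNotFoundException {
        // Pass the fully-qualified name of the conflicting class as args[0],
        // e.g. "org.apache.parquet.format.Util"; falls back to this demo
        // class so the snippet runs standalone.
        String name = args.length > 0 ? args[0] : WhichJar.class.getName();
        Class<?> c = Class.forName(name);
        CodeSource src = c.getProtectionDomain().getCodeSource();
        // JDK bootstrap classes report a null CodeSource.
        System.out.println(c.getName() + " loaded from: "
                + (src != null ? src.getLocation() : "<bootstrap class path>"));
    }
}
```

Running this inside the affected application (or with the same classpath) shows which of the two jars supplied the class, which makes the inconsistency reproducible.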
-
**Describe the problem you faced**
I am unable to create a Hudi table from my data with POPULATE_META_FIELDS enabled. I can create the table with POPULATE_META_FIELDS set to fal…
-
Hello, we just tried the latest Apache Arrow version 3.0.0 with the write example included in the low-level API examples, but LZ4 still seems incompatible with Hadoop. We got this error reading over hadoop …
-
```
2023-08-16T11:39:56.1182633Z [ERROR] io.trino.plugin.iceberg.TestIcebergV2.testOptimizeDuringWriteOperations -- Time elapsed: 12.72 s