-
**_Tips before filing an issue_**
- Have you gone through our [FAQs](https://hudi.apache.org/learn/faq/)?
- Join the mailing list to engage in conversations and get faster support at dev-subscri…
-
### Backend
VL (Velox)
### Bug description
When running TPC-DS query 67, a large number of tasks failed with `Compression type 2 not supported` and `OutOfMemoryException`.
…
-
Hello Team,
We are running a Glue streaming job which reads from Kinesis and writes to a Hudi COW table (S3) registered in the Glue catalog.
The job has been running for ~1 year without issues. However, lately we started…
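For context, a minimal sketch of the write options such a streaming job typically passes to the Hudi data source. The table name, key fields, and path here are hypothetical placeholders; the issue excerpt does not show the job's actual settings:

```python
# Hypothetical Hudi writer options for a streaming upsert into a COW table
# synced to the Glue catalog. All values are placeholders, not from the issue.
hudi_options = {
    "hoodie.table.name": "example_table",
    "hoodie.datasource.write.table.type": "COPY_ON_WRITE",
    "hoodie.datasource.write.operation": "upsert",
    "hoodie.datasource.write.recordkey.field": "id",
    "hoodie.datasource.write.precombine.field": "ts",
    "hoodie.datasource.hive_sync.enable": "true",  # catalog sync
    "hoodie.datasource.hive_sync.mode": "hms",
}

# In the streaming job, each micro-batch would be written roughly as:
# batch_df.write.format("hudi").options(**hudi_options) \
#     .mode("append").save("s3://bucket/path")
```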
-
**Describe the problem you faced**
We are incrementally upserting data into our Hudi table/s every 5 minutes.
We have set the cleaner policy to `KEEP_LATEST_BY_HOURS` with `CLEANER_HOURS_RETAINED = 48`.
…
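Expressed as Hudi writer configs, the cleaner setup described above would look roughly like this (these two keys are standard Hudi cleaner configs; the rest of the table's options are not shown in the excerpt):

```python
# Hedged sketch: the cleaner settings from the report as Hudi config keys.
# KEEP_LATEST_BY_HOURS retains commits newer than the retention window,
# so with 48 hours retained, files older than ~48h become cleanable.
cleaner_options = {
    "hoodie.cleaner.policy": "KEEP_LATEST_BY_HOURS",
    "hoodie.cleaner.hours.retained": "48",
}
```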
-
Most Iceberg metadata is stored in the file system, so it is limited by NameNode performance. Storage engines such as an RDBMS, Cassandra, or MongoDB could be supported through pluggable storage.
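The pluggable-storage idea can be sketched as a small interface with swappable backends. All names below are illustrative only; this is not an actual Iceberg API:

```python
from abc import ABC, abstractmethod

class MetadataStore(ABC):
    """Illustrative pluggable metadata backend (not an actual Iceberg API)."""

    @abstractmethod
    def put(self, table: str, metadata: dict) -> None: ...

    @abstractmethod
    def get(self, table: str) -> dict: ...

class InMemoryStore(MetadataStore):
    """Stands in for an RDBMS/Cassandra/MongoDB backend in this sketch."""

    def __init__(self):
        self._data = {}

    def put(self, table, metadata):
        self._data[table] = metadata

    def get(self, table):
        return self._data[table]

# The engine would only talk to the interface, so backends can be swapped:
store: MetadataStore = InMemoryStore()
store.put("db.events", {"snapshot-id": 1})
```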
-
Hi,
I'm using Hudi CLI version 1.0, Hudi version 0.11.0, Spark version 3.2.1-amzn-0, and Hive version 3.1.3-amzn-0.
The error I'm getting:
```
java.lang.ClassCastException: org.apache.hadoop…
```
-
**_Tips before filing an issue_**
Spark SQL Hint:
`/*+ hoodie_prop('${tableName}', map('${key1}', '${value1}', '${key2}', '${value2}')) */`
```
select
/*+
hoodie_prop(
'defa…
```
-
Where is a schema file required for Hudi 0.9 to run MOR compaction?
If I run a separate compaction job asynchronously on a table that is being ingested by Spark Streaming, will it cause any data loss? Is …
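For reference, running compaction as a separate job usually means turning off inline compaction on the streaming writer and letting it only schedule compaction plans. The key names below are standard Hudi writer configs in recent releases; whether all apply unchanged to 0.9 should be checked against that version's docs:

```python
# Hedged sketch: writer options for a MOR table when compaction runs
# out-of-band instead of inline with the ingestion job.
async_compaction_options = {
    "hoodie.datasource.write.table.type": "MERGE_ON_READ",
    "hoodie.compact.inline": "false",           # writer does not compact
    "hoodie.compact.schedule.inline": "true",   # writer only schedules plans
}
```

The separate compaction job then picks up the scheduled plans and executes them against the same table path.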
-
I implemented a custom payload based on HoodieRecordPayload.java, but ran into problems. When I use incremental queries, record_time is the value of the incremental payload (incorrect). When running…
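The merge semantics such a payload is meant to implement can be sketched in plain Python. This only mirrors the intent of HoodieRecordPayload's preCombine contract conceptually; it is not Hudi code, and the field name `record_time` is taken from the report:

```python
# Conceptual sketch of payload merge semantics: keep the record with the
# larger ordering value (here `record_time`). Not actual Hudi code.
def pre_combine(incoming: dict, existing: dict) -> dict:
    """Pick the record with the newer record_time, like preCombine() would."""
    if incoming["record_time"] >= existing["record_time"]:
        return incoming
    return existing

old = {"id": 1, "record_time": 10, "val": "a"}
new = {"id": 1, "record_time": 20, "val": "b"}
merged = pre_combine(new, old)  # the newer record wins
```

If an incremental query surfaces the wrong `record_time`, the usual suspects are the payload's ordering field and how the precombine field is configured on the writer.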