-
Description:
We need to develop a comprehensive roadmap to guide the ASTRA Program through the accreditation process. This roadmap will outline key phases, tasks, and milestones necessary to achiev…
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Problem statement
Add external HMS URL to ucx inventory tables
### Motivation
Currently, we scan the sam…
-
## Context
Hive Process will integrate Stakwork Workflows and automation into the Hive Development Process. This will reduce workload on Product Managers by streamlining creative tasks and automating …
-
# Environment
- **Delta-rs version**: 0.22 (tested multiple versions up to latest)
- **PyArrow version**: 18.0.0
- **OS**: macOS 14.4 (23E214)
- Delta table on S3
***
# Bug
**What happened**:
…
-
**Describe the problem you faced**
A table has a column `ts` of type timestamp, and it is the precombineKey.
Background:
Flink streaming loads the data, and Spark syncs it to a Hive partitioned table every day.
q…
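For context, Hudi's precombine field deduplicates records that share a record key by keeping the one with the largest precombine value. A minimal sketch of that semantics in plain Python (the record shape and field names are illustrative, not Hudi's actual internals):

```python
# Sketch of Hudi precombine semantics: among records with the same
# record key, keep the one whose precombine field (here `ts`) is largest.
def precombine(records, key_field="id", precombine_field="ts"):
    latest = {}
    for rec in records:
        key = rec[key_field]
        if key not in latest or rec[precombine_field] > latest[key][precombine_field]:
            latest[key] = rec
    return list(latest.values())

batch = [
    {"id": 1, "ts": 100, "val": "old"},
    {"id": 1, "ts": 200, "val": "new"},
    {"id": 2, "ts": 150, "val": "only"},
]
deduped = precombine(batch)
```

Here `deduped` keeps the `ts=200` record for `id=1`, which is why a correct timestamp type on the precombine column matters: string comparison of timestamps can order records differently than temporal comparison.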
-
I use Presto to read a Parquet file in HDFS. The Parquet file has Parquet modular encryption enabled.
Reading a small file works fine, but reading a large file fails in the decrypt function.
Presto s…
-
### Version
main branch
### Describe what's wrong
I want to run federated queries using Hive metastores stored in two Hadoop clusters.
So we added two Hive catalogs to the metalake.
There is a diff…
-
Hi guys,
We are trying to use the lakehouse sink connector to load data into a Hudi table, and now we are trying to integrate Trino to read that data, which requires a Hive metastore and those…
-
In the Hive-BQ connector, the DATETIME datatype is mapped to Timestamp type.
In the Spark-BQ connector, the DATETIME datatype is mapped to StringType.
As there is a difference between the two connect…
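One way to reconcile the two mappings on the consumer side is to parse the string form back into a timestamp. A hedged sketch in plain Python, assuming the string-mapped DATETIME arrives as an ISO-8601 value without a timezone (which matches BigQuery DATETIME's timezone-free definition; the function name is illustrative):

```python
from datetime import datetime

# BigQuery DATETIME carries no timezone, so the parsed result is a
# naive datetime; assumes the connector surfaces an ISO-8601 string.
def datetime_string_to_timestamp(value: str) -> datetime:
    return datetime.fromisoformat(value)

ts = datetime_string_to_timestamp("2024-01-15T10:30:00")
```

Normalizing both connectors' outputs to the same Python/Spark timestamp type before any join avoids silent mismatches between string and timestamp comparisons.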
-
**Describe the problem you faced**
Hi Folks. I'm trying to get some advice here on how to better deal with a large upsert dataset.
The data has a very wide key space and no great/obvious partiti…
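When no natural partition column exists, one common workaround is to derive a synthetic partition by hashing the record key into a fixed number of buckets, so upserts for a given key always touch the same partition. A minimal sketch (the bucket count and key format are illustrative assumptions):

```python
import hashlib

def key_to_bucket(record_key: str, num_buckets: int = 64) -> int:
    # Stable hash: the same key always maps to the same bucket,
    # keeping all upserts for that key confined to one partition.
    digest = hashlib.md5(record_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_buckets

bucket = key_to_bucket("user-123")
```

The bucket id would then be written as a partition column; the trade-off is that bucket count is hard to change later without rewriting the table.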