-
### Query engine
Spark
### Question
# Background
Spark will pass the `catalog` name to `renameTable` operations as part of its `to` identifier, and if that `catalog` name is not handled (i.e. stripped…
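By way of illustration, handling this usually means dropping the catalog's own name from the front of the multipart identifier before resolving the rest. A minimal Python sketch (the function name and the list-of-strings identifier shape are assumptions for illustration, not Spark's actual catalog API):

```python
def strip_catalog(identifier: list[str], catalog_name: str) -> list[str]:
    # Spark may hand the target of renameTable as ["catalog", "db", "table"];
    # drop the leading catalog segment so the remainder resolves inside
    # this catalog instead of being treated as a namespace.
    if identifier and identifier[0] == catalog_name:
        return identifier[1:]
    return list(identifier)
```

For example, `strip_catalog(["my_cat", "db", "t"], "my_cat")` yields `["db", "t"]`, while an identifier that does not start with the catalog name passes through unchanged.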
-
Currently, to work with catalog tables in Daft, users must connect to the catalog and fetch the catalog table information themselves before calling `daft.read_iceberg`, `daft.read_deltalake`, or other…
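The two-step workflow described here can be sketched as follows; the stub catalog below only stands in for a real catalog client (such as a PyIceberg `Catalog` with its `load_table` method) to show the call sequence, and `read_catalog_table` is a hypothetical helper, not an existing Daft API:

```python
class FakeCatalog:
    # Stand-in for a real catalog client; a real one would connect to a
    # metastore or REST endpoint and return a table handle.
    def load_table(self, name):
        return {"name": name}

def read_catalog_table(catalog, table_name, read_fn):
    # Today's workflow: the user fetches the table handle from the catalog
    # themselves, then passes it to a reader such as daft.read_iceberg.
    table = catalog.load_table(table_name)
    return read_fn(table)
```

The feature request amounts to collapsing these two steps so the engine performs the catalog lookup on the user's behalf.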
-
### Is your feature request related to a problem? Please describe.
According to [RFC: Combine Historical and Incremental Data](https://github.com/risingwavelabs/rfcs/pull/85),
we need to support in…
-
### Feature Request / Improvement
As we prepare for a major release, I think it would be great to hold our public APIs to a higher standard of documentation.
Many popular public classes, methods…
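As one possible bar for that standard, every public function could carry a docstring with a one-line summary, argument descriptions, a return description, and a doctest-style example. A hypothetical sketch (the function itself is invented for illustration):

```python
def concat(left, right):
    """Concatenate two lists into a new list.

    Args:
        left: The first list.
        right: The second list, appended after ``left``.

    Returns:
        A new list containing the items of ``left`` followed by ``right``.

    Example:
        >>> concat([1], [2, 3])
        [1, 2, 3]
    """
    return list(left) + list(right)
```

Docstrings in this shape can be checked mechanically (e.g. doctest or a docstring linter in CI), which makes the higher standard enforceable rather than aspirational.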
-
## Enhancement
In our situation, there is an Iceberg database with more than 20,000 tables under it. Data is written into the Iceberg tables in a streaming fashion, and we run a lot of queries on these tables.
We meet…
-
Hi there, I'm using `dbt snapshot` and just ran into this issue: `Caused by: io.trino.spi.TrinoException: Query exceeded maximum columns. Please reduce the number of columns referenced and re-run the …
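One common way to reduce the number of columns a snapshot references, assuming the snapshot can use dbt's `check` strategy, is to restrict the comparison to a few columns via `check_cols` and select only what the snapshot needs. The snapshot name, schema, and column names below are placeholders:

```sql
{% snapshot orders_snapshot %}
{{ config(
    target_schema='snapshots',
    unique_key='id',
    strategy='check',
    check_cols=['status', 'updated_at']
) }}
select id, status, updated_at from {{ source('app', 'orders') }}
{% endsnapshot %}
```

Whether this avoids the Trino column limit depends on how wide the generated snapshot query still is, so it is a mitigation rather than a guaranteed fix.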
-
### Query engine
Setup:
- Spark: 3.3.3
- Scala: 2.12.15 (OpenJDK 64-Bit Server VM, Java 11.0.20)

```python
sparkConf = (SparkConf()
    .set("spark.jars.packages", "org.apache.iceberg:iceberg-spark-runti…
```
-
### Query engine
Spark
### Question
## Background
This is another Hive/Hadoop and REST Catalog behavior discrepancies discovered while enabling integration test on REST catalog. The assumption her…
-
### Description
Currently, Amoro enables default table properties under a catalog by merging them with the underlying table's properties when loading a table. Sometimes it is necessary to write some default…
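The merging behavior described above can be sketched as a simple precedence rule, with explicit table-level properties overriding catalog-level defaults. This is an illustrative Python sketch of the semantics, not Amoro's actual implementation:

```python
def effective_properties(catalog_defaults: dict, table_properties: dict) -> dict:
    # Catalog-level defaults apply first; properties set explicitly on the
    # table win on conflict.
    merged = dict(catalog_defaults)
    merged.update(table_properties)
    return merged
```

For example, a catalog default of `{"write.format.default": "parquet"}` survives unless the table sets its own value for the same key.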
-
## Feature request
**Is your feature request related to a problem? Please describe.**
Time travel SQL syntax is not supported in the Iceberg connector
**Describe the solution you'd like**
Su…
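For reference, the time-travel syntax that other engines expose for Iceberg tables looks like the following (this is Trino's flavor of the syntax; the table name and values are placeholders):

```sql
-- read the table as of a specific snapshot id
SELECT * FROM example_table FOR VERSION AS OF 10963874102873;

-- read the table as of a point in time
SELECT * FROM example_table FOR TIMESTAMP AS OF TIMESTAMP '2023-01-01 00:00:00 UTC';
```

Supporting an equivalent form in this connector would let users query historical snapshots directly from SQL instead of going through engine-specific read options.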