-
When I try to create an Iceberg table with Iceberg bucketing through the DataFrame v2 API, I get:
```
Invalid partition transformation: iceberg_bucket2(`modification_time`)
org.apache.spark.sql.Analysis…
```
-
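For background on what the requested transform computes: per the Iceberg format spec, `bucket[N]` applies a 32-bit Murmur3 hash to the value and takes the result modulo `N`. A conceptual sketch in plain Java, using `Long.hashCode` as a stand-in for Murmur3 — this is an illustration of the idea only, not Iceberg's actual implementation:

```java
// Conceptual sketch of Iceberg's bucket partition transform.
// NOTE: Iceberg uses 32-bit Murmur3; Long.hashCode here is a stand-in
// chosen only so the example is self-contained.
public class BucketSketch {
    static int bucket(long value, int numBuckets) {
        int hash = Long.hashCode(value);                 // stand-in for murmur3_x86_32
        return (hash & Integer.MAX_VALUE) % numBuckets;  // non-negative bucket id
    }

    public static void main(String[] args) {
        long modificationTime = 1_623_283_200_000L;      // example epoch millis
        System.out.println(bucket(modificationTime, 2)); // 0 or 1
    }
}
```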
```
FAILED ../../../../integration_tests/src/main/python/hive_delimited_text_test.py::test_read_compressed_hive_text
FAILED ../../../../integration_tests/src/main/python/get_json_test.py::test_get_j…
```
-
Integration tests for `test_write_hive_bucketed_table` are failing with the following exception:
```
pyspark.sql.utils.IllegalArgumentException: Part of the plan is not columnar class org.apache.s…
```
-
## Expected behavior
The Chinese text is read successfully.
## Actual behavior
The text comes back garbled (mojibake).
## Steps to reproduce the problem
Partial reproduction code (Java):
```java
public static void main(String[] args) {
…
```
-
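For reference, the garbling in the report above is the classic symptom of decoding bytes with the wrong charset. A minimal, self-contained sketch — the GBK/UTF-8 pairing is an assumption for illustration, not confirmed from the report:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

// Illustration of mojibake: Chinese text encoded as GBK comes back garbled
// when decoded as UTF-8, and reads correctly only when decoded with the
// charset it was actually written in.
public class CharsetDemo {
    public static void main(String[] args) {
        String original = "中文";  // Chinese sample text
        byte[] gbkBytes = original.getBytes(Charset.forName("GBK"));

        String wrong = new String(gbkBytes, StandardCharsets.UTF_8);  // garbled
        String right = new String(gbkBytes, Charset.forName("GBK"));  // intact

        System.out.println(wrong.equals(original));  // false
        System.out.println(right.equals(original));  // true
    }
}
```

When the data is read through Spark, note that the CSV and JSON readers accept an `encoding` option that can be set to the file's actual charset.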
When a filter on partition columns is provided, it should be possible in some cases to use it to optimize file system list calls. This can greatly improve the speed of reading data from partitions because f…
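The idea can be sketched as follows: evaluate the partition filter against the partition values first, and only issue list calls for the directories that survive. A hypothetical sketch — the directory layout and helper names are made up for illustration:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Hypothetical illustration of partition pruning: filter partition
// directories by their partition value BEFORE listing files, so unmatched
// directories never incur a file-system list call.
public class PartitionPruneSketch {
    static List<String> prunedDirs(Map<String, String> dirToDate,
                                   Predicate<String> dateFilter) {
        return dirToDate.entrySet().stream()
                .filter(e -> dateFilter.test(e.getValue()))  // prune by partition value
                .map(Map.Entry::getKey)                      // keep only matching dirs
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, String> parts = Map.of(
                "/table/date=2021-06-08", "2021-06-08",
                "/table/date=2021-06-09", "2021-06-09");
        // Only /table/date=2021-06-09 would be listed.
        System.out.println(prunedDirs(parts, d -> d.equals("2021-06-09")));
    }
}
```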
-
The dbt-spark adapter uses CTAS to create tables.
https://iceberg.apache.org/docs/latest/spark-ddl/#create-table--as-select
```sql
REPLACE TABLE prod.db.sample
USING iceberg
PARTITIONED BY (pa…
```
-
The timezone implementation in Spark 3 leads to an `Already closed files for partition` error.
To reproduce, follow these steps...
First, download the source file: [data-dimension-vehicle-20210609T22…
-
Dear team,
Today I compiled the package against Spark 3.1.1 and Scala 2.12.10, and the build failed.
The error messages are the following:
```text
[ERROR] /Users/picomy/Downloads/spark-atlas-conn…
```
-
I get `java.lang.ClassNotFoundException: org.apache.spark.sql.sources.v2.DataSourceV2`
when I try to use iceberg-spark-0.9.1 on EMR 6.1.0, which ships with Spark 3.0.
I don't see DataSourceV2 present in …
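For context, the `org.apache.spark.sql.sources.v2` package was removed in Spark 3.0, so any jar that still references it fails to load there. A small probe — a hypothetical diagnostic, not part of Iceberg — confirms whether the old interface is on the classpath:

```java
// Probe for the Spark 2.x-era DataSourceV2 interface. On Spark 3.x the
// class was removed, so Class.forName throws ClassNotFoundException.
public class ProbeDsv2 {
    public static void main(String[] args) {
        try {
            Class.forName("org.apache.spark.sql.sources.v2.DataSourceV2");
            System.out.println("present (Spark 2.x-era API on classpath)");
        } catch (ClassNotFoundException e) {
            System.out.println("absent (expected on Spark 3.x)");
        }
    }
}
```

Iceberg 0.9.x ships its Spark 3 support in a separate artifact (`iceberg-spark3-runtime`); the `iceberg-spark-runtime` jar targets Spark 2.4 and the removed API.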
-
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
### Search before asking
- [X] I have searched in the [issue…