-
https://github.com/trinodb/trino/actions/runs/5724341048/job/15510960140?pr=18478
```
Error: COMPILATION ERROR :
Error: /home/runner/work/trino/trino/plugin/trino-hive/src/test/java/io/trino/p…
-
### What happens?
I am using duckdb_jdbc v0.9.2 and I store a parquet file in a folder named DATA=2023-08-28, and this file has a column with the same name as the folder, DATA. But this colum…
-
### Describe the usage question you have. Please include as many useful details as possible.
I want to read encrypted Parquet files using the Arrow Java API to create an input stream for [DuckDB](https…
-
## Descriptions
There seem to be problems when a Golang struct has a nested array field (e.g. `[][]string`). You can check the following example code.
`NestedStringArrayWithTag` has a `Field` wi…
-
**Is your feature request related to a problem or challenge? Please describe what you are trying to do.**
We want the ability to read an arbitrary page in a column chunk, [`SerializedPageReader`](https:/…
-
**Describe the bug**
Data-prepper S3 source pauses SQS processing with exponential backoff when there is an issue reading an S3 object, such as a corrupted parquet file.
**To Reproduce**
Steps to r…
-
Hello!
We started to use this connector to sink data from Kafka to S3 in Iceberg format.
The Kafka topic has 375,522 messages totaling 231 MiB in size.
The connector config looks as follows:
```…
-
**Describe the problem you faced**
Migrated to Hudi 0.14.0 from Hudi version 0.8. While attempting to read a Hudi table in parquet format from AWS S3 using Spark, we receive the following error:
…
-
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/seatunnel/issues?q=is%3Aissue+label%3A%22bug%22) and found no similar issues.
### What happened
Using seatunnel …
-
We are trying to create a COW table using Kafka as our source and S3 as our sink. The source comprises a list of Kafka topics.
The current checkpoint happens every 2 mins and when the checkpoint s…