-
In my `/opt/presto-server/etc/catalog/s3.properties`, I have these values (running from a local PC):
```
connector.name=hive-hadoop2
hive.metastore=file
hive.metastore.catalog.dir=s3://priyam/
…
-
Apache Thrift 0.12.0 is required. Building it from source reports unsupported .NET, etc.
Installing 0.13.0 via yum instead results in an error during `mvn package`.
**Reporter**: [Lutz Weischer](https://issues.apache.org…
-
Hello experts,
I followed the protocol example to build the reference server. The server generates a presigned URL when the `table/query` endpoint is called.
Assuming that my `table_url` is `profil…
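For context, the `table/query` response in the Delta Sharing protocol is newline-delimited JSON: a `protocol` line, a `metaData` line, then one `file` line per data file, each carrying a presigned URL in `file.url`. A minimal sketch of pulling those URLs out, using a hypothetical sample response body (the field values below are made up for illustration):

```python
import json

# Hypothetical sample of a table/query response body (NDJSON):
# one JSON object per line -- protocol, metaData, then file lines.
sample_response = "\n".join([
    '{"protocol": {"minReaderVersion": 1}}',
    '{"metaData": {"id": "t1", "format": {"provider": "parquet"}}}',
    '{"file": {"url": "https://bucket.s3.amazonaws.com/part-0.parquet?X-Amz-Signature=abc", "size": 1024}}',
])

def presigned_urls(body: str) -> list:
    """Collect the presigned file URLs from an NDJSON query response."""
    urls = []
    for line in body.splitlines():
        obj = json.loads(line)
        if "file" in obj:
            urls.append(obj["file"]["url"])
    return urls

urls = presigned_urls(sample_response)
print(urls)
```

The URLs are time-limited, so a client is expected to fetch the referenced parquet files before the signatures expire.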
-
**Describe the bug**
I ran the build and test commands for gazelle_plugin 1.2 and got some errors.
Code versions are as follows:
1) arrow-4.0.0-oap-1.2.0-release.zip
2) gazelle_plugin-1.2.0-release.zip
…
-
I'm using the HDFS3 sink connector to consume data from Kafka and store it in HDFS3.
My Confluent Kafka and sink connector versions are below.
```
kafka-confluent:5.4.2-1
confluentinc/kafka-connect-hdfs3:late…
-
I have
```
inputs:
mydf:
file:
path: s3a://xx/a/b/c/
```
There are partition folders under the `s3a://xx/a/b/c/` path, and there are Hudi parquet files under them.
I want `mydf` to get th…
-
When I use pyarrow to connect to my HDFS, I get an error.
I use
```
from pyarrow import fs
print(fs.FileSystem.from_uri("hdfs://"))
```
The error shows `loadFileSystems error:`
(unable to get root cause …
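`loadFileSystems error` usually means libhdfs started with an empty JVM classpath. pyarrow's HDFS filesystem loads `libhdfs`, which in turn needs `JAVA_HOME`, a populated `CLASSPATH` (typically the output of `hadoop classpath --glob`), and `ARROW_LIBHDFS_DIR` pointing at the directory containing `libhdfs.so`. A small, hypothetical helper showing which variables must be set before connecting (the paths are placeholders for your installation):

```python
import os

def hadoop_env(hadoop_home: str, java_home: str, classpath: str) -> dict:
    """Hypothetical helper: the env vars libhdfs needs before pyarrow connects.

    `classpath` should normally be the output of `hadoop classpath --glob`.
    """
    return {
        "HADOOP_HOME": hadoop_home,
        "JAVA_HOME": java_home,
        # libhdfs reads the JVM classpath from CLASSPATH at load time;
        # if it is empty, loadFileSystems fails.
        "CLASSPATH": classpath,
        # Where pyarrow looks for libhdfs.so.
        "ARROW_LIBHDFS_DIR": os.path.join(hadoop_home, "lib", "native"),
    }

env = hadoop_env("/opt/hadoop", "/usr/lib/jvm/java-8",
                 "/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/*")
print(env["ARROW_LIBHDFS_DIR"])
```

Exporting these in the shell (or via `os.environ.update(env)`) before calling `fs.FileSystem.from_uri("hdfs://…")` is the usual fix for this class of error.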
-
### Bug description
I built gluten+velox using branch-1.1 and submitted a TPC-H query using spark-shell, with the data stored in S3. However, the following error occurred during execu…
-
**Describe the bug**
The Parquet specification states that the values of a map are optional - https://github.com/apache/parquet-format/blob/master/LogicalTypes.md#maps.
However, the curren…
-
### Feature Request / Improvement
We are using Snowflake Iceberg to read data from an S3 location, and that works fine for non-partitioned data.
But if the data is partitioned and th…