-
### Error writing file
### Run
```sh
docker run -it --rm --name=hadoopserver -p 8030:8030 -p 8040:8040 -p 8042:8042 -p 8088:8088 -p 19888:19888 -p 49707:49707 -p 50010:50010 -p 50020:50020 -p 500…
```
-
### Search before asking
- [X] I had searched in the [feature](https://github.com/apache/seatunnel/issues?q=is%3Aissue+label%3A%22Feature%22) and found no similar feature requirement.
### Descripti…
-
**Describe the problem you faced**
The library dependencies are too old: Spark 3.5 and others rely on Hadoop 3.3.4 or 3.4, and on hadoop-aws 3.3.4 or newer.
**To Reproduce**
Steps to reproduce t…
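Before upgrading, one way to confirm which Hadoop versions the build currently resolves is to inspect the dependency tree. This is a sketch that assumes a Maven-based build; adjust for the project's actual build tool.

```sh
# List every org.apache.hadoop artifact the build resolves,
# to see which modules still pin the older versions.
mvn dependency:tree -Dincludes=org.apache.hadoop
```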
-
Today, the [Hadoop integration tools for Vespa](https://github.com/vespa-engine/vespa/tree/master/vespa-hadoop) support Hadoop and Pig for feeding and querying Vespa. The Pig feeder is a thin wrapper …
-
Hi,
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/petastorm_venv3.6/lib/python3.6/site-packages/petastorm/reader.py", line 120, in make_reader
    resolver = FilesystemResol…
```
-
This happens when I restart one of the connect-distributed processes, when a new sink instance is created, or when something else changes. I have encountered it several times, and tried a completely fresh installation of kafk…
-
Accessing HDFS as an NFS mount point is included in the base Hadoop and you do not need to use the MapR distribution to get that functionality: http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist…
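As a sketch of what the stock Hadoop NFS gateway setup looks like: start the portmap and nfs3 daemons that ship with Hadoop, then mount from a client. Hostnames, paths, and the daemon-start syntax vary between Hadoop releases, so treat these commands as illustrative, not exact.

```sh
# On the gateway host (daemon names from the HDFS NFS gateway docs;
# older releases use hadoop-daemon.sh instead of "hdfs --daemon"):
hdfs --daemon start portmap
hdfs --daemon start nfs3

# On a client: the gateway speaks only NFSv3 over TCP.
# "namenode-host" and "/mnt/hdfs" are placeholders.
mount -t nfs -o vers=3,proto=tcp,nolock namenode-host:/ /mnt/hdfs
```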
-
Tips before filing an issue
- Have you gone through our [FAQs](https://hudi.apache.org/learn/faq/)?
- Join the mailing list to engage in conversations and get faster support at dev-subscribe@hudi.apac…
-
When doing bulk imports, Accumulo clients need to retrieve a Hadoop configuration object with `fs.defaultFS` set. This is currently done by putting `$HADOOP_HOME/etc/hadoop` on the Java CLASS…
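The current arrangement described above can be sketched as follows; the paths and the main class name (`com.example.BulkImportTool`) are hypothetical placeholders, not real Accumulo artifacts.

```sh
# Put the Hadoop client config dir on the classpath so core-site.xml
# (and with it fs.defaultFS) is visible when the client builds its
# Hadoop Configuration object.
export HADOOP_HOME=/opt/hadoop   # illustrative install location
java -cp "$HADOOP_HOME/etc/hadoop:/path/to/accumulo-client.jar" \
  com.example.BulkImportTool
```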
-
### Step to reproduce this issue:
Env: main branch
**1. Create the iceberg catalog:**
```
create external catalog icebergcat
PROPERTIES
("type"="iceberg","iceberg.catalog.type"="hive",
"hiv…
```