Open DongSeungLee opened 1 week ago
This is actually related to #11541. Add Files uses some Hadoop FileSystem classes under the hood, and because of this you currently must have a fully set up Hadoop configuration in your runtime to run `add_files`. With #11541 completed, we should be able to fix this for `add_files` and use `S3FileIO` instead of the Hadoop filesystem classes.
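For context, "a fully set up Hadoop configuration" in a standalone Spark cluster usually means passing the `fs.s3a.*` properties through Spark's `spark.hadoop.*` prefix, e.g. in `spark-defaults.conf`. This is a sketch, not from the thread; the endpoint and credential values below are placeholders you would replace with your own:

```properties
# Hadoop S3A connector settings, forwarded by Spark via the spark.hadoop.* prefix.
# All values below are placeholders.
spark.hadoop.fs.s3a.access.key         <your-access-key>
spark.hadoop.fs.s3a.secret.key         <your-secret-key>
spark.hadoop.fs.s3a.endpoint           http://localhost:9000
spark.hadoop.fs.s3a.path.style.access  true
```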
I appreciate your sincere answer.
Query engine
Spark 3.5.3
Question
For study, I run a standalone Spark cluster locally, and I have developed my own IcebergRestCatalog. My IcebergRestCatalog is based on Iceberg spec version 1.6.1, used for running the `add_files` procedure provided by Spark, like below.
The error occurs like below.
From my point of view, Spark tries to create staging metadata from the location held in the Iceberg table metadata. Here, the Iceberg metadata location starts with `s3`, so the scheme is fixed as `s3`. Spark tries to access the file system through the Hadoop `S3AFileSystem`, and it seems the scheme `s3` is not supported; `s3a` should be the right scheme. How can I overcome this issue? Thanks, sincerely.
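One common workaround for the scheme mismatch described above (my suggestion, not confirmed in this thread) is to tell Hadoop to serve the bare `s3://` scheme with the same S3A connector that handles `s3a://`, by setting the `fs.s3.impl` property through Spark:

```properties
# Map the s3:// scheme onto the S3A connector so that s3 paths
# resolve to the same filesystem implementation as s3a paths.
spark.hadoop.fs.s3.impl  org.apache.hadoop.fs.s3a.S3AFileSystem
```

With this in place, paths returned by the catalog with an `s3` scheme should be handled by `S3AFileSystem` rather than failing with an unsupported-scheme error, though the cleaner long-term fix is the `S3FileIO` change referenced in #11541.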