Closed: heziyi399 closed this issue 3 months ago
@xloya would you please take a look at this issue?
Have reproduced the issue, will fix this tomorrow. @heziyi399 Thanks for reporting this.
@xloya I have now placed the modified GravitinoVirtualFileSystem code on the server and executed `./gradlew :clients:filesystem-hadoop3-runtime:build -x test`. Afterwards, I put the gravitino-filesystem-hadoop3-runtime-0.5.1.jar package into the /share/hadoop/common/lib directory and re-ran the HDFS command. The bug still exists. Is there anything wrong with this method?
Hi, you'd better confirm whether the relevant logic is really updated in the runtime jar you use. I have tested it in hadoop 2.7.3 and 3.1.0 according to the example in this issue, and I can get normal results in both.
Hadoop 3.1.0:
Hadoop 2.7.3:
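One quick way to do the check @xloya suggests, sketched here with only the JDK's jar APIs, is to list the runtime jar's entries and confirm the rebuilt class is actually inside. The `JarCheck` class and `findEntry` helper are made-up names for this demo, not part of Gravitino; the sketch builds a throwaway jar so it runs on its own:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;

public class JarCheck {
    // Returns the first entry in the jar whose name ends with the given
    // suffix, or null if no such entry exists.
    static String findEntry(Path jar, String suffix) throws Exception {
        try (JarFile jf = new JarFile(jar.toFile())) {
            return jf.stream()
                    .map(JarEntry::getName)
                    .filter(n -> n.endsWith(suffix))
                    .findFirst()
                    .orElse(null);
        }
    }

    public static void main(String[] args) throws Exception {
        // A tiny jar is built in a temp file so the sketch is self-contained;
        // in practice, point findEntry at the deployed
        // gravitino-filesystem-hadoop3-runtime jar instead.
        Path jar = Files.createTempFile("runtime-check", ".jar");
        try (JarOutputStream out = new JarOutputStream(Files.newOutputStream(jar))) {
            out.putNextEntry(new JarEntry(
                    "com/datastrato/gravitino/filesystem/hadoop/GravitinoVirtualFileSystem.class"));
            out.closeEntry();
        }
        System.out.println(findEntry(jar, "GravitinoVirtualFileSystem.class"));
        Files.delete(jar);
    }
}
```

From the shell, `jar tf gravitino-filesystem-hadoop3-runtime-0.5.1.jar | grep GravitinoVirtualFileSystem` gives the same entry listing; comparing the jar's build timestamp against your rebuild time also helps rule out a stale artifact.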
@heziyi399 would you please check again to see if @xloya's PR really fixes your problem? Thanks.
@jerryshao Yes, the problem has been resolved. May I ask another question? I want to know whether the file system can be cast to Hadoop's DistributedFileSystem, because when I try the following code:

```java
conf.set("fs.AbstractFileSystem.gvfs.impl", "com.datastrato.gravitino.filesystem.hadoop.Gvfs");
conf.set("fs.gvfs.impl", "com.datastrato.gravitino.filesystem.hadoop.GravitinoVirtualFileSystem");
conf.set("fs.gravitino.server.uri", "http://localhost:8090");
conf.set("fs.gravitino.client.metalake", "metalake_demo");
Path filesetPath = new Path("gvfs://fileset/test_catalog/hzySchema/example_fileset/");
FileSystem fs = filesetPath.getFileSystem(conf);
DistributedFileSystem dfs = (DistributedFileSystem) fs;
```

an error message appears: "com.datastrato.gravitino.filesystem.hadoop.GravitinoVirtualFileSystem cannot be cast to org.apache.hadoop.hdfs.DistributedFileSystem".
@heziyi399 GravitinoVirtualFileSystem extends the abstract base class FileSystem, so you cannot cast it to DistributedFileSystem, which is also one of the child classes of FileSystem.
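The ClassCastException here is plain Java semantics rather than anything Gravitino-specific: two sibling subclasses of the same base class can never be cast to each other. A minimal stdlib-only sketch, where every class name is an illustrative stand-in (FileSystemBase for org.apache.hadoop.fs.FileSystem, VirtualFs and DistributedFs for the two real file system classes):

```java
public class CastDemo {
    abstract static class FileSystemBase {}
    static class VirtualFs extends FileSystemBase {}
    static class DistributedFs extends FileSystemBase {}

    public static void main(String[] args) {
        FileSystemBase fs = new VirtualFs(); // runtime type: VirtualFs
        // Siblings share a parent, but neither is an instance of the other,
        // so a cross-cast always fails at runtime.
        System.out.println(fs instanceof DistributedFs); // prints false
        try {
            DistributedFs dfs = (DistributedFs) fs;
        } catch (ClassCastException e) {
            System.out.println("ClassCastException, as in the report above");
        }
    }
}
```

If HDFS-specific behavior is really needed, guard the cast with an `instanceof` check on the runtime type; for ordinary reads and writes, the common FileSystem API that GravitinoVirtualFileSystem exposes should be sufficient.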
Thanks @xloya for your fix, and thanks @heziyi399 for your report, greatly appreciated. This PR will be merged into main and branch-0.6.
Version
main branch
Describe what's wrong
Now I want to use the Hadoop catalog. I have created a metalake, catalog, schema, and fileset; the location is:
You can see that this location is the root directory. I want to get files using the Gravitino catalog, so I fetch them through the command line:
You can see that this result comes with a prefix and an error message "does not exist." But if I use a location other than the root directory, the result is normal:
Error message and/or stacktrace
How to reproduce
0.5.1
Additional context
No response