Open heziyi399 opened 2 weeks ago
@mchades it's best to block and report the user first, then delete those comments.
@justinmclean OK. Can you explain in detail how to block and report?
I've already blocked and reported them to GitHub, and they reacted fairly quickly to the previous user I reported. You can do this by clicking on their name and following the "Block or report" link under their picture. There's a lot of this going around, and it seems a lot of people's accounts have been hacked.
@justinmclean Take a look at the comments above, it seems like the block didn't work?
I assume GitHub needs to look at each one, and the block is not automatic. I've also informed ASF Infra about the issue.
@heziyi399
Please provide more detail about the problem, for example, the result of `hdfsSiteMap.getOrDefault("fs.defaultFS", "")`?
@yuqi1129 the result of `hdfsSiteMap.getOrDefault("fs.defaultFS", "")` is `"hdfs://HDFS78000026"`
It seems that improper handling of the trailing '/' is what causes the problem: the schema name gets appended directly to a location that has no trailing slash.
@jerryshao Do we need to fix it in release 0.6.0?
We can fix it in 0.6.1, it's not urgent.
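The trailing-slash handling discussed above could be addressed with a join helper along these lines (a sketch only, not Gravitino's actual fix; the names `LocationUtil` and `join` are made up for illustration):

```java
// Hypothetical helper: join a base location and a child name with
// exactly one '/' between them, regardless of whether the base
// (e.g. the value of fs.defaultFS) already ends with a slash.
public final class LocationUtil {
    private LocationUtil() {}

    public static String join(String base, String child) {
        if (base.endsWith("/")) {
            return base + child;
        }
        return base + "/" + child;
    }

    public static void main(String[] args) {
        // With no trailing slash, naive concatenation would produce
        // "hdfs://HDFS78000026schema"; join() inserts the separator.
        System.out.println(LocationUtil.join("hdfs://HDFS78000026", "schema"));
        System.out.println(LocationUtil.join("hdfs://HDFS78000026/", "schema"));
    }
}
```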
Version
0.5.1
Describe what's wrong
I want to create a catalog and schema for the fileset type. The code is as follows:
```java
@PostMapping("/addCatalogAndSchema")
public void addCatalogAndSchema(@RequestBody FilesetRequest request) {
    NameIdentifier nameIdentifier = NameIdentifier.of("tbds", request.getName());
    Map<String, String> properties =
        ImmutableMap.<String, String>builder()
            .put("location", hdfsSiteMap.getOrDefault("fs.defaultFS", ""))
            .build();
    Catalog catalog =
        GravitinoClient.builder(getGravitinoUrl()).withMetalake(metaLakeName).build()
            .createCatalog(
                nameIdentifier,
                Catalog.Type.FILESET,
                "hadoop",
                request.getName(),
                properties);
    SupportsSchemas supportsSchemas = catalog.asSchemas();
    Map<String, String> schemaProperties =
        ImmutableMap.<String, String>builder()
            .put("location", hdfsSiteMap.getOrDefault("fs.defaultFS", ""))
            .build();
    Schema schema1 =
        supportsSchemas.createSchema(
            NameIdentifier.of("tbds", request.getName(), request.getSchemaName()),
            "This is a schema",
            schemaProperties);
}
```
Error message and/or stacktrace
[Exception] ErrorCode -> FailedOperation, RequestURI -> /datamanager/apiv3/addCatalogAndSchema,errorMsg->Failed to operate schema(s) [schema] operation [CREATE] under catalog [filesetCatalog], reason [Relative path in absolute URI: hdfs://HDFS78000026schema]
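Until a server-side fix ships, one possible client-side workaround (my assumption, not an official recommendation) is to make sure the `location` value ends with a `/` before passing it as a catalog or schema property; `normalize` below is a hypothetical helper:

```java
import java.util.Map;

public class LocationWorkaround {
    // Append a trailing '/' to the location if it is missing, so that
    // appending a schema name later cannot fuse it onto the authority
    // (e.g. "hdfs://HDFS78000026" + "schema" -> "hdfs://HDFS78000026schema").
    public static String normalize(String defaultFs) {
        return defaultFs.endsWith("/") ? defaultFs : defaultFs + "/";
    }

    public static void main(String[] args) {
        // Stand-in for the hdfsSiteMap used in the handler above.
        Map<String, String> hdfsSiteMap = Map.of("fs.defaultFS", "hdfs://HDFS78000026");
        String location = normalize(hdfsSiteMap.getOrDefault("fs.defaultFS", ""));
        System.out.println(location);
    }
}
```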
How to reproduce
0.5.1
Additional context
No response