Closed sagarlakshmipathy closed 1 month ago
This link says you could register the table using `name` and `metadata-location` in the request, in which `name` would be passed down by the `tableName` from the `config.yaml` (I think?), and I'm not sure what the value for `metadata-location` should be, because it gets created based on the current sync status (the version numbers could change).
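For reference, a minimal sketch of what a register-table request body looks like under the Iceberg REST catalog spec (the endpoint is `POST /v1/{prefix}/namespaces/{namespace}/register`). The table name and metadata path below are hypothetical placeholders, not values from this issue:

```python
import json

# Hedged sketch of an Iceberg REST catalog "register table" request body.
# The name and path below are hypothetical examples:
payload = {
    # "name" would come from the tableName configured in config.yaml
    "name": "my_table",
    # "metadata-location" must point at one concrete metadata JSON file,
    # e.g. the latest one produced by the sync:
    "metadata-location": "s3://bucket/warehouse/db/my_table/metadata/v3.metadata.json",
}
print(json.dumps(payload, indent=2))
```

The open question in the comment above is exactly which versioned file to put in `metadata-location`, since each sync commit produces a new one.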
The metadata location is the location of the Iceberg metadata (`/metadata/` in most cases).
@sagarlakshmipathy the Polaris catalog does not fully support the Iceberg spec yet, so you will have to wait until they allow these user-provided arguments: https://github.com/apache/polaris/blob/main/polaris-service/src/main/java/org/apache/polaris/service/catalog/BasePolarisCatalog.java#L2002
@the-other-tim-brown thanks for the link, I was having a hard time finding where that error was coming from the other day!
This doesn't seem to be XTable related; closing this.
Search before asking
Please describe the bug 🐞
I ran into an issue while using Snowflake's Polaris catalog. Documenting here.
Error
The sync did not complete at this point, meaning the table gets created in the target format in the catalog but doesn't have any data in it.
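A "table exists but has no data" state is usually visible in the registered Iceberg metadata itself: a table with no snapshots (or a `current-snapshot-id` of -1) has no committed data files. A hedged check by parsing the metadata JSON (the excerpt below is a hypothetical minimal example, not the actual metadata from this issue):

```python
import json

def table_has_data(metadata_json: str) -> bool:
    """Heuristic: an Iceberg table whose metadata has no snapshots,
    or whose current-snapshot-id is -1, has no committed data."""
    meta = json.loads(metadata_json)
    if meta.get("current-snapshot-id", -1) == -1:
        return False
    return bool(meta.get("snapshots"))

# Hypothetical metadata excerpt for an empty table:
empty = '{"format-version": 2, "current-snapshot-id": -1, "snapshots": []}'
print(table_has_data(empty))  # → False
```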
config.yaml
catalog.yaml
I could access the table using spark-shell with the command, so the table is definitely created. I could also create a table directly from the spark shell if needed. So Spark writes work directly from outside Snowflake; something is wrong with the catalog sync for an existing table.
directly creating table using spark
Are you willing to submit PR?
Code of Conduct