djouallah opened this issue 1 year ago
Did some digging on this; it's likely we'll support `abfs://...` paths before the lakehouse file API (`/lakehouse/...`). There are some challenges around unimplemented file system operations with blobfuse.
Notes for impl:
- `object_store`: need to explicitly close (drop) the file before calls to `std::fs::rename`, otherwise the metadata is not flushed in time for the rename. I believe this is actually a bug in blobfuse, since the metadata should be flushed on file create, but isn't.
- `copy_if_not_exists` just fails. Not sure what to do here yet.

As a vote of confidence: a OneLake destination in GlareDB would make me choose this over Fabric any day. Power BI is great, and the concept of OneLake to empower Power BI is great. Fabric, not so much.
That's fine, you don't need to like the other Fabric engines. OneLake is neutral and works with any engine as long as it understands Delta tables.
Exactly. I'm currently working with Databricks and having Unity Catalog in OneLake. The only remaining issue is how Unity writes a table name vs. how OneLake prefers to see it.
Any update on this? I presume it should be easy now, as it is supported by delta-rs.
We've made some changes to how we plumb stuff through to delta-rs, but I haven't yet tested whether this all works with Fabric (either via `abfs://...` or through the filesystem API). We'll be checking on this over the next couple of days, and I'll follow up with an update.
Sounds great. Looking forward to it.
Any update? I see that you are now using the latest version of arrow-rs. Basically, we need something like this:
```python
write_deltalake(
    "abfss://Delta_Table@onelake.dfs.fabric.microsoft.com/Delta_Table.Lakehouse/Tables/fruit",
    df,
    storage_options={"bearer_token": aadToken, "use_fabric_endpoint": "true"},
)
```
Trying this code...
I think you need the latest version of arrow-rs to make it work: https://github.com/apache/arrow-rs/pull/4573