-
I have a Parquet file in a Fabric lakehouse that I would like to map in DuckDB. It works well using the DFS URL
`az://onelake.dfs.fabric.microsoft.com/xxx/xxxx/Files/xxxx.parquet` (just copy-pasting…
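For reference, the OneLake DFS URL above follows the layout `az://onelake.dfs.fabric.microsoft.com/<workspace>/<lakehouse>/Files/<path>`. A minimal sketch of taking such a URL apart and building the DuckDB query string for it (the URL and names here are hypothetical, and the SQL is shown as a string rather than executed, since actually reading it requires the DuckDB `azure` extension and OneLake credentials):

```python
from urllib.parse import urlparse

# Hypothetical URL following the OneLake layout:
# az://onelake.dfs.fabric.microsoft.com/<workspace>/<lakehouse>/Files/<path>
url = "az://onelake.dfs.fabric.microsoft.com/myworkspace/mylakehouse/Files/sales.parquet"

parsed = urlparse(url)
# First two path segments are the workspace and the lakehouse item;
# everything after that is the file path inside the item.
workspace, lakehouse, *rest = parsed.path.lstrip("/").split("/")
file_path = "/".join(rest)

# The SQL one would run in a DuckDB session with the azure extension loaded.
sql = f"SELECT * FROM read_parquet('{url}');"

print(parsed.netloc)                     # onelake.dfs.fabric.microsoft.com
print(workspace, lakehouse, file_path)   # myworkspace mylakehouse Files/sales.parquet
```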
-
Hi,
I wonder if it could be an idea to allow the user to define the folders in which the files from BC are placed in the receiving Fabric lakehouse?
As it is now, the files will be placed in the roo…
-
### Describe the bug
Where is the notebook that the screenshots for this module were taken from? https://learn.microsoft.com/en-us/training/modules/use-apache-spark-work-files-lakehouse/3-spark-code
### What happen…
-
For middling data volumes, AWS Athena is an excellent low-cost warehouse/lakehouse that works with open table formats like Iceberg, Hudi, and Delta Lake.
Would definitely get us to try quarry if Athena was su…
-
Hello,
On a new project, I'm testing whether I can use this adapter for my workloads, but when I try to just debug the project, I hit a snag:
`Traceback (most recent call last):
File "/home/unix/d…
-
[Release Note 1.2.1](https://github.com/apache/doris/issues/15508)
## New Features
- Multi-Catalog
- Support automatic synchronization of Hive metastore
- https://doris.apache.org/docs/d…
-
Greetings,
The DBFS root is used for at least the lakehouse retail demo. The DBFS root is going away in favor of mounted blob storage:
https://kb.databricks.com/dbfs/dbfs-root-permissions
Cheers,
Joe
-
### Discussed in https://github.com/facebookincubator/velox/discussions/8994
Originally posted by **majetideepak** March 6, 2024
We should implement a TPCDS connector in Velox to generate dat…
-
### Is this a new bug in dbt-core?
- [X] I believe this is a new bug in dbt-core
- [X] I have searched the existing issues, and I could not find an existing issue for this bug
### Current Behav…