-
Instead of using an AAD service principal, it would be much easier to simply use `mssparkutils.credentials` to retrieve a token for the currently logged-in user.
You can replace these lines https://git…
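For example, a minimal sketch of what that swap could look like in a Fabric/Synapse notebook (the `"storage"` audience key is an assumption; use whichever audience the target API expects):

```python
# Sketch: get a token for the currently logged-in user via mssparkutils,
# instead of authenticating an AAD service principal.
from notebookutils import mssparkutils

# The audience key ("storage") is a placeholder; other audiences
# (e.g. Power BI / Key Vault) use different keys -- check the docs.
token = mssparkutils.credentials.getToken("storage")
headers = {"Authorization": f"Bearer {token}"}
# ...use `headers` wherever the service-principal token was used before.
```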
-
**Describe the bug**
The current `lakehouse.get_lakehouse_tables` function only supports reading tables from lakehouses that do not have schemas enabled.
If the lakehouse has a schema enabled, the API `/workspaces…
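A possible interim workaround sketch, assuming schema-enabled lakehouses lay tables out as `Tables/<schema>/<table>` under the lakehouse root (the ABFSS path below is a placeholder):

```python
# Enumerate tables in a schema-enabled lakehouse by walking the Tables/
# folder directly, instead of relying on the tables API.
from notebookutils import mssparkutils

lakehouse_root = "abfss://<workspace>@onelake.dfs.fabric.microsoft.com/<lakehouse>.Lakehouse"

tables = []
for schema in mssparkutils.fs.ls(f"{lakehouse_root}/Tables"):
    for table in mssparkutils.fs.ls(schema.path):
        tables.append({"schema": schema.name, "table": table.name, "location": table.path})
```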
-
### Discussed in https://github.com/apache/doris/discussions/16000
Originally posted by **tiger-hcx** January 17, 2023
The Doris version is 1.2.1.
Scenario: Hudi data is written and its metadata is synced to Hive; a catalog is then created in Doris and the Hudi data is queried…
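For reference, a rough sketch of the Doris side of this setup, driven from Python over the MySQL protocol (host, port, and table names are placeholders; the catalog properties follow the Doris 1.2 multi-catalog syntax for a Hive metastore):

```python
# Create a Hive-metastore-backed catalog in Doris and query the Hudi table
# whose metadata was synced to Hive. All connection values are placeholders.
import pymysql

conn = pymysql.connect(host="doris-fe-host", port=9030, user="root", password="")
with conn.cursor() as cur:
    cur.execute("""
        CREATE CATALOG IF NOT EXISTS hudi_hive PROPERTIES (
            'type' = 'hms',
            'hive.metastore.uris' = 'thrift://hive-metastore-host:9083'
        )
    """)
    cur.execute("SELECT * FROM hudi_hive.my_db.my_hudi_table LIMIT 10")
    for row in cur.fetchall():
        print(row)
conn.close()
```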
-
Hi Bert,
Hope you're well, and thanks for creating this solution; it has helped me and my team immensely.
We used the Fabric Lakehouse version from your solution to pull data/tables from BC. I …
-
Create the required assets for the parking sensor ingestion process (a minimal creation sketch follows the list below). These include, but are not limited to:
- Fabric lakehouse
- Delta Lake schemas and tables
- Configuration folders
- Fabric environment (should…
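As referenced above, a minimal sketch of creating some of those assets from a Fabric notebook (the schema, table, and folder names are placeholders, not the ones the actual solution uses):

```python
# Create a Delta schema, a Delta table, and a configuration folder
# inside the attached Fabric lakehouse.
from pyspark.sql import SparkSession
from notebookutils import mssparkutils

spark = SparkSession.builder.getOrCreate()

spark.sql("CREATE SCHEMA IF NOT EXISTS parking")
spark.sql("""
    CREATE TABLE IF NOT EXISTS parking.sensor_readings (
        sensor_id  STRING,
        status     STRING,
        reading_ts TIMESTAMP
    ) USING DELTA
""")

# Configuration folders live under Files/ in the lakehouse.
mssparkutils.fs.mkdirs("Files/config")
```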
-
- Time: 1 week
- Tools Required: Azure SQL Database
- Steps:
  1. Provision Azure SQL Database.
     - Follow Microsoft Azure’s provisioning guide to set up the Azure SQL Database instance.
…
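Once the instance is provisioned, a quick connectivity check could look like the sketch below (server, database, and credential values are placeholders; requires the pyodbc package and ODBC Driver 18 for SQL Server):

```python
# Verify the newly provisioned Azure SQL Database is reachable.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<server-name>.database.windows.net;"
    "DATABASE=<database-name>;"
    "UID=<sql-admin-user>;PWD=<password>;"
    "Encrypt=yes;TrustServerCertificate=no;"
)
print(conn.execute("SELECT @@VERSION").fetchone()[0])
conn.close()
```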
-
First, congratulations on the progress you have made; chDB is substantially better than it was just 6 months ago. I am trying to read a folder of CSV files and export it to Delta. Currently I am using df = sess.sql(sql,"Ar…
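One possible route, sketched below under a couple of assumptions (the `ArrowStream` output format and the `.bytes()` accessor on the chDB result may differ between chdb versions), is to pull the query result out of chDB as Arrow and hand it to the `deltalake` writer:

```python
# Read a folder of CSVs with chDB, then write the result as a Delta table.
import pyarrow as pa
from chdb import session as chs
from deltalake import write_deltalake

sess = chs.Session()
res = sess.sql("SELECT * FROM file('data/*.csv', 'CSVWithNames')", "ArrowStream")

# Parse the Arrow IPC stream into a pyarrow Table, then write it as Delta.
table = pa.ipc.open_stream(pa.BufferReader(res.bytes())).read_all()
write_deltalake("out/delta_table", table, mode="overwrite")
```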
-
Hello,
Recently, pyiceberg 0.6.0 was released, which allows writing Iceberg tables without needing tools like Spark or Trino.
I was about to write a custom plugin to implement the wr…
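For context, a minimal sketch of what a pyiceberg-only write looks like (the catalog name, connection properties, and table identifier are placeholders for whatever catalog is configured):

```python
# Append data to an Iceberg table with pyiceberg >= 0.6.0, no Spark or Trino.
import pyarrow as pa
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "default",
    **{
        "uri": "http://localhost:8181",   # e.g. a REST catalog endpoint
        "warehouse": "s3://warehouse/",
    },
)

table = catalog.load_table("db.events")

# append() takes a pyarrow Table; its schema must line up with the Iceberg table.
batch = pa.table({"id": [1, 2, 3], "name": ["a", "b", "c"]})
table.append(batch)
```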
-
**Describe the problem you faced**
We use Flink to ingest data into Hudi and the Hive Sync Tool to sync metadata to our Hive metastore. A field was declared as `Timestamp` type when writing to Hudi b…
-
### Describe the subtask
Support a `PaimonCatalog` implementation to provide a Gravitino catalog for Paimon table operation management.
### Parent issue
#1129