-
### Tasks:
Create notebooks in Databricks to import the CSV data from the Landing layer.
Develop pipelines that move the data from the Landing layer to the Bronze layer.
Ensure that the data…
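A minimal PySpark sketch of the first two tasks, assuming a Databricks workspace where the Bronze layer is stored as Delta; the paths and options below are hypothetical placeholders, not taken from the original:

```python
# Sketch: Landing (CSV) -> Bronze (Delta) on Databricks.
# All paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided by Databricks

landing_path = "/mnt/landing/sales/*.csv"   # assumed location
bronze_path = "/mnt/bronze/sales"           # assumed location

# Import the CSV data from the Landing layer.
df = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv(landing_path)
)

# Move it to the Bronze layer, tagging each row with its load time.
(
    df.withColumn("_ingested_at", F.current_timestamp())
    .write.format("delta")
    .mode("append")
    .save(bronze_path)
)
```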
-
We currently use CSV and ORC rather than Parquet for our data lake objects.
In an ideal world we would probably migrate to Parquet, which would enable us to use this project, but that's currently a projec…
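For context, the per-table conversion such a migration would involve is short in Spark; this is a generic sketch with placeholder paths, not a description of our actual lake layout:

```python
# Generic sketch: rewrite an existing ORC (or CSV) table as Parquet.
# Paths are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.read.orc("s3://example-lake/raw/events_orc/")  # or spark.read.csv(...)
df.write.mode("overwrite").parquet("s3://example-lake/raw/events_parquet/")
```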
-
We have several data packages within our Data Lake -- a mix that includes packages created with manifest files pointing to individual files in S3:
```
{
"dataStore": [
{
"i…
-
## What intermediate & mart models support purchase reconciliation?
Here is a first pass at which dbt models will support purchase reconciliation.
### Mart Models
- mart__mitxonline__purchase_recon…
-
I am having a memory issue when running things. Everything works except that training on bigger data crashes the Jupyter notebook kernel.
System: Desktop
```
Ubuntu 22.04 on WSL2
Host: Windows …
```
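Not a fix for the kernel itself, but a common workaround when a training set no longer fits in WSL2's memory budget is to stream the data in chunks and train an incrementally fittable model; a minimal sketch, assuming tabular CSV data with a hypothetical `label` column:

```python
# Sketch: incremental training so the full dataset never sits in RAM at once.
# File path, column names, and chunk size are hypothetical.
import pandas as pd
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
classes = [0, 1]  # assumed label set; partial_fit needs it on the first call

for chunk in pd.read_csv("train.csv", chunksize=100_000):
    X = chunk.drop(columns=["label"]).to_numpy()
    y = chunk["label"].to_numpy()
    model.partial_fit(X, y, classes=classes)
```

Separately, it can be worth checking WSL2's own RAM ceiling, which defaults to a fraction of host memory and can be raised via the `memory=` setting under `[wsl2]` in `.wslconfig` on the Windows side.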
-
### Is your feature request related to a problem? Please describe.
_No response_
### Describe the solution you'd like
_No response_
### Describe alternatives you've considered
_No response_
### …
-
Is there any way to read the Parquet-converted records before writing them to a Parquet file?
My requirement is to create a Parquet file directly in Azure Data Lake without storing it locally.
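One way to do both, sketched below under the assumption that the records are already in a pandas DataFrame and that azure-storage-file-datalake is the upload client (account, filesystem, and path names are placeholders): build the Arrow table in memory, inspect it, then serialize it to an in-memory buffer and upload that buffer, so nothing touches local disk.

```python
# Sketch: inspect Parquet-bound records in memory, then upload straight to ADLS.
# Account URL, credential, filesystem, and path are hypothetical placeholders.
import io

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
from azure.storage.filedatalake import DataLakeServiceClient

df = pd.DataFrame({"id": [1, 2], "amount": [9.5, 3.2]})  # example records

# "Read" the converted records before writing: the Arrow table holds the
# exact content that will be serialized to Parquet.
table = pa.Table.from_pandas(df)
print(table.schema)
print(table.to_pylist())

# Serialize to an in-memory buffer instead of a local file.
buf = io.BytesIO()
pq.write_table(table, buf)

# Upload the buffer directly to Azure Data Lake Storage Gen2.
service = DataLakeServiceClient(
    account_url="https://<account>.dfs.core.windows.net",
    credential="<credential>",
)
file_client = service.get_file_system_client("my-filesystem").get_file_client(
    "landing/records.parquet"
)
file_client.upload_data(buf.getvalue(), overwrite=True)
```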
-
We will need to pass at least P-E (precipitation minus evapotranspiration) to the coupler for the dynamic lake work with mizuRoute.
On the mizuRoute side, this is:
https://github.com/NCAR/mizu…
-
Hi,
I am having problems with automatic schema evolution for merges on Delta tables.
I have a Delta table in my data lake with around 330 columns (the target table), and I want to u…
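For reference, the pattern I would expect to work is below; a minimal sketch assuming PySpark with the delta-spark package, a hypothetical merge key `id`, and that schema evolution is enabled on the session conf before the merge runs:

```python
# Sketch: MERGE with automatic schema evolution on a Delta table.
# Table path, source path, and merge key are hypothetical placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Allow new source columns to be added to the target schema during MERGE.
spark.conf.set("spark.databricks.delta.schema.autoMerge.enabled", "true")

target = DeltaTable.forPath(spark, "/mnt/lake/my_target_table")
updates = spark.read.parquet("/mnt/lake/staging/updates")  # assumed source

(
    target.alias("t")
    .merge(updates.alias("s"), "t.id = s.id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```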