-
Hello Dr. Fetroe, I hope this message reaches you. I was reading your PhD thesis, and it mentions several contributions of yours, such as StrandMaker, Skybox Generator, Pangen, IM2SKY, and SKYLSH. Are…
-
```
/usr/local/lib/python3.11/site-packages/camply/providers/going_to_camp/going_to_camp_provider.py:522 in list_site_availability …
```
-
My current requirement is to build the following data pipeline:
PostgreSQL (source)
Airbyte
MinIO S3 storage (destination)
Apache Spark configured with MinIO and Delta Lake formatting, since spa…
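A minimal sketch of the Spark end of that pipeline, assuming the delta-spark and hadoop-aws packages are on the classpath and that the endpoint, credentials, bucket names, and table paths below are placeholders:

```
from pyspark.sql import SparkSession

# Sketch: Spark session wired to MinIO (via the S3A connector) and Delta Lake.
# Endpoint, credentials, and buckets are placeholders -- substitute your own.
spark = (
    SparkSession.builder.appName("airbyte-minio-delta")
    # Delta Lake support
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    # MinIO via the S3A filesystem
    .config("spark.hadoop.fs.s3a.endpoint", "http://minio:9000")
    .config("spark.hadoop.fs.s3a.access.key", "minio-access-key")
    .config("spark.hadoop.fs.s3a.secret.key", "minio-secret-key")
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    .getOrCreate()
)

# Read what Airbyte landed in the bucket and rewrite it as a Delta table.
raw = spark.read.parquet("s3a://airbyte-landing/public/my_table/")
raw.write.format("delta").mode("overwrite").save("s3a://lakehouse/my_table/")
```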
-
Hello,
I deployed cdmutil as an Azure Function and everything is working fine. Views are created in the Synapse Serverless pool, I can query data and enjoy life.
Anyhow, I want to transfer data from D3…
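The message is cut off before it names the destination, but whichever it is, the views cdmutil creates can be read like any other SQL Server endpoint. A minimal sketch, where the workspace endpoint, database, and view name are placeholders:

```
import pyodbc

# Sketch: pull rows out of a view created in the Synapse serverless pool.
# Server, database, and view names are placeholders -- substitute your own.
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=my-workspace-ondemand.sql.azuresynapse.net;"
    "Database=cdm_db;"
    "Authentication=ActiveDirectoryInteractive;"
)
cursor = conn.cursor()
cursor.execute("SELECT TOP 10 * FROM dbo.account")
for row in cursor.fetchall():
    print(row)
```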
-
Hi there!
I'm trying to write a basic CSV to our Azure Data Lake using Node-RED. I do a SQL read, then transform the data into CSV, and use a function to convert it into the correct format.
So …
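For comparison with the Node-RED flow, the same upload is only a few lines against the azure-storage-file-datalake SDK; a sketch, with the account URL, credential, container, and file path all placeholders:

```
from azure.storage.filedatalake import DataLakeServiceClient

# Sketch: upload an in-memory CSV string to ADLS Gen2.
# Account URL, credential, container, and path are placeholders.
service = DataLakeServiceClient(
    account_url="https://myaccount.dfs.core.windows.net",
    credential="my-account-key",
)
file_client = service.get_file_system_client("mycontainer").get_file_client(
    "exports/report.csv"
)
csv_body = "id,name\n1,alpha\n2,beta\n"
file_client.upload_data(csv_body, overwrite=True)
```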
-
Correctly resolving time travel is an important part of Delta Lake. One way to test that is to add extra Parquet files to the expected data; ones that aren't part of the current version could have files …
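For reference, a minimal sketch of what such a test exercises, assuming a local table path: two versions are written, and the version-0 read must ignore the newer Parquet files sitting in the same directory.

```
from pyspark.sql import SparkSession

# Sketch: write two versions of a Delta table, then read the old one back.
spark = (
    SparkSession.builder.appName("delta-time-travel")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

path = "/tmp/tt_demo"
spark.range(3).write.format("delta").mode("overwrite").save(path)   # version 0
spark.range(10).write.format("delta").mode("overwrite").save(path)  # version 1

# Time travel: only the files referenced by version 0's log entry are read,
# even though version 1's Parquet files also sit in the directory.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
assert v0.count() == 3
```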
-
All paths to the FHIR service, Service Bus queues, and data lake containers should be configured in Function App settings instead of hard-coded. The functions can read the app settings configuration using en…
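In a Python Function App, app settings surface as environment variables at runtime; a minimal sketch, where the setting names below are hypothetical rather than ones the project defines:

```
import os

# Sketch: read endpoints from Function App settings rather than hard-coding.
# The setting names below are illustrative, not defined by the project.
FHIR_SERVICE_URL = os.environ["FHIR_SERVICE_URL"]
SERVICE_BUS_QUEUE = os.environ["SERVICE_BUS_QUEUE"]
DATA_LAKE_CONTAINER = os.environ.get("DATA_LAKE_CONTAINER", "bundles")
```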
-
**Is your feature request related to a problem? Please describe.**
Currently, cubestore supports external buckets with the CSV file format. Could it support the Parquet format directly?
**Describe the solution y…
-
We currently use CSV and ORC rather than Parquet for our data lake objects.
In an ideal world we would probably migrate to Parquet, which would enable us to use this project, but that's currently a projec…
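Should that migration ever happen, the mechanical part is small; a sketch in Spark, with the paths as placeholders for wherever the lake objects actually live:

```
from pyspark.sql import SparkSession

# Sketch: rewrite existing ORC and CSV lake objects as Parquet.
# Paths are placeholders; object-store connector config is omitted.
spark = SparkSession.builder.appName("orc-to-parquet").getOrCreate()

orc_df = spark.read.orc("/lake/raw/events_orc/")
orc_df.write.mode("overwrite").parquet("/lake/raw/events_parquet/")

csv_df = spark.read.option("header", True).csv("/lake/raw/clicks_csv/")
csv_df.write.mode("overwrite").parquet("/lake/raw/clicks_parquet/")
```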
-