-
I'm looking into the AIND format option for inputting data into the pipeline. The README states:
> aind: data ingestion used at AIND. The input folder must contain an ecephys subfolder which in turn…
-
* Data analysis flow to metadata
![Image](https://github.com/user-attachments/assets/ff16ca3c-76eb-4290-8d17-3060accccd1e)
* Data analysis
![Image](https://github.com/user-attachments/assets/7e6ce…
-
Improve the data pipeline so that it can ingest raw TIDES data from external sources, parse it, and transform it into mart tables for use in various analyses.
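The ingest → parse → transform flow above could be sketched roughly as below. This is only an illustrative outline: the TIDES record layout (a CSV with `day` and `value` columns), the mart table name `mart_daily`, and the aggregation are all assumptions, not the actual format.

```python
# Hypothetical sketch of ingest -> parse -> transform into a mart table.
# The "TIDES" CSV layout and the mart_daily schema are assumed for illustration.
import csv
import io
import sqlite3


def ingest(raw_text: str) -> list[dict]:
    """Parse a raw CSV-like TIDES export into records (format assumed)."""
    return list(csv.DictReader(io.StringIO(raw_text)))


def transform_to_mart(records: list[dict], conn: sqlite3.Connection) -> None:
    """Aggregate parsed records into a simple daily mart table."""
    conn.execute("CREATE TABLE IF NOT EXISTS mart_daily (day TEXT, total REAL)")
    totals: dict[str, float] = {}
    for r in records:
        totals[r["day"]] = totals.get(r["day"], 0.0) + float(r["value"])
    conn.executemany("INSERT INTO mart_daily VALUES (?, ?)", totals.items())


conn = sqlite3.connect(":memory:")
raw = "day,value\n2023-01-01,2.5\n2023-01-01,1.5\n2023-01-02,4.0\n"
transform_to_mart(ingest(raw), conn)
print(conn.execute("SELECT * FROM mart_daily ORDER BY day").fetchall())
# -> [('2023-01-01', 4.0), ('2023-01-02', 4.0)]
```

In a real pipeline each stage would be a separate job writing to staging tables, but the same ingest/parse/transform separation applies.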
-
### Which OpenObserve functionalities are relevant/related to the feature request?
other
### Description
In the current UI, the label “Ingestion” is being used to represent data sources. To make th…
-
### User Story
As a user of HubSpot, I want the data in our HubSpot instance ingested into the Data Platform so I can use it for queries. I want to be able to preserve that data if we ever decide to c…
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Environment
```markdown
- Milvus version: v2.4.12
- Deployment mode(standalone or cluster):standalone
- MQ t…
-
# Overview for the next few tasks
- Ingest all 2023 data sources relevant to the banking dashboard project
- Run the 2023 data through the 2022 data processing pipeline
- Inspect the process / re…
-
We would love to be able to ingest data continuously, rather than running it manually on a daily or weekly basis.
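One minimal way to replace a manual daily run is a fixed-interval polling loop, sketched below under assumptions: `fetch_new_records` stands in for whatever source-specific pull the pipeline uses, and `max_cycles` exists only so the loop can be exercised in a test. A production setup would more likely use a scheduler (cron, Airflow, etc.) than a bare loop.

```python
# Hypothetical polling-loop sketch for continuous ingestion.
# fetch_new_records() is a placeholder; the real pull is source-specific.
import time


def fetch_new_records() -> list[dict]:
    """Placeholder for the source-specific pull (API poll, file drop, etc.)."""
    return []


def run_continuous_ingest(interval_seconds: float = 300, max_cycles=None) -> int:
    """Poll the source at a fixed interval instead of a manual daily run.

    max_cycles bounds the loop for testing; a deployment would pass None.
    Returns the number of completed poll cycles.
    """
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        records = fetch_new_records()
        if records:
            # Hand the new records to the existing ingest pipeline here.
            pass
        cycles += 1
        if max_cycles is None or cycles < max_cycles:
            time.sleep(interval_seconds)
    return cycles
```

The interval of 300 seconds is arbitrary; the right cadence depends on how fresh the downstream analyses need to be.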
-
In my current setup, there are some steps between ingesting data and producing the final data product table.
For example, I don't treat intermediate tables as data products. Instead, I created data…
-
As soon as we have pan-UKBB LD, the most logical step is to ingest pan-UKBB GWAS for the respective ancestries.
https://pan.ukbb.broadinstitute.org/downloads
It includes ingestion and EFO mapping.