-
### Bug Description
I am trying to ingest data from a PDF file into Astra DB using Azure OpenAI embeddings, but whether I use LangFlow online from the DataStax domain or via docker-compose, the result…
-
**Describe the bug**
We are trying to automatically assign data products to datasets and their containers during ingestion from S3. I have included the format of our transformer below:
**To Reprodu…
-
-
Test the Transfer DAG endpoint to be used in GitHub ingestion automation: https://github.com/NASA-IMPACT/veda-data/issues/180
Example from event bridge: https://github.com/NASA-IMPACT/veda-data-airflow/b…
-
### Description
Update the `aws_cloudtrail_event_data_store` resource to add support for stopping ingestion of new CloudTrail events into the data store.
### Affected Resource(s) and/or Data Source(s)
* aws…
-
# Data Ingestion in watsonx.data - 호롤리한 하루
Overview: This post walks through creating schemas and tables in watsonx.data and loading data into them.
[https://gruuuuu.github.io/ibm/data-ingestion-wxdata/](https://gruuuuu.github.io/ibm…
-
## Description
The NO2 (#89) and Geoglam (#167, #173) datasets require monthly ingestion as new assets are created. This is currently a manual process but should be automated. `veda-data-airflow` ha…
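The monthly automation described above could be sketched, scheduler-agnostically, along the lines below. Both helpers and the `'<collection>/YYYY/MM/'` prefix layout are assumptions for illustration, not the actual `veda-data-airflow` conventions; in practice this logic would live in a scheduled DAG.

```python
from datetime import date

def monthly_asset_prefix(run_date: date, collection: str) -> str:
    """Build the per-month asset prefix an ingestion run would scan.

    Sketch only: real prefixes depend on how each collection lays out
    its assets; here we assume '<collection>/YYYY/MM/'.
    """
    return f"{collection}/{run_date:%Y/%m}/"

def is_new_month(last_run: date, today: date) -> bool:
    """True when a calendar-month boundary has passed since the last run,
    i.e. a fresh month of assets may exist and ingestion should fire."""
    return (today.year, today.month) > (last_run.year, last_run.month)
```

For example, `monthly_asset_prefix(date(2024, 3, 5), "no2-monthly")` yields `"no2-monthly/2024/03/"`, and `is_new_month` gates the run so it triggers once per month rather than on every scheduler tick.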
-
If you log a type other than a number for token counts, the span ends up getting dropped:
```
Traceback (most recent call last):
  File "/phoenix/env/phoenix/db/bulk_inserter.py", line 200, in _in…
```
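A defensive fix on the caller's side is to coerce token-count attributes to integers before logging them. The helper below is a hypothetical sketch (not Phoenix's API): it normalizes common cases and returns `None` for anything unusable, so the attribute can be omitted instead of sinking the whole span.

```python
from typing import Any, Optional

def coerce_token_count(value: Any) -> Optional[int]:
    """Coerce a token-count attribute to an int, or None when impossible.

    Hypothetical helper: coercing up front avoids handing the collector
    a string or None, which (per the traceback above) causes the span to
    be dropped during bulk insertion.
    """
    if isinstance(value, bool):  # bool is an int subclass; reject it explicitly
        return None
    if isinstance(value, int):
        return value
    if isinstance(value, float):
        return int(value)
    if isinstance(value, str):
        try:
            return int(float(value))
        except ValueError:
            return None
    return None
```

So `coerce_token_count("128")` gives `128`, while `coerce_token_count("n/a")` gives `None`, letting the caller skip the attribute rather than log a non-numeric value.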
-
Good morning, everybody! Before writing this post I saw https://github.com/open-metadata/OpenMetadata/issues/17751, and it looks like someone else ran into the same issue, but no one explained how…
-
Currently the controller and servers are able to start with an s3a path, but we face the following error while creating segments during ingestion. The reason is that while preparing file names we are prefixing the s3 …
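One common cause of this class of error is blindly concatenating the scheme prefix onto a path that already carries one, producing names like `s3a://s3a://bucket/…` that the filesystem layer rejects. A minimal sketch of the guard, with a hypothetical helper name:

```python
from urllib.parse import urlparse

def with_scheme(path: str, scheme: str = "s3a") -> str:
    """Prefix `scheme://` only when the path does not already carry a scheme.

    Hypothetical helper illustrating the fix: if the incoming path is
    already a fully qualified URI, return it untouched instead of
    double-prefixing it.
    """
    if urlparse(path).scheme:
        return path
    return f"{scheme}://{path.lstrip('/')}"
```

With this check, `with_scheme("s3a://bucket/segments/seg1")` is returned unchanged, while a bare `"bucket/segments/seg1"` gets the `s3a://` prefix exactly once.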