-
**Description:**
SharpHound GPOLocalGroup collector data is not being ingested by BloodHound.
**Component(s) Affected:**
- [ ] UI
- [ ] API
- [ ] Neo4j
- [ ] PostgreSQL
- [ ] Data Collect…
-
## Issue
Ingest issues occurred; one of the affected jobs is scrape_pdfs. CronJob log record: https://goadmin.ifrc.org/api/cronjob/24915.
## Expected behaviour
The python script runs once a day a…
-
```
llama_model_load_internal: ggml ctx size = 0.07 MB
llama_model_load_internal: mem required = 5407.71 MB (+ 1026.00 MB per state)
llama_new_context_with_model: kv self size = 1024.00 MB
AVX = 1…
```
-
```
Traceback (most recent call last):
[...]
  File "./dump_scielo.py", line 73, in run_article_ids
    for ident in cl.documents_by_identifiers(only_identifiers=True):
  File "/home/bnewbold/scrat…
```
-
From the title: **how do I remove the PDF, and where is it located** inside the privateGPT directory?
-
**Description:**
While trying to ingest JSONs collected via SharpHound 2.4.1 using the web UI, the ingestion fails, with the backend reporting the following error: "Error reading ingest file /opt/blo…
-
I would like to give you guys an industry project for a startup company: https://www.ediligence.ai/
The project involves developing an interface and using ML to create a parser that ingests industry-…
-
This issue tracks data quality and the parsing of metadata documents. If there are problems processing certain documents, do let us know and provide an example document that we can test with.
1. Typ…
-
I work at DBT and have been improving an ETL pipeline we have for gov.uk content, based on parameters the department needs. I'd like to configure it so it ingests and overwrites data that's changed rat…
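One common way to implement the "overwrite only what changed" behaviour is to hash each record's content and upsert only when the hash differs from what is already stored. A minimal sketch, assuming an in-memory store keyed by URL (all names here are hypothetical, not the actual DBT pipeline):

```python
import hashlib
import json


def content_hash(record: dict) -> str:
    """Stable hash of a record's content, independent of key order."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()


def upsert_changed(store: dict, records: list, key: str = "url") -> list:
    """Insert new records and overwrite only those whose content changed.

    `store` maps each record's key to a (hash, record) pair.
    Returns the keys that were actually written.
    """
    written = []
    for record in records:
        k = record[key]
        h = content_hash(record)
        existing = store.get(k)
        if existing is None or existing[0] != h:
            store[k] = (h, record)
            written.append(k)
    return written
```

Running the same batch twice writes nothing the second time, so only genuinely changed pages would be re-ingested downstream.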
-
[Sonic](https://github.com/valeriansaliou/sonic) is a fast, lightweight and schema-less search backend. It ingests search texts and identifier tuples that can then be queried against in a microsecond'…
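To illustrate the ingestion model described above — text pushed alongside an identifier, then queried by term — here is a toy in-memory inverted index. This is a conceptual sketch only, not Sonic's actual wire protocol or client API:

```python
from collections import defaultdict


class TinyIndex:
    """Toy inverted index mimicking the push/query split of a search sidecar."""

    def __init__(self):
        # term -> set of document identifiers containing that term
        self._index = defaultdict(set)

    def push(self, identifier: str, text: str) -> None:
        """Ingest a (text, identifier) tuple by indexing each term."""
        for term in text.lower().split():
            self._index[term].add(identifier)

    def query(self, terms: str) -> set:
        """Return identifiers matching every term in the query."""
        sets = [self._index.get(t, set()) for t in terms.lower().split()]
        if not sets:
            return set()
        return set.intersection(*sets)
```

In the real thing, the index lives in the Sonic server and clients talk to it over dedicated ingest and search channels; only identifiers come back from a query, and the caller resolves them to full documents in its own datastore.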