Open k2ai opened 4 weeks ago
Interested. I am looking for exactly this functionality: being able to save the indexed results to an actual database would be great both for storing the data and for optimizing queries.
I believe there should be a dedicated CLI command or API that handles all preprocessing tasks up to community building and stores the data in either a PostgreSQL graph database or Neo4j. Once preprocessing is complete, a second, distinct CLI command or API should create the final index.
It should also allow appending to an existing graph and deleting nodes and relationships as the underlying data changes. This would enable incremental processing and avoid reprocessing the entire dataset, which matters especially for large amounts of data.
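To make the append/delete idea above concrete, here is a minimal sketch of how incremental updates could be expressed as Cypher upserts and deletes (usable against Neo4j or any Cypher-speaking store). The node label `Entity` and the `title`/`description` properties are assumptions for illustration, not the actual parquet schema.

```python
# Sketch: generate idempotent Cypher statements so re-running a load
# appends/updates nodes instead of duplicating them, and removes nodes
# whose source rows disappeared. Property names are hypothetical.
from typing import Iterable


def upsert_entity_cypher(entity: dict) -> str:
    """MERGE upserts: creates the node if absent, updates it otherwise."""
    title = entity["title"].replace("'", "\\'")
    desc = entity.get("description", "").replace("'", "\\'")
    return (
        f"MERGE (e:Entity {{title: '{title}'}}) "
        f"SET e.description = '{desc}'"
    )


def delete_entities_cypher(titles: Iterable[str]) -> str:
    """DETACH DELETE removes the nodes together with their relationships."""
    quoted = ", ".join(f"'{t}'" for t in titles)
    return f"MATCH (e:Entity) WHERE e.title IN [{quoted}] DETACH DELETE e"
```

In a real adapter each statement would be executed through a driver session (with parameterized queries rather than string interpolation); the sketch only shows the shape of the incremental update.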
Do you need to file an issue?
Is your feature request related to a problem? Please describe.
Is there any development toward an adapter that indexes .parquet data into a PostgreSQL graph database (Apache AGE) and enables indexing within PostgreSQL for faster query performance?
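As a hedged sketch of what such an adapter might look like: Apache AGE exposes Cypher through its `cypher()` SQL function, so loading parquet rows amounts to emitting wrapped SQL statements. The graph name `graphrag`, the `title` column, and the idea of feeding rows from `pandas.read_parquet(...)` are assumptions for illustration only.

```python
# Sketch: wrap Cypher in Apache AGE's cypher() SQL interface and build
# one MERGE statement per entity row. In a real adapter the rows would
# come from e.g. pandas.read_parquet(...).to_dict("records").
from typing import Iterable, List


def age_cypher_sql(graph: str, cypher: str) -> str:
    """Wrap a Cypher statement in AGE's SQL function call."""
    return f"SELECT * FROM cypher('{graph}', $$ {cypher} $$) AS (v agtype);"


def entity_load_sql(rows: Iterable[dict], graph: str = "graphrag") -> List[str]:
    """Emit one upsert per entity row; MERGE keeps reloads idempotent."""
    stmts = []
    for row in rows:
        title = str(row["title"]).replace("'", "\\'")
        stmts.append(
            age_cypher_sql(graph, f"MERGE (e:Entity {{title: '{title}'}})")
        )
    return stmts
```

The resulting statements would be executed over a normal PostgreSQL connection (e.g. psycopg) after `LOAD 'age';` and setting the search path to `ag_catalog`, which also makes PostgreSQL indexing on the underlying tables available.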
Describe the solution you'd like
No response
Additional context
No response