Closed: marcusschiesser closed this issue 11 months ago
Is your feature request related to a problem? Please describe.
LLM contexts are limited; we have to implement RAG (retrieval-augmented generation) to support multiple large documents.
Discussion
Here's an example of how to use llamaindex: https://github.com/run-llama/LlamaIndexTS/blob/main/examples/vectorIndex.ts
We should consider removing whole documents from the LLM context and always storing them in a vector store instead. We can then use the retriever to find the chunks of documents that are relevant to the current context.
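The idea above can be sketched with a toy in-memory vector store. This is not the llamaindex API (see the linked vectorIndex.ts example for that); the "embedding" here is a simple bag-of-words vector, and the `ToyVectorStore` class and its methods are hypothetical names used purely to keep the example self-contained.

```typescript
// Toy illustration of the RAG idea: instead of putting whole documents into
// the LLM context, store chunks in a vector store and retrieve only the
// chunks related to the current query. The "embedding" is a term-frequency
// vector, standing in for a real embedding model.

type Chunk = { text: string; vector: Map<string, number> };

// Hypothetical helper: bag-of-words "embedding" for demonstration only.
function embed(text: string): Map<string, number> {
  const v = new Map<string, number>();
  for (const token of text.toLowerCase().split(/\W+/).filter(Boolean)) {
    v.set(token, (v.get(token) ?? 0) + 1);
  }
  return v;
}

// Cosine similarity between two sparse term-frequency vectors.
function cosine(a: Map<string, number>, b: Map<string, number>): number {
  let dot = 0, na = 0, nb = 0;
  for (const [t, x] of a) { dot += x * (b.get(t) ?? 0); na += x * x; }
  for (const x of b.values()) nb += x * x;
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

class ToyVectorStore {
  private chunks: Chunk[] = [];

  add(text: string): void {
    this.chunks.push({ text, vector: embed(text) });
  }

  // Retriever role: return the top-k chunks most similar to the query,
  // so only these (not whole documents) go into the LLM context.
  retrieve(query: string, k = 1): string[] {
    const q = embed(query);
    return [...this.chunks]
      .sort((x, y) => cosine(q, y.vector) - cosine(q, x.vector))
      .slice(0, k)
      .map((c) => c.text);
  }
}

const store = new ToyVectorStore();
store.add("Llamas are members of the camelid family.");
store.add("Vector stores index document chunks by embedding.");
const hits = store.retrieve("How are document chunks indexed?");
console.log(hits[0]);
```

With llamaindex itself, the same flow is: split documents into nodes, build a `VectorStoreIndex` from them, and query through its retriever rather than concatenating documents into the prompt.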
Implemented in https://github.com/marcusschiesser/unc-llamaindex
#14 suggests using llamaindex.