s0yabean opened this issue 1 month ago
@ZiTao-Li @rayrayraykk @xieyxclack Please check this issue.
BTW, AgentScope already supports caching text embeddings in https://github.com/modelscope/agentscope/blob/main/src/agentscope/manager/_file.py#L286 . Should we integrate this feature into the RAG module?
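To illustrate the kind of caching being discussed, here is a minimal, hypothetical sketch of a file-based text-embedding cache keyed by a hash of the text and model name. It is similar in spirit to the caching in `_file.py`, but all names below (`CACHE_DIR`, `get_embedding`, `embed_fn`) are illustrative and not the actual AgentScope API:

```python
# Hypothetical sketch of a file-based embedding cache (not AgentScope's API).
import hashlib
import json
from pathlib import Path
from typing import Callable, List

CACHE_DIR = Path("./embedding_cache")  # illustrative location


def _cache_key(text: str, model: str) -> str:
    """Derive a stable filename from the model name and input text."""
    return hashlib.sha256(f"{model}:{text}".encode("utf-8")).hexdigest()


def get_embedding(
    text: str,
    model: str,
    embed_fn: Callable[[str], List[float]],
) -> List[float]:
    """Return a cached embedding if present; otherwise compute and cache it."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    path = CACHE_DIR / f"{_cache_key(text, model)}.json"
    if path.exists():
        # Cache hit: skip the (potentially expensive) embedding call.
        return json.loads(path.read_text())
    vec = embed_fn(text)
    path.write_text(json.dumps(vec))
    return vec
```

With a scheme like this, repeated runs over the same corpus only pay the embedding cost once, which is exactly the quick-iteration benefit the issue asks for.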
Hi, I'm looking for a way to cache either the KnowledgeBank object or the text embeddings produced when we load data into embedding format for RAG.
Describe the solution you'd like I've tried to find the embeddings, but I'm not sure where they are produced, so I'm unable to save them. Also, the default example seems to load from source files rather than from the produced embeddings themselves. I've tried pickling the entire knowledge base with pickle and dill, but some private attributes are not copied over.
This would be very helpful for quick code iterations when building LLM agents, since the data stays the same throughout.
Thanks for such a useful library!
Since the RAG module in AgentScope is built with Llama-index, the embedding storage follows Llama-index's format via its persist methods. The default path of the embeddings is ./runs/{knowledge_id}/default__vector_store.json .
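For reference, persisting and reloading a Llama-index vector index looks roughly like the sketch below. This is a hedged example of standard Llama-index usage, not AgentScope's internal code: it assumes `llama-index` is installed and an embedding model is configured, and the directory names (`./data`, `my_knowledge`) are hypothetical. The imports are kept inside the function so the module loads even without `llama-index` installed:

```python
# Sketch: cache embeddings by persisting a LlamaIndex index to disk,
# mirroring the ./runs/{knowledge_id}/ layout mentioned above.
# Requires `pip install llama-index` plus a configured embedding model.
from pathlib import Path

PERSIST_DIR = Path("./runs") / "my_knowledge"  # hypothetical knowledge_id


def build_or_load_index(persist_dir: Path = PERSIST_DIR):
    """Load a previously persisted index if present; otherwise build it."""
    from llama_index.core import (
        SimpleDirectoryReader,
        StorageContext,
        VectorStoreIndex,
        load_index_from_storage,
    )

    if (persist_dir / "default__vector_store.json").exists():
        # Cache hit: reuse stored embeddings instead of re-embedding.
        storage = StorageContext.from_defaults(persist_dir=str(persist_dir))
        return load_index_from_storage(storage)

    # Cache miss: embed the corpus once, then persist for later runs.
    docs = SimpleDirectoryReader("./data").load_data()
    index = VectorStoreIndex.from_documents(docs)
    index.storage_context.persist(persist_dir=str(persist_dir))
    return index
```

The load-if-present check is what turns the persisted JSON into an effective embedding cache across runs.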
BTW, KnowledgeBank is more like a dispatcher; Knowledge (e.g., LlamaIndexKnowledge) is the class responsible for embedding generation and retrieval.