Closed · Terry-cyx closed 5 months ago
Zhipu embeddings do not seem to be supported by LlamaIndex yet, so you need to create a custom embedding class and pass it as `embed_model`.
For more info, you can take a look at RAG Module.
@seehi can you add some documentation to clarify this issue?
I'm working on a task that needs an LLM to read PDFs. In LangChain this is easy with `RecursiveCharacterTextSplitter` and embeddings, but in MetaGPT it seems hard. I found the `embedding` entry in `config2.yaml`, but I don't know how to use it. I'm using zhipuai, with customized `ZhipuAIEmbeddings` and `ZhipuAILLM`. Here is my chain:

```python
embedding = ZhipuAIEmbeddings(zhipuai_api_key=api_key)
vectordb = Chroma(persist_directory=persist_directory, embedding_function=embedding)
llm = ZhipuAILLM(model="glm-4", zhipuai_api_key=api_key, temperature=0)
retriever = vectordb.as_retriever(search_type="similarity", search_kwargs={"k": 6})
qa_interface = RetrievalQA.from_chain_type(
    llm,
    chain_type="stuff",
    retriever=retriever,
    return_source_documents=True,
)
```

I want to use `qa_interface()` instead of `self._aask()` for compatibility, and would sincerely appreciate your help.