Closed patrickporto closed 8 months ago
🤖
Hey there @patrickporto! Good to cross paths with you again in the matrix of coding issues. I hope all's well in your code universe! 👾
Based on the code you've provided, it seems like you're correctly setting up the ChatEngine with RedisChatStore as the chat memory. However, the issue you're experiencing might be due to the way the chat memory interacts with the vector store.
The ChatMemoryBuffer class, which you're using as the chat memory, is designed to store the recent history of a chat session. When a new message is added to the chat, it's stored in the memory buffer, and the oldest messages are removed if the buffer's token limit is exceeded. The ChatMemoryBuffer doesn't interact directly with the vector store; it's the responsibility of the ChatEngine to use the chat memory and the vector store to generate responses.
The ChatEngine uses the chat memory to provide context for the current conversation. When generating a response, it retrieves the recent chat history from the chat memory and uses this as the context for the GPT model. The ChatEngine doesn't use the vector store to retrieve contextual information; the vector store is used to store and retrieve embeddings for the GPT model.
If you're trying to retrieve information about a person that was trained and stored in the vector store, you'll need to use the VectorStoreIndex directly. The VectorStoreIndex provides methods for querying the vector store. Here's an example of how you might do this:
# Query the vector store for a person's information via a retriever.
# (A sketch using the retriever API; adjust similarity_top_k as needed.)
retriever = index.as_retriever(similarity_top_k=1)
nodes = retriever.retrieve("information about person")

# Each result is a NodeWithScore carrying the node's text and metadata
for node_with_score in nodes:
    print(node_with_score.node.metadata)
This will print the metadata of the closest matching node in the vector store, which should include the information about the person.
If you're still experiencing issues, could you please provide more details about the specific problem? For example, are there any error messages? Is the chat engine not responding at all, or is it just not providing the expected information? Any additional information would be helpful in diagnosing the issue.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
This particular chat mode calls retrieve() on every user message and inserts the retrieved context into the system prompt. It can then respond using the retrieved context, or without it.
Maybe you'd want to modify the system prompt with additional instructions:
chat_engine = index.as_chat_engine(
chat_mode="context",
memory=memory,
system_prompt=(
"You are a chatbot, able to have normal interactions, as well as talk"
" about an essay discussing Paul Graham's life."
),
)
Info on other chat modes is here: https://docs.llamaindex.ai/en/stable/module_guides/deploying/chat_engines/usage_pattern.html#available-chat-modes
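The retrieve-then-inject flow described above can be sketched in plain Python. The `retrieve` and `build_system_prompt` functions here are hypothetical stand-ins for what the context chat mode does internally, not LlamaIndex APIs; the toy keyword matcher only illustrates the shape of the flow.

```python
# Hypothetical sketch of the "context" chat-mode flow: each user message
# triggers retrieval, and the retrieved text is inserted into the system
# prompt before the model is called. retrieve() is a toy keyword matcher.
def retrieve(query, corpus):
    words = set(query.lower().split())
    return [doc for doc in corpus if words & set(doc.lower().split())]

def build_system_prompt(base_prompt, context):
    # The chat history from memory is sent alongside this system prompt;
    # the model can answer using the retrieved context, or without it.
    return base_prompt + "\n\nContext:\n" + "\n".join(context)

corpus = ["Paul Graham wrote an essay about startups."]
base = "You are a chatbot, able to have normal interactions."

context = retrieve("tell me about Paul Graham", corpus)
system_prompt = build_system_prompt(base, context)
print(system_prompt)
```

Because the retrieved context lives in the system prompt rather than in the memory buffer, adding memory should not, by itself, remove it.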
@logan-markewich the context chat mode is not the problem, because without the memory parameter it returns the expected response, as in the following example:
chat_engine = index.as_chat_engine(
chat_mode=ChatMode.CONTEXT,
    # memory=chat_memory  # we need to comment this out for the engine to give us any response from the vector store
)
When the chat memory line is uncommented, the chat engine does not return any information from the vector store; it answers from the conversation history only.
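One possible factor worth ruling out, sketched below with a hypothetical in-memory stand-in for RedisChatStore (the `DictChatStore` class and its methods are illustrative assumptions, not the real API): because the chat store is persistent and keyed, a freshly constructed memory bound to an existing key inherits the old history, which can change what the engine answers from.

```python
# Hypothetical in-memory stand-in for a persistent, keyed chat store,
# illustrating why a "new" engine bound to the same key still sees
# earlier messages. Not the RedisChatStore API.
class DictChatStore:
    def __init__(self):
        self._data = {}

    def add_message(self, key, message):
        self._data.setdefault(key, []).append(message)

    def get_messages(self, key):
        return list(self._data.get(key, []))

store = DictChatStore()  # plays the role of Redis here
store.add_message("user1", "old conversation about something else")

# A "fresh" memory bound to the same key inherits the old history:
print(store.get_messages("user1"))        # → ['old conversation about something else']

# A different key starts clean -- a useful check when debugging:
print(store.get_messages("user1-debug"))  # → []
```

If the engine behaves correctly with a previously unused chat store key, stale persisted history, rather than retrieval itself, may be shaping the responses.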
@patrickporto what does your service context look like?
I can't reproduce this behavior on my end.
Bug Description
Hello
I am developing a chat engine using Redis and PG Vector; however, the contextual information is left out when I use Redis as the chat memory:
Version
0.9.39
Steps to Reproduce
Relevant Logs/Tracebacks
No response