FredLiu0813 opened this issue 1 month ago
Spring AI provides some built-in advisors to support chat memory. In your case, you can use either MessageChatMemoryAdvisor or PromptChatMemoryAdvisor. The configuration may look like the following (untested code):
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.chat.client.advisor.MessageChatMemoryAdvisor;
import org.springframework.ai.chat.memory.ChatMemory;
import org.springframework.ai.chat.memory.InMemoryChatMemory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AppConfiguration {

    @Bean
    ChatMemory chatMemory() {
        return new InMemoryChatMemory(); // use in-memory chat history storage
    }

    @Bean
    MessageChatMemoryAdvisor messageChatMemoryAdvisor(ChatMemory chatMemory) {
        return new MessageChatMemoryAdvisor(chatMemory); // adds past messages from memory to each prompt
    }

    @Bean
    ChatClient chatClient(ChatClient.Builder builder, MessageChatMemoryAdvisor advisor) {
        return builder.defaultAdvisors(advisor).build(); // register the advisor for every request
    }
}
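Then each call through the client picks up the history for its conversation. A minimal usage sketch (untested; CHAT_MEMORY_CONVERSATION_ID_KEY comes from AbstractChatMemoryAdvisor, and "user-123" is a made-up conversation id):

import static org.springframework.ai.chat.client.advisor.AbstractChatMemoryAdvisor.CHAT_MEMORY_CONVERSATION_ID_KEY;

// Untested sketch: calls that share a conversation id see each other's history.
String answer = chatClient.prompt()
        .user("please reduce the content of previous answer to 20 words")
        .advisors(a -> a.param(CHAT_MEMORY_CONVERSATION_ID_KEY, "user-123")) // made-up id
        .call()
        .content();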
Thanks for your answers.
I am considering a solution: before invoking the Flow, I send the Flow's description along with the user's question to the LLM and let the LLM determine whether the question is suitable for the Flow.
If it is not suitable, I will use the LLM's response directly. If it is suitable, I will proceed with the Flow.
In this process, each round of chat uses ChatMemory and MessageChatMemoryAdvisor.
I am not sure if this approach is correct.
From your description, you only need to determine whether the flow should be used the first time the user interacts with your service. You can use another advisor to call the LLM and make that determination, storing the result somewhere. Based on the result, you can choose to bypass ChatMemory in the advisor implementation.
How can I use "another advisor to call the LLM and determine if the flow should be used"? Could you give some sample code?
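One possible shape of the gating step described above, as a minimal sketch (untested; it wraps the check in a plain Spring service around ChatClient rather than implementing the Advisor SPI, whose interface has changed across milestones; the FlowGate name and the YES/NO prompt are made up for illustration):

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.stereotype.Service;

// Hypothetical gating service: asks the LLM up front whether the
// user's question is suitable for the flow.
@Service
public class FlowGate {

    private final ChatClient chatClient;

    public FlowGate(ChatClient.Builder builder) {
        // A dedicated client without the memory advisor, so the yes/no
        // classification calls do not pollute the conversation history.
        this.chatClient = builder.build();
    }

    // Returns true if the LLM judges the question suitable for the flow.
    public boolean shouldUseFlow(String flowDescription, String userQuestion) {
        String verdict = chatClient.prompt()
                .system("Flow description: " + flowDescription
                        + "\nAnswer strictly YES or NO: is the user's question suitable for this flow?")
                .user(userQuestion)
                .call()
                .content();
        return verdict != null && verdict.trim().toUpperCase().startsWith("YES");
    }
}

Storing the per-conversation result (for example, in a map keyed by conversation id) would avoid repeating the check on every turn, as suggested above.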
@alexcheng1982, is it possible to set MessageChatMemoryAdvisor for BedrockAnthropic3ChatModel?
I am working on an agentic workflow project using Spring AI, and I have a question/problem with ChatMemory.
Question: How can I implement memory in a workflow?
Example: I have an agentic workflow that includes the following steps:
Workflow: Start → GoogleSearch → OpenAI LLM → End
Problem: When I run this workflow and enter "NBA" the first time, it returns an article about the NBA. However, when I then input "please reduce the content of previous answer to 20 words", the workflow does not retain the memory. Instead, it sends "please reduce the content of previous answer to 20 words" to Google and generates a new article.
Is there any way to solve this problem? I would greatly appreciate any responses.
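Tying the suggestions in this thread together, the routing could look roughly like this (untested sketch; FlowGate is the hypothetical gating service sketched earlier, runFlow stands in for the GoogleSearch → OpenAI LLM pipeline, and FLOW_DESCRIPTION is a made-up constant):

// Untested sketch: gate first, then either run the flow or answer directly.
// The direct path goes through the memory-backed ChatClient, so a follow-up
// like "please reduce the content of previous answer to 20 words" sees the
// previous turns instead of being sent to Google as a new search.
public String handle(String conversationId, String question) {
    if (flowGate.shouldUseFlow(FLOW_DESCRIPTION, question)) {
        return runFlow(conversationId, question); // GoogleSearch -> OpenAI LLM steps
    }
    return chatClient.prompt()
            .user(question)
            .advisors(a -> a.param(CHAT_MEMORY_CONVERSATION_ID_KEY, conversationId))
            .call()
            .content();
}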