Just a suggestion :D
private void openAiChatRunUseInstruction() {
    List<Message> messages = new ArrayList<>();
    Prompt prompt = new Prompt(messages);
    sendPromptAndUpdate(prompt, "What is the population of Canada?");
    sendPromptAndUpdate(prompt, "What is the capital?");
    prompt.getInstructions().forEach(System.out::println);
}

private void sendPromptAndUpdate(Prompt prompt, String userPrompt) {
    prompt.getInstructions().add(new UserMessage(userPrompt));
    ChatResponse resp = openAiChatClient.call(prompt);
    // Add the output (an AssistantMessage) back to the prompt so we keep the conversation history.
    prompt.getInstructions().add(resp.getResult().getOutput());
}
Output:
UserMessage{content='What is the population of Canada?', properties={}, messageType=USER}
AssistantMessage{content='As of 2021, the population of Canada is approximately 38 million.', properties={role=ASSISTANT}, messageType=ASSISTANT}
UserMessage{content='What is the capital?', properties={}, messageType=USER}
AssistantMessage{content='The capital of Canada is Ottawa.', properties={role=ASSISTANT}, messageType=ASSISTANT}
private void openAiChatRunWithList() {
    List<Message> messages = new ArrayList<>();

    String userPrompt1 = "What is the population of Canada?";
    addMessageAndCallApi(messages, userPrompt1);

    String userPrompt2 = "And what is the capital (only name)?";
    addMessageAndCallApi(messages, userPrompt2);

    messages.forEach(System.out::println);
    /**
     * UserMessage{content='What is the population of Canada?', properties={}, messageType=USER}
     * AssistantMessage{content='As of 2021, the estimated population of Canada is around 38 million.', properties={role=ASSISTANT}, messageType=ASSISTANT}
     * UserMessage{content='And what is the capital (only name)?', properties={}, messageType=USER}
     * AssistantMessage{content='The capital of Canada is Ottawa.', properties={role=ASSISTANT}, messageType=ASSISTANT}
     */
}

private void addMessageAndCallApi(List<Message> messages, String userPrompt) {
    messages.add(new UserMessage(userPrompt));
    Prompt prompt = new Prompt(messages);
    ChatResponse resp = openAiChatClient.call(prompt);
    resp.getResults().forEach(r -> messages.add(r.getOutput()));
}
Output:
UserMessage{content='What is the population of Canada?', properties={}, messageType=USER}
AssistantMessage{content='As of 2021, the estimated population of Canada is around 38 million.', properties={role=ASSISTANT}, messageType=ASSISTANT}
UserMessage{content='And what is the capital (only name)?', properties={}, messageType=USER}
AssistantMessage{content='The capital of Canada is Ottawa.', properties={role=ASSISTANT}, messageType=ASSISTANT}
But I have the feeling that it needs a tools library/helper so that you can control the history and choose how many messages to include, etc.
Hi @hemeda3 , we will start working on an official Spring AI ChatHistory abstraction very soon. Until then perhaps you can use something like this: https://github.com/tzolov/playground-flight-booking/blob/main/src/main/java/ai/spring/demo/ai/playground/services/ChatHistory.java
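For reference, a minimal in-memory history helper along those lines could look like the sketch below. The class and method names are illustrative, not the actual API of the linked file; it simply stores messages per conversation and trims to the most recent N so the prompt stays bounded.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.ai.chat.messages.Message;

// Illustrative in-memory chat history keyed by conversation id.
public class SimpleChatHistory {

    private final Map<String, List<Message>> conversations = new ConcurrentHashMap<>();
    private final int maxMessages;

    public SimpleChatHistory(int maxMessages) {
        this.maxMessages = maxMessages;
    }

    public void add(String conversationId, Message message) {
        List<Message> history =
                conversations.computeIfAbsent(conversationId, id -> new ArrayList<>());
        history.add(message);
        // Keep only the most recent maxMessages entries to bound prompt size and cost.
        while (history.size() > maxMessages) {
            history.remove(0);
        }
    }

    public List<Message> get(String conversationId) {
        return List.copyOf(conversations.getOrDefault(conversationId, List.of()));
    }
}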
I don't think there is any implicit support for chat for now; the idea is to send the previous discussion as part of the prompt itself, which helps the chatbot maintain the context of the conversation. I made a detailed video on this topic where I built a chatbot using the OpenAI chat completion API. Here is the video if you wish to watch it: https://www.youtube.com/watch?v=83KTwxCCe9I
Moreover, please note that LLMs are stateless by default and by design, so chatbots like ChatGPT or Gemini implement context awareness in the message thread at the application level. Effectively, maintaining chat context/awareness/history over an LLM is a matter of having a wrapper of some sort. The ChatGPT web portal does so by sending something called a conversation_id.
One downside of this approach is that tokens accumulate quickly, which is bad both for staying within the context window limit and for billing. However, one common optimisation is either to summarise the discussion so far (using the chat API itself) so that the context becomes compact yet still meaningful enough to convey the conversation, or to keep context for only the last K messages of a given session (the chat window, which the ChatGPT web UI tracks). I have discussed all of this in more detail in the video linked above.
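As a rough sketch of that summarise-or-keep-last-K idea against the pre-1.0 API used in the snippets above (it assumes the same openAiChatClient field and message classes; the method name, prompt text and keepLast parameter are only illustrative):

// Sketch only: compact older history into one summary message and keep the last K messages verbatim.
private List<Message> compactHistory(List<Message> history, int keepLast) {
    if (history.size() <= keepLast) {
        return history;
    }
    List<Message> older = history.subList(0, history.size() - keepLast);
    String transcript = older.stream()
            .map(m -> m.getMessageType() + ": " + m.getContent())
            .collect(java.util.stream.Collectors.joining("\n"));

    // Use the chat API itself to summarise the older part of the conversation.
    Prompt summaryPrompt = new Prompt(List.of(new UserMessage(
            "Summarise the following conversation, keeping every fact:\n" + transcript)));
    String summary = openAiChatClient.call(summaryPrompt).getResult().getOutput().getContent();

    List<Message> compacted = new ArrayList<>();
    compacted.add(new SystemMessage("Summary of the earlier conversation: " + summary));
    compacted.addAll(history.subList(history.size() - keepLast, history.size()));
    return compacted;
}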
Isn't that the purpose of the "thread" feature provided by OpenAI? I guess the abstraction could use it as an implementation (or an in-memory list of history messages; should that be decided by configuration?). See also https://github.com/spring-projects/spring-ai/issues/506
need Assistants API
Spring AI 1.0.0-M1 introduced support for chat history via the new ChatClient API: https://docs.spring.io/spring-ai/reference/api/chatclient.html#_chat_memory You can find an example here: https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/00-use-cases/chatbot
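For example, a minimal sketch against the 1.0.0-M1 ChatClient API described in the linked docs (package locations and advisor names may differ between milestones; chatModel is assumed to be any auto-configured ChatModel bean):

// The MessageChatMemoryAdvisor stores each exchange in the ChatMemory and replays it on the next call.
ChatMemory chatMemory = new InMemoryChatMemory();
ChatClient chatClient = ChatClient.builder(chatModel)
        .defaultAdvisors(new MessageChatMemoryAdvisor(chatMemory))
        .build();

String population = chatClient.prompt()
        .user("What is the population of Canada?")
        .call()
        .content();

// The advisor injects the previous exchange, so "the capital" resolves to Canada.
String capital = chatClient.prompt()
        .user("And what is the capital (only name)?")
        .call()
        .content();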
@williamspindox Can this issue be closed?
Closing, please reopen if necessary @williamspindox
Hi, can you please provide some examples of how to use chat history with chat completion? Thanks