Closed linqingfan closed 5 months ago
Just to make sure I understand this correctly: when the chat retrieves a response, you want a "continue" feature to get another response from the server which would contain the rest of the answer?
If that is the case, then I would recommend using the `handler` function, which will allow you to manipulate the response just the way you want it and make as many calls as you need before displaying the result in the chat.
If you are concerned about token limitations, I would recommend the `requestBodyLimits` property, which will automatically limit what is sent to the server. Additionally, you can use the `requestInterceptor` to manipulate what is sent to the server.
If you are looking for something else, please provide more information about your use-case.
Thank you!
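As a rough sketch of the `handler` suggestion above (hedged: `fetchCompletion` is a hypothetical wrapper around your own service, and the `finish_reason === "length"` check assumes an OpenAI-style truncation flag; neither is part of the Deep Chat API):

```javascript
// Merge the partial answers into one response body.
function concatContinuations(parts) {
  return parts.map((part) => part.text).join("");
}

// Keep requesting until the model stops truncating, then emit one combined
// response through the Deep Chat signals object.
async function continueHandler(body, signals, fetchCompletion /* hypothetical */) {
  const parts = [];
  let reply = await fetchCompletion(body);
  parts.push(reply);
  while (reply.finish_reason === "length") { // answer was cut off by max tokens
    reply = await fetchCompletion({ ...body, continueFrom: reply.text }); // illustrative param
    parts.push(reply);
  }
  signals.onResponse({ text: concatContinuations(parts) });
}
```

This way the user only ever sees the fully assembled answer.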
Yes, from the screenshot shown: when the user clicks the button, the intention is to clear the button and continue from the message. It MUST continue from "Grease" in the example, because the whole answer is in markdown table format; if not, the format will be broken.
I noticed there is a submit-new-message method, but how do I access the chat history and send it to the LLM so it can continue? If we could directly access the chat history during the request or response, it would be easy. Increasing max tokens (the LLM output limit) was intentionally not used: the example shown has max tokens of 30 for demonstration purposes. For practical use, a max tokens value of 250 is common, and long answers of more than 500 words are also quite common. It is a better user experience to have the option to skip to another question instead of waiting for a long, complete answer, especially on a slow compute system.
BTW, to let the LLM continue the answer, the chat history needs to be sent to the LLM. If we had access to the chat history during the request or response methods and could modify the content during the response, that would be great: whatever is modified would be shown in the chat, instead of only having the `clearMessages` method.
In order to send the full chat history to the server, set the `maxMessages` property in `requestBodyLimits` to 0.
There is no real way to manipulate the chat history, so if you want to remove the previous message, I can suggest the use of the `overwrite` property in your Response messages.
The use of a continue button is not easy to do, as the `submitUserMessage` method expects `text` as its argument, therefore the chat will display a new user message every time the method is called.
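To illustrate the full-history setting mentioned above (a sketch; `"chat-element"` is an assumed element id):

```javascript
// Configuration object matching the advice above: a maxMessages of 0 makes
// each request include the entire chat history.
const requestBodyLimits = { maxMessages: 0 };

// On the page you would assign it to the Deep Chat element, e.g.:
// document.getElementById("chat-element").requestBodyLimits = requestBodyLimits;
```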
I have a few suggestions that may work best for you, which you can explore:
1. Use the `handler` function as a `websocket`. You won't need a real websocket connection, however it will allow you to trigger the `signals.onResponse(response);` message dynamically as many times as needed, meaning that when the user clicks continue you can have some JavaScript that triggers a call to your service and then triggers the `signals.onResponse` call.
2. Use `html` responses instead of `text`; that way you will be able to add the continue button in the same markup, and once it is clicked, trigger some JavaScript that gets a response and changes that markup.
I will try to help as much as I can, but your use-case appears to be very unique, hence the configuration is inherently complicated. Let me know if you need other help.
As I'm new to JavaScript/web programming, I tried to use an easier way. I managed to manipulate the chat history by using a combination of `getMessages`, `clearMessages` and `addMessage`. However, I needed to use an older version in order to use `addMessage`. Thanks for the advice!
The `addMessage` method is available in the newest Deep Chat versions, except it is called `_addMessage`. The reasons why it was removed from the main API are explained in this thread.
May I ask how you are using it, or why you find it useful, as we may include it in the main API in the future. Thanks!
For the button image in my previous post, I can delete the button and continue from the message by intercepting the response and calling the following code before returning the response:

```javascript
function clearbtn() {
  const elementRef = document.getElementById("chat-element");
  const chathistory = elementRef.getMessages(); // store chat history
  elementRef.clearMessages();
  // re-print the chat history, but exclude the previous answer with the button
  chathistory.slice(0, -2).forEach((message) => elementRef.addMessage(message));
}
```
The response text consists of the concatenated answer. In this way, I can manipulate the chat history any way I want. It is not as efficient as it would be if I could access the chat history directly.
Ohhhhh I see, that is very clever!
You could also potentially reinstantiate the message history by setting the `initialMessages` property, e.g. `elementRef.initialMessages = chatHistory;`. This will cause a full re-render of the chat, which may not be what you want, but it can certainly be used as an alternative.
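For completeness, that trim-then-reassign flow could be sketched like this (hedged: `dropLastExchange` is an illustrative helper; it assumes `getMessages()` returns an array whose last two entries are the user question and the truncated AI answer):

```javascript
// Remove the last user/AI pair so the truncated answer can be re-requested.
function dropLastExchange(chatHistory) {
  return chatHistory.slice(0, -2);
}

// Usage on the page (not runnable without the component):
// elementRef.initialMessages = dropLastExchange(elementRef.getMessages());
```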
Thanks again for the excellent work!
Is it possible to manipulate the chat history directly? I'm trying to implement a "continue" feature. This is because when max tokens is reached, the LLM returns a partial answer. I need to implement a continue feature so that the rest of the answer can be concatenated to the last character of the last AI answer.
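The concatenation step described here can be sketched as a small helper (illustrative only; it assumes message objects with a `text` field, as returned by `getMessages()`):

```javascript
// Append a continuation chunk to the last AI message so the markdown table
// stays intact (no new message bubble is created).
function appendContinuation(messages, continuation) {
  const updated = messages.slice();
  const last = { ...updated[updated.length - 1] };
  last.text += continuation; // concatenate to the last character of the answer
  updated[updated.length - 1] = last;
  return updated;
}
```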