Closed blob42 closed 5 months ago
@blob42 Could you send a video or GIF explaining these features?
I have only seen continue generation in official web UIs, not in self-deployed ones. Can you list projects that support this feature?
I can cite at least 2 popular web UIs:
I will add a GIF later.
Note: in terms of implementation, it would be trivial.
Continue is just sending the list of messages (assistant, user) with the last message being from the assistant; the LLM will try to complete it if it can predict more.
Edit is a special case of continue where we edit the assistant message.
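The mechanism described above can be sketched as plain message-list transformations. A minimal sketch, assuming an OpenAI-style role/content message list; the function names are illustrative, not aichat's actual API:

```python
# Hedged sketch: "continue" and "edit" as message-list transformations.
# Function names are illustrative; aichat's real implementation differs.

def continue_payload(messages):
    """'Continue': resend the history unchanged, with the last message
    still from the assistant, so the model tries to complete it."""
    assert messages and messages[-1]["role"] == "assistant"
    return list(messages)

def edit_payload(messages, new_text):
    """'Edit': a special case of continue where the last assistant
    message is rewritten before resending."""
    edited = continue_payload(messages)
    edited[-1] = {"role": "assistant", "content": new_text}
    return edited

history = [
    {"role": "user", "content": "List three Rust web frameworks."},
    {"role": "assistant", "content": "1. actix-web\n2. axum\n3."},
]
# Continue keeps the truncated assistant message for the model to extend.
assert continue_payload(history)[-1]["content"].endswith("3.")
# Edit swaps in a steering answer while keeping the rest of the history.
assert edit_payload(history, "1. actix-web")[-1]["content"] == "1. actix-web"
```

Either payload would then be sent as the `messages` field of a chat-completion request; the model either appends more tokens or immediately emits its stop token.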
We will support the "continue generation" feature.
We will not support the "edit the last answer from LLM" feature because it is not commonly used.
We will support the "regenerate" feature.
> We will not support the "edit the last answer from LLM" feature because it is not commonly used.
It's actually the most used feature for advanced LLM usage, as it allows you to program the LLM to answer in a specific style.
Would you accept a PR for this?
No. I have only seen styles defined by prompt, but not by modifying the reply message.
Even the official ChatGPT website doesn't have this button.
While the TUI is not ideal for editing individual messages, it excels at editing entire session YAML files.
We can implement an `.edit` command to:
I think this is the right way for TUI, rather than imitating WebUI.
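For illustration, a session file edited this way might look roughly like the fragment below; the exact schema (field names, nesting) is a guess, not aichat's documented format:

```yaml
# Illustrative session layout; field names are assumptions, not aichat's schema
model: gpt-4o
messages:
  - role: user
    content: Explain Rust lifetimes briefly.
  - role: assistant
    content: Lifetimes tie borrows to the scope of the data they reference...
```

Opening the whole file in an editor lets you rewrite any message, not just the last one, which is strictly more powerful than a per-message edit button.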
Yes, this `.edit` command would be perfect. Editing the whole session is even better than what GUIs do :+1:
Regarding OpenAI, this feature is available in the Playground, where you can edit the LLM's answers. Again, this is the most advanced way to interact with an LLM. That is how, for example, the CLI prompt of llama.cpp works.
Thanks @sigoden :)
The `.edit` command works perfectly. However, `.continue` is failing with `No incomplete response`. It is supposed to continue by sending back all messages; the LLM might reply by appending to its previous answer or just send the end-of-output token.
The full usage scenario would be:
`.edit session`
`.continue`
And it should answer given the modified list of messages.
@blob42 The situation of aichat is complicated, so it is necessary to add some restrictions. The current implementation does not affect the use of this feature.
Based on what I understand from the code, the `ask` function is dependent on an input. The `.continue` feature as I describe it means no input, and this requires rethinking how the call to the API is constructed downstream. Is this the complication you are referring to?
@blob42
Except for one case (loading a new session, then running `.continue`), it has no effect on the others.
As long as you have a successful input/output, you can use `.continue`. Is this limitation unbearable for you?
Don't try to convince me to remove this restriction. You can use it yourself, it's really not a problem.
Maybe the error message `No incomplete response` misleads you.
AIchat doesn't actually check if the response is complete.
> Except for one case (loading a new session, then running `.continue`).
Got it, I did not understand what restriction you meant before. Now I understand, it's fine for me.
> Maybe the error message `No incomplete response` misleads you.
You are setting `last_message` to None. Why not set `last_message` to the last message of the session? That would fix the `.continue` feature.
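A minimal sketch of that suggested fallback, in Python for brevity (identifiers are illustrative; aichat's actual Rust internals differ): instead of leaving the last exchange as None after loading a session, derive it from the session's stored messages.

```python
# Hedged sketch: derive the last (user, assistant) exchange from stored
# session messages when no in-memory one exists. Names are illustrative,
# not aichat's actual data structures.

def last_exchange(session_messages, in_memory_last=None):
    if in_memory_last is not None:
        return in_memory_last
    # Walk backwards looking for the trailing user -> assistant pair.
    for i in range(len(session_messages) - 1, 0, -1):
        if (session_messages[i]["role"] == "assistant"
                and session_messages[i - 1]["role"] == "user"):
            return (session_messages[i - 1]["content"],
                    session_messages[i]["content"])
    return None  # nothing to continue from

msgs = [
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": "hello"},
]
assert last_exchange(msgs) == ("hi", "hello")
assert last_exchange([]) is None
```

With such a fallback, `.continue` could work right after loading a session, at the cost of handling message content that is not plain text.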
The input/message is not just text, but also images, videos, etc., which cannot simply be swapped in. I find it troublesome and not worth it.
All features are implemented, so I am closing the issue.
@blob42 Thanks for your contribution.
Something that is missing in aichat is the ability to:
I believe these are essential features to unlock the capabilities of LLMs; all famous LLM web UIs have them.
I am willing to make PR if no one is working on those already.