LAION-AI / Open-Assistant

OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
https://open-assistant.io
Apache License 2.0

Allow cancellation of prediction while running prompt #2815

Open daanrongen opened 1 year ago

daanrongen commented 1 year ago

Open Assistant is great, but sometimes it predicts a long answer in which I can spot a misinterpretation right away, either because my prompt was faulty and I realised it too late, or because the model is hallucinating. Either way, having to wait for the entire prediction to finish significantly degrades the UX (due to the waiting time). It would be useful to be able to abort the prediction.

Model: OA_SFT_Llama_30B_6
Top K: 50
Top P: 0.95
Temperature: 0.9
Repetition penalty: 1.2
Max new tokens: 1024
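
For reference, these settings map onto the HuggingFace transformers sampling parameters roughly as in the sketch below (the model name and prompt are placeholders, not OA's actual serving code):

```python
# Sketch only: how the settings above translate to HF `generate` kwargs.
# "some/causal-lm" and the prompt are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("some/causal-lm")
model = AutoModelForCausalLM.from_pretrained("some/causal-lm")

inputs = tokenizer("Hello, Open Assistant!", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,          # sampling, so top_k/top_p/temperature apply
    top_k=50,
    top_p=0.95,
    temperature=0.9,
    repetition_penalty=1.2,
    max_new_tokens=1024,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```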
yk commented 1 year ago

Note: this requires fairly deep understanding of the inference system

alexn-s commented 1 year ago

This issue is similar to #2647. Can this one be closed?

0xfacade commented 1 year ago

I would enjoy working on this, as it encompasses most of the stack and would give me a chance to quickly get to know the entire application. I could put together a proposal for how to implement this over the next week, then work on the implementation the week after that.

Forbu commented 1 year ago

Is this really up to OA to do? OA depends heavily on https://github.com/huggingface/text-generation-inference for inference, so if we want a proper way of handling cancellation we should perhaps make (or wait for) a contribution to that repo. I can open an issue on their side and try to contribute there. The OpenAI API docs don't offer anything for cancellation, but they do have cancellation on their website.

[Screenshot: cancellation control on the OpenAI website]

I wonder if this is just a UI thing (they let the prompt generation run on their server).
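
If it really is just a UI-side affordance, then on a unidirectional SSE channel "cancel" simply means dropping the connection. A minimal sketch of that client behaviour (Python with aiohttp purely for illustration; the URL and the cancellation wiring are assumptions, not OA's actual frontend):

```python
# Sketch: cancelling by abandoning the event stream.
# The endpoint URL is a hypothetical placeholder.
import asyncio

import aiohttp

async def stream_until_cancelled(url: str, cancel: asyncio.Event) -> None:
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            async for line in response.content:
                if cancel.is_set():
                    # Dropping the connection is the only "message" a
                    # client can send back over unidirectional SSE.
                    break
                print(line.decode(errors="replace"), end="")
    # Leaving the context managers closes the socket; the server sees
    # the disconnect and can stop generating.
```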

0xfacade commented 1 year ago

I was thinking we could implement the cancellation as a stopping criterion that can be influenced from outside the model. The interface already supports stopping criteria. I haven't given that aspect much thought yet, but it seems like a viable solution to me.
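
A minimal sketch of such a criterion, assuming the HuggingFace transformers `StoppingCriteria` interface (the `threading.Event` wiring is my assumption for illustration, not OA's actual code):

```python
# Sketch: a stopping criterion that can be tripped from outside the
# generation loop. The Event-based wiring is an assumption.
import threading

import torch
from transformers import StoppingCriteria, StoppingCriteriaList

class CancellationCriteria(StoppingCriteria):
    def __init__(self, cancel_event: threading.Event):
        self.cancel_event = cancel_event

    def __call__(self, input_ids: torch.LongTensor,
                 scores: torch.FloatTensor, **kwargs) -> bool:
        # Called once per generated token; returning True aborts generation.
        return self.cancel_event.is_set()

cancel_event = threading.Event()
stopping_criteria = StoppingCriteriaList([CancellationCriteria(cancel_event)])
# model.generate(**inputs, stopping_criteria=stopping_criteria)
# From another thread, cancel_event.set() ends generation after the
# current token.
```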

Where I currently see more difficulty is in turning the communication into a bidirectional channel. Currently, the communication happens over server-sent events, which are unidirectional. I'm trying to find out whether closing the server-sent event stream could itself serve as the indicator to stop generating.

0xfacade commented 1 year ago

As a first step, I would implement only the backend part of this feature. Goal: stop generating tokens when the event stream is closed. We can use the exception that is raised in this case as the signal that generation should stop; the communication is otherwise unidirectional, so we can't really send any complex messages from the client to the server.
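
Roughly what that backend half could look like, sketched with FastAPI/Starlette (the endpoint path and `fake_token_stream` are hypothetical stand-ins, not OA's real inference server):

```python
# Sketch: stop generating tokens once the SSE client disconnects.
# Endpoint path and fake_token_stream are hypothetical placeholders.
import asyncio

from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse

app = FastAPI()

async def fake_token_stream():
    # Stand-in for the real token generator.
    for i in range(1024):
        await asyncio.sleep(0.05)
        yield f"token-{i} "

@app.get("/stream")
async def stream(request: Request):
    async def event_stream():
        async for token in fake_token_stream():
            # Starlette can poll whether the client has gone away; a
            # CancelledError raised into this task on disconnect would
            # likewise end generation.
            if await request.is_disconnected():
                break
            yield f"data: {token}\n\n"
    return StreamingResponse(event_stream(), media_type="text/event-stream")
```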

Necessary steps: I've added comments at the relevant places in this PR in my fork: https://github.com/0xfacade/Open-Assistant/pull/1

Potential issue: if there's a proxy between the UI and the server(s), that proxy would have to close the upstream connection immediately when the client closes its side. I don't think that will be an issue, though.

Let me know what you think. @andreaskoepf

yk commented 1 year ago

@0xfacade really nice analysis, thank you very much, I think it's a totally viable plan to move forward!

0xfacade commented 1 year ago

Great, thanks. Then I'll implement this over the coming evenings. Shouldn't take long!

axel7083 commented 1 year ago

What is the state of this issue? Has a PR been merged and then reverted?

0xfacade commented 11 months ago

Status update:

0xfacade commented 11 months ago

If someone else would like to work on this, I'd be happy to share my insights and plan on how to do this.

DanyRay420 commented 4 months ago

GO

DanyRay420 commented 4 months ago

Get it