ggerganov / llama.cpp

LLM inference in C/C++
MIT License

Bug: `llama-server` web UI resets the text selection during inference on every token update #9608

Open mashdragon opened 1 month ago

mashdragon commented 1 month ago

What happened?

When using llama-server, the output in the web UI can't be easily selected or copied until text generation has finished. This appears to be because the script replaces all of the DOM nodes for the current generation every time a new token is output, which discards any in-progress text selection.

Ideally, the existing text content shouldn't be replaced during generation, so the text can be copied while output is still being produced, for example by appending each new token to the existing nodes instead of rebuilding them (see the sketch below).
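A minimal sketch of the difference, not taken from the actual llama-server UI code: the element id and function names below are hypothetical, and only illustrate why appending preserves a selection while wholesale replacement clears it.

```typescript
// Hypothetical container for the message currently being generated.
const messageEl = document.getElementById("current-message") as HTMLElement;

// Appending a text node leaves the existing child nodes (and any
// selection anchored inside them) untouched.
function appendToken(token: string): void {
  messageEl.appendChild(document.createTextNode(token));
}

// By contrast, the pattern described in this report rebuilds the content
// on every token, which drops all child nodes and resets the selection.
function replaceAll(fullText: string): void {
  messageEl.textContent = fullText;
}
```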

Name and Version

version: 3755 (822b6322) built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu

What operating system are you seeing the problem on?

No response

Relevant log output

No response

github-actions[bot] commented 1 week ago

This issue was closed because it has been inactive for 14 days since being marked as stale.

mashdragon commented 1 week ago

This is still an issue which impacts usability.