Open mudler opened 7 months ago
Hello! I'm a bot, an experiment of @mudler. Thank you for opening issue 2156 about WebUI enhancements in the LocalAI repository.
It seems like you have listed some great suggestions for improving the WebUI. Here's a brief summary of the requested features:
I suggest we use this issue to collect more feedback and ideas from other users and developers. Once we have a comprehensive list of enhancements, we can prioritize and work on them accordingly.
Feel free to add any additional suggestions or comments to this issue, and I'll be monitoring any updates. Let's make the WebUI even better together!
I'm having pretty bad performance issues on the /browse endpoint. Guess it might be too many repos/JS? It kinda kills my firefox & chromium. :(
Besides that i love the UI so far :+1:
//Edit: nvm.. for now i just reverted to the default galleries, so it's usable now. :)
How about:
* [ ] resume downloading of partially downloaded models?
* [ ] delete all external dependencies, so it can be run completely offline?
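Resuming a partial download is usually done with an HTTP `Range` request, assuming the server supports byte ranges. Below is a minimal sketch of the two decisions involved; all function names here are hypothetical and not part of LocalAI's actual downloader.

```typescript
// Sketch: resumable downloads via HTTP Range requests.
// Hypothetical helpers, not LocalAI's real implementation.

// Build request headers to resume from a given byte offset.
// "bytes=N-" asks the server for everything from offset N onward.
function resumeHeaders(bytesAlreadyDownloaded: number): Record<string, string> {
  if (bytesAlreadyDownloaded <= 0) return {};
  return { Range: `bytes=${bytesAlreadyDownloaded}-` };
}

// Decide what to do with the response:
// 206 Partial Content => the server honored the Range; append to the file.
// 200 OK              => the server ignored the Range; restart from scratch.
function shouldAppend(status: number): boolean {
  return status === 206;
}
```

A downloader would stat the partial file, send `resumeHeaders(size)`, then either append or truncate based on `shouldAppend(response.status)`.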
good points, adding it to the ticket :+1:
> I'm having pretty bad performance issues on the /browse endpoint. Guess it might be too many repos/JS? It kinda kills my firefox & chromium. :( Besides that i love the UI so far 👍
I'm also noticing heavy lag and extreme memory usage while using the chat interface. When printing large blocks of text repeatedly, Firefox can grow past 16 GB of memory. I also get a lot of "slow tab" and "slow script" warnings as a result of the lag. It's probably fine for a small handful of back-and-forth exchanges, but asking a model to print out a 100-line C++ code block can crash my laptop (assuming the model doesn't cut off the reply mid-file for no reason :sweat: )
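One common mitigation for unbounded chat memory growth is to keep only a window of recent messages mounted in the DOM (a simpler alternative to full list virtualization). This is a hedged sketch of that idea, not how LocalAI's UI currently works:

```typescript
// Sketch: cap how many chat messages stay rendered at once so DOM size
// (and browser memory) stays bounded during very long conversations.
// Purely illustrative; not LocalAI code.
function visibleWindow<T>(messages: T[], maxVisible: number): T[] {
  if (messages.length <= maxVisible) return messages;
  // Keep only the most recent maxVisible messages; older ones can be
  // re-mounted on demand when the user scrolls up.
  return messages.slice(messages.length - maxVisible);
}
```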
A way to export and import conversations. On lower-end CPUs it can take a long time to process a prompt, and i don't want to keep redoing entire character-exploring conversations if I reboot my PC.
Just an idea. No idea if it is even feasible
one feature that might be nice... is to be able to regenerate a response (in case the LLM goes off the wall and strays from its prompt), or rewind the chat to either a user message (to regenerate the assistant response) or an assistant message (to give the user a chance to change their response)...
Are there plans to add a password/authentication to the webUI directly?
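For reference, one common lightweight approach to gating a web UI is HTTP Basic auth, where credentials arrive base64-encoded in the `Authorization` header. This is only an illustration of the parsing side, not a statement about LocalAI's plans or code:

```typescript
// Sketch: parsing an HTTP Basic "Authorization" header.
// Illustrative only; real deployments should also use TLS and
// constant-time credential comparison.
function parseBasicAuth(
  header: string | undefined
): { user: string; pass: string } | null {
  if (!header || !header.startsWith("Basic ")) return null;
  const decoded = Buffer.from(header.slice(6), "base64").toString("utf8");
  const sep = decoded.indexOf(":");
  if (sep < 0) return null;
  return { user: decoded.slice(0, sep), pass: decoded.slice(sep + 1) };
}
```

A server middleware would call this on each request and return `401` with a `WWW-Authenticate: Basic` header when it yields `null` or the credentials don't match.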
There are many parts of the WebUI that can be improved. I'm trying to create a tracker here to collect some thoughts and areas that need improvement, for instance: