mudler / LocalAI

:robot: The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. Features: Generate Text, Audio, Video, Images, Voice Cloning, distributed and P2P inference
https://localai.io
MIT License
26.36k stars 1.97k forks

WebUI enhancements #2156

Open mudler opened 7 months ago

mudler commented 7 months ago

There are many parts of the WebUI that can be improved. I'm trying to create a tracker here to collect thoughts and areas that need improvement, for instance:

localai-bot commented 7 months ago

Hello! I'm a bot, an experiment of @mudler. Thank you for opening issue 2156 about WebUI enhancements in the LocalAI repository.

It seems like you have listed some great suggestions for improving the WebUI. Here's a brief summary of the requested features:

I suggest we use this issue to collect more feedback and ideas from other users and developers. Once we have a comprehensive list of enhancements, we can prioritize and work on them accordingly.

Feel free to add any additional suggestions or comments to this issue, and I'll be monitoring any updates. Let's make the WebUI even better together!

Nold360 commented 7 months ago

I'm having pretty bad performance issues on the /browse endpoint. I guess it might be too many repos/JS? It kinda kills my Firefox and Chromium. :(

Besides that, I love the UI so far :+1:

// Edit: never mind, for now I just reverted to the default galleries, so it's usable now. :)

lessneek commented 6 months ago

How about:

mudler commented 6 months ago

> How about:
>
> * [ ]  resume downloading of partially downloaded models?
>
> * [ ]  delete all external dependencies, so it can be run completely offline?

Good points, adding them to the ticket :+1:
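Resumable downloads typically rely on HTTP Range requests: ask the server to continue from the byte offset already on disk, then append the new bytes. A minimal sketch of the idea, assuming this approach; the function names `resume_range_header` and `append_chunk` are hypothetical and not LocalAI's actual downloader API:

```python
import os

def resume_range_header(path: str) -> dict:
    """Build a Range header asking the server to continue from the
    bytes already saved on disk (empty dict if nothing saved yet)."""
    try:
        offset = os.path.getsize(path)
    except OSError:
        offset = 0
    if offset == 0:
        return {}
    return {"Range": f"bytes={offset}-"}

def append_chunk(path: str, chunk: bytes) -> None:
    """Append a newly downloaded chunk to the partial file."""
    with open(path, "ab") as f:
        f.write(chunk)
```

The server must reply with `206 Partial Content` for the Range header to take effect; a `200 OK` reply means the download restarted from scratch and the partial file should be truncated first.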

bunder2015 commented 6 months ago

> I'm having pretty bad performance issues on the /browse endpoint. I guess it might be too many repos/JS? It kinda kills my Firefox and Chromium. :(
>
> Besides that, I love the UI so far 👍

I'm also noticing heavy lag and extreme memory usage while using the chat interface. When printing large blocks of text repeatedly, Firefox's memory usage can grow to over 16 GB. I also get a lot of "slow tab" and "slow script" warnings as a result of the lag. It's probably fine for a small handful of back-and-forth exchanges, but asking a model to print out a 100-line C++ code block can crash my laptop (assuming the model doesn't cut off the reply mid-file for no reason :sweat: )

maxvaneck commented 3 months ago

A way to export and import conversations. On lower-end CPUs it can take a long time to process a prompt, and I don't want to keep redoing entire character-exploration conversations if I reboot my PC.

Just an idea. No idea if it's even feasible.
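Export/import could be as simple as round-tripping the message list through a JSON file. A minimal sketch under that assumption; the schema and function names here are made up for illustration, not an existing LocalAI format:

```python
import json

def export_conversation(messages: list, path: str) -> None:
    """Serialize a chat history (list of role/content dicts) to a JSON file."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump({"version": 1, "messages": messages}, f, indent=2)

def import_conversation(path: str) -> list:
    """Load a previously exported chat history back into a message list."""
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    return data["messages"]
```

A versioned envelope like `{"version": 1, ...}` keeps the format extensible if fields are added later.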

bunder2015 commented 2 months ago

One feature that might be nice is the ability to regenerate a response (in case the LLM goes off the rails and strays from its prompt), or to rewind the chat to either a user message (to regenerate the assistant response) or an assistant message (to give the user a chance to change their own response)...
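If the chat history is kept as an ordered message list, rewinding amounts to truncating that list at the chosen message and regenerating from there. A hypothetical sketch (the `rewind_to` helper is illustrative, not LocalAI code):

```python
def rewind_to(messages: list, index: int) -> list:
    """Keep the conversation up to and including the message at `index`,
    dropping everything after it so generation can resume from that point."""
    return messages[: index + 1]
```

Regenerating an assistant reply is then just rewinding to the preceding user message and resending the truncated history to the model.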

ShapeShifter499 commented 4 days ago

Are there plans to add password authentication to the WebUI directly?
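One common approach, if authentication were added, is checking a shared key from the `Authorization` header with a constant-time comparison to avoid timing leaks. A hedged sketch of that pattern, not LocalAI's actual auth code:

```python
import hmac

def is_authorized(header_value, expected_key: str) -> bool:
    """Validate an 'Authorization: Bearer <token>' header against a shared key.

    hmac.compare_digest keeps the comparison constant-time, so an attacker
    can't learn the key one character at a time from response latency.
    """
    if not header_value or not header_value.startswith("Bearer "):
        return False
    token = header_value[len("Bearer "):]
    return hmac.compare_digest(token, expected_key)
```

In practice people also front self-hosted UIs with a reverse proxy (e.g. basic auth in nginx) when the app itself has no login.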