nbonamy / witsy

Witsy: desktop AI assistant
https://witsyai.com
Apache License 2.0

Resource issue #9

Open · andy8992 opened this issue 1 week ago

andy8992 commented 1 week ago

First, this seems like a very helpful app and has been so far. I was using it with Stable Diffusion to refine prompts, and the text replacement quick prompt feature is very nice for this. However, I began to notice that after using this feature my Stable Diffusion speed is halved. That said, I don't see any model loaded into my VRAM, so I don't know the cause.

But as soon as I kill the Witsy process, my speed doubles and returns to full speed. I often have this issue with apps/programs I've tried, and I would love to be able to use this one without a hit to speed. I'm using Ollama, and the model doesn't seem to stay in my VRAM (which is what I want), so I'm not sure what causes the slowdown.
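(For anyone trying to confirm whether a model is actually still resident: a minimal sketch below queries Ollama's list of running models, assuming the default local port and an Ollama version that exposes the /api/ps endpoint.)

```ts
// Sketch: ask a local Ollama instance which models are currently loaded
// and how much VRAM each one is using. Assumes the default port (11434)
// and an Ollama build that exposes the /api/ps endpoint.
async function listLoadedModels(): Promise<void> {
  const res = await fetch('http://localhost:11434/api/ps');
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
  const data = await res.json();
  for (const m of data.models ?? []) {
    console.log(`${m.name}: size=${m.size}, size_vram=${m.size_vram}, expires_at=${m.expires_at}`);
  }
}

listLoadedModels().catch(console.error);
```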

I also noticed what I hope is a typo on your website lol

"Witsy itself does collect any of your information. Not even performance data. Everything Witsy needs is saved on your computer and nowhere else. The models you use may have their own privacy policies, so make sure to check those out."

Witsy itself does collect any of your information.

nbonamy commented 1 week ago

Thanks for flagging the typo. It is fixed. Not sure about the memory usage: I will look into it!

andy8992 commented 5 days ago

Thanks. Other programs I use don't seem to have this issue, though I'm not sure why. MSTY, for example, has its own "keep alive" setting for Ollama, and it never seems to impact my Stable Diffusion performance. Perhaps a dedicated setting for this could help.
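(To illustrate what such a setting would control: the Ollama REST API accepts a keep_alive value per request, which governs how long the model stays loaded after responding. A minimal sketch, not Witsy's actual code, assuming the default local endpoint:)

```ts
// Sketch, not Witsy's actual implementation: an Ollama chat request with an
// explicit keep_alive so the model is unloaded right after the response
// (keep_alive: 0) instead of lingering for the default ~5 minutes.
async function quickPrompt(model: string, prompt: string): Promise<string> {
  const res = await fetch('http://localhost:11434/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model,
      messages: [{ role: 'user', content: prompt }],
      stream: false,
      keep_alive: 0, // or e.g. '1m' to keep the model warm briefly
    }),
  });
  const data = await res.json();
  return data.message?.content ?? '';
}
```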

andy8992 commented 5 days ago

Alright, I'll do some more testing. It seems to recover after a bit; I'll see if I can figure out why.

andy8992 commented 5 days ago

Okay, so this appears to only happen when I use the context menu, the "AI commands" feature.

If I use the regular LLM interface you made, the issue does not occur, and my performance returns to normal afterwards.

If I use the AI commands feature, performance is significantly impacted, and it stays that way until I kill Witsy or I send a query through the regular LLM interface.

image

Indeed, if I use the above window the issue does not occur, and if the issue is already present, using this window makes it recover.

image

If I use this, the issue occurs.

I will add that the command I'm using is the "insert below" option. With an option that brings up a new popup instead, the issue doesn't occur.