Vali-98 / ChatterUI

Simple frontend for LLMs built in react-native.
GNU Affero General Public License v3.0
557 stars 28 forks

Do generation settings actually work? #64

Closed — mlsterpr0 closed this issue 2 months ago

mlsterpr0 commented 3 months ago

I'm not sure. Maybe I'm doing something wrong, but when the model gives an answer, it's always exactly the same, no matter how many times I press the regenerate button. Same question = same answer. Of course I tried changing all the settings like temperature, etc., but overall they are at their defaults. The model is Gemma 2 2B (local GGUF). When I use this same model on a PC in oobabooga or kobold, it obviously gives different answers every time you press regenerate. What could it be? Also, I really love this app; I just wish there were an option to change the font size, it's way too small. Thanks!

Vali-98 commented 3 months ago

Generation settings do work, but some people do report them failing. I'm not sure exactly what causes it, but it does point to some default sampler setting being incorrect. If anything, try moving each sampler slider and see if it changes anything.

i just wish there was an option to change font size, it's way too small

This is actually a lot of work under the hood, and though I'd like to see it added, it's unlikely to happen soon.

Vali-98 commented 2 months ago

Closed due to insufficient information.

Vali-98 commented 2 months ago

Update: b70a1ce732d02426736da1f4d109e5f066df7252

It's possible that the consistent generations are due to either top_p, tfs_z, or typical_p being 0 instead of 1.
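To illustrate why that default would make every regenerate identical, here is a minimal NumPy sketch of nucleus (top-p) filtering. This is not ChatterUI's actual code, just a generic illustration: with top_p = 1 the full distribution survives and sampling can vary, while top_p = 0 keeps only the single most likely token, so sampling becomes deterministic.

```python
import numpy as np

def top_p_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p; mask everything else out before sampling."""
    order = np.argsort(probs)[::-1]            # token ids, most likely first
    cumulative = np.cumsum(probs[order])
    # Always keep at least the single most likely token.
    cutoff = max(1, np.searchsorted(cumulative, top_p) + 1)
    filtered = np.zeros_like(probs)
    filtered[order[:cutoff]] = probs[order[:cutoff]]
    return filtered / filtered.sum()

probs = np.array([0.5, 0.3, 0.15, 0.05])       # toy next-token distribution

# top_p = 1.0 keeps the whole distribution: regenerating can differ.
full = top_p_filter(probs, 1.0)

# top_p = 0.0 keeps only the most likely token: sampling from this
# always returns token 0, so every "regenerate" gives the same answer.
greedy = top_p_filter(probs, 0.0)

rng = np.random.default_rng()
token = rng.choice(len(greedy), p=greedy)      # always 0 when top_p == 0
```

A similar collapse happens with typical_p or tfs_z at 0: each sampler then prunes the candidate set down to one token, and temperature no longer has anything to randomize over.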