Closed Inklare closed 2 weeks ago
You could try writing a better system prompt that's more suited to your task. You could also try adjusting API parameters that affect the LLM's behavior, like "temperature" and "top_p" (search around to learn more).
If you're seeing better performance from the same LLM in a different frontend, it's just a matter of copying whatever system prompt and API parameters that frontend is using. There's nothing else to it, really.
(Besides stuff like RAG, but I don't think that applies here.)
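To make that concrete, here's a minimal sketch of passing those sampling parameters through an OpenAI-compatible chat API. The model id, base URL implied by the payload shape, and system prompt are placeholders I've made up for illustration, not anything from this thread.

```python
# Sketch: building a chat-completion request with explicit sampling parameters.
# Assumes an OpenAI-compatible API; model id and system prompt are placeholders.

def build_request(user_message: str) -> dict:
    """Assemble a chat-completion payload with a task-specific system prompt
    and explicit temperature / top_p values."""
    return {
        "model": "gemma-2",  # placeholder model id
        "messages": [
            # A system prompt tailored to your task often matters as much
            # as the sampling parameters do.
            {"role": "system",
             "content": "You are a careful assistant. Answer precisely."},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.3,  # lower = less random; good for math/factual tasks
        "top_p": 0.9,        # nucleus-sampling cutoff
    }

payload = build_request("What is 5 + 5 * 30?")
print(payload["temperature"])  # → 0.3
```

The same payload works whether you POST it yourself or pass the fields to a client library; the point is that a frontend that "feels smarter" is usually just sending different values here.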
I set temperature to 0.3 and this fixed the problem, thank you
Why is this bot dumber than a regular model (I mean gemma 2 via the terminal, for example)? It can't solve 5+5*30... Can I fix it?