Closed rljacobson closed 4 hours ago
As someone who wants to try this out, my first question is, which LLM should I use? It's really cool that there are so many options, but surely some perform better than others. I understand there may be some subjectivity involved, but I would find even vague general guidance helpful.
I did read the readme, but it's possible that you have given such guidance and I've missed it, in which case never mind.
Hi, in the README I mentioned that the best model right now is gpt-4o.
For local models via Ollama, Phi-3-medium instruct (128k) and Mistral Large (2407) give good results, but they are not as accurate as gpt-4o.