At the moment, the Sonnet model is selected as the default in the model list. Because of this, it constantly hits its usage limit and stops responding. This is an unnecessary waste of resources, since Haiku, in my personal experience and in some tests, performs better than GPT-3.5 Turbo and gives reasonably good answers. If a user needs a stronger model, they can select it themselves.