mcapizzi-cohere closed this issue 10 months ago
This is both (1) a very naive question and (2) probably best suited for another forum, but I'll ask it anyway.

Is there a relationship between (1) the underlying quality of the LLM used and (2) the time it takes to generate a completion? Take two models for example:

Will one of those models complete the generation faster?

This question reveals my lack of detailed understanding of both (1) greedy decoding in general and (2) this implementation, but I'd appreciate some more intuition on the question, as it will help us decide whether it's "worth" using a stronger (most likely larger, in terms of parameter count) model in our application.

The second model will likely perform faster. However, I have seen that the more interference the LM Format Enforcer has to apply, the more likely you are to get low-quality answers, so you will have to judge based on your specific use case. LM Format Enforcer's performance footprint is the same regardless of how much it has to change. It also supports LLM features such as beam search, so you are not forced into greedy decoding with it.
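For intuition on the greedy vs. beam search distinction mentioned in this thread, here is a toy sketch (not real LLM code; the tiny hand-written probability table stands in for a model's next-token distribution). It shows two things: greedy decoding does one forward pass per step and can miss the highest-probability sequence, while beam search does roughly one forward pass per live beam per step, so its cost scales with beam width rather than with model quality.

```python
import math

# Toy next-token "model": maps a token sequence to {token: probability}.
# This is a hypothetical stand-in for an LLM's softmax output.
def toy_next_token_probs(seq):
    table = {
        (): {"the": 0.6, "a": 0.4},
        ("the",): {"cat": 0.5, "dog": 0.5},
        ("a",): {"cat": 0.9, "dog": 0.1},
        ("the", "cat"): {"<eos>": 1.0},
        ("the", "dog"): {"<eos>": 1.0},
        ("a", "cat"): {"<eos>": 1.0},
        ("a", "dog"): {"<eos>": 1.0},
    }
    return table[tuple(seq)]

def greedy_decode(max_steps=10):
    seq, calls = [], 0
    for _ in range(max_steps):
        probs = toy_next_token_probs(seq)
        calls += 1
        token = max(probs, key=probs.get)  # always take the single best token
        if token == "<eos>":
            break
        seq.append(token)
    return seq, calls

def beam_search(beam_width=2, max_steps=10):
    # Each beam is (sequence, cumulative log-probability).
    beams, calls = [([], 0.0)], 0
    for _ in range(max_steps):
        candidates = []
        for seq, lp in beams:
            if seq and seq[-1] == "<eos>":
                candidates.append((seq, lp))  # finished beam carries over
                continue
            probs = toy_next_token_probs(seq)
            calls += 1  # one forward pass per live beam per step
            for tok, p in probs.items():
                candidates.append((seq + [tok], lp + math.log(p)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
        if all(seq[-1] == "<eos>" for seq, _ in beams):
            break
    best_seq, _ = beams[0]
    return best_seq[:-1], calls  # drop trailing <eos>

greedy_seq, greedy_calls = greedy_decode()
beam_seq, beam_calls = beam_search()
print(greedy_seq, greedy_calls)  # ['the', 'cat'] in 3 model calls
print(beam_seq, beam_calls)      # ['a', 'cat'] (higher total probability) in 5 calls
```

In this toy table, greedy picks "the" first (0.6 > 0.4) and ends up with a sequence of probability 0.3, while beam search keeps both prefixes alive and finds "a cat" with probability 0.36, at the cost of more model calls. The same intuition applies at LLM scale: generation time is dominated by (model size) × (tokens generated) × (beams), largely independent of how "good" the model's answers are.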