Is your feature request related to a problem? Please describe.
This library looks amazing, but I'm having trouble understanding which features I can expect to benefit from depending on the LLM (or model provider) I use. I've looked through the issues, but I'm still not clear.
Describe the solution you'd like
An addition to the README: a table listing guidance features as rows, models / model providers as columns, and ✅ or ❌ as values. For example:
| Feature | OpenAI chat models (`gpt-3.5-turbo`, `gpt-4`) | OpenAI other models (`text-davinci-003`) | Hugging Face models |
| --- | --- | --- | --- |
| Partial completions in `assistant` role | ❌ | N/A | ✅ |
Or some other way of documenting what is & isn't supported depending on the model used 🙂
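For context on the example row, here is a rough sketch of what a partial completion in the `assistant` role means, assuming the 0.0.x handlebars-style templates and the `guidance.llms.OpenAI` / `guidance.llms.Transformers` wrappers (my reading of the API, not necessarily exact):

```python
import guidance

# Assumption: guidance 0.0.x-style API; swap in guidance.llms.Transformers(...)
# to use a local Hugging Face model instead.
guidance.llm = guidance.llms.OpenAI("gpt-3.5-turbo")

program = guidance("""
{{#user~}}
Tell me a short joke about {{topic}}.
{{~/user}}

{{#assistant~}}
Sure, here is one: {{gen 'joke' max_tokens=50}}
{{~/assistant}}
""")

# The literal prefix "Sure, here is one: " inside the assistant block is the
# "partial completion": the model is asked to continue from that text. Local
# Hugging Face models can do this, while the OpenAI chat API cannot be forced
# to start its reply from a given prefix, hence the ❌ / ✅ in the table.
result = program(topic="penguins")
print(result["joke"])
```

Whether a pattern like this is supported for a given backend is exactly the kind of thing the proposed table would make clear.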
Describe alternatives you've considered
`lmql` / no alternative
Thank you for your great work!