Closed: adarshpalaskar1 closed this pull request 2 months ago
Sorry for the delayed PR. Please let me know if this is the table format you are looking for, and whether any further changes are required.
This is great! Thank you for pulling this together.
On the function descriptions, you're right. It would be duplicative, but it would also keep the relevant information together for reference. I don't feel strongly either way, so pick whichever you think is better for a new user.
I'm unsure whether the catch-all implementation is correct and whether LocalServerOpenAISchema should be included. I like how you added the footnotes. I'd probably mention that LocalServerOpenAISchema exists specifically for seamless integration with Llama.jl. In general, it differs only in taking the server URL from an ENV variable (which makes the integration easier in some workflows, e.g., when you have nested calls and passing api_kwargs is harder).
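Roughly, the difference in usage looks like this (a from-memory sketch, not copied from the docs; the `LOCAL_SERVER` variable name, the `url` keyword in `api_kwargs`, and the model name are my assumptions, so please double-check them):

```julia
using PromptingTools
const PT = PromptingTools

# CustomOpenAISchema: the server URL has to travel through api_kwargs on every call,
# which gets awkward when the call sits deep inside other functions.
msg = aigenerate(PT.CustomOpenAISchema(), "Say hi!";
    model = "my-local-model",                        # hypothetical model name
    api_kwargs = (; url = "http://localhost:8080"))  # URL passed explicitly per call

# LocalServerOpenAISchema: the URL is read from an ENV variable instead
# (assumed here to be LOCAL_SERVER), so nested calls don't need api_kwargs.
ENV["LOCAL_SERVER"] = "http://localhost:8080"
msg = aigenerate(PT.LocalServerOpenAISchema(), "Say hi!";
    model = "my-local-model")                        # hypothetical model name
```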
@adarshpalaskar1 Thank you for implementing the changes!
Last ask, would you mind please:
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 92.90%. Comparing base (d61cb67) to head (9603e9e). Report is 1 commit behind head on main.
:umbrella: View full report in Codecov by Sentry.
- adding the GroqOpenAISchema? It supports only aigenerate and aiextract.
Yes, will do that.
- marking the resolved items as resolved? I don't want to untick them until I know you're done, so it's easier if you could mark them.
Sure, I was just waiting for your confirmation on the changes.
Let me know when it’s ready. It looks good to me!
This should be ready to be merged 😀
Awesome stuff! Thank you for contributing, @adarshpalaskar1 !
This commit addresses issue #121 by adding a new doc page with a table of model providers vs. supported functions.