Previously we maintained separate code paths for chat-completion inference tasks depending on whether the language model was from OpenAI or a different provider (Llama, Phi, Cohere, etc.). I've now converged that code so that a user can run the exact same custom skill code against whichever language model they prefer; all they have to do is change their environment variables.
I tested this by running the code across text summarization, entity recognition, and image verbalization, AND across different language model deployments (gpt-4o and Phi-3.5).
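For illustration, here is a minimal sketch of the environment-driven pattern described above. The variable names (`CHAT_COMPLETION_ENDPOINT`, `CHAT_COMPLETION_DEPLOYMENT`, `CHAT_COMPLETION_API_KEY`) and the helper function are hypothetical, not the actual skill's configuration; it assumes the target deployment exposes an OpenAI-style chat-completions endpoint, so swapping models is just a matter of changing the environment:

```python
import json
import os
import urllib.request


def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build a chat-completion request whose target model is chosen
    entirely by environment variables (hypothetical names)."""
    endpoint = os.environ["CHAT_COMPLETION_ENDPOINT"]
    body = {
        # e.g. "gpt-4o" or a Phi-3.5 deployment name
        "model": os.environ["CHAT_COMPLETION_DEPLOYMENT"],
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        endpoint,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + os.environ["CHAT_COMPLETION_API_KEY"],
        },
        method="POST",
    )
```

The skill code never branches on the provider; pointing the same request at a different deployment only requires editing the three environment variables.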