OpenAI have exposed -instruct variants of GPT-3.5 via the API. If I understand correctly, these are closer to the base model, without the fine-tuning for conversation that the chat models have, which might mean they are better at following instructions. They are accessed using the older Completions API rather than the Chat Completions API.
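The practical difference is the shape of the request: the Completions API takes a flat prompt string, while the Chat Completions API takes a list of role-tagged messages. A minimal sketch of the two request bodies (the max_tokens value and prompt wording are illustrative, not anything OpenAI prescribes):

```python
def completion_request(prompt: str) -> dict:
    """Request body for the older Completions endpoint (instruct model)."""
    return {
        "model": "gpt-3.5-turbo-instruct",
        "prompt": prompt,          # a single flat string
        "max_tokens": 512,
    }

def chat_request(prompt: str) -> dict:
    """Request body for the Chat Completions endpoint (chat model)."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [              # a list of role-tagged messages
            {"role": "user", "content": prompt},
        ],
        "max_tokens": 512,
    }
```

Everything else about the call (authentication, choosing a model, reading the response text back out) differs only in where the fields live on the response object.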
A quick test suggests there isn't much difference between the instruct model and the chat model: the translation each produces is of comparable quality. Since the instruct model has a higher per-token cost than gpt-3.5-turbo and a smaller context window than gpt-3.5-turbo-16k, the negatives probably outweigh the positives.
However, it was a good opportunity to decouple the translation process from the specific API calls, introducing a layer of abstraction that should make it easier to support other models and APIs, including those from other companies.
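One way to sketch that abstraction layer: the translation pipeline depends on a small interface, and each API gets its own adapter behind it. The class and method names below are hypothetical (they are not from the project), and the OpenAI-backed adapter assumes the official openai Python package with an API key in the environment:

```python
from abc import ABC, abstractmethod


class TranslationClient(ABC):
    """Interface the translation pipeline depends on, instead of any
    particular vendor's SDK."""

    @abstractmethod
    def translate(self, text: str) -> str:
        ...


class ChatCompletionsClient(TranslationClient):
    """Adapter for the Chat Completions API. Hypothetical wiring: assumes
    the official openai package and OPENAI_API_KEY in the environment."""

    def __init__(self, model: str = "gpt-3.5-turbo"):
        # Imported lazily so other backends don't need this SDK installed.
        from openai import OpenAI
        self._client = OpenAI()
        self._model = model

    def translate(self, text: str) -> str:
        resp = self._client.chat.completions.create(
            model=self._model,
            messages=[{"role": "user",
                       "content": f"Translate to English:\n{text}"}],
        )
        return resp.choices[0].message.content


class EchoClient(TranslationClient):
    """Stand-in backend for exercising the pipeline without network calls."""

    def translate(self, text: str) -> str:
        return text
```

Supporting the instruct model, or a model from another company, then means writing one more adapter rather than threading a new set of API calls through the pipeline.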