There's not much reason to choose an instruct model over the chat models: the quality of the translation is similar, while the cost per token is higher and the maximum context window is smaller.
However, it serves as a proof of concept for decoupling the translation process from the OpenAI API, opening the door to supporting other language models and endpoints.
This could be a difficult merge for project forks, as the PySubtitleGPT module has been renamed and the ChatGPT classes have moved into a new OpenAI submodule. In most cases, though, the fixes needed for compatibility with the new structure should be straightforward.