[Closed] Neoony closed this issue 1 month ago
I also updated my API key in subtrans to one of the newer project API keys (I still had the old user API key in there), but that made no difference either.
The model list is filtered to ids that start with "gpt", to exclude models like DALL-E, Whisper and the old Davinci/Babbage models. I could include "chatgpt-" too, but it looks like OpenAI advises against using it in production 🤔
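For illustration, a minimal sketch of how a prefix filter like that behaves (hypothetical code and model ids, not gpt-subtrans' actual implementation):

```python
# Hypothetical list of ids returned by the models API. A startswith("gpt")
# filter drops DALL-E/Whisper/Davinci as intended, but it also drops
# "chatgpt-4o-latest", because that id starts with "chat", not "gpt".
model_ids = [
    "gpt-4o",
    "gpt-4o-2024-08-06",
    "chatgpt-4o-latest",
    "dall-e-3",
    "whisper-1",
    "davinci-002",
]
visible = [m for m in model_ids if m.startswith("gpt")]
print(visible)  # ['gpt-4o', 'gpt-4o-2024-08-06']
```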
As a workaround you could use the Local Server provider and specify Server Address `https://api.openai.com/` and Endpoint `v1/chat/completions`; then you can specify the model name manually 😉
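For reference, here is a sketch of the URL and request body that workaround amounts to, assuming the standard OpenAI chat completions API (the message content is just a placeholder):

```python
import json

server_address = "https://api.openai.com/"  # the "Server Address" setting
endpoint = "v1/chat/completions"            # the "Endpoint" setting
url = server_address + endpoint

# The model field is typed manually, bypassing the dropdown filter entirely.
payload = {
    "model": "chatgpt-4o-latest",
    "messages": [
        {"role": "user", "content": "Translate these subtitle lines..."},  # placeholder
    ],
}
body = json.dumps(payload)
print(url)  # https://api.openai.com/v1/chat/completions
```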
> The model list is filtered to ids that start with "gpt", to exclude models like DALL-E, Whisper and the old Davinci/Babbage models. I could include "chatgpt-" too, but it looks like OpenAI advises against using it in production 🤔
Yeah, likely also because it keeps getting updated (it keeps changing), and its priority is probably lower, so it might be slower. But I have not noticed any issues at all using it.
https://lmarena.ai/ (o1 works very differently, so I'm not interested in that one for this use)
> As a workaround you could use the Local Server provider and specify Server Address `https://api.openai.com/` and Endpoint `v1/chat/completions`; then you can specify the model name manually 😉
Ah right, in that case that's good enough for me. Just tried it and it works. However, I keep getting non-matching lines / fuzzy translations (while some batches translate fine).
It was working almost perfectly with gpt-4o-2024-08-06, and none of my own GPT tooling needed any changes to use this new model, but this does need a much more specific output format from GPT, which I'm not relying on anywhere myself.
Not sure if it's the model or whether something works differently when using the Local Server (I see it also has a prompt template). I also used subtitles from a TV series I hadn't translated yet, so it might also be the subtitles.
I'll have to take a better look when I have more time. (I'll also try my previous model via the Local Server method, and on some subtitles I translated before, to compare; I just need to find the time.)
But anyway, thanks!
OK, I think it's just the model. It seems to like to stop earlier than gpt-4o-2024-08-06. If I use the Local Server with gpt-4o-2024-08-06, it seems to be fine.
I was actually using quite large batches (min 100, max 400), since these models can all output 16k tokens. With gpt-4o-2024-08-06 it worked almost perfectly; only sometimes would it fail (I know it then has to retry a much bigger part, plus the other disadvantages of big batches).
But it seems chatgpt-4o-latest just likes to stop early, before translating the whole thing (just a finish reason of "stop"), even though it could still have used more output tokens. It seems to work a lot better with min 100 / max 200 batches. Maybe the instructions need some tweaking (I might play around with that).
It's a lot slower though.
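As a side note, splitting N subtitle lines into batches within a min/max range (like the min 100 / max 200 above) can be sketched like this; this is a hypothetical helper, not gpt-subtrans' actual batcher, which also considers scene boundaries and other factors:

```python
def plan_batches(total_lines, min_size=100, max_size=200):
    """Split total_lines into batch sizes within [min_size, max_size].

    Hypothetical sketch of the min/max batching idea discussed above.
    """
    batches = []
    remaining = total_lines
    while remaining > 0:
        size = min(max_size, remaining)
        # If a full batch would leave a tail smaller than min_size,
        # shrink this batch so the tail reaches min_size instead.
        if 0 < remaining - size < min_size and size > min_size:
            size = remaining - min_size
        batches.append(size)
        remaining -= size
    return batches

print(plan_batches(450))  # [200, 150, 100]
```

Smaller max sizes mean more requests (slower overall) but less to re-translate when one batch fails.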
Anyway, I just like to experiment with how the new models behave, whether they are intended for production or not :P
Anyway, closing this. Thanks!
I asked a similar thing a while ago, when I couldn't see some OpenAI model in the dropdown (but then it appeared, or I had just missed it).
It seems I now have the same kind of issue with the chatgpt-4o-latest model.
On the latest 0.8.3.
When I check the models API (using the Python openai package):
pip install openai
I can see the model there. But gpt-subtrans doesn't list it.
Any idea why it's missing? I can see that it is querying the models API.
That model just seems much better in everything else I use it for. I'd love to use it here.
(Again, this is why I'd prefer being able to type a custom model name as an alternative for situations like this 😄)