Closed teknightstick closed 1 year ago
I tried that once, too, with a GPT-4 API key. It answered as GPT-3.5. I am not sure whether the model answers that question reliably. When you ask the same question via curl with your key and the model set, do you get a different response? It might be a bug, but the code looks pretty simple to me. Can you spot the bug?
I currently don't have access to the GPT-4 API (no key), so I cannot debug this. Please help with the debugging.
```shell
/app $ echo "OPENAI_MODEL_NAME: ${OPENAI_MODEL_NAME:-gpt-3.5-turbo}"
OPENAI_MODEL_NAME: gpt-4
/app $ echo "OPENAI_MAX_TOKENS: ${OPENAI_MAX_TOKENS:-2000}"
OPENAI_MAX_TOKENS: 8000
/app $
```
So this is what I am getting back.
I signed up for the GPT-4 API on the waitlist (I used ChatGPT to write the application) and got access within a week, FYI.
`/app $ cat src/openai-thread-completion.js`

```js
const { Configuration, OpenAIApi } = require("openai");

const configuration = new Configuration({
  apiKey: process.env["OPENAI_API_KEY"],
});
const openai = new OpenAIApi(configuration);

const model = process.env["OPENAI_MODEL_NAME"] ?? "gpt-3.5-turbo";
const max_tokens = Number(process.env["OPENAI_MAX_TOKENS"] ?? 2000);

async function continueThread(messages) {
  const response = await openai.createChatCompletion({
    messages,
    model,
    max_tokens,
  });
  return response.data?.choices?.[0]?.message?.content;
}

module.exports = { continueThread };
```
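One way to see which model actually served a request is to inspect the `model` field of the response body rather than the answer text (a JSON reply later in this thread reports `gpt-4-0314` there, for example). A minimal sketch of such a check, which could be applied to `response.data` inside `continueThread` — the helper name is my own, not part of the bot:

```javascript
// Hypothetical helper (the name is mine, not the bot's): extract both the
// model the API says it used and the answer text from a chat completion body.
// The "model" field reported by the API is authoritative; the answer text
// ("I am based on GPT-3...") is not.
function describeCompletion(data) {
  return {
    servedBy: data?.model,
    answer: data?.choices?.[0]?.message?.content,
  };
}

// Example with a body shaped like the API's chat completion reply:
const sample = {
  model: "gpt-4-0314",
  choices: [{ message: { role: "assistant", content: "Hello!" } }],
};
console.log(describeCompletion(sample)); // { servedBy: 'gpt-4-0314', answer: 'Hello!' }
```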
This is what I got back so far.
I don't get your last comment. This is the original code, isn't it? I mean, just change the hard-coded value if you want to debug.
Again: When you ask the same question via CURL with the key and the model, do you get a different response?
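For reference, that check can also be scripted in Node (18+, which ships a global `fetch`) instead of curl. This is only a sketch, assuming `OPENAI_API_KEY` is set in the environment; the function name and the fallback model are my choices:

```javascript
// Hypothetical sketch: send the same question straight to the API and return
// both the model the API reports and the answer text, so you can compare with
// what the bot says. Requires a valid OPENAI_API_KEY to actually run.
async function askDirectly(question) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env["OPENAI_API_KEY"]}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: process.env["OPENAI_MODEL_NAME"] ?? "gpt-4",
      messages: [{ role: "user", content: question }],
    }),
  });
  const data = await res.json();
  return { servedBy: data.model, answer: data.choices?.[0]?.message?.content };
}

// Usage (needs a real key):
// askDirectly("Which model are you?").then(console.log);
```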
I signed up for the plugin API 5 weeks ago and never heard back.
For me, GPT-4 is working. I get some other errors with formatting requests, but that's another topic; I will create an issue when I've investigated further.
But the problem you are facing here seems to be model-related, FYI @teknightstick @yGuy:
```json
{
  "id": "...",
  "object": "chat.completion",
  "created": 1683709644,
  "model": "gpt-4-0314",
  "usage": {
    "prompt_tokens": 16,
    "completion_tokens": 45,
    "total_tokens": 61
  },
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "As an AI language model, I am based on OpenAI's GPT-3. However, note that I am constantly evolving and being updated, so my performance may improve over time as new versions or updates get integrated."
      },
      "finish_reason": "stop",
      "index": 0
    }
  ]
}
```
Thank you @jensschaerer for the confirmation. That's what I was expecting, too. Interestingly, when you use the chat.openai.com page, it frequently responds "correctly". It could be that they are using a different system message to prime the model. For example, I think they also add(ed) time information to the prompt on chat.openai.com. Maybe they added the GPT-4 information, too. How would the model know that it is GPT-4 without a prompt, after all? At the time it was trained there was no GPT-4 :-) - it must be in the prompt or in the fine-tuning.
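If someone wanted to replicate that kind of priming in this bot, one option would be to prepend a system message before calling the completion endpoint. This is just a sketch of the idea above — the prompt wording is my assumption, not what chat.openai.com actually sends:

```javascript
// Sketch: prepend a system message that names the model and the current date,
// so the model can answer "which model are you?" from its prompt. The wording
// is an assumption; chat.openai.com's real system prompt is not public.
function withSystemPrompt(messages) {
  const model = process.env["OPENAI_MODEL_NAME"] ?? "gpt-3.5-turbo";
  const today = new Date().toISOString().slice(0, 10);
  const systemMessage = {
    role: "system",
    content: `You are ChatGPT, running on the ${model} model. Current date: ${today}.`,
  };
  // Put the system message first so it primes every completion.
  return [systemMessage, ...messages];
}

// The result could then be passed to continueThread() instead of the raw messages.
```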
You can ask the bot `@chatgpt Which ChatGPT model are you using currently?` and it will answer `I am currently using OpenAI's ChatGPT powered by the GPT-3.5-turbo engine.` for both models `gpt-3.5-turbo` and `gpt-4`.
When using model `gpt-4-0314`, the answer to the above question is `I am currently using OpenAI's ChatGPT.`
Is there something obvious that I am missing?