Closed: zachsa999 closed this issue 1 year ago
Where did you change the model to GPT-3? I see "model":"gpt-4" in the request data.
I replaced gpt-4 with gpt-3.5-turbo and now get a 400 response code instead of a 404, with the following error:
{
error: {
message: "This model's maximum context length is 4097 tokens. However, you requested 4476 tokens (476 in the messages, 4000 in the completion). Please reduce the length of the messages or completion.",
type: 'invalid_request_error',
param: 'messages',
code: 'context_length_exceeded'
}
}
It seems the prompt length needs to be reduced for GPT-3 somehow
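The numbers in the error explain the failure: the prompt tokens plus the requested completion (`max_tokens`) must fit inside the model's context window, which was 4097 tokens for gpt-3.5-turbo at the time. A small illustrative helper (the function name here is hypothetical, not from the repo) shows the arithmetic:

```typescript
// Hypothetical helper illustrating the context-window budget from the
// error above: prompt tokens + requested completion tokens must not
// exceed the model's context window.
function maxCompletionTokens(contextWindow: number, promptTokens: number): number {
  return Math.max(0, contextWindow - promptTokens);
}

// With 476 tokens in the messages, at most 4097 - 476 = 3621 completion
// tokens can be requested; the bot asks for 4000, hence the 400 error.
console.log(maxCompletionTokens(4097, 476)); // 3621
```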
You need access to GPT-4, or use the model gpt-3.5-turbo. It's now possible to configure this via the OPENAI_MODEL environment variable.
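For reference, that env-based configuration would look something like this in the bot's `.env` file (the variable name follows this thread; check the repo's README for the exact setup):

```shell
# .env — select the OpenAI model via environment variable instead of editing code
OPENAI_MODEL=gpt-3.5-turbo
```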
@RafalWilinski setting OPENAI_MODEL=gpt-3.5-turbo results in the error I posted above.
same issue here
I fixed this by changing params.maxTokens in src/models/chatWithTools.ts to a smaller number; 300 works.
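A minimal sketch of that fix, assuming the model params in src/models/chatWithTools.ts resemble the request data shown in this thread (the exact object shape in the repo may differ):

```typescript
// Hypothetical excerpt of the model params in src/models/chatWithTools.ts.
// Lowering maxTokens leaves room in gpt-3.5-turbo's 4097-token context
// window for the prompt; 300 is the value reported to work in this thread.
const params = {
  modelName: process.env.OPENAI_MODEL ?? "gpt-3.5-turbo",
  temperature: 1,
  maxTokens: 300, // was 4000, which overflowed the 4097-token context
};

// With the 476 prompt tokens from the error message, the request now fits:
console.log(476 + params.maxTokens <= 4097); // true
```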
Same as @zachsa999: changing the .env to OPENAI_MODEL=gpt-3.5-turbo gives me a 400 response code.
Entering new agent_executor chain...
AxiosError: Request failed with status code 400
at createError (/opt/telegram-chatgpt-concierge-bot/node_modules/langchain/dist/util/axios-fetch-adapter.cjs:313:16)
at settle (/opt/telegram-chatgpt-concierge-bot/node_modules/langchain/src/util/axios-fetch-adapter.js:47:3)
at /opt/telegram-chatgpt-concierge-bot/node_modules/langchain/dist/util/axios-fetch-adapter.cjs:181:19
at new Promise (<anonymous>)
at fetchAdapter (/opt/telegram-chatgpt-concierge-bot/node_modules/langchain/dist/util/axios-fetch-adapter.cjs:173:12)
at processTicksAndRejections (node:internal/process/task_queues:95:5) {
..........
method: 'post',
data: '{"model":"gpt-3.5-turbo","temperature":1,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"max_tokens":4000,"n":1,"stop":["Observation:"]
I changed the model to 3.5 to get around the waitlist. The bot initializes, but the server crashes when posting a request to OpenAI. Here are the logs.