Closed Mortezanavidi closed 5 months ago
Weirdly enough, the openai library and cURL give an error, but the chatbox works perfectly fine with the models.
Update to version 0.2.1.9. The last update has a fix for this issue.
I am already using the latest version, which is hlohaus789/g4f:0.2.1.9.
Maybe your request is incomplete. Model and messages are required; if you omit them, the error is valid.
This is the cURL command I am using:
curl -XPOST -H 'Authorization: Bearer sk-ANYTHING' -H "Content-type: application/json" -d '{
"model": "airoboros-70b",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Who won the world series in 2020?"
},
{
"role": "assistant",
"content": "The Los Angeles Dodgers won the World Series in 2020."
},
{
"role": "user",
"content": "Where was it played?"
}
]
}' 'http://127.0.0.1:7500/v1/chat/completions'
If I just change airoboros-70b to gpt-4, everything works as expected, but any model besides gpt-* gives the stated error on g4f:0.2.1.9.
Ok, so it only affects non-GPT models. I will have a look.
Maybe you have to upgrade some packages. I can use the API with all models on my phone.
And can you look in the logs? You can do it with this command:
docker logs
Or this command:
docker-compose logs
Bumping this issue because it has been open for 7 days with no activity. Closing automatically in 7 days unless it becomes active again.
Closing due to inactivity.
I get this response almost randomly (on about 80% of requests) with gpt-3.5-turbo. Checked with many providers. These providers return 422 Unprocessable Entity most often: FreeGpt, ChatgptX, ChatgptAi, Chatgpt4Online, ChatForAi.
Tried with and without a proxy; it made no difference.
Any solution?
Tried the latest v0.2.8.0; it didn't help.
Logs in docker: INFO: 172.17.0.1:59932 - "POST /v1/chat/completions HTTP/1.1" 422 Unprocessable Entity (yes, that's all)
NodeJS openai lib error message: UnprocessableEntityError: 422 status code (no body)
@xtekky not sure if you will see a closed issue message
Do you have an example request body for the API?
I think I found the problem. I get the error when I send multiple user-role messages.
For example:
const openai = new OpenAI({
baseURL: 'http://my-server:1337/v1',
apiKey: 'rubbish string',
});
// This works fine
const chatCompletion = await openai.chat.completions.create({
model: 'gpt-3.5-turbo',
provider: 'FreeGpt',
temperature: 0.5,
messages: [
{
role: 'user',
content: 'Say hello',
},
],
});
// This throws error
const chatCompletion = await openai.chat.completions.create({
model: 'gpt-3.5-turbo',
provider: 'FreeGpt',
temperature: 0.5,
messages: [
{
role: 'user',
content: 'Say hello again',
},
{
role: 'user',
content: 'Say hello',
},
],
});
InternalServerError: 500 RequestsError: HTTP Error 400:
at Function.generate (/Users/akuma/projects/gpt-test/node_modules/openai/src/error.ts:95:14)
at OpenAI.makeStatusError (/Users/akuma/projects/gpt-test/node_modules/openai/src/core.ts:383:21)
at OpenAI.makeRequest (/Users/akuma/projects/gpt-test/node_modules/openai/src/core.ts:446:24)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at async main (/Users/akuma/projects/gpt-test/index-test.ts:14:28) {
status: 500,
headers: {
'content-length': '106',
'content-type': 'application/json',
date: 'Sat, 06 Apr 2024 08:24:46 GMT',
server: 'uvicorn'
},
error: { message: 'RequestsError: HTTP Error 400: ' },
code: undefined,
param: undefined,
type: undefined
}
This time in the console: https://dropover.cloud/5c6e69 (sorry, I can't copy it as text).
Now it's a 400 code, not 422. I have no idea why. Yesterday I was getting 422 from almost every provider all evening.
Oh my god, I'm so sorry. I found the problem.
Accidentally, some undefined values got into my messages array. This is what causes the problem.
const chatCompletion = await openai.chat.completions.create({
model: 'gpt-3.5-turbo',
provider: 'FreeGpt',
temperature: 0.5,
messages: [
{
role: 'user',
content: 'Say hello again',
},
undefined,
{
role: 'user',
content: 'Say hello',
},
],
});
And we get an UnprocessableEntityError: 422 status code (no body) error.
In fact, this is a user mistake, but it would be good if you could handle it on the script side, because the official OpenAI API does not report any error in this case, which is why I hadn't found it earlier.
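For anyone hitting the same thing, a minimal client-side guard can drop the accidental holes before the request is sent. This is only a sketch: sanitizeMessages is a hypothetical helper, not part of the openai library or g4f.

```javascript
// Hypothetical helper: drop undefined/null or malformed entries from a
// messages array before passing it to openai.chat.completions.create().
function sanitizeMessages(messages) {
  return messages.filter(
    (m) => m && typeof m.role === 'string' && typeof m.content === 'string'
  );
}

const messages = [
  { role: 'user', content: 'Say hello again' },
  undefined, // accidental hole, as in the example above
  { role: 'user', content: 'Say hello' },
];

console.log(sanitizeMessages(messages).length); // 2
```

Running the sanitized array through the API avoids the 422 entirely, at the cost of silently dropping malformed entries, which matches what the official API appears to do.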
Hello, I'll try to make the UnprocessableEntityError response better.
I made the error message better.
Bug description: I get this response from the docker image or the g4f library whenever I send a chat completion request to the API. This is the error:
Even with cURL I get the error below:
This is not related to deepinfra; it happens with any model besides gpt-*.
Environment