Closed: AndreySupr closed this 2 weeks ago
Bro, do you have a solution? I have the same problem. Are there any other repositories you know of?
@AndreySupr @Techbeastz Hi. Please provide the code, the provider, or the model you are trying to run so we can replicate this problem.
I am running the text generation code from https://github.com/xtekky/gpt4free and here's the code I am trying to run:

from g4f.client import Client

client = Client()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
It's working correctly for me. Try reinstalling with pip install -U g4f,
or use another provider for this model, or try setting the model to gpt-4o-mini:
from g4f.client import Client
client = Client()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
    # Add any other necessary parameters
)
print(response.choices[0].message.content)
Thank you so much, man. gpt-4o-mini is working, but gpt-3.5-turbo is not. By the way, do you know any other repositories like this?
try:
    response = await g4f.ChatCompletion.create_async(
        model=g4f.models.gpt_4,
        messages=[{"role": "user", "content": f"{message_without_emojis}\n {req}"}],
    )
    choices = response.lower()
except Exception as e:
    choices = f"Error {e}"
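For reference, here is a minimal, self-contained sketch of how that snippet can be wrapped and run, since create_async has to be awaited inside a coroutine. The ask function name and the example arguments are only placeholders, and I'm assuming message_without_emojis and req are plain strings in your code:

import asyncio

import g4f

async def ask(message_without_emojis: str, req: str) -> str:
    # Same call as in the snippet above, wrapped in a coroutine so the await is valid.
    try:
        response = await g4f.ChatCompletion.create_async(
            model=g4f.models.gpt_4,
            messages=[{"role": "user", "content": f"{message_without_emojis}\n {req}"}],
        )
        return response.lower()
    except Exception as e:
        return f"Error {e}"

# Run the coroutine from synchronous code.
print(asyncio.run(ask("Hello", "please answer briefly")))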
Thank you man
gpt-4o-mini
I have the following code and it gives the error that I indicated earlier (model not found or too long input. or any other error (xd)).
When I write model=g4f.models.gpt-4o-mini, it doesn't work that way.
Bro, I haven't checked this code, but gpt-4o-mini is working.
Hey @AndreySupr and @Techbeastz,
I've been testing this out as well, and it seems like the issue you're facing is specifically with the Airforce provider. They've recently stopped supporting the gpt-3.5-turbo model, which is why you're encountering this problem. The issue has been addressed in this pull request: https://github.com/xtekky/gpt4free/pull/2313.
It's likely that in the near future, support for gpt-3.5-turbo models may decrease further as fewer providers continue to use this model. Most are transitioning to the gpt-4o-mini model instead.
Therefore, I recommend switching to the newer gpt-4o-mini version, as it offers a more stable model with better provider support compared to gpt-3.5-turbo. Finding providers that still support gpt-3.5-turbo is becoming increasingly difficult, and it's uncertain how long they will continue to do so.
While there might still be some providers adding support for gpt-3.5-turbo, it's advisable to use gpt-4o-mini as a replacement. In essence, gpt-4o-mini is an improved version of gpt-3.5-turbo, with additional training, resulting in slightly better responses.
Moreover, gpt-4o-mini is a bit more cost-effective, which is why many providers are opting for it over gpt-3.5-turbo.
So, my recommendation would be to switch to gpt-4o-mini for a more reliable and stable experience.
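If you want to see whether a particular provider still serves the model you ask for, you can also pin the provider explicitly instead of letting g4f pick one automatically. Here is a rough sketch; I'm assuming your installed g4f version still exposes g4f.Provider.Airforce and accepts a provider argument on the Client, so adjust the names to whatever your version actually has:

import g4f
from g4f.client import Client

# Pin a specific provider instead of letting g4f choose automatically.
# g4f.Provider.Airforce is only an assumption here; it may be renamed or
# removed in newer g4f releases.
client = Client(provider=g4f.Provider.Airforce)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # this provider has dropped gpt-3.5-turbo
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)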
Thx Bro. Here, I changed the code and it works. Solution:

response = await g4f.ChatCompletion.create_async(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": f"{message_without_emojis}\n {req}"}],
)
And by the way, I didn’t use the 3.5 model
Before this I had model=g4f.models.gpt_4,
@AndreySupr,
I noticed that in your code you're using model=g4f.models.gpt-4o-mini, which is causing the error. The correct syntax for the model should be model=g4f.models.gpt_4o_mini instead. The underscore (_) is used instead of the hyphen (-) in the model name.
Here's the corrected code:
import g4f
from g4f.client import Client
client = Client()
response = client.chat.completions.create(
    model=g4f.models.gpt_4o_mini,
    messages=[{"role": "user", "content": "Hello"}],
    # Add any other necessary parameters
)
print(response.choices[0].message.content)
Make sure to use g4f.models.gpt_4o_mini as the model name, and it should resolve the error you're encountering.
It's worth noting that the error "model not found or too long input. or any other error (xd)" is often triggered by the Airforce provider, regardless of the model being used.
INFO model not found or too long input. or any other error (xd)