ChatGPTNextWeb / ChatGPT-Next-Web

A cross-platform ChatGPT/Gemini UI (Web / PWA / Linux / Win / MacOS). Get your own cross-platform ChatGPT/Gemini app with one click.
https://app.nextchat.dev/
MIT License

Azure gpt-4-turbo-2024-04-09 is not responding; the response is truncated or interrupted. [Bug] #4624

Open jmartinezb3 opened 4 months ago

jmartinezb3 commented 4 months ago

Bug Description

In this example, when the model is asked to describe an image, it does not respond completely: only part of the response is shown. I have adjusted the maximum generation tokens parameter to check whether that was the problem, but it still does not work. I have also modified the model deployment parameters on Azure, yet I still get the same result. This makes me think it may be a problem with the UI rather than with the model configuration.

Steps to Reproduce

Use the model as usual.

Expected Behavior

A complete answer.

Screenshots

Test2

Test

Deployment Method

Desktop OS

Windows 10, desktop application, v2.12.2

Desktop Browser

No response

Desktop Browser Version

No response

Smartphone Device

No response

Smartphone OS

No response

Smartphone Browser

No response

Smartphone Browser Version

No response

Additional Logs

No response

Dean-YZG commented 4 months ago

My Azure gpt-4-turbo-2024-04-09 doesn't even support images.

Dean-YZG commented 4 months ago

Currently, Azure gpt-4-turbo-2024-04-09 doesn't support images: https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/gpt-with-vision?tabs=rest%2Csystem-assigned%2Cresource

jmartinezb3 commented 4 months ago

My Azure gpt-4-turbo-2024-04-09 doesn't even support images.

This is not true. If you call the API with a Python or JS script, it does support images. It is literally one of the main functions of the model.

Furthermore, if it didn't support images, in the screenshot I sent, the model wouldn't know what I'm talking about. However, that's not the case. The issue is that the responses are not being displayed correctly. @Dean-YZG
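
For reference, here is a minimal sketch of the kind of direct API call being described: a chat-completions request to an Azure GPT-4 Turbo deployment with an image attached. The resource name, deployment name, api-version, and environment variable are placeholders and assumptions, not values taken from this thread.

```ts
// Sketch only: calling an Azure OpenAI chat-completions deployment with an
// image part, outside the NextChat UI. Placeholders: YOUR-RESOURCE,
// YOUR-DEPLOYMENT, the api-version, and AZURE_OPENAI_API_KEY.
const endpoint =
  "https://YOUR-RESOURCE.openai.azure.com/openai/deployments/YOUR-DEPLOYMENT" +
  "/chat/completions?api-version=2024-02-15-preview";

async function askAboutImage(imageUrl: string, question: string): Promise<string> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "api-key": process.env.AZURE_OPENAI_API_KEY ?? "",
    },
    body: JSON.stringify({
      messages: [
        {
          role: "user",
          // Multimodal message content: one text part plus one image_url part.
          content: [
            { type: "text", text: question },
            { type: "image_url", image_url: { url: imageUrl } },
          ],
        },
      ],
      // Ask for enough completion tokens so the answer is not cut short.
      max_tokens: 1024,
    }),
  });
  const data = await res.json();
  return data.choices?.[0]?.message?.content ?? "";
}
```

If a call like this returns a full description of the image while the UI shows a truncated one, that points at the client rather than at the model or the deployment, which is the argument being made here.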

doherty88 commented 4 months ago

I ran into the same problem. I am using gpt-4-turbo on the Azure platform: answers to text questions are fine, but whenever the question includes an uploaded image, the answer gets truncated.

jmartinezb3 commented 4 months ago

I ran into the same problem. I am using gpt-4-turbo on the Azure platform: answers to text questions are fine, but whenever the question includes an uploaded image, the answer gets truncated.

@doherty88 This seems to be an issue with how requests that include images are handled. When the model needs to use its vision capabilities, the UI does not handle the request correctly. In the Azure playground it works correctly, so it is not a problem with the model or the Azure configuration.

libli commented 4 months ago

Same problem here. I compared the two: it looks like, in a streaming response, the structure of the second and subsequent chunks differs from the official OpenAI API response.
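
If the Azure stream really does differ from the official OpenAI stream in this way (for example, chunks whose choices array is empty, or deltas without a content field), then a handler that reads choices[0].delta.content unguarded can throw or stop partway through, which would look exactly like a truncated answer. Below is a small, hypothetical sketch of a more defensive handler for one "data:" line of the SSE stream, not the project's actual parser.

```ts
// Hypothetical defensive extraction of the text delta from one SSE "data:"
// payload of a streaming chat-completions response. Chunks without choices
// (e.g. filter/metadata chunks) or without delta.content are skipped instead
// of aborting the stream.
function extractDelta(dataLine: string): string {
  if (dataLine.trim() === "[DONE]") return "";
  try {
    const json = JSON.parse(dataLine);
    const choice = json.choices?.[0]; // may be undefined on some chunks
    return choice?.delta?.content ?? ""; // role-only or filter chunks carry no content
  } catch {
    // Malformed or partial chunk: ignore it rather than cut the reply off.
    return "";
  }
}
```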

Dean-YZG commented 4 months ago

Same problem here. I compared the two: it looks like, in a streaming response, the structure of the second and subsequent chunks differs from the official OpenAI API response.

Do you have a screenshot of the response?

Dean-YZG commented 4 months ago

My Azure gpt-4-turbo-2024-04-09 doesn't even support images.

This is not true. If you call the API with a Python or JS script, it does support images. It is literally one of the main functions of the model.

Furthermore, if it didn't support images, in the screenshot I sent, the model wouldn't know what I'm talking about. However, that's not the case. The issue is that the responses are not being displayed correctly. @Dean-YZG

I agree with you. Do you have any screenshots of the response?

jmartinezb3 commented 4 months ago

Hello, yes, @Dean-YZG

This is the response using the custom proxy from https://github.com/haibbo/cf-openai-azure-proxy:


(screenshot)

This is the response using the usual Azure API configuration:


(screenshot)

dustookk commented 3 weeks ago

I just solved the same problem; I hope this can help you guys.

Change the code of isVisionModel so that your model name is included.

Also change the check visionModel && modelConfig.model.includes("preview") so that your model name passes it.
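
Below is a minimal sketch of the two changes being described, under assumed shapes: a keyword-based isVisionModel helper and a request builder whose vision-specific max_tokens branch was gated on the model name containing "preview". The keyword list, the buildVisionPayload wrapper, and the 4000-token floor are illustrative, not taken verbatim from the repository.

```ts
// Sketch only: the surrounding code in ChatGPT-Next-Web may differ.

// 1) Make sure isVisionModel recognizes the deployment name you actually use,
//    e.g. an Azure deployment named after gpt-4-turbo-2024-04-09.
function isVisionModel(model: string): boolean {
  const visionKeywords = [
    "vision",
    "gpt-4-turbo-2024-04-09", // add your model/deployment name here
  ];
  return visionKeywords.some((keyword) => model.includes(keyword));
}

// 2) Widen the gate that adds an explicit max_tokens for vision requests so it
//    is not limited to model names containing "preview"; without an explicit
//    max_tokens, the completion can be cut off at a small service-side default.
function buildVisionPayload(model: string, maxTokens: number) {
  const payload: Record<string, unknown> = { model, stream: true };
  const visionModel = isVisionModel(model);
  if (visionModel /* previously also required: model.includes("preview") */) {
    payload.max_tokens = Math.max(maxTokens, 4000);
  }
  return payload;
}
```

The idea is that once the deployment name is matched as a vision model and the payload carries an explicit max_tokens, the answer should no longer stop after only a few tokens, which matches the truncation reported in this issue.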