ChatGPTNextWeb / ChatGPT-Next-Web

A cross-platform ChatGPT/Gemini UI (Web / PWA / Linux / Win / MacOS). One-click to get your own cross-platform ChatGPT/Gemini application.
https://app.nextchat.dev/
MIT License
76.43k stars · 59.11k forks

[Bug] Incomplete answer in vision mode (answer interrupted) #4084

Closed · devyujie closed this issue 8 months ago

devyujie commented 8 months ago

Bug Description

[screenshot attached]

Steps to Reproduce

Expected Behavior

Screenshots

No response

Deployment Method

Desktop OS

No response

Desktop Browser

No response

Desktop Browser Version

No response

Smartphone Device

No response

Smartphone OS

No response

Smartphone Browser

No response

Smartphone Browser Version

No response

Additional Logs

No response

QAbot-zh commented 8 months ago

+1

X-Zero-L commented 8 months ago

+1

boanz commented 8 months ago

+1

jam-cc commented 8 months ago

@devyujie Could you tell me how to use vision mode? Where do I upload the image?

QAbot-zh commented 8 months ago

> @devyujie Could you tell me how to use vision mode? Where do I upload the image?

Switch to a model that supports vision, such as gpt4v or gemini-vision, and the image upload icon will appear.

KyleJKC commented 8 months ago

Same

DreamsCat commented 8 months ago

Same. How to fix it? I changed max_tokens to 4096, same problem.

H0llyW00dzZ commented 8 months ago

You need to set max_tokens explicitly for gpt-4-vision-preview.

Example:

https://hackerchat.btz.sh/

[screenshot]
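
For context, here is a minimal TypeScript sketch of calling the OpenAI Chat Completions API directly with `max_tokens` set for `gpt-4-vision-preview`. The prompt, image URL, and the 4096 value are placeholders, and `OPENAI_API_KEY` is assumed to be available in the environment.

```typescript
// Minimal sketch: call the Chat Completions API with max_tokens set,
// so the vision model's reply is not cut off after a short default budget.
// The prompt and image URL are placeholders.
async function askVisionModel(): Promise<void> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4-vision-preview",
      max_tokens: 4096, // without this, the reply is truncated early
      messages: [
        {
          role: "user",
          content: [
            { type: "text", text: "Describe this image in detail." },
            { type: "image_url", image_url: { url: "https://example.com/cat.png" } },
          ],
        },
      ],
    }),
  });
  const data = await response.json();
  console.log(data.choices[0].message.content);
}

askVisionModel().catch(console.error);
```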

H0llyW00dzZ commented 8 months ago

> Same. How to fix it? I changed max_tokens to 4096, same problem.

That doesn't work, because this repository disables max_tokens in the request by default:

https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web/blob/main/app/client/platforms/openai.ts#L109

H0llyW00dzZ commented 8 months ago

Anyway, this bug can easily be fixed. However, I don't believe a fix will be merged into the main branch, given the changes the owner has made.

AndyX-Net commented 8 months ago

> Anyway, this bug can easily be fixed. However, I don't believe a fix will be merged into the main branch, given the changes the owner has made.

It's really bad; the problem still reproduces even after updating to version 2.11.2.

H0llyW00dzZ commented 8 months ago

> Anyway, this bug can easily be fixed. However, I don't believe a fix will be merged into the main branch, given the changes the owner has made.

> It's really bad; the problem still reproduces even after updating to version 2.11.2.

Yes, I understand there's nothing particularly remarkable about the latest version. It would be more beneficial to focus on bug fixes and performance improvements rather than adding another AI that may not be entirely stable for everyone.

DreamsCat commented 8 months ago

> You need to set max_tokens explicitly for gpt-4-vision-preview.
>
> Example:
>
> https://hackerchat.btz.sh/
>
> [screenshot]

Thanks, but I don't see a "use max tokens" option there...

DreamsCat commented 8 months ago

> Same. How to fix it? I changed max_tokens to 4096, same problem.

> That doesn't work, because this repository disables max_tokens in the request by default:
>
> https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web/blob/main/app/client/platforms/openai.ts#L109

OK, I got it.

AndyX-Net commented 8 months ago

> Anyway, this bug can easily be fixed. However, I don't believe a fix will be merged into the main branch, given the changes the owner has made.

> It's really bad; the problem still reproduces even after updating to version 2.11.2.

> Yes, I understand there's nothing particularly remarkable about the latest version. It would be more beneficial to focus on bug fixes and performance improvements rather than adding another AI that may not be entirely stable for everyone.

Agreed with your point :)

KSnow616 commented 8 months ago

> Same. How to fix it? I changed max_tokens to 4096, same problem.

> That doesn't work, because this repository disables max_tokens in the request by default:
>
> https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web/blob/main/app/client/platforms/openai.ts#L109

Currently GPT-4V has a very low default max_tokens value, which makes the replies very short and incomplete. Uncommenting that line and building from source again will pass the max_tokens value, override the default, and solve the problem.

fred-bf commented 8 months ago

To minimize the impact, max_tokens is currently configured separately only for vision models. If you run into additional problems, please feel free to give feedback.
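
For illustration, here is a rough sketch (not the exact code in `app/client/platforms/openai.ts`) of what configuring `max_tokens` separately for vision models can look like. `ModelConfig`, `buildRequestPayload`, and the simple `includes("vision")` check are assumptions made for this example.

```typescript
// Rough sketch: build the chat request so max_tokens is only sent for
// vision models, whose upstream default is low enough to truncate replies.
// All names here are illustrative, not the repository's exact code.
interface ModelConfig {
  model: string;
  temperature: number;
  max_tokens: number;
}

function buildRequestPayload(messages: unknown[], modelConfig: ModelConfig) {
  // Simplified vision check for the example; the real check matches more model names.
  const visionModel = modelConfig.model.includes("vision");

  const requestPayload: Record<string, unknown> = {
    messages,
    model: modelConfig.model,
    temperature: modelConfig.temperature,
    stream: true,
    // max_tokens is deliberately omitted for regular chat models.
  };

  if (visionModel) {
    // Give vision models an explicit budget so answers are not cut off mid-reply.
    requestPayload.max_tokens = Math.max(modelConfig.max_tokens, 4000);
  }

  return requestPayload;
}
```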

induite commented 8 months ago

There is one problem: when the image is too large, an error is reported. Could images be automatically compressed after upload?

dustookk commented 2 months ago

I just solved the same problem; I hope this can help you guys.

Change the code of `isVisionModel` so that your model name is included.

Also change the check `visionModel && modelConfig.model.includes("preview")` so that your model name is included.
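
Here is a rough sketch of the two edits described above, assuming helper names similar to the ones mentioned; the keyword list and the `maybeSetMaxTokens` helper are hypothetical and should be adapted to the actual source in your checkout.

```typescript
// Hypothetical sketch of the two edits. The keyword list and surrounding
// payload code are assumptions, not the repository's exact source.

// 1) Make sure isVisionModel recognizes your model name.
function isVisionModel(model: string): boolean {
  const visionKeywords = [
    "vision",
    "gpt-4-turbo",
    "gemini-pro-vision",
    "my-custom-vision-model", // add your own model name here (hypothetical example)
  ];
  return visionKeywords.some((keyword) => model.includes(keyword));
}

// 2) Relax the condition that gates max_tokens, so it no longer requires
// the literal substring "preview" in the model name.
function maybeSetMaxTokens(
  payload: Record<string, unknown>,
  model: string,
  configuredMaxTokens: number,
): void {
  // Before (assumed): isVisionModel(model) && model.includes("preview")
  if (isVisionModel(model)) {
    payload.max_tokens = Math.max(configuredMaxTokens, 4000);
  }
}
```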