Closed: devyujie closed this issue 8 months ago
Title: [Bug] Incomplete answer in visual mode (interrupted answer)
+1
@devyujie Could you tell me how to use vision mode? Where do I upload the image?
@devyujie Could you tell me how to use vision mode? Where do I upload the image?
Switch to a model that supports vision, such as gpt4v or gemini-vision, and the image-upload icon will appear.
Same
Same. How to fix? Changing max_tokens to 4096 gives the same problem.
It doesn't work because this repository has disabled it by default:
https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web/blob/main/app/client/platforms/openai.ts#L109
Anyway, this bug can be easily fixed. However, I don't believe the fix will be merged into the main branch, since the owner has made changes.
It's really bad; the problem still reproduces even after updating to version 2.11.2.
Yes, I understand that there's nothing particularly remarkable about the latest version. It would be more beneficial to focus on bug fixes and performance improvements, rather than adding another AI that may not be entirely stable for everyone.
You need to set max_tokens for gpt-4-vision-preview. Example:
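A minimal sketch of such a request, assuming the standard OpenAI chat-completions shape; the function and variable names below are illustrative, not this repo's actual code:

```typescript
// Sketch (assumption: standard OpenAI chat-completions request shape).
// The point: send max_tokens explicitly, because gpt-4-vision-preview's
// server-side default completion budget is very small.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: unknown; // a string, or an array of text / image_url parts for vision
}

function buildVisionPayload(messages: ChatMessage[], maxTokens: number = 4096) {
  return {
    model: "gpt-4-vision-preview",
    messages,
    max_tokens: maxTokens, // without this, long answers get cut off
  };
}

// Usage: one user message holding a text part plus an image part.
const payload = buildVisionPayload([
  {
    role: "user",
    content: [
      { type: "text", text: "Describe this picture." },
      { type: "image_url", image_url: { url: "data:image/png;base64,..." } },
    ],
  },
]);
console.log(payload.max_tokens); // 4096
```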
Thanks, but I don't see the option to set max_tokens...
Okay, I got it.
Agree with your point : )
Currently GPT-4V has a very low default max_tokens value, which makes the replies very short and incomplete. Uncommenting the max_tokens line linked above and building from source again will pass the max_tokens value, overriding the default and solving the problem.
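A hedged sketch of the kind of fix being discussed: re-enable max_tokens for vision models when the request payload is built. This mirrors the idea in app/client/platforms/openai.ts, but all names below are illustrative, not the repository's actual code.

```typescript
// Illustrative sketch, not the repo's real code: build the request payload
// and only attach max_tokens for vision models, with a sensible floor.

interface ModelConfig {
  model: string;
  max_tokens: number;
}

function makeRequestPayload(modelConfig: ModelConfig, visionModel: boolean): Record<string, unknown> {
  const requestPayload: Record<string, unknown> = {
    model: modelConfig.model,
    // ...other options (messages, temperature, stream, ...) omitted
  };
  // While this assignment was commented out, no max_tokens was sent, so the
  // API-side default (very small for gpt-4-vision-preview) truncated replies.
  // Re-enabling it, with a floor of 4000 tokens, avoids the truncation:
  if (visionModel) {
    requestPayload["max_tokens"] = Math.max(modelConfig.max_tokens, 4000);
  }
  return requestPayload;
}
```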
To minimize the impact, max_tokens is currently configured separately for vision models only. If you encounter additional problems, please feel free to give feedback.
There is a problem: an error is reported when the image is too large. Could the image be automatically compressed after uploading?
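One possible approach to the requested auto-compression (a hypothetical sketch, not code from this repo): scale the image to fit inside a maximum dimension, then redraw it on a canvas and re-encode it as JPEG. The dimension math is a pure function shown below; the canvas part is browser-only and shown as comments.

```typescript
// Pure helper: compute downscaled dimensions that fit inside maxDim.
function fitWithin(width: number, height: number, maxDim: number): { width: number; height: number } {
  // Scale factor is capped at 1 so small images are left untouched.
  const scale = Math.min(1, maxDim / Math.max(width, height));
  return { width: Math.round(width * scale), height: Math.round(height * scale) };
}

// Browser-only sketch (illustrative names, not this repo's code):
// function compressImage(img: HTMLImageElement, maxDim = 2048): string {
//   const { width, height } = fitWithin(img.naturalWidth, img.naturalHeight, maxDim);
//   const canvas = document.createElement("canvas");
//   canvas.width = width;
//   canvas.height = height;
//   canvas.getContext("2d")!.drawImage(img, 0, 0, width, height);
//   return canvas.toDataURL("image/jpeg", 0.8); // much smaller than the original base64
// }

console.log(fitWithin(4000, 2000, 2048)); // { width: 2048, height: 1024 }
```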
I just solved the same problem; I hope this can help you guys.
Change the code of isVisionModel to make sure your model name is included; the check visionModel && modelConfig.model.includes("preview") must match your model's name.
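A hypothetical sketch of the check being discussed: if the vision-model detector only matches certain substrings, custom model names never get the vision code path (or the max_tokens override). Extending the keyword list is the workaround suggested above; the names below are illustrative, not the repo's actual code.

```typescript
// Keywords that mark a model as vision-capable; extend this list so your
// own model name is detected (assumption: a substring-based check).
const VISION_KEYWORDS: string[] = ["vision", "gpt-4v"];

function isVisionModel(model: string): boolean {
  return VISION_KEYWORDS.some((kw) => model.toLowerCase().includes(kw));
}

console.log(isVisionModel("gpt-4-vision-preview")); // true
console.log(isVisionModel("gpt-3.5-turbo")); // false
```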
Bug Description
Steps to Reproduce
No response
Expected Behavior
No response
Screenshots
No response
Deployment Method
Desktop OS
No response
Desktop Browser
No response
Desktop Browser Version
No response
Smartphone Device
No response
Smartphone OS
No response
Smartphone Browser
No response
Smartphone Browser Version
No response
Additional Logs
No response