System Info / 系統信息
glm-4v
Information / 问题信息
Reproduction / 复现过程
top_p: valid range is the open interval (0.0, 1.0); it cannot equal 0 or 1
Traceback (most recent call last):
File "/data/txy/glm-cookbook/vision/video_understanding.py", line 49, in
result = client.chat.completions.create(**params)
File "/apprun/anaconda3/envs/glm4/lib/python3.10/site-packages/zhipuai/api_resource/chat/completions.py", line 73, in create
return self._post(
File "/apprun/anaconda3/envs/glm4/lib/python3.10/site-packages/zhipuai/core/_http_client.py", line 595, in post
return cast(ResponseT, self.request(cast_type, opts, stream=stream, stream_cls=stream_cls))
File "/apprun/anaconda3/envs/glm4/lib/python3.10/site-packages/zhipuai/core/_http_client.py", line 363, in request
return self._request(
File "/apprun/anaconda3/envs/glm4/lib/python3.10/site-packages/zhipuai/core/_http_client.py", line 450, in _request
raise self._make_status_error(err.response) from None
zhipuai.core._errors.APIRequestFailedError: Error code: 400, with error text {"error":{"code":"1214","message":"max_tokens参数非法。请检查文档。"}} (translation: "The max_tokens parameter is invalid. Please check the documentation.")
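For reference, a minimal sketch of client-side parameter validation that would catch both failures before the request is sent. The `validate_params` helper is hypothetical, and the 1024-token cap for glm-4v is an assumption inferred from this error, not a value confirmed by the documentation:

```python
def validate_params(top_p: float, max_tokens: int, cap: int = 1024) -> None:
    """Check sampling parameters before calling chat.completions.create.

    - top_p must lie strictly inside (0.0, 1.0), per the SDK warning above.
    - The default cap of 1024 for glm-4v is an ASSUMPTION, not confirmed.
    """
    if not (0.0 < top_p < 1.0):
        raise ValueError("top_p must lie in the open interval (0.0, 1.0)")
    if not (1 <= max_tokens <= cap):
        raise ValueError(f"max_tokens must be between 1 and {cap}")

# Example: validate before building the request parameters.
validate_params(top_p=0.7, max_tokens=1024)   # passes
# validate_params(top_p=1.0, max_tokens=10)   # would raise ValueError
```

A pre-flight check like this turns the opaque server-side 400 (error code 1214) into an immediate, readable local error.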
Expected behavior / 期待表现
What is the correct max_tokens limit for glm-4v?