RateLimitError: Error code: 429 - {'error': {'message': 'max request per minute reached: 3, please try again after 1 seconds', 'type': 'rate_limit_reached_error'}}
Traceback:
File "C:\ai\Chenyme_AAVT_0.6.3_FIixbug\Chenyme_AAVT_0.6.3_FIixbug\env\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 584, in _run_script
exec(code, module.__dict__)
File "C:\ai\Chenyme_AAVT_0.6.3_FIixbug\Chenyme_AAVT_0.6.3_FIixbug\pages\📽️视频(Video).py", line 130, in <module>
result = kimi_translate(st.session_state.kimi_key, translate_option, result, language1, language2, token_num)
File "C:\ai\Chenyme_AAVT_0.6.3_FIixbug\Chenyme_AAVT_0.6.3_FIixbug\utils\utils.py", line 190, in kimi_translate
completion = client.chat.completions.create(
File "C:\ai\Chenyme_AAVT_0.6.3_FIixbug\Chenyme_AAVT_0.6.3_FIixbug\env\lib\site-packages\openai\_utils\_utils.py", line 275, in wrapper
return func(*args, **kwargs)
File "C:\ai\Chenyme_AAVT_0.6.3_FIixbug\Chenyme_AAVT_0.6.3_FIixbug\env\lib\site-packages\openai\resources\chat\completions.py", line 667, in create
return self._post(
File "C:\ai\Chenyme_AAVT_0.6.3_FIixbug\Chenyme_AAVT_0.6.3_FIixbug\env\lib\site-packages\openai\_base_client.py", line 1233, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "C:\ai\Chenyme_AAVT_0.6.3_FIixbug\Chenyme_AAVT_0.6.3_FIixbug\env\lib\site-packages\openai\_base_client.py", line 922, in request
return self._request(
File "C:\ai\Chenyme_AAVT_0.6.3_FIixbug\Chenyme_AAVT_0.6.3_FIixbug\env\lib\site-packages\openai\_base_client.py", line 998, in _request
return self._retry_request(
File "C:\ai\Chenyme_AAVT_0.6.3_FIixbug\Chenyme_AAVT_0.6.3_FIixbug\env\lib\site-packages\openai\_base_client.py", line 1046, in _retry_request
return self._request(
File "C:\ai\Chenyme_AAVT_0.6.3_FIixbug\Chenyme_AAVT_0.6.3_FIixbug\env\lib\site-packages\openai\_base_client.py", line 998, in _request
return self._retry_request(
File "C:\ai\Chenyme_AAVT_0.6.3_FIixbug\Chenyme_AAVT_0.6.3_FIixbug\env\lib\site-packages\openai\_base_client.py", line 1046, in _retry_request
return self._request(
File "C:\ai\Chenyme_AAVT_0.6.3_FIixbug\Chenyme_AAVT_0.6.3_FIixbug\env\lib\site-packages\openai\_base_client.py", line 1013, in _request
raise self._make_status_error_from_response(err.response) from None
Hello! Thanks for your advice.
A setting to adjust the request interval will be added in a future release to work around Kimi's low concurrency limit~
The log is shown above.
Maybe add a configuration option for the maximum number of requests per minute?
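A complementary approach is to retry on 429 responses with exponential backoff, since the API's error message ("please try again after 1 seconds") invites a short wait. The helper below is an illustrative sketch, not code from this project; `call_with_backoff` and its parameters are assumed names, and the string check is a stand-in for catching the SDK's rate-limit exception type.

```python
import time

def call_with_backoff(fn, max_retries: int = 5, base_delay: float = 2.0):
    """Call fn(); on a rate-limit error (message containing '429' or
    'rate_limit'), sleep with exponential backoff and retry.

    Hypothetical helper for illustration only."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception as exc:
            msg = str(exc)
            if "429" not in msg and "rate_limit" not in msg:
                raise  # not a rate-limit error, don't swallow it
            if attempt == max_retries - 1:
                raise  # out of retries
            time.sleep(base_delay * (2 ** attempt))  # 2s, 4s, 8s, ...
```

The retry budget (`max_retries`) could then be the user-facing "max request" configuration: a wrapper like `call_with_backoff(lambda: client.chat.completions.create(...))` keeps the translation loop alive through transient 429s instead of crashing the Streamlit page.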