mckaywrigley / repo-chat

Use AI to ask questions about any GitHub repo.
MIT License

Server is currently overloaded #13

Open angelorc opened 1 year ago

angelorc commented 1 year ago

Hello everyone, I had the chance to try the script. Unfortunately, I get several errors:

2023-05-09 17:07:39,503:INFO - error_code=None error_message='The server is currently overloaded with other requests. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if the error persists.' error_param=None error_type=server_error message='OpenAI API error received' stream_error=False
Traceback (most recent call last):
  File "/home/angelo/Progetti/repo-chat/embed.py", line 48, in <module>
    vector_store = SupabaseVectorStore.from_documents(
  File "/home/angelo/miniconda3/lib/python3.10/site-packages/langchain/vectorstores/base.py", line 246, in from_documents
    return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
  File "/home/angelo/miniconda3/lib/python3.10/site-packages/langchain/vectorstores/supabase.py", line 99, in from_texts
    embeddings = embedding.embed_documents(texts)
  File "/home/angelo/miniconda3/lib/python3.10/site-packages/langchain/embeddings/openai.py", line 234, in embed_documents
    return self._get_len_safe_embeddings(texts, engine=self.deployment)
  File "/home/angelo/miniconda3/lib/python3.10/site-packages/langchain/embeddings/openai.py", line 175, in _get_len_safe_embeddings
    response = embed_with_retry(
  File "/home/angelo/miniconda3/lib/python3.10/site-packages/langchain/embeddings/openai.py", line 63, in embed_with_retry
    return _embed_with_retry(**kwargs)
  File "/home/angelo/miniconda3/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/home/angelo/miniconda3/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/home/angelo/miniconda3/lib/python3.10/site-packages/tenacity/__init__.py", line 325, in iter
    raise retry_exc.reraise()
  File "/home/angelo/miniconda3/lib/python3.10/site-packages/tenacity/__init__.py", line 158, in reraise
    raise self.last_attempt.result()
  File "/home/angelo/miniconda3/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/home/angelo/miniconda3/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/home/angelo/miniconda3/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/home/angelo/miniconda3/lib/python3.10/site-packages/langchain/embeddings/openai.py", line 61, in _embed_with_retry
    return embeddings.client.create(**kwargs)
  File "/home/angelo/miniconda3/lib/python3.10/site-packages/openai/api_resources/embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
  File "/home/angelo/miniconda3/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/home/angelo/miniconda3/lib/python3.10/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/home/angelo/miniconda3/lib/python3.10/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/home/angelo/miniconda3/lib/python3.10/site-packages/openai/api_requestor.py", line 679, in _interpret_response_line
    raise self.handle_error_response(
openai.error.RateLimitError: The server is currently overloaded with other requests. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if the error persists.

Do you know how I can fix it? It seems that we should add a rate limit.
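One option, since the traceback shows the built-in tenacity retries were exhausted, is to wrap the embedding call in your own retry loop with exponential backoff and jitter, so the script waits out the overload instead of failing. This is a minimal sketch of the pattern with a generic callable; `with_backoff` and the flaky demo function are hypothetical names for illustration, not part of the repo or LangChain.

```python
import random
import time


def with_backoff(fn, max_retries=5, base_delay=1.0, max_delay=30.0):
    """Call fn(), retrying with exponential backoff plus jitter on failure."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # give up after the last attempt
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(delay + random.uniform(0, 0.5 * delay))


# Demo: a call that fails twice (like an overloaded server), then succeeds.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("server overloaded")
    return "ok"

result = with_backoff(flaky, base_delay=0.01)
print(result)  # "ok" after two retries
```

In `embed.py` you could wrap the `SupabaseVectorStore.from_documents(...)` call this way. Embedding documents in smaller batches (fewer texts per request) may also help reduce the chance of hitting the overload in the first place.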