anarchy-ai / LLM-VM

irresponsible innovation. Try now at https://chat.dev/
https://anarchy.ai/
MIT License

AttributeError: module 'openai' has no attribute 'error' #390

Closed · bpanahij closed this 10 months ago

bpanahij commented 10 months ago
Traceback (most recent call last):
  File "/llmvm_testing/test.py", line 2, in <module>
    from llm_vm.client import Client
  File "/LLM-VM/src/llm_vm/client.py", line 6, in <module>
    from llm_vm.completion.optimize import LocalOptimizer
  File "/LLM-VM/src/llm_vm/completion/optimize.py", line 16, in <module>
    from llm_vm.completion.data_synthesis import DataSynthesis
  File "/LLM-VM/src/llm_vm/completion/data_synthesis.py", line 13, in <module>
    class DataSynthesis:
  File "/LLM-VM/src/llm_vm/completion/data_synthesis.py", line 57, in DataSynthesis
    @backoff.on_exception(backoff.expo, openai.error.RateLimitError)
AttributeError: module 'openai' has no attribute 'error'

It appears a recent (last few days) change to OpenAI's API is causing an error in the code right after the model configs are first downloaded:

src/llm_vm/completion/data_synthesis.py: line 57

    @backoff.on_exception(backoff.expo, openai.error.RateLimitError)
    def generate_examples(self, final_prompt, openai_key, example_delim="<END>", model="gpt-4", max_tokens=1000, temperature=1, completion=None, call_big_kwargs={}):

The latest openai package (v1.0 and later) removes the "error" intermediary module, so the exception has to be referenced directly as openai.RateLimitError.
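
A minimal sketch of the corrected decorator, assuming openai v1.0+ (only the exception reference changes; the function body is elided):

    import backoff
    import openai  # assumes openai >= 1.0, where exceptions live at the top level

    class DataSynthesis:
        # openai.error.RateLimitError -> openai.RateLimitError
        @backoff.on_exception(backoff.expo, openai.RateLimitError)
        def generate_examples(self, final_prompt, openai_key, example_delim="<END>",
                              model="gpt-4", max_tokens=1000, temperature=1,
                              completion=None, call_big_kwargs={}):
            ...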

VictorOdede commented 10 months ago

Hey @bpanahij thanks for highlighting this error. Seems to be an issue with the backoff lib. Will look into it asap.
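
(In the meantime, pinning the pre-1.0 client with pip install "openai<1.0" restores the openai.error module, assuming nothing else in the environment requires the new API.)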

kshitiz305 commented 10 months ago

@bpanahij were you running data_synthesis.py?

kshitiz305 commented 10 months ago

Got it, you are running test.py.

bpanahij commented 10 months ago

It's a relatively small fix to correct the reference to RateLimitError. It of course doesn't address the underlying backoff issue.
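
For reference, a version-agnostic shim along these lines would cover both clients (a sketch with a hypothetical call_api helper, not necessarily what the PR linked below does):

    import backoff
    import openai

    # Resolve the exception class across client versions:
    # openai >= 1.0 exposes it at the top level, 0.x under openai.error
    try:
        RateLimitError = openai.RateLimitError
    except AttributeError:
        RateLimitError = openai.error.RateLimitError

    @backoff.on_exception(backoff.expo, RateLimitError)
    def call_api(fn, *args, **kwargs):
        # Hypothetical helper: retries fn with exponential backoff
        # whenever the rate limit exception is raised
        return fn(*args, **kwargs)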

bpanahij commented 10 months ago

https://github.com/anarchy-ai/LLM-VM/pull/392

VictorOdede commented 10 months ago

Closed by #392