abhi-vachani opened 1 week ago
FYI, I got this error with Anthropic when my free credits ran out:
Error Anthropic Status Error: Error code: 400 - {'type': 'error', 'error': {'type': 'invalid_request_error', 'message': 'Your credit balance is too low to access the Claude API. Please go to Plans & Billing to upgrade or purchase credits.'}}
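For reference, a minimal sketch of mapping a raw provider error to a clearer user-facing message, keyed off the substring in the Anthropic error quoted above. The helper name and wording are hypothetical, not part of any existing PR or SDK:

```python
# Hypothetical helper: translate raw provider error text into a clearer
# explanation. The matched substring comes from the Anthropic error quoted
# above; this is not actual Eidolon or Anthropic SDK code.
def explain_provider_error(message: str) -> str:
    if "credit balance is too low" in message:
        return (
            "Your Anthropic account is out of credits. "
            "Go to Plans & Billing to upgrade or purchase credits."
        )
    # Fall back to the raw provider message when it isn't recognized.
    return message
```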
I submitted a PR that handles the OpenAI-specific messaging. The response message from OpenAI is particularly vague, but the Anthropic one quoted above seems to do a good job of detailing the issue. I'm very new to the project, but at first blush it seems that handling these issues generically across all LLMs might be a pretty big refactor.
Hey Cory, saw your PR. The other thing to consider is what the error should be for an actual "model not found" issue, for example when a GPT model name is mistyped. That said, since the error response is the same in both cases, there may be no way to tell whether it stems from a bad model name or from insufficient funds, so your response might actually cover both.
Commented on the PR, but I wanted to bring the conversation back here as well. One approach could be to list models via /v1/models, which would cover most scenarios.
I had considered something like this, too. However, using my own account as an example (which is unfunded / has no billing), I don't see the model in the list of models on the page. I would suspect that once I enabled billing for the account then I would be able to see the model in the list -- so again, this would still be a scenario related to insufficient funds vs a bad model name. I'm not sure that we would really be able to programmatically tell the difference using their API.
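To make the /v1/models idea concrete, here is a sketch that checks whether a model id appears in an already-fetched model list (e.g. the result of the OpenAI client's models list call); the helper name is ours. As noted above, an unfunded account may not see the model in this list either, so absence still doesn't distinguish a typo from missing billing:

```python
# Sketch: check whether a model id appears in the account's model list
# (e.g. the entries returned by GET /v1/models). Entries may be SDK model
# objects with an `.id` attribute or plain strings.
def model_is_visible(model_id, available_models) -> bool:
    ids = {getattr(m, "id", m) for m in available_models}
    return model_id in ids
```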
Problem:

gpt-4-turbo does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/eidolon_ai_sdk/system/agent_controller.py", line 346, in stream_agent_iterator
    async for event in stream:
  File "/usr/local/lib/python3.11/site-packages/eidolon_ai_sdk/agent/simple_agent.py", line 238, in fn
    async for e in action_fn(self, action, process_id, **kwargs):
  File "/usr/local/lib/python3.11/site-packages/eidolon_ai_sdk/agent/simple_agent.py", line 282, in _act
    async for event in response:
  File "/usr/local/lib/python3.11/site-packages/eidolon_ai_sdk/apu/conversational_apu.py", line 144, in schedule_request
    raise APUException(f"{e.__class__.__name__} while processing request") from e
eidolon_ai_sdk.apu.apu.APUException: APU Error: NotFoundError while processing request

Solution: Display a clear error message explaining that the problem is due to an unfunded OpenAI account.
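A sketch of what the suggested fix might look like, assuming the OpenAI error code is available when the exception is caught. The helper name and message text are illustrative, not Eidolon's actual code; since a mistyped model name and an unfunded account produce the same model_not_found response, the message covers both causes:

```python
# Illustrative only: translate OpenAI's 'model_not_found' error code into a
# message that mentions both plausible causes, since the API response is
# identical for a typo'd model name and an unfunded account.
def explain_model_not_found(model: str, error_code: str) -> str:
    if error_code == "model_not_found":
        return (
            f"Model '{model}' does not exist or you do not have access to it. "
            "Check the model name for typos, and verify that your OpenAI "
            "account has billing enabled; unfunded accounts cannot access "
            "most models."
        )
    return f"OpenAI error '{error_code}' for model '{model}'."
```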