-
### What happened?
In production, LLM calls were failing with the following error:
`LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=gpt-4o-mini-2024-…
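This message matches provider-prefix routing of the kind LiteLLM uses, where the provider is expected to be encoded in the model string (e.g. `openai/gpt-4o-mini`) rather than passed as a bare model name. A minimal illustrative sketch of that check — the provider set and function name here are hypothetical, not the library's actual code:

```python
# Illustrative subset of recognised providers (hypothetical, not the real list).
KNOWN_PROVIDERS = {"openai", "azure", "anthropic"}

def resolve_provider(model: str) -> str:
    """Return the provider prefix from a 'provider/model' string.

    Raises ValueError when no provider prefix is present, mirroring the
    'LLM Provider NOT provided' failure reported above.
    """
    prefix, sep, _rest = model.partition("/")
    if sep and prefix in KNOWN_PROVIDERS:
        return prefix
    raise ValueError(f"LLM Provider NOT provided. You passed model={model}")
```

Under this assumption, the fix in production would be calling with `model="openai/gpt-4o-mini"` (or configuring a default provider) instead of the bare model name.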
-
When previewing LlamaParse, I get the correct output when I select the GPT-4o mode, so I want to use that mode when calling the Llama_parse API, but there seem to be no options to use this mode when usin…
-
### Describe the bug
In a group chat setting where some agents have tools associated with them, I frequently face this error:
openai.InternalServerError: Error code: 500 - {'error': {'message': 'Th…
-
### Describe the need of your request
Models with vision support are becoming more common, but as of now you can only use vision with OpenAI and Claude in CodeGPT.
It would be nice to enable Vis…
-
Hi, great work!
I found `args.model_name = "gpt-4o"` on line 323 of `evaluate_from_api.py`. Could this have been added by mistake?
-
**Describe the bug**
Whilst looking into https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/543 I noticed that in ParseNode the chunking functionality is passing `token_counter=lambda text: len(t…
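If the chunker's `token_counter` is a plain `len(text)`, it counts characters rather than tokens, so the computed "token" budget is inflated by roughly 4x for typical English text. A small illustrative sketch of the mismatch (the heuristic counter below is an assumption for demonstration; a real fix would call the model's tokenizer):

```python
def char_counter(text: str) -> int:
    # A character count masquerading as a token count.
    return len(text)

def approx_token_counter(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose.
    # Illustrative only; use the actual tokenizer in production.
    return max(1, len(text) // 4)

sample = "The quick brown fox jumps over the lazy dog."
# char_counter reports several times more "tokens" than the heuristic,
# so chunks sized against a token limit come out far too small.
```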
-
Sadly, it's not as simple as overriding OPENAI_HOST (or at least, I haven't figured out how to make it that simple). The API is fairly different:
URLs take the form
{instance specific url}/openai/deployments/{…
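Azure-style endpoints route requests through a per-deployment path with an `api-version` query parameter, unlike the fixed `https://api.openai.com/v1/...` paths, which is why a host override alone isn't enough. A sketch of the URL construction (the helper name and example values are illustrative):

```python
def azure_chat_url(instance_url: str, deployment: str, api_version: str) -> str:
    """Build an Azure-style chat completions URL.

    The path embeds the deployment name, and the API version travels as
    a query parameter rather than being part of the path.
    """
    return (
        f"{instance_url}/openai/deployments/{deployment}"
        f"/chat/completions?api-version={api_version}"
    )
```

Authentication also differs: Azure typically expects an `api-key` header, whereas the standard API uses `Authorization: Bearer <key>`.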
-
In fact, the GPT API request was not sent correctly, as I can see from the API logs.
![Image_2024-05-20_16-56-23](https://github.com/abi/screenshot-to-code/assets/43348055/1c5ba7fb-a0fa-4599-a6d0-d05…
-