-
**Describe the bug**
I get the error "Sorry, we are having some problems. Please try again later." in chat responses after deployment.
**To Reproduce**
Steps to reproduce the behavior:
1. Follow th…
-
When running with AzureOpenAI as OAIProvider, the module fails:
```
Invoke-OAIChat -UserInput "can you give me a list of the 10 largest countries in Africa and their capitals" -Instructions "You a…
```
-
We need to consider updating the GPT35 turbo model for this repo:
> Version 0613 of gpt-35-turbo and gpt-35-turbo-16k will be retired on June 13, 2024.
1106 is not available in all the regions …
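As a sketch of the migration this implies, a small (hypothetical) lookup table could map retiring versions to a suggested replacement; the replacement version shown here is an assumption, not taken from the repo:

```python
# Hypothetical sketch: map retiring gpt-35-turbo versions to a suggested
# replacement so deployments can be migrated before the retirement date.
# The replacement version ("0125") is an assumption for illustration.

RETIRING = {
    "gpt-35-turbo-0613": "gpt-35-turbo-0125",
    "gpt-35-turbo-16k-0613": "gpt-35-turbo-0125",
}

def suggest_replacement(model: str) -> str:
    """Return the suggested replacement, or the model itself if not retiring."""
    return RETIRING.get(model, model)

print(suggest_replacement("gpt-35-turbo-0613"))  # prints gpt-35-turbo-0125
```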
-
### Check for previous/existing GitHub issues
- [X] I have checked for previous/existing GitHub issues
### Issue Type?
Bug
### Module Name
avm/res/cognitive-services/account
### (Optional) Modul…
-
### Describe the bug
Ran into a few errors
```
  File "/Users/ixiong/Desktop/SWE-bench/swebench/harness/docker_build.py", line 143, in build_image
    raise docker.errors.BuildError(
docker.er…
```
-
### The Feature
Azure OpenAI only returns the model family name (like `gpt-4` instead of `gpt-4-vision-preview`), not the actual model name. Like #1810, overwrite what is returned for the `model…
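A minimal sketch of the requested override, assuming the response is available as a plain dict; `patch_model_name` is a hypothetical helper, not part of any library:

```python
# Hypothetical sketch: Azure OpenAI responses report the model family
# (e.g. "gpt-4") rather than the deployed model name. One workaround is
# to overwrite the "model" field with the deployment name that was called.

def patch_model_name(response: dict, deployment_name: str) -> dict:
    """Return a copy of the response with "model" set to the deployment name."""
    patched = dict(response)
    patched["model"] = deployment_name
    return patched

resp = {"model": "gpt-4", "choices": [{"message": {"content": "hi"}}]}
print(patch_model_name(resp, "gpt-4-vision-preview")["model"])  # prints gpt-4-vision-preview
```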
-
### Bug Description
Hello,
I am currently using Llama Index for a project that requires multimodal generation with the Azure OpenAI service. I have deployed the model "gpt-4-turbo-2024-04-09" on A…
-
**As a RAG Experiment Accelerator user** I would like a meaningful, hierarchy-based config file structure
so that I can **easily understand** the different features and settings which…
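A minimal sketch of what a hierarchy-based config could look like, with all section and key names assumed purely for illustration:

```python
# Hypothetical sketch: group related settings under named sections
# instead of one flat list of keys. All names here are assumptions.

config = {
    "search": {"index_name": "demo-index", "top_k": 5},
    "embedding": {"model": "text-embedding-ada-002", "dimensions": 1536},
    "evaluation": {"metrics": ["precision", "recall"]},
}

# Nested access makes it explicit which feature a setting belongs to:
print(config["search"]["top_k"])  # prints 5
```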
-
Hi, after using the module, I realized that your code only accepts specific OpenAI model types, namely GPT-3.5 and GPT-4 (not turbo).
Since there are multiple other models and maybe poten…
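One possible relaxation, sketched here with an assumed `is_supported_model` helper (not the module's actual validation code), is to check by family prefix rather than an exact allow-list, so turbo and future variants pass:

```python
# Hypothetical sketch: instead of hard-coding an allow-list of exact model
# names (GPT-3.5, GPT-4), accept any model whose name starts with a known
# family prefix, so newer variants (e.g. turbo releases) also validate.

SUPPORTED_PREFIXES = ("gpt-3.5", "gpt-4")  # assumed family prefixes

def is_supported_model(name: str) -> bool:
    """Return True if the model name belongs to a supported family."""
    return name.lower().startswith(SUPPORTED_PREFIXES)

print(is_supported_model("gpt-4-turbo"))   # prints True
print(is_supported_model("text-davinci"))  # prints False
```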
-
This is my code:
```
from g4f.client import Client as client_ai
from g4f.Provider import OpenaiChat
client = client_ai(provider=OpenaiChat)
response = client.chat.completions.create(
api…
```