Closed by giuliohome 1 year ago
The Azure OpenAI model name is set via the deployment name. Please refer to how LangChain references the model name: https://js.langchain.com/docs/modules/model_io/models/llms/integrations/azure
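For reference, the linked docs configure it roughly like this; on the Azure side the deployment name is what selects the underlying model (the values below are placeholders):

```ts
import { OpenAI } from "langchain/llms/openai";

// The Azure deployment name is what picks the deployed model.
const model = new OpenAI({
  temperature: 0.9,
  azureOpenAIApiKey: "YOUR-API-KEY",
  azureOpenAIApiInstanceName: "YOUR-INSTANCE-NAME",
  azureOpenAIApiDeploymentName: "YOUR-DEPLOYMENT-NAME",
  azureOpenAIApiVersion: "YOUR-API-VERSION",
});
```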
Based on your suggestion, you can swap the deployment names. I will look to add that as part of a fix.
Yes, I was briefly experimenting with vanilla OpenAI. You're right; you should consider swapping the deployment names. Additionally, both at the UI level and in Cosmos DB, the history should reflect the model name selected in the header tab. I'm still working out how to accomplish that myself, because at the moment of the `CreateChatThread` you don't know the model yet, and immediately after the user's tab selection you need to `upsert` it. I would greatly appreciate it if you could commit the fix. Thanks!
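To illustrate the create-then-upsert flow I mean, a rough sketch only; the container ids, partition key, env variable and helper name are placeholders, not the repo's actual code:

```ts
import { CosmosClient } from "@azure/cosmos";

// Placeholder wiring: adjust the database/container ids, connection setting
// and partition key to whatever the repo actually uses.
const container = new CosmosClient(process.env.COSMOSDB_CONNECTION_STRING!)
  .database("chat")
  .container("history");

// Called right after the user picks GPT-3.5 or GPT-4 in the header tab:
// read the thread that was created without a model, then upsert it with one.
export const updateThreadModel = async (threadId: string, model: string) => {
  const { resource: thread } = await container.item(threadId, threadId).read();
  if (!thread) return;
  await container.items.upsert({ ...thread, model });
};
```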
OK, I completed it in my fork. After making the other changes, the piece I was missing was `model: props.model` in the `useState<PromptGPTBody>` of the `ChatUI`.
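Concretely, something along these lines (just a sketch; the real `PromptGPTBody` and `ChatUI` props have more fields than shown here):

```tsx
import { useState } from "react";

// Illustrative shape only; the actual PromptGPTBody in the repo has more fields.
interface PromptGPTBody {
  id: string;
  model: string;
}

export const ChatUI = (props: PromptGPTBody) => {
  // Include the model coming from the props, otherwise the selected
  // model never makes it into the body sent to the chat API.
  const [body, setBody] = useState<PromptGPTBody>({
    id: props.id,
    model: props.model,
  });

  // ...render the chat and POST `body` with each message...
  return null;
};
```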
Honestly, the difference between GPT-3.5 and GPT-4 is not that clear from the answers... so I have also appended `+ " If asked say your AI model is " + myModel` to the prompt
😄
@giuliohome have a look at the new updates. The model names are now passed correctly and saved to the database.
Thank you @thivy and @davidxw, so fast! 🚀 Keep up the good work!
Just to understand your code changes: it appears that the model names are saved to Cosmos DB, but I can't see them being passed as different deployment names to the Azure OpenAI chat API.
As a minor note, which you can ignore for the moment, please be aware that the Azure deployment names might differ from the standard OpenAI `gpt-3.5` and `gpt-4` strings. So I would have expected a couple of environment variables, `AZURE_OPENAI_API_DEPLOYMENT_NAME_GPT35` and `AZURE_OPENAI_API_DEPLOYMENT_NAME_GPT4`, to be passed as `azureOpenAIApiDeploymentName` to the above code.
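Something along these lines (a sketch of the suggestion; it assumes the two environment variables above are defined and that the key, instance name and API version keep coming from the existing env configuration):

```ts
import { ChatOpenAI } from "langchain/chat_models/openai";

// Map the model name shown in the UI to the Azure deployment that hosts it.
const deploymentByModel: Record<string, string | undefined> = {
  "gpt-3.5": process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME_GPT35,
  "gpt-4": process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME_GPT4,
};

export const chatApiFor = (model: string) =>
  new ChatOpenAI({
    // Key, instance name and API version are still read from the usual env vars.
    azureOpenAIApiDeploymentName: deploymentByModel[model],
  });
```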
So how do we add support for GPT-4? Do I just update the `AZURE_OPENAI_API_DEPLOYMENT_NAME` variable with the deployment name of my GPT-4 model? @thivy @giuliohome
While that would work, what is missing in my opinion is the ability to switch dynamically between GPT-3.5 and GPT-4 via the tabs. The current version seems to track those tab selections in the history, but without affecting the actual calls to the chat AI, unfortunately. Not sure if they are going to fix this too. (FWIW, that's what I do in my fork.)
I was able to switch the deployment to GPT-4 by updating the variable value, but it would be great to have an option to switch between GPT-3.5 and GPT-4.
You're not using the header's tabs.
It would need something like this:
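A rough illustration of the idea only; the component and handler names here are my own, not the actual code from the fork. The header tab updates the model in the chat request body, and the API route then maps `body.model` to the matching Azure deployment (see the env-variable mapping sketched earlier in this thread):

```tsx
// Illustration only: names are hypothetical placeholders.
const ModelTabs = (props: { onSelect: (m: "gpt-3.5" | "gpt-4") => void }) => (
  <div role="tablist">
    {(["gpt-3.5", "gpt-4"] as const).map((m) => (
      <button key={m} role="tab" onClick={() => props.onSelect(m)}>
        {m}
      </button>
    ))}
  </div>
);

// Usage inside ChatUI:
// <ModelTabs onSelect={(model) => setBody((prev) => ({ ...prev, model }))} />
// so the next POST to the chat API carries the selected model.
```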