Closed · mirojs closed this issue 3 weeks ago
@pamelafox is working on this. It did require other changes AFAIK
Is there a plan to include the option for video and voice input as part of this core app, utilizing GPT-4o's multimodal input, once available?
@mattgotteiner That's good news, thanks.
> @pamelafox is working on this. It did require other changes AFAIK
This is correct. I got it to work by deleting the gpt-4 turbo chat deployment and then deploying gpt-4o from the Azure OpenAI service interface, after the model was made generally available at Microsoft Build. Deploying with azd up caused the web interface to fail to load after changing the model from gpt-4 to gpt-4o. Nice workaround, though, and I'm really loving the improvements in gpt-4o so far. Such an exciting announcement!
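The portal steps described above (delete the gpt-4 turbo deployment, then deploy gpt-4o in its place) could likely also be scripted with the Azure CLI. A sketch, assuming hypothetical resource, group, and deployment names; the model version and SKU capacity are illustrative, so check your own quota and region support:

```shell
# Assumed names -- adjust to your environment.
# Remove the old gpt-4 turbo deployment first:
az cognitiveservices account deployment delete \
  --name my-openai-resource \
  --resource-group my-rg \
  --deployment-name chat

# Then create a gpt-4o deployment under the same deployment name,
# so the app's configuration does not need to change:
az cognitiveservices account deployment create \
  --name my-openai-resource \
  --resource-group my-rg \
  --deployment-name chat \
  --model-format OpenAI \
  --model-name gpt-4o \
  --model-version "2024-05-13" \
  --sku-name Standard \
  --sku-capacity 10
```

Reusing the same `--deployment-name` is what makes this a drop-in swap from the app's point of view.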
Are you all trying to use gpt-4o for the vision feature, or for just non-image answers? The configuration would be different depending.
Right. Ideally, the vision feature is preferred. If that would take too much effort, a temporary replacement for Turbo would also work, given the difference in language-processing capability.
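For context on why the configuration differs between the two cases: with the vision feature, chat messages carry image parts alongside text, while non-image answers use a plain string. A minimal sketch of the two message shapes in the OpenAI chat-completions format (the prompt text and image URL below are placeholders):

```python
# Sketch of the two chat-completions message shapes. A text-only request
# uses a plain string for "content"; a vision request uses a list of
# typed parts mixing text and image_url entries.

def text_message(prompt: str) -> dict:
    """Message for a non-image answer: content is a plain string."""
    return {"role": "user", "content": prompt}

def vision_message(prompt: str, image_url: str) -> dict:
    """Message for the vision feature: content mixes text and image parts."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

if __name__ == "__main__":
    print(text_message("Summarize the doc"))
    print(vision_message("Describe this chart", "https://example.com/chart.png"))
```

The image URL can be an https link or a base64 `data:` URI, which is how the repo's vision path sends page images retrieved from storage.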
Okay, I'm going to test it out with the vision feature today. I did already test the repo with the OpenAI.com gpt-4o (and saw good performance improvements), but now I'm going to test it with an Azure OpenAI deployment, since those are available.
You can see gpt-4o related changes in this branch: https://github.com/Azure-Samples/azure-search-openai-demo/pull/1656/files
I'll continue doing performance and quality testing over the next few days. Unfortunately, as mentioned in the PR, it's annoyingly difficult to simply swap to a gpt-4o deployment due to the region differences.
For an existing deployment:
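The exact steps for an existing deployment are not preserved here, but based on the repo's azd conventions they were likely along these lines. The environment variable names and region below are assumptions; check the branch linked above and infra/main.bicep for the actual names and supported regions:

```shell
# Hypothetical azd environment settings for switching an existing
# environment to gpt-4o in a region where the model is available:
azd env set AZURE_OPENAI_CHATGPT_MODEL gpt-4o
azd env set AZURE_OPENAI_CHATGPT_DEPLOYMENT_VERSION 2024-05-13
azd env set AZURE_OPENAI_LOCATION eastus

# Re-provision and redeploy:
azd up
```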
Then it should create a new OpenAI resource in that location with the gpt-4o deployment.
> This is correct, I got it to work by deleting gpt-4 turbo chat and then deploying gpt-4o from the Azure OpenAI service interface after the model was made generally available at Microsoft Build. Deploying with azd up caused the web interface to fail loading after changing the model from gpt-4 to gpt-4o.
How did you get the web interface working again? I'm running into the same problem, so any advice would be appreciated!
What error are you getting with the web interface?
I was getting a blank site, with no errors during build and deploy. After a bit of digging, I found some version mismatches in requirements.txt, as well as a couple of other spots that needed to be updated in main.bicep, plus some tweaks required in modelhelper.py and config.py. I managed to track down the errors by tuning the App Insights implementation, which showed that the App Service wasn't starting properly. Working great with GPT-4o now, though!
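For anyone hitting the same startup failure: the modelhelper tweak is typically just teaching the model-to-token-limit mapping about the new model name, since an unrecognized model raises at startup. A hypothetical sketch of that kind of mapping; the helper name and the non-gpt-4o limits are illustrative, while gpt-4o's 128k context window is documented:

```python
# Hypothetical modelhelper-style mapping from model name to context
# window size. Limits below are approximate/illustrative except
# gpt-4o, which has a documented 128k-token context window.
MODELS_TO_TOKEN_LIMITS = {
    "gpt-35-turbo": 4000,
    "gpt-4": 8100,
    "gpt-4-32k": 32000,
    "gpt-4o": 128000,
}

def get_token_limit(model_id: str) -> int:
    """Return the context window for a known model, raising otherwise.

    An unknown model name here is exactly the kind of error that can
    keep an app from starting after a model swap.
    """
    if model_id not in MODELS_TO_TOKEN_LIMITS:
        raise ValueError(f"Expected a known model name, got: {model_id}")
    return MODELS_TO_TOKEN_LIMITS[model_id]
```

With a mapping like this, swapping the deployment to gpt-4o only requires the new key; forgetting it reproduces the "app service won't start" symptom described above.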
@Lawndemon Ah yes, if you still have modelhelper.py from the older version of the repo, that would need tweaks. The repo now uses a packaged version of modelhelper that I specifically upgraded for gpt-4o compatibility.
Closing this issue since the repo now defaults to gpt-4o for vision.