Open ghost-pep opened 1 week ago
@ghost-pep Can I contribute to this issue?
Hi! Yes, this would be a great one to contribute 😄 Let me know if you have any questions or need help setting things up! You could set up a docs
folder with markdown in there or you can add directly to the README
@ghost-pep I don't know how to set up and run this project. Can you explain how to deploy the Azure OpenAI models, configure the demo project, and run the code, and describe what the documentation should look like? Then I can start working.
Yes! For deploying the models you will need to deploy an Azure OpenAI instance. From there, you can deploy a model like gpt-4 or gpt-3.5-turbo and then hook it up to the demo project. You will need to edit the example appsettings file to fill in all those fields. All the fields are present in your deployment and tell the demo project how to call your deployed models.
The appsettings file is expected to be called appsettings.json, so you will have to fill those fields in (and be sure not to commit any of your secrets to git).
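For illustration, a filled-in appsettings.json might look something like the sketch below. The exact key names depend on the example appsettings file in this repo, so treat these field names and values as placeholders:

```json
{
  "AzureOpenAI": {
    "Endpoint": "https://<your-resource-name>.openai.azure.com/",
    "DeploymentName": "<your-model-deployment-name>",
    "ApiKey": "<your-api-key>"
  }
}
```

The endpoint, deployment name, and key all come from your Azure OpenAI resource in the Azure portal.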
Then for running the project, you can use `dotnet run --project .\src\Demo\Demo.csproj` to get things working!
For documentation: because you are the first to contribute, you get to decide! I think it would be great to get a section in the README (maybe something like `# Getting Started`) that explains how to deploy, then build, then run, all in subsections. Ideally we should have something to auto-deploy the models through ARM templates or Bicep files, but for the initial code we deployed manually, so that's how it is right now. I'll create an issue for automated deployment!
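To make the suggestion concrete, a skeleton for that README section could look something like this (section names are just a suggestion, not a requirement):

```markdown
# Getting Started

## Deploy

Create an Azure OpenAI resource in the Azure portal and deploy a model
(e.g. gpt-4 or gpt-3.5-turbo). Note the endpoint, deployment name, and API key.

## Configure

Copy the example appsettings file to `appsettings.json` and fill in the
fields from your deployment. Do not commit your secrets to git.

## Run

    dotnet run --project .\src\Demo\Demo.csproj
```

Whatever structure you pick, keeping deploy/configure/run as separate subsections makes it easy to extend later with automated deployment docs.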
I don't have access to Azure services, so I don't know the process for deploying a model like gpt-4o or gpt-3.5. If you could share any resources, like images or videos, that would be helpful. Please also assign me as a contributor.
Hey! That is no problem. There is a way to get local LLMs working with Semantic Kernel. This might be a good resource but I haven't run through it before.
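For reference, pointing Semantic Kernel at an OpenAI-compatible local server (such as Ollama) usually looks roughly like the sketch below. This is hypothetical and untested here: the endpoint URL, model name, and the availability of a custom-endpoint overload all depend on the local server and the Semantic Kernel version you use:

```csharp
using Microsoft.SemanticKernel;

// Hypothetical sketch: assumes a Semantic Kernel version whose OpenAI
// connector accepts a custom endpoint, and a local server (e.g. Ollama)
// exposing an OpenAI-compatible API at http://localhost:11434/v1.
var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(
    modelId: "llama3",                              // name of the local model
    endpoint: new Uri("http://localhost:11434/v1"), // local server endpoint
    apiKey: null);                                  // local servers often need no key
var kernel = builder.Build();

var result = await kernel.InvokePromptAsync("Say hello!");
Console.WriteLine(result);
```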
As for contributing, you can fork this repository and then work off of a feature branch on your fork without needing contributor access.
It would be great to get a better set of documentation for deploying Azure OpenAI models, configuring the demo project, and building/running the code.