Closed Inkatrail80 closed 3 months ago
Sure. To establish a connection with the OpenAI models (GPT and embeddings) you have two options:
In all the projects I have a private `.env` file where I keep all my credentials, and in each project I load them roughly like this:

```python
import os
from dotenv import load_dotenv

load_dotenv()  # read the variables defined in the private .env file
arg = os.getenv("OPENAI_API_KEY")  # the variable name here is illustrative
```
So, depending on your approach, you have to create this file and add your credentials as well. Besides that, to use the GPT model and OpenAI's embedding model, you need to provide the name of each model (the deployment name). For instance, if you are using Azure OpenAI, these are the names you chose when deploying the models in your selected region. The chatbots need them too.
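As a minimal sketch, a `.env` file along these lines would cover both the credentials and the deployment names (the variable names below are illustrative; the project's actual names may differ):

```
OPENAI_API_KEY=your-key-here
OPENAI_API_BASE=https://your-resource.openai.azure.com/
GPT_DEPLOYMENT_NAME=your-gpt-deployment
EMBED_DEPLOYMENT_NAME=your-embedding-deployment
```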
The error you are encountering comes from a missing connection (endpoint) to OpenAI's embedding model, and depending on how you tried to run the project, any of the points I mentioned above can be the cause.
One final note: in the project, I am using Azure OpenAI. In case you are using OpenAI directly, there is one additional step you have to take: changing the completion function for the GPT model. I pinned a comment on the RAG-GPT video on the YouTube channel where I explained all the necessary changes in detail.
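To illustrate the difference (this is a hedged sketch, not the project's actual code; the deployment and model names are hypothetical): with Azure OpenAI the `model` argument of the completion call must be your deployment name, while with openai.com it is the public model name.

```python
def completion_request(provider: str, messages: list) -> dict:
    """Build the kwargs for a chat completion call.

    Hypothetical helper for illustration only. With the openai v1 SDK the
    actual call would be client.chat.completions.create(**kwargs), where
    `client` is AzureOpenAI(...) for Azure or OpenAI(...) for openai.com.
    """
    if provider == "azure":
        # Azure OpenAI: `model` is the *deployment name* you created
        return {"model": "my-gpt-deployment", "messages": messages}
    # openai.com: `model` is the public model name
    return {"model": "gpt-3.5-turbo", "messages": messages}


msgs = [{"role": "user", "content": "Hello"}]
print(completion_request("azure", msgs)["model"])   # my-gpt-deployment
print(completion_request("openai", msgs)["model"])  # gpt-3.5-turbo
```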
As soon as you fix these aspects (i.e., create a proper connection to OpenAI), you can run any of the projects without issue.
Thank you Farzad, very helpful
I got this error, can you help?