Farzad-R / LLM-Zero-to-Hundred

This repository contains different LLM chatbot projects (RAG, LLM agents, etc.) and well-known techniques for training and fine-tuning LLMs.

seconds as it raised APIConnectionError: Error communicating with OpenAI: Invalid URL 'None/embeddings': No scheme supplied. Perhaps you meant https://None/embeddings?. #7

Closed. Inkatrail80 closed this issue 3 months ago.

Inkatrail80 commented 3 months ago

I got this error. Can you help?

Farzad-R commented 3 months ago

Sure. To establish a connection to the OpenAI models (GPT and embedding), you have two options:

  1. Using OpenAI's credentials (Key)
  2. Using Azure credentials (Key, endpoint, etc.)

In all the projects I keep my credentials in a private `.env` file, and each project loads them with something like `arg = os.getenv("VARIABLE_NAME")` (see the sketch below).
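For reference, here is a minimal sketch of loading those values from `.env` with `python-dotenv`. The variable names are placeholders, not necessarily the ones the project's config uses:

```python
import os

from dotenv import load_dotenv

load_dotenv()  # reads KEY=VALUE pairs from the .env file into the environment

# Option 1: OpenAI directly
openai_api_key = os.getenv("OPENAI_API_KEY")

# Option 2: Azure OpenAI
azure_api_key = os.getenv("AZURE_OPENAI_API_KEY")
azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")  # e.g. https://<your-resource>.openai.azure.com/
```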

So, depending on your approach, you have to create this file and add your credentials. Besides that, to use the GPT model and OpenAI's embedding model, you need to provide the model name (deployment name). For instance, if you are using Azure OpenAI, these are the names you chose when deploying the models in your selected region. The chatbots need them too (see the sketch below).
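A sketch of what the embedding call looks like with Azure OpenAI, assuming the pre-1.0 `openai` Python SDK; the environment variable names and the deployment name are placeholders:

```python
import os
import openai

openai.api_type = "azure"
# If this endpoint is missing (None), the SDK builds the URL "None/embeddings",
# which is exactly the error in this issue.
openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
openai.api_version = os.getenv("AZURE_OPENAI_API_VERSION")
openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")

response = openai.Embedding.create(
    input=["some text to embed"],
    engine="my-embedding-deployment",  # the deployment name you created in Azure
)
```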

The error you are encountering comes from a missing connection (endpoint) to OpenAI's embedding model: the `None` in `'None/embeddings'` means the endpoint was never set. Depending on how you tried to run the project, any of the points I mentioned above could be the cause.

One final note: in the project I am using Azure OpenAI. If you are using OpenAI directly, there is one additional step: changing the completion function for the GPT model (roughly the difference sketched below). I pinned a comment on the RAG-GPT video on the YouTube channel where I explain all the necessary changes in detail.
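A rough sketch of that difference, again assuming the pre-1.0 `openai` Python SDK; the deployment and model names are placeholders, and the pinned comment has the exact changes:

```python
import openai

messages = [{"role": "user", "content": "Hello"}]

# Azure OpenAI: the model is addressed by its deployment name via `engine`
# (with openai.api_type = "azure" and the Azure endpoint/key configured).
azure_response = openai.ChatCompletion.create(
    engine="my-gpt-deployment",
    messages=messages,
)

# OpenAI directly: the model is addressed by its public name via `model`
# (with openai.api_type left at its default and openai.api_key set).
openai_response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
)
```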

As soon as you fix these aspects (i.e., establish a proper connection to OpenAI), you can run any of the projects without issue.

Inkatrail80 commented 3 months ago

Thank you Farzad, very helpful.