Describe the need of your request
Hello! I am running a server and using vLLM to deploy CodeQwen1.5. When choosing a provider for CodeGPT, none of the existing options seem to work for me. Below are my three attempts to configure the provider.
In the Custom OpenAI option, I entered the URL as shown in the screenshot below and clicked the Test Connection button, but it didn't work.
![customOpenAI-error](https://github.com/carlrobertoh/CodeGPT/assets/110203462/7504b987-9af5-4df0-8481-05f957a5fa0c)
In the LLaMA C/C++ (Local) option, I entered the base host and the rest of the fields as follows and clicked OK, but typing anything in the chat gave me the error message '{"detail":"Not Found"}'.
![LLaMA-error2](https://github.com/carlrobertoh/CodeGPT/assets/110203462/2680e99d-2b57-4bcc-9440-bc4d67bf2f61)
In the Ollama (Local) option, I entered the base host and clicked Refresh Models, but got 'Unable to load models'.
![Ollama-error](https://github.com/carlrobertoh/CodeGPT/assets/110203462/97d1381f-0abb-4def-9832-1f44ac38b51b)
I'm able to use my model in VS Code with the Twinny plugin. Here is a screenshot of the provider configuration I use there. The major difference I noticed is that Twinny requests a model name, where I can enter the path vLLM uses to access the model.
![Twinny-configuration](https://github.com/carlrobertoh/CodeGPT/assets/110203462/b1f148f1-3e33-472b-bc3b-dbff510de519)
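For what it's worth, the vLLM server itself responds when queried directly. Here is a quick sketch of listing the served model through its OpenAI-style API (assuming the server runs on localhost:8000; adjust the base URL to your deployment):

```python
# Sanity check against the vLLM OpenAI-compatible server.
# Assumes it runs on localhost:8000 (illustrative; match your deployment).
import requests

resp = requests.get("http://localhost:8000/v1/models")
resp.raise_for_status()

# vLLM lists the served model(s) under "data"; each "id" is the exact
# name/path that completion requests must send in their "model" field.
for model in resp.json()["data"]:
    print(model["id"])
```

The "id" printed here is exactly the value Twinny asks for as the model name, and none of the CodeGPT provider forms gave me an obvious place to enter it.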
Proposed solution
Support vLLM-deployed models such as CodeQwen1.5. For vLLM to work, there needs to be a field to specify the model name.
Additional context
I'm using:
CodeGPT plugin version 2.7.1-241
IntelliJ IDEA Community Edition version 2024.1.2
Found the solution myself. It turns out that choosing Custom OpenAI as the provider and filling in the model name in the Body section (model = my_model_name) works!
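In other words, with model filled in under Body, CodeGPT ends up sending a request equivalent to the sketch below (my_model_name stands for whatever name/path the vLLM server was launched with; the URL and prompt are illustrative):

```python
# Equivalent of the working Custom OpenAI configuration: the request body
# must carry a "model" field, which vLLM validates against the model it
# was launched with. Server URL and model name are illustrative.
import requests

body = {
    "model": "my_model_name",  # e.g. the Hugging Face path passed to vLLM
    "messages": [{"role": "user", "content": "Write hello world in Python."}],
}
resp = requests.post("http://localhost:8000/v1/chat/completions", json=body)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```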