lakshits11 opened 2 weeks ago
Hi @lakshits11 , we had some docs for this but it looks like they accidentally got removed in a recent refactor.
We do support NVIDIA as an OpenAI compatible provider: https://docs.continue.dev/customize/model-providers/openai#openai-compatible-servers--apis
And for embeddings: https://github.com/continuedev/continue/pull/2233/files#diff-f5ee2eb28e79019bdf138776df40073fefe4c5c12c03877cb1befdda47ef6418R71-R85
For some reason I am not able to get NVIDIA to work. I'm using the configuration below:
{
  "title": "mistral",
  "provider": "nvidia",
  "model": "mistralai/mistral-large-2-instruct",
  "apiKey": "[API_KEY]",
  "apiBase": "https://integrate.api.nvidia.com/v1"
}
Am I missing something?
Also, the docs have a reference for Kindo (https://docs.continue.dev/customize/model-providers/more/kindo), but I don't find their name in the provider autocomplete suggestions.
@lakshits11 change the provider to openai and leave apiBase as-is:
{
  "apiKey": "",
  "title": "Llama-3.1-Nemotron-70B-Instruct",
  "model": "nvidia/llama-3.1-nemotron-70b-instruct",
  "provider": "openai",
  "apiBase": "https://integrate.api.nvidia.com/v1"
}
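Since that apiBase is an OpenAI-compatible endpoint, one way to sanity-check the key outside of Continue is a plain `chat/completions` request. A rough sketch with the standard library (the `NVIDIA_API_KEY` env var name is just my choice for the example):

```python
import json
import os
import urllib.request

# The NVIDIA endpoint from the config above; it speaks the OpenAI chat API.
API_BASE = "https://integrate.api.nvidia.com/v1"

def build_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request against the NVIDIA endpoint."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_request(
    "nvidia/llama-3.1-nemotron-70b-instruct",
    "Say hello",
    os.environ.get("NVIDIA_API_KEY", ""),
)
print(req.full_url)  # https://integrate.api.nvidia.com/v1/chat/completions

# Only hit the network if a key is actually set.
if os.environ.get("NVIDIA_API_KEY"):
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

If this prints a reply but Continue still shows nothing, the problem is in the extension config rather than the key or endpoint.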
Thank you, this was exactly the model I was going to try.
Hey, it still doesn't work :( I got my API key, but it still doesn't print any output, even though the credit points are getting deducted.
EDIT: adding this got it to work:
"useLegacyCompletionsEndpoint": true,
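For anyone landing here later, the complete working config would presumably look like this (combining the openai-provider snippet above with that flag; `useLegacyCompletionsEndpoint` is a per-model option in Continue's config.json):

```json
{
  "apiKey": "[API_KEY]",
  "title": "Llama-3.1-Nemotron-70B-Instruct",
  "model": "nvidia/llama-3.1-nemotron-70b-instruct",
  "provider": "openai",
  "apiBase": "https://integrate.api.nvidia.com/v1",
  "useLegacyCompletionsEndpoint": true
}
```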
Also, a humble request: I think your docs and the extension have a huge mismatch. The docs don't mention NVIDIA, but the config.json autocomplete shows nvidia as a provider, and hovering over it shows a description about vLLM.
Please update the docs. I would also be happy to do so myself. Thanks, please consider my request.
Yeah, the latest update broke it. Thanks for your post, I had no idea why it suddenly stopped working...
Problem
Please add NVIDIA as a provider
Solution
No response