Closed: aishwaryap closed this issue 4 months ago.
Note that the trace includes a `Please check or regenerate your API key` message, but I did not actually need to do that. I only needed to upgrade my package to successfully query the newer models.
@aishwaryap thank you for reporting this. Currently, supporting new models requires updating a table in the package with invocation information; we're working on automating this.
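To make the comment above concrete, here is a minimal sketch of what such a per-model invocation table might look like. All names and fields here (`ModelSpec`, `MODEL_TABLE`, `lookup`) are illustrative assumptions, not the package's actual schema:

```python
# Hypothetical sketch of a per-model invocation table; the real package's
# schema may differ. Each newly hosted model needs an entry before the
# package can invoke it, which is why older releases cannot query new models.
from dataclasses import dataclass


@dataclass(frozen=True)
class ModelSpec:
    id: str        # model identifier the user passes in
    endpoint: str  # invocation endpoint path
    client: str    # client class expected to invoke the model


MODEL_TABLE = {
    "ai-llama3-70b": ModelSpec(
        id="ai-llama3-70b",
        endpoint="/v1/chat/completions",
        client="ChatNVIDIA",
    ),
}


def lookup(model_name: str) -> ModelSpec:
    """Resolve a model name, failing clearly when the table lacks an entry."""
    try:
        return MODEL_TABLE[model_name]
    except KeyError:
        raise ValueError(
            f"Unknown model {model_name!r}; it may require a newer "
            "version of the package."
        )
```

Under this model, a release that predates a model's table entry has no way to invoke it, which matches the behavior reported in this issue.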
As for the informative error, that request is tracked in https://github.com/langchain-ai/langchain-nvidia/issues/21; we will follow up in #21.
Hi all, I was experimenting with `langchain-nvidia-ai-endpoints==0.0.4`, and when I tried querying the `ai-llama3-70b` model, I got the following error:

It turned out that all I needed to do to query Llama 3 was to upgrade to 0.0.8. Is it intentional for newer models to be incompatible with older package versions, even minor ones? This is inconvenient when `langchain-nvidia-ai-endpoints` is a dependency inside another open source package which may not stay up to date with the latest versions.

Additionally, is it possible for it to fail with a more informative error that would let the user know they could query this model if they upgraded the package? An alternative would be to keep the GitHub README or the LangChain documentation updated with the minimum package version required to query each model. It would also be nice if the output of `llm.available_models` could show the minimum package version needed to query each model.
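A rough sketch of the requested behavior: pair each model with the minimum package version needed to query it, and fail with an actionable message when the installed version is too old. The `MIN_VERSION` mapping, `check_model` helper, and version numbers below are illustrative assumptions, not anything the package actually ships:

```python
# Hypothetical model -> minimum-package-version table. The 0.0.8 value
# reflects the report above (0.0.4 could not query ai-llama3-70b).
MIN_VERSION = {
    "ai-llama3-70b": "0.0.8",
}


def _parse(version: str) -> tuple[int, ...]:
    """Turn '0.0.8' into (0, 0, 8) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))


def check_model(model: str, installed: str) -> None:
    """Raise an informative error if `installed` is too old for `model`."""
    required = MIN_VERSION.get(model)
    if required is not None and _parse(installed) < _parse(required):
        raise RuntimeError(
            f"{model} requires langchain-nvidia-ai-endpoints>={required}, "
            f"but {installed} is installed. Please upgrade the package."
        )
```

With a check like this, the 0.0.4 failure above would have said "upgrade to 0.0.8" instead of suggesting the API key was at fault.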