langchain-ai / langchain-nvidia

MIT License

Provide default model in local NIM mode #51

Closed raspawar closed 2 months ago

raspawar commented 2 months ago

New implementation:

This can be simplified by falling back to the first available model when none is provided, e.g.

```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA

llm = ChatNVIDIA(base_url="http://localhost:1234/v1")  # no model specified
llm.invoke(...)
```

Here `llm._client.model` resolves to the default model, i.e. the first model available on the local NIM instance.
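The fallback described above can be sketched as a small helper. This is an illustrative sketch only, not the library's actual implementation: the function name `pick_default_model` and the model list are hypothetical, standing in for whatever the client reads from the local NIM's `GET /v1/models` response.

```python
# Hypothetical sketch of the proposed fallback: if the caller does not
# specify a model, use the first model advertised by the local NIM.
# Names here are illustrative, not the library's actual API.

def pick_default_model(available_models, requested=None):
    """Return the requested model, or fall back to the first available one."""
    if requested:
        return requested
    if not available_models:
        raise ValueError("no models available at the local NIM endpoint")
    return available_models[0]

# Example: model ids as a local NIM might report them
models = ["meta/llama3-8b-instruct", "mistralai/mixtral-8x7b-instruct-v0.1"]
print(pick_default_model(models))  # first available model is the default
```

A caller that passes an explicit model keeps it; only the unspecified case triggers the fallback.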

Fixes: