The Feature
Support has been added for the Nvidia NIM LLM models (integrate.api.nvidia.com addresses), but as far as I can tell, the VLM models (ai.api.nvidia.com addresses) are not yet supported.
Examples (each endpoint listed below links to the associated Nvidia build):
https://ai.api.nvidia.com/v1/vlm/microsoft/phi-3-vision-128k-instruct
https://ai.api.nvidia.com/v1/vlm/google/paligemma
https://ai.api.nvidia.com/v1/vlm/nvidia/neva-22b
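For reference, these endpoints appear to take an OpenAI-style chat-completions request with the image passed inline, so a direct call looks roughly like the sketch below. The payload shape (an `<img>` tag with base64 data inside the message content), the `NVIDIA_API_KEY` environment variable, and the `example.png` path are my assumptions based on the Nvidia build pages, not something confirmed here:

```python
import base64
import os

import requests

# One of the VLM endpoints listed above.
invoke_url = "https://ai.api.nvidia.com/v1/vlm/microsoft/phi-3-vision-128k-instruct"

# Placeholder image path; the image is sent inline as base64.
with open("example.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

headers = {
    # Assumed: API key exported as NVIDIA_API_KEY.
    "Authorization": f"Bearer {os.environ['NVIDIA_API_KEY']}",
    "Accept": "application/json",
}

# Assumed request shape: OpenAI-style messages with the image embedded
# as a data-URI <img> tag in the user message content.
payload = {
    "messages": [
        {
            "role": "user",
            "content": f'Describe this image. <img src="data:image/png;base64,{image_b64}" />',
        }
    ],
    "max_tokens": 512,
    "temperature": 0.2,
}

response = requests.post(invoke_url, headers=headers, json=payload)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```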
It would be great if support for these could be added.
Thanks!
Motivation, pitch
Since support for regular Nvidia LLMs has been added, it would be good to also have support for their set of VLMs.
Twitter / LinkedIn details
No response