The NIM plugin should allow serving the model within a task as a sidecar service. This is especially useful for batch inference: the model can be spun up once on the local network and accessed directly, with no external network overhead. The plugin also removes the Docker hassles and the need to serve the model manually.