NVIDIA / nim-anywhere

Accelerate your Gen AI with NVIDIA NIM and NVIDIA AI Workbench
https://www.nvidia.com/en-us/ai/
Apache License 2.0

Update README with clearer and/or suggested AIWB configuration inputs #48

Open nv-edwli opened 1 month ago

nv-edwli commented 1 month ago

I had a user trying to start the project who was confused about what to enter in the configuration interstitials for the Docker socket and for the local NIM cache.

The README does a good job providing a sample input for the latter:

[Screenshot: NIM cache mount configuration]

but less so for the former:

[Screenshot: Docker socket mount configuration]

I understand that /var/run is specified in the surrounding text, but unfortunately not everyone reads all of the text when working through the project.
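As a quick sanity check a user could run before filling in that field, the snippet below tests whether the Docker socket is at its usual Linux location. This is a sketch based on the common default, not official project guidance; other setups (rootless Docker, Docker Desktop, WSL) may place the socket elsewhere.

```shell
# On most Linux hosts the Docker socket lives at /var/run/docker.sock,
# so /var/run is the usual value for the mount source field.
if [ -S /var/run/docker.sock ]; then
  echo "Docker socket found at /var/run/docker.sock; use /var/run as the mount source"
else
  echo "No socket at /var/run/docker.sock; your setup may differ (rootless Docker, WSL, etc.)"
fi
```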

Another concern is that the user's OS may require different formatting for these configurations. It might be a good idea to call that out as well for clarity. For reference, the Hybrid RAG project states the following for a similar flow:

[Screenshot: Hybrid RAG project mount instructions]

freemansoft commented 1 week ago

This clarification would be a great addition.

I always enter /var/run for /var/host-run, but today on a new machine I read that this should be /Docker/host/, because that is where a bunch of Docker files were. (This was in a Windows/WSL environment.)
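The OS-dependent behavior described above could be surfaced to users with a small helper like the one below. The function name and the fallback message are illustrative assumptions; the only path it asserts is the common Linux default of /var/run, and it simply flags Windows/WSL as a case the user must check themselves.

```shell
# Sketch: suggest a mount source for the /var/host-run target based on the
# host environment. Hypothetical helper, not part of the project.
suggest_docker_socket_dir() {
  if grep -qi microsoft /proc/version 2>/dev/null; then
    # Windows/WSL: the socket location depends on the Docker Desktop setup,
    # so point the user at their own configuration rather than guessing.
    echo "WSL detected: check your Docker Desktop / WSL configuration for the socket path"
  else
    # Plain Linux: the Docker socket normally lives under /var/run.
    echo "/var/run"
  fi
}

suggest_docker_socket_dir
```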