Closed aj2622 closed 12 months ago
Hi @aj2622,
Thank you for reaching out. Please run the container with GPU support enabled, e.g. with `--gpus all`.
The very first message in the log indicates that the GPU is not detected:
```
WARNING: The NVIDIA Driver was not detected. GPU functionality will not be available.
Use the NVIDIA Container Toolkit to start this container with GPU support; see
https://docs.nvidia.com/datacenter/cloud-native/ .
```
You can learn more about it, for example, [here](https://docs.nvidia.com/deeplearning/frameworks/user-guide/index.html).
Thanks for the prompt response. I am using a Mac; it has no NVIDIA GPU.
Running the container with GPU support enabled (`--gpus all`) just gives me a similar error:
```
(base) ➜ counterfeit-model-triton-server git:(feature/grpc) ✗ docker run --rm -it --gpus all -p8000:8000 -p8001:8001 -p8002:8002 \
    -v /Users/aj/counterfeit-model-triton-server/model_repository:/models \
    tritonserver:dali-latest \
    tritonserver --model-repository=/models
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown.
```
Do I need to run this on a GPU-enabled device? My code will eventually run on an EC2 instance with a GPU; I am hoping for a workaround so I can test my code locally.
Hi @aj2622,
Now I understand your intention to run Triton and DALI on the CPU only. To achieve that, please set `device_id` in the pipeline to `None` to avoid any interaction with the GPU:
```python
@pipeline_def(batch_size=0, num_threads=1, device_id=None)
```
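For context, a complete CPU-only pipeline might look like the sketch below. This is not the user's actual `dali.py`: the file path and operators are placeholders, and it uses a positive `batch_size` (DALI needs one to build the pipeline). With `device_id=None`, every operator has to stay on `device="cpu"`.

```python
# Sketch of a CPU-only DALI pipeline; assumes the nvidia-dali package
# is installed. device_id=None keeps DALI away from the GPU entirely.
from nvidia.dali import pipeline_def
import nvidia.dali.fn as fn
import nvidia.dali.types as types

@pipeline_def(batch_size=1, num_threads=1, device_id=None)
def cpu_decode_pipeline():
    # "/data/images" is a placeholder directory, not from this thread
    jpegs, labels = fn.readers.file(file_root="/data/images")
    # decode on the CPU; a "mixed" or "gpu" device would require a GPU
    images = fn.decoders.image(jpegs, device="cpu", output_type=types.RGB)
    return images
```

The same `device_id=None` argument works when the pipeline is serialized for the Triton DALI backend, so the server side needs no GPU either.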
Thanks! That resolved my issue.
Logs:
I built my image following
I am using a MacBook without a GPU.
My file model_repository/dali_decoder/1/dali.py is the following
My model_repository/dali_decoder/config.pbtxt is