First, the inference code runs correctly on the physical machine (no Docker environment):
(tvm-build) root@hyongtao-Precision-Tower-5810:/home/hyongtao/build_docker/tvm# python3 tune_relay_cuda_test.py
Extract tasks...
One or more operators have not been tuned. Please tune your model for better performance. Use DEBUG logging level to see more details.
Evaluate inference time cost...
Mean inference time (std dev): 11.10 ms (0.34 ms)
(tvm-build) root@hyongtao-Precision-Tower-5810:/home/hyongtao/build_docker/tvm#
OS: Ubuntu 18.04
NVIDIA GPU: GM107GL [Quadro K620]
Then we want to test TVM in a Docker environment. I built the Docker image following https://github.com/apache/tvm/tree/main/docker.
Building and running the Docker image were both successful.
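For reference, here is a sketch of the commands I used, based on the scripts in the docker/ directory of the linked repo (exact script names and options assumed from that documentation, not verified here):

```shell
# Run from the root of the TVM checkout.
# Build the ci_gpu image (script name per the TVM docker/ directory).
./docker/build.sh ci_gpu

# Open an interactive shell inside the image; per the TVM docs,
# bash.sh mounts the current TVM checkout into the container.
./docker/bash.sh ci_gpu
```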
But when I run inference on resnet-18 inside the 'ci_gpu' container, I get this error:
Could you help me solve the problem? Thanks a lot.