Closed — bahar3474 closed this issue 1 week ago.
You can try upgrading the Paddle version to 2.5.2.
Unfortunately, updating the Paddle version to 2.5.2 doesn't fix my problem. I've added more details about this problem to the issue that Vvsmile mentioned.
That is indeed very strange. Perhaps the CUDA version in the Docker image does not match the installed Paddle package. One solution is to run GPU inference through the official image; see: https://www.paddlepaddle.org.cn/en
For example, if your environment uses CUDA 11.7, run the following commands:
nvidia-docker pull registry.baidubce.com/paddlepaddle/paddle:2.6.1-gpu-cuda11.7-cudnn8.4-trt8.4
nvidia-docker run --name paddle -it -v $PWD:/paddle registry.baidubce.com/paddlepaddle/paddle:2.6.1-gpu-cuda11.7-cudnn8.4-trt8.4 /bin/bash
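To confirm whether the installed wheel and the image's CUDA toolkit actually match, a small diagnostic like the one below can help. This is a hedged sketch: it assumes a Paddle 2.x install, where `paddle.version.cuda()` reports the CUDA version the wheel was built against and `paddle.device.is_compiled_with_cuda()` reports GPU support; the function name `check_paddle_cuda` is made up for illustration.

```python
def check_paddle_cuda():
    """Report which CUDA build (if any) the installed Paddle wheel was compiled for."""
    try:
        import paddle
    except ImportError:
        return "paddle is not installed in this environment"
    # paddle.version.cuda() returns e.g. "11.7" for a GPU wheel,
    # or "False" for a CPU-only wheel (Paddle 2.x behavior).
    built_with = paddle.version.cuda()
    if built_with == "False":
        return "CPU-only wheel installed; GPU inference will silently be unavailable"
    has_cuda = paddle.device.is_compiled_with_cuda()
    return f"wheel built for CUDA {built_with}; compiled with CUDA: {has_cuda}"

print(check_paddle_cuda())
```

Running this inside the container and comparing the reported CUDA version against `nvidia-smi` on the host is a quick way to rule the mismatch in or out.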
Hello everyone,
I'm currently facing an issue and would greatly appreciate any assistance you can offer.
I have a Paddle model that I'm serving through a Docker image based on version 2.5.1 of the paddlepaddle/paddle image. On one workstation it works well with the 'use_gpu' attribute set to either True or False. However, on another workstation the model's outputs are incorrect when it runs on the GPU. I have attached the model's results in both situations.
CPU result: (attached)
GPU result: (attached)
It appears that the model's computation is incorrect when it runs on the GPU. It's important to note that I'm not encountering any errors or warnings, just inaccurate results.
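When the two runs disagree, it helps to quantify the gap rather than eyeball it: small float32-level jitter between CPU and GPU is normal, while large deviations point at a real bug (e.g. a CUDA/wheel mismatch). A minimal sketch, assuming the two inference results are available as array-like values (`cpu_out` and `gpu_out` are placeholder names, not from the original post):

```python
import numpy as np

def compare_outputs(cpu_out, gpu_out, atol=1e-4, rtol=1e-3):
    """Return (max absolute difference, whether outputs agree within tolerance)."""
    cpu_out = np.asarray(cpu_out, dtype=np.float64)
    gpu_out = np.asarray(gpu_out, dtype=np.float64)
    max_abs = float(np.max(np.abs(cpu_out - gpu_out)))
    close = bool(np.allclose(cpu_out, gpu_out, atol=atol, rtol=rtol))
    return max_abs, close

# toy data standing in for real model outputs
max_abs, close = compare_outputs([0.1, 0.2, 0.7], [0.1, 0.2, 0.7])
print(max_abs, close)  # → 0.0 True
```

A max absolute difference on the order of 1e-5 or below is expected numerical noise; anything approaching the magnitude of the outputs themselves indicates the GPU path is genuinely miscomputing.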
Some additional context:
What could be the root cause of this inconsistency?
Thank you in advance for any insights or suggestions you can provide.