warmshao / FasterLivePortrait

Bring portraits to life in Real Time! onnx/tensorrt support! Real-time portrait animation!

Building from Docker Error... #50

Closed · salahzoubi closed this 1 month ago

salahzoubi commented 1 month ago

Hello,

I just tried building from Docker. Inside the container, I get the following error when I run sh scripts/all_onnx2trt.sh:

[08/08/2024-09:25:42] [TRT] [W] Unable to determine GPU memory usage
[08/08/2024-09:25:42] [TRT] [W] Unable to determine GPU memory usage
[08/08/2024-09:25:42] [TRT] [I] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 11, GPU 0 (MiB)
[08/08/2024-09:25:42] [TRT] [W] CUDA initialization failure with error: 35. Please check your CUDA installation:  http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html

I have CUDA 12.2 installed; is there a particular version I need to have installed to get this to work?

Also, side question: Does TensorRT support batch inference in this case?

Thank you!

warmshao commented 1 month ago

Have you installed CUDA and the NVIDIA drivers on your host machine? Try running nvidia-smi inside the Docker container to see whether it prints any GPU information.
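For example, a minimal check (the image name here is a placeholder for whatever you built; --gpus all requires the NVIDIA Container Toolkit on the host):

# Run nvidia-smi inside the container to confirm GPU passthrough
docker run --rm --gpus all your-fasterliveportrait-image nvidia-smi

If this fails while nvidia-smi works on the host, the container runtime isn't passing the GPU through, and CUDA error 35 (the runtime can't find a usable driver) is a typical symptom.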

salahzoubi commented 1 month ago

Yes, I get CUDA 12.2 from both nvidia-smi and nvcc... Was this built using an older version by any chance? I'm also using Ubuntu 20.04, if that makes any difference.

Also, there's no need to build or install TensorRT inside the Docker container, right?

warmshao commented 1 month ago

Yes, you don't need to install anything else; everything is already set up. BTW, which Docker image are you using?

salahzoubi commented 1 month ago

Or to rephrase: which CUDA/Ubuntu versions did you test the Docker image on? It seems TensorRT engines only work with the versions they were originally built with. @warmshao

warmshao commented 1 month ago

I remember my image wasn't built with CUDA 12.2. Did you reinstall CUDA inside my Docker image?

salahzoubi commented 1 month ago

@warmshao I'm using the v2 version because I want to use MediaPipe. No, I spin up a fresh instance of Ubuntu 20.04 on an H100 machine, install CUDA 12.2 from the NVIDIA website, and then build and run Docker afterwards.

salahzoubi commented 1 month ago

Also, if I run nvidia-smi inside the Docker container, I do get an error; it doesn't load properly. Could that be the problem?

warmshao commented 1 month ago

Yes, that is likely the problem. You can refer to https://github.com/warmshao/FasterLivePortrait/issues/8 to solve it.
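For reference, the usual fix when nvidia-smi fails inside a container is to install and configure the NVIDIA Container Toolkit on the host. A sketch of the common apt-based steps, which may or may not match what issue #8 describes, and which assumes NVIDIA's apt repository is already configured:

# Install the container toolkit on the host
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
# Register the NVIDIA runtime with Docker and restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

After restarting Docker, re-run the container with GPU access enabled (e.g. --gpus all).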

salahzoubi commented 1 month ago

OK, thank you, I will try ASAP! Also, do you know if I can do batch inference with TensorRT?

warmshao commented 1 month ago

It is possible to use dynamic shapes in TensorRT, but I've fixed the batch size to 1, so batch inference isn't possible in this project. 🥶
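For anyone who wants to experiment anyway: with TensorRT's trtexec, a dynamic batch dimension is declared through an optimization profile, roughly like below. The input tensor name and dimensions are illustrative, not this repo's actual tensor names, and this is not necessarily how scripts/all_onnx2trt.sh builds its engines:

# Sketch: build an engine whose batch dimension can vary from 1 to 8
trtexec --onnx=model.onnx --saveEngine=model.trt \
    --minShapes=input:1x3x256x256 \
    --optShapes=input:4x3x256x256 \
    --maxShapes=input:8x3x256x256

This only works if the ONNX model itself was exported with a dynamic batch axis (e.g. via the dynamic_axes argument of torch.onnx.export).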

salahzoubi commented 1 month ago

I see. If I were to do this myself, would I just have to rebuild the ONNX and TRT files, or is there more to getting batched inference to work? Thanks for all your great help, btw!

warmshao commented 1 month ago

Yes, you need to rebuild the ONNX and TRT files.