AbhinavJangra29 opened 1 day ago
For those running into issues with installation, here’s a streamlined guide! This has been tested on the following container:
runpod/pytorch:2.0.1-py3.10-cuda11.8.0-devel-ubuntu22.04
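If you are not on RunPod, the same image can be pulled and run locally with Docker. This is just a sketch and assumes the NVIDIA container toolkit is set up for GPU passthrough:
docker run --gpus all -it runpod/pytorch:2.0.1-py3.10-cuda11.8.0-devel-ubuntu22.04 bash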
Clone the Repository
git clone <this-repo-url>
cd <repo-directory>
Install Requirements
pip install -r requirements.txt
Install FFmpeg
sudo apt-get install ffmpeg
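If apt cannot find the package, refreshing the index first usually fixes it, and ffmpeg -version confirms the install (a quick check, assuming the Ubuntu 22.04 base from the container above):
sudo apt-get update
sudo apt-get install -y ffmpeg
ffmpeg -version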
Install ONNX Runtime
pip install onnxruntime
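To sanity-check which execution providers your build exposes (note that the plain onnxruntime wheel is CPU-only; onnxruntime-gpu is the package that adds CUDAExecutionProvider):
python -c "import onnxruntime; print(onnxruntime.get_available_providers())"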
Note: To skip building and converting the model to TensorRT yourself, you can download pre-built TensorRT weights directly!
Download the pre-built TensorRT weights using the Hugging Face CLI:
huggingface-cli download mlgawd/trt-weights --local-dir ./checkpoints
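If huggingface-cli is not found, it ships with the huggingface_hub package; installing it and then listing the target directory is a simple way to confirm the weights landed where the app expects them:
pip install -U "huggingface_hub[cli]"
ls ./checkpoints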
Now, you’re ready to run the app in TensorRT mode:
python app.py --mode trt
No need to modify paths or get confused! Just follow these steps and you’re all set. 😎
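If the app fails to find a GPU, it can help to first confirm the device is visible inside the container (a simple check, assuming the NVIDIA drivers are exposed to it):
nvidia-smi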
Thank you, but the TRT weights are tied to the CUDA version and the NVIDIA GPU they were built on, so they cannot be used directly like this.
Yes, for the RTX 3090 it works.
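Since serialized TensorRT engines are generally tied to the GPU architecture, CUDA version, and TensorRT version they were built with, one option on other cards is to rebuild the engine locally from an ONNX export. A minimal sketch using NVIDIA's trtexec tool (this assumes TensorRT is installed in the container; the ./checkpoints/model.onnx path is hypothetical and should be replaced with whatever ONNX file this repo actually provides or exports):
trtexec --onnx=./checkpoints/model.onnx --saveEngine=./checkpoints/model.trt --fp16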