triton-inference-server / server

The Triton Inference Server provides an optimized cloud and edge inferencing solution.
https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html
BSD 3-Clause "New" or "Revised" License

curl failed to follow relocation of cuda-keyring_1.0-1_all.deb #6477

gongysh2004 opened this issue 1 year ago

gongysh2004 commented 1 year ago

Description
curl does not follow the HTTP redirect when downloading a file.

Triton Information
What version of Triton are you using?

main

Are you using the Triton container or did you build it yourself?

Built it myself.

To Reproduce
Steps to reproduce the behavior:

1. ./build.py -v --enable-all --build-type=Debug

It fails at the following command:

curl -o /tmp/cuda-keyring.deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb && dpkg -i /tmp/cuda-keyring.deb

Since https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb now redirects to another URL, curl does not follow the redirect and leaves a zero-sized /tmp/cuda-keyring.deb, so the subsequent dpkg -i fails.
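As a quick check (a diagnostic sketch, not part of the build script), inspecting only the response headers shows whether the URL answers with a 3xx redirect that a plain curl -o download will not follow:

# Print only the response headers; a 3xx status plus a Location header
# means the server redirects, which curl ignores unless -L is given.
curl -sI https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb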


Expected behavior

Add the -L argument so that curl follows the redirect.
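A minimal sketch of the corrected command, assuming the rest of the build step stays the same; -L makes curl follow redirects, and -f (an extra suggestion beyond the reported fix) makes curl exit non-zero on HTTP errors instead of writing an empty file:

# -L: follow redirects to the final location of the .deb
# -f: fail with a non-zero exit code on HTTP errors (no empty output file)
curl -fL -o /tmp/cuda-keyring.deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb && dpkg -i /tmp/cuda-keyring.deb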

dyastremsky commented 1 year ago

Are you still seeing this? This curl command works for me and the file there does not seem to have been modified since April 2022. Is there perhaps a networking error on your machine?
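If it helps, one way to rule out a local networking or proxy problem is a verbose fetch (a diagnostic sketch only; the output depends on the environment and any proxy configuration):

# -v prints the DNS lookup, TLS handshake, and request/response exchange,
# which usually makes proxy or DNS problems visible; the body is discarded.
curl -v -o /dev/null https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb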