triton-inference-server / server

The Triton Inference Server provides an optimized cloud and edge inferencing solution.
https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html
BSD 3-Clause "New" or "Revised" License

Triton with python backend: not Using Python execution env *.tar.gz file #3222

Closed gioipv closed 3 years ago

gioipv commented 3 years ago

Hello. I am using Triton with the Python backend. My model fails with:

ModuleNotFoundError: No module named 'librosa'

The server does not log "Using Python execution env ***.tar.gz". Could you please help me with this ...

Triton Information: Triton Docker image, 20.11 release

To Reproduce: Followed this issue: https://github.com/triton-inference-server/server/issues/3189
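For reference, the usual way to build such an execution environment archive is with conda-pack (a sketch; the environment and file names here are illustrative, not taken from this issue, and conda plus conda-pack are assumed to be installed):

```shell
# Create a conda env that includes the missing dependency
conda create -n librosa-env -c conda-forge python=3.8 librosa -y

# Pack the named env into a relocatable tar.gz for Triton
pip install conda-pack
conda-pack -n librosa-env -o librosa-env.tar.gz

# The archive is then referenced from config.pbtxt via the
# EXECUTION_ENV_PATH parameter, as shown later in this thread.
```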

ghost commented 3 years ago

@gioipv You should set EXECUTION_ENV_PATH correctly:

parameters: {
  key: "EXECUTION_ENV_PATH",
  value: { string_value: "/models/model1/test2.tar.gz" }
}

because you mounted /home/gioipv/workspaces/ekyc_glasses/triton/model_repo2 at /models inside the Docker container.
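The host-to-container path translation implied by that -v bind mount can be illustrated with a small helper (hypothetical, not part of Triton):

```python
import os.path

def container_path(host_path: str, host_mount: str, container_mount: str) -> str:
    """Translate a host path to its location inside the container,
    given a `docker run -v host_mount:container_mount` bind mount."""
    rel = os.path.relpath(host_path, host_mount)
    return os.path.join(container_mount, rel)

# The env file on the host appears at /models/model1/test2.tar.gz in the
# container, which is why EXECUTION_ENV_PATH must use the container path.
print(container_path(
    "/home/gioipv/workspaces/ekyc_glasses/triton/model_repo2/model1/test2.tar.gz",
    "/home/gioipv/workspaces/ekyc_glasses/triton/model_repo2",
    "/models",
))  # → /models/model1/test2.tar.gz
```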

Tabrizian commented 3 years ago

@gioipv Adding to what @ihcho9088 mentioned, it looks like you are using an old version of Triton that doesn't support Python execution environments. Please make sure to update Triton to 21.07+ to use this feature.

gioipv commented 3 years ago

Yep, thank you for your help @Tabrizian, @ihcho9088. I understand now: in the docker run command, -v specifies the mount path, so I changed this path in my config as in your comment and used a newer version of Triton (21.07), and it worked. But it then raised an error related to the GPU driver version: my driver version is 455.23.05, and according to the support matrix that means I have to use Triton 20.11. I'm thinking of two solutions.

Tabrizian commented 3 years ago

I think you should update the GPU driver version. The version of the Python backend and the server must match.
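The driver-version check against the support matrix can be sketched with a small comparison helper (hypothetical; the 470.57.02 minimum shown for the 21.07 container is an assumption based on its CUDA base, so consult NVIDIA's support matrix for the authoritative value):

```python
def driver_at_least(installed: str, required: str) -> bool:
    """Compare dotted NVIDIA driver version strings numerically."""
    parse = lambda v: tuple(int(p) for p in v.split("."))
    return parse(installed) >= parse(required)

# 455.23.05 (the reporter's driver) vs. an assumed minimum for 21.07
print(driver_at_least("455.23.05", "470.57.02"))  # → False
```

A False result here means the host driver must be upgraded before the newer container release can be used.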

gioipv commented 3 years ago

Yep. Thank you so much for your help ^^