pytorch / serve

Serve, optimize and scale PyTorch models in production
https://pytorch.org/serve/
Apache License 2.0

PredictionException wrong message format #1879

Closed blatr closed 2 years ago

blatr commented 2 years ago

🐛 Describe the bug

Incorrect string formatting syntax in the `PredictionException.__str__` method: the template string has no `{}` placeholders, so the arguments are never substituted.

Error logs

`print(PredictionException('hello', 'world'))` returns `"message : error_code"` instead of `"hello : world"`.
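For context, `str.format` only substitutes text inside `{}` placeholders; a template with no braces is returned verbatim no matter what keyword arguments are passed, which is why the literal text appears. A minimal sketch of the behavior:

```python
# str.format substitutes only {placeholders}; a template without
# braces is returned unchanged, and the keyword arguments are ignored.
buggy = "message : error_code".format(message="hello", error_code="world")
fixed = "{message} : {error_code}".format(message="hello", error_code="world")

print(buggy)  # message : error_code
print(fixed)  # hello : world
```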

Installation instructions

Install torchserve from source: Yes

Model Packaging

-

config.properties

-

Versions


Environment headers

Torchserve branch:

torchserve==0.6.0 torch-model-archiver==0.2.1

Python version: 3.7 (64-bit runtime)
Python executable: /home/ubuntu/anaconda3/envs/pytorch_latest_p37/bin/python

Versions of relevant python libraries: efficientnet-pytorch==0.6.3, future==0.18.2, numpy==1.19.2, numpydoc==1.1.0, psutil==5.8.0, pylint==2.7.0, pytest==6.2.2, requests==2.25.1, requests-kerberos==0.12.0, requests-oauthlib==1.3.1, torch==1.8.1+cu111, torch-model-archiver==0.2.1, torchaudio==0.7.0a0+a853dff, torchserve==0.6.0, torchvision==0.9.1+cu111, wheel==0.36.2

Warning: torchtext not present

Java Version:

OS: Ubuntu 18.04.5 LTS
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: N/A
CMake version: version 3.18.4

Is CUDA available: Yes
CUDA runtime version: 11.1.105
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 450.119.03
cuDNN version: Probably one of the following: /usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7.6.5 /usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn.so.7.6.5 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.0.5 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5

Repro instructions

`from ts.utils.util import PredictionException`
`print(PredictionException('hello', 'world'))`

Possible Solution

Replace `return "message : error_code".format(message=self.message, error_code=self.error_code)` with `return "{message} : {error_code}".format(message=self.message, error_code=self.error_code)`.
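The effect of that one-line fix can be sketched with a stand-in exception class (hypothetical; the real `PredictionException` lives in `ts.utils.util` and may differ in other details):

```python
class PredictionException(Exception):
    """Stand-in for ts.utils.util.PredictionException, for illustration only."""

    def __init__(self, message, error_code):
        self.message = message
        self.error_code = error_code
        super().__init__(message)

    def __str__(self):
        # Corrected template: braces mark the placeholders to substitute,
        # so the stored message and error code actually appear in the output.
        return "{message} : {error_code}".format(
            message=self.message, error_code=self.error_code
        )


print(PredictionException("hello", "world"))  # hello : world
```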

blatr commented 2 years ago

Already fixed in https://github.com/pytorch/serve/pull/1802