ELS-RD / transformer-deploy

Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
https://els-rd.github.io/transformer-deploy/
Apache License 2.0

fix: default device checked before device checking #137

Closed Thytu closed 2 years ago

Thytu commented 2 years ago

The `raise Exception("can't perform inference on CPU and use Nvidia TensorRT as backend")` was performed before setting the default device.

Thus, if you had no GPU and set tensorrt in commands.backend, the check would not be triggered and the exception would never be raised.
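A minimal sketch of the ordering bug described above (function and variable names are hypothetical, not taken from the transformer-deploy codebase): the CPU/TensorRT compatibility check must run *after* the default device has been resolved, otherwise on a CPU-only machine the device may still be unset when the check runs, and it silently passes.

```python
def resolve_device(requested_device, cuda_available):
    # Resolve the default device FIRST: if the user did not request a
    # device explicitly, fall back to CUDA when available, else CPU.
    device = requested_device
    if device is None:
        device = "cuda" if cuda_available else "cpu"
    return device


def check_backend(device, backends):
    # Run the compatibility check only on the resolved device.
    # With the buggy ordering, this ran before resolve_device(),
    # so `device` could still be None and the condition never matched.
    if device == "cpu" and "tensorrt" in backends:
        raise Exception(
            "can't perform inference on CPU and use Nvidia TensorRT as backend"
        )
```

With this ordering, a CPU-only machine requesting the tensorrt backend now raises as intended, instead of slipping past the check.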