pytorch / serve

Serve, optimize and scale PyTorch models in production
https://pytorch.org/serve/

GPU not detected on Azure Windows Server 2019 Virtual Machine #2120

Open khelkun opened 1 year ago

khelkun commented 1 year ago

šŸ› Describe the bug

First, thanks for this great tool. This is my first attempt at deploying TorchServe, and it does work on Windows Server 2019.

However, the GPU does not seem to be detected by TorchServe on an Azure Windows Server 2019 VM (Standard NC4as T4 v3). The GPU driver is correctly installed and the card is detected by GPU-Z: image

The nvidia-smi output:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 451.82       Driver Version: 451.82       CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla T4            TCC  | 00000001:00:00.0 Off |                  Off |
| N/A   24C    P8     9W /  70W |      1MiB / 16225MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Error logs

See the line Number of GPUs: 0 in the output of the torchserve --start --ncs --model-store model_store --models densenet161.mar command:

WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
2023-02-10T15:48:41,826 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager - Initializing plugins manager...
2023-02-10T15:48:41,967 [INFO ] main org.pytorch.serve.ModelServer - 
Torchserve version: 0.7.1
TS Home: C:\Users\3dverse\anaconda3\Lib\site-packages
Current directory: C:\
Temp directory: C:\Users\3dverse\AppData\Local\Temp\1
Metrics config path: C:\Users\3dverse\anaconda3\Lib\site-packages/ts/configs/metrics.yaml
Number of GPUs: 0
Number of CPUs: 4
Max heap size: 7168 M
Python executable: C:\Users\3dverse\anaconda3\python.exe
Config file: N/A
Inference address: http://127.0.0.1:8080
Management address: http://127.0.0.1:8081
Metrics address: http://127.0.0.1:8082
Model Store: C:\torchserve\model_store
Initial Models: densenet161.mar
Log dir: C:\logs
Metrics dir: C:\logs
Netty threads: 0
Netty client threads: 0
Default workers per model: 4
Blacklist Regex: N/A
Maximum Response Size: 6553500
Maximum Request Size: 6553500
Limit Maximum Image Pixels: true
Prefer direct buffer: false
Allowed Urls: [file://.*|http(s)?://.*]
Custom python dependency for model allowed: false
Metrics report format: prometheus
Enable metrics API: true
Workflow Store: C:\torchserve\model_store
Model config: N/A
2023-02-10T15:48:41,967 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager -  Loading snapshot serializer plugin...
2023-02-10T15:48:41,983 [INFO ] main org.pytorch.serve.ModelServer - Loading initial models: densenet161.mar
2023-02-10T15:48:43,686 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model densenet161
2023-02-10T15:48:43,686 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model densenet161
2023-02-10T15:48:43,686 [INFO ] main org.pytorch.serve.wlm.ModelManager - Model densenet161 loaded.
2023-02-10T15:48:43,686 [DEBUG] main org.pytorch.serve.wlm.ModelManager - updateModel: densenet161, count: 4
2023-02-10T15:48:43,704 [DEBUG] W-9002-densenet161_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [C:\Users\3dverse\anaconda3\python.exe, C:\Users\3dverse\anaconda3\Lib\site-packages\ts\model_service_worker.py, --sock-type, tcp, --port, 9002, --metrics-config, C:\Users\3dverse\anaconda3\Lib\site-packages/ts/configs/metrics.yaml]
2023-02-10T15:48:43,704 [INFO ] main org.pytorch.serve.ModelServer - Initialize Inference server with: NioServerSocketChannel.
2023-02-10T15:48:43,704 [DEBUG] W-9000-densenet161_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [C:\Users\3dverse\anaconda3\python.exe, C:\Users\3dverse\anaconda3\Lib\site-packages\ts\model_service_worker.py, --sock-type, tcp, --port, 9000, --metrics-config, C:\Users\3dverse\anaconda3\Lib\site-packages/ts/configs/metrics.yaml]
2023-02-10T15:48:43,711 [DEBUG] W-9001-densenet161_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [C:\Users\3dverse\anaconda3\python.exe, C:\Users\3dverse\anaconda3\Lib\site-packages\ts\model_service_worker.py, --sock-type, tcp, --port, 9001, --metrics-config, C:\Users\3dverse\anaconda3\Lib\site-packages/ts/configs/metrics.yaml]
2023-02-10T15:48:43,718 [DEBUG] W-9003-densenet161_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [C:\Users\3dverse\anaconda3\python.exe, C:\Users\3dverse\anaconda3\Lib\site-packages\ts\model_service_worker.py, --sock-type, tcp, --port, 9003, --metrics-config, C:\Users\3dverse\anaconda3\Lib\site-packages/ts/configs/metrics.yaml]
2023-02-10T15:48:43,969 [INFO ] main org.pytorch.serve.ModelServer - Inference API bind to: http://127.0.0.1:8080
2023-02-10T15:48:43,969 [INFO ] main org.pytorch.serve.ModelServer - Initialize Management server with: NioServerSocketChannel.
2023-02-10T15:48:43,969 [INFO ] main org.pytorch.serve.ModelServer - Management API bind to: http://127.0.0.1:8081
2023-02-10T15:48:43,969 [INFO ] main org.pytorch.serve.ModelServer - Initialize Metrics server with: NioServerSocketChannel.
2023-02-10T15:48:43,969 [INFO ] main org.pytorch.serve.ModelServer - Metrics API bind to: http://127.0.0.1:8082
Model server started.
2023-02-10T15:48:44,728 [WARN ] pool-3-thread-1 org.pytorch.serve.metrics.MetricCollector - worker pid is not available yet.
2023-02-10T15:48:44,947 [INFO ] pool-3-thread-1 TS_METRICS - CPUUtilization.Percent:100.0|#Level:Host|#hostname:GPU-EU-West,timestamp:1676044124
2023-02-10T15:48:44,947 [INFO ] pool-3-thread-1 TS_METRICS - DiskAvailable.Gigabytes:79.73257827758789|#Level:Host|#hostname:GPU-EU-West,timestamp:1676044124
2023-02-10T15:48:44,947 [INFO ] pool-3-thread-1 TS_METRICS - DiskUsage.Gigabytes:46.714637756347656|#Level:Host|#hostname:GPU-EU-West,timestamp:1676044124
2023-02-10T15:48:44,947 [INFO ] pool-3-thread-1 TS_METRICS - DiskUtilization.Percent:36.9|#Level:Host|#hostname:GPU-EU-West,timestamp:1676044124
2023-02-10T15:48:44,947 [INFO ] pool-3-thread-1 TS_METRICS - MemoryAvailable.Megabytes:22415.8125|#Level:Host|#hostname:GPU-EU-West,timestamp:1676044124
2023-02-10T15:48:44,947 [INFO ] pool-3-thread-1 TS_METRICS - MemoryUsed.Megabytes:6255.53125|#Level:Host|#hostname:GPU-EU-West,timestamp:1676044124
2023-02-10T15:48:44,947 [INFO ] pool-3-thread-1 TS_METRICS - MemoryUtilization.Percent:21.8|#Level:Host|#hostname:GPU-EU-West,timestamp:1676044124
2023-02-10T15:48:46,134 [INFO ] W-9003-densenet161_1.0-stdout MODEL_LOG - Listening on port: None
2023-02-10T15:48:46,134 [INFO ] W-9003-densenet161_1.0-stdout MODEL_LOG - Successfully loaded C:\Users\3dverse\anaconda3\Lib\site-packages/ts/configs/metrics.yaml.
2023-02-10T15:48:46,134 [INFO ] W-9003-densenet161_1.0-stdout MODEL_LOG - [PID]7780
2023-02-10T15:48:46,134 [INFO ] W-9003-densenet161_1.0-stdout MODEL_LOG - Torch worker started.
2023-02-10T15:48:46,134 [DEBUG] W-9003-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - W-9003-densenet161_1.0 State change null -> WORKER_STARTED
2023-02-10T15:48:46,141 [INFO ] W-9003-densenet161_1.0-stdout MODEL_LOG - Python runtime: 3.9.13
2023-02-10T15:48:46,141 [INFO ] W-9003-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9003
2023-02-10T15:48:46,141 [INFO ] W-9003-densenet161_1.0-stdout MODEL_LOG - Connection accepted: ('127.0.0.1', 9003).
2023-02-10T15:48:46,141 [INFO ] W-9003-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1676044126141
2023-02-10T15:48:46,172 [INFO ] W-9003-densenet161_1.0-stdout MODEL_LOG - model_name: densenet161, batchSize: 1
2023-02-10T15:48:46,203 [INFO ] W-9002-densenet161_1.0-stdout MODEL_LOG - Listening on port: None
2023-02-10T15:48:46,203 [INFO ] W-9002-densenet161_1.0-stdout MODEL_LOG - Successfully loaded C:\Users\3dverse\anaconda3\Lib\site-packages/ts/configs/metrics.yaml.
2023-02-10T15:48:46,203 [INFO ] W-9002-densenet161_1.0-stdout MODEL_LOG - [PID]4892
2023-02-10T15:48:46,203 [INFO ] W-9002-densenet161_1.0-stdout MODEL_LOG - Torch worker started.
2023-02-10T15:48:46,203 [INFO ] W-9002-densenet161_1.0-stdout MODEL_LOG - Python runtime: 3.9.13
2023-02-10T15:48:46,203 [DEBUG] W-9002-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - W-9002-densenet161_1.0 State change null -> WORKER_STARTED
2023-02-10T15:48:46,213 [INFO ] W-9002-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9002
2023-02-10T15:48:46,215 [INFO ] W-9002-densenet161_1.0-stdout MODEL_LOG - Connection accepted: ('127.0.0.1', 9002).
2023-02-10T15:48:46,215 [INFO ] W-9002-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1676044126215
2023-02-10T15:48:46,215 [INFO ] W-9002-densenet161_1.0-stdout MODEL_LOG - model_name: densenet161, batchSize: 1
2023-02-10T15:48:46,293 [INFO ] W-9001-densenet161_1.0-stdout MODEL_LOG - Listening on port: None
2023-02-10T15:48:46,293 [INFO ] W-9001-densenet161_1.0-stdout MODEL_LOG - Successfully loaded C:\Users\3dverse\anaconda3\Lib\site-packages/ts/configs/metrics.yaml.
2023-02-10T15:48:46,293 [INFO ] W-9001-densenet161_1.0-stdout MODEL_LOG - [PID]4592
2023-02-10T15:48:46,293 [INFO ] W-9001-densenet161_1.0-stdout MODEL_LOG - Torch worker started.
2023-02-10T15:48:46,293 [INFO ] W-9001-densenet161_1.0-stdout MODEL_LOG - Python runtime: 3.9.13
2023-02-10T15:48:46,293 [DEBUG] W-9001-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - W-9001-densenet161_1.0 State change null -> WORKER_STARTED
2023-02-10T15:48:46,293 [INFO ] W-9001-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9001
2023-02-10T15:48:46,293 [INFO ] W-9001-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1676044126293
2023-02-10T15:48:46,293 [INFO ] W-9001-densenet161_1.0-stdout MODEL_LOG - Connection accepted: ('127.0.0.1', 9001).
2023-02-10T15:48:46,304 [INFO ] W-9001-densenet161_1.0-stdout MODEL_LOG - model_name: densenet161, batchSize: 1
2023-02-10T15:48:46,335 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG - Listening on port: None
2023-02-10T15:48:46,351 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG - Successfully loaded C:\Users\3dverse\anaconda3\Lib\site-packages/ts/configs/metrics.yaml.
2023-02-10T15:48:46,351 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG - [PID]7560
2023-02-10T15:48:46,351 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG - Torch worker started.
2023-02-10T15:48:46,351 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG - Python runtime: 3.9.13
2023-02-10T15:48:46,351 [DEBUG] W-9000-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-densenet161_1.0 State change null -> WORKER_STARTED
2023-02-10T15:48:46,351 [INFO ] W-9000-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9000
2023-02-10T15:48:46,351 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG - Connection accepted: ('127.0.0.1', 9000).
2023-02-10T15:48:46,351 [INFO ] W-9000-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1676044126351
2023-02-10T15:48:46,359 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG - model_name: densenet161, batchSize: 1
2023-02-10T15:48:48,601 [INFO ] W-9003-densenet161_1.0-stdout MODEL_LOG - C:\Users\3dverse\AppData\Local\Temp\1\models\4ad9bdc37ddc4a2098d5a2b2e3a78b09\compile.json is missing. PT 2.0 will not be used
2023-02-10T15:48:48,664 [INFO ] W-9002-densenet161_1.0-stdout MODEL_LOG - C:\Users\3dverse\AppData\Local\Temp\1\models\4ad9bdc37ddc4a2098d5a2b2e3a78b09\compile.json is missing. PT 2.0 will not be used
2023-02-10T15:48:48,664 [INFO ] W-9001-densenet161_1.0-stdout MODEL_LOG - C:\Users\3dverse\AppData\Local\Temp\1\models\4ad9bdc37ddc4a2098d5a2b2e3a78b09\compile.json is missing. PT 2.0 will not be used
2023-02-10T15:48:48,789 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG - C:\Users\3dverse\AppData\Local\Temp\1\models\4ad9bdc37ddc4a2098d5a2b2e3a78b09\compile.json is missing. PT 2.0 will not be used
2023-02-10T15:48:48,898 [INFO ] W-9003-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2726
2023-02-10T15:48:48,898 [DEBUG] W-9003-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - W-9003-densenet161_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2023-02-10T15:48:48,898 [INFO ] W-9003-densenet161_1.0 TS_METRICS - W-9003-densenet161_1.0.ms:5196|#Level:Host|#hostname:GPU-EU-West,timestamp:1676044128
2023-02-10T15:48:48,898 [INFO ] W-9003-densenet161_1.0 TS_METRICS - WorkerThreadTime.ms:31|#Level:Host|#hostname:GPU-EU-West,timestamp:1676044128
2023-02-10T15:48:48,929 [INFO ] W-9001-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2625
2023-02-10T15:48:48,929 [DEBUG] W-9001-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - W-9001-densenet161_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2023-02-10T15:48:48,929 [INFO ] W-9001-densenet161_1.0 TS_METRICS - W-9001-densenet161_1.0.ms:5227|#Level:Host|#hostname:GPU-EU-West,timestamp:1676044128
2023-02-10T15:48:48,929 [INFO ] W-9001-densenet161_1.0 TS_METRICS - WorkerThreadTime.ms:11|#Level:Host|#hostname:GPU-EU-West,timestamp:1676044128
2023-02-10T15:48:48,945 [INFO ] W-9002-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2730
2023-02-10T15:48:48,945 [DEBUG] W-9002-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - W-9002-densenet161_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2023-02-10T15:48:48,945 [INFO ] W-9002-densenet161_1.0 TS_METRICS - W-9002-densenet161_1.0.ms:5243|#Level:Host|#hostname:GPU-EU-West,timestamp:1676044128
2023-02-10T15:48:48,945 [INFO ] W-9002-densenet161_1.0 TS_METRICS - WorkerThreadTime.ms:0|#Level:Host|#hostname:GPU-EU-West,timestamp:1676044128
2023-02-10T15:48:49,079 [INFO ] W-9000-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2720
2023-02-10T15:48:49,079 [DEBUG] W-9000-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-densenet161_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2023-02-10T15:48:49,079 [INFO ] W-9000-densenet161_1.0 TS_METRICS - W-9000-densenet161_1.0.ms:5393|#Level:Host|#hostname:GPU-EU-West,timestamp:1676044129
2023-02-10T15:48:49,079 [INFO ] W-9000-densenet161_1.0 TS_METRICS - WorkerThreadTime.ms:8|#Level:Host|#hostname:GPU-EU-West,timestamp:1676044129

This is the inference output for "kitten_small.jpg":

2023-02-10T16:19:09,485 [INFO ] W-9003-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1676045949485
2023-02-10T16:19:09,485 [INFO ] W-9003-densenet161_1.0-stdout MODEL_LOG - Backend received inference at: 1676045949
2023-02-10T16:19:09,781 [INFO ] W-9003-densenet161_1.0-stdout MODEL_METRICS - HandlerTime.Milliseconds:296.9|#ModelName:densenet161,Level:Model|#hostname:GPU-EU-West,requestID:9974235a-18a8-4194-8bcf-8df2e772b2ef,timestamp:1676045949
2023-02-10T16:19:09,781 [INFO ] W-9003-densenet161_1.0-stdout MODEL_METRICS - PredictionTime.Milliseconds:296.9|#ModelName:densenet161,Level:Model|#hostname:GPU-EU-West,requestID:9974235a-18a8-4194-8bcf-8df2e772b2ef,timestamp:1676045949
2023-02-10T16:19:09,781 [INFO ] W-9003-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 296
2023-02-10T16:19:09,781 [INFO ] W-9003-densenet161_1.0 ACCESS_LOG - /127.0.0.1:58783 "PUT /predictions/densenet161 HTTP/1.1" 200 296
2023-02-10T16:19:09,781 [INFO ] W-9003-densenet161_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:GPU-EU-West,timestamp:1676045949
2023-02-10T16:19:09,781 [DEBUG] W-9003-densenet161_1.0 org.pytorch.serve.job.Job - Waiting time ns: 134500, Backend time ns: 300674000
2023-02-10T16:19:09,781 [INFO ] W-9003-densenet161_1.0 TS_METRICS - QueueTime.ms:0|#Level:Host|#hostname:GPU-EU-West,timestamp:1676045949
2023-02-10T16:19:09,781 [INFO ] W-9003-densenet161_1.0 TS_METRICS - WorkerThreadTime.ms:0|#Level:Host|#hostname:GPU-EU-West,timestamp:1676045949

The 296 ms inference time seems to confirm that the GPU is not being used.
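For reference, a quick way to check whether the PyTorch build in this environment sees the card at all (independently of TorchServe's own detection) is a short script along these lines; this is just a sanity-check sketch, assuming torch is importable from the same Anaconda environment:

import torch

# Sanity-check sketch: if device_count() is 0 here, TorchServe reporting
# "Number of GPUs: 0" is consistent with what PyTorch itself sees.
print("torch version      :", torch.__version__)
print("built against CUDA :", torch.version.cuda)       # None for CPU-only wheels
print("cuda available     :", torch.cuda.is_available())
print("visible GPU count  :", torch.cuda.device_count())
if torch.cuda.is_available():
    print("device 0           :", torch.cuda.get_device_name(0))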

Installation instructions

I just followed the TorchServe on Windows tutorial: "Install from binaries".

Model Packaging

It's the densenet_161 model from the "Serve Model" tutorial.

config.properties

No response

Versions

python serve/ts_scripts/print_env_info.py

------------------------------------------------------------------------------------------
Environment headers
------------------------------------------------------------------------------------------
Torchserve branch:

torchserve==0.7.1
torch-model-archiver==0.7.1

Python version: 3.9 (64-bit runtime)
Python executable: C:\Users\3dverse\anaconda3\python.exe

Versions of relevant python libraries:
numpy==1.24.2
numpydoc==1.4.0
torch==2.0.0.dev20230210+cu117
torch-model-archiver==0.7.1
torchaudio==0.13.1
torchserve==0.7.1
torchtext==0.14.1
torchvision==0.14.1
torch==2.0.0.dev20230210+cu117
torchtext==0.14.1
torchvision==0.14.1
torchaudio==0.13.1

Java Version:

OS: Microsoft Windows Server 2019 Datacenter
GCC version: N/A
Clang version: N/A
CMake version: N/A

Is CUDA available: Yes
CUDA runtime version: N/A
GPU models and configuration: None
Nvidia driver version: N/A
cuDNN version: None

Repro instructions

I could probably write a step-by-step repro, but the issue is specific to the VM running TorchServe.
In any case, I just followed the TorchServe on Windows tutorial.

Possible Solution

I may have missed something that is not mentioned in the Windows installation procedure.

Should I have executed python ./ts_scripts/install_dependencies.py --environment=prod --cuda=cu102 instead of python ./ts_scripts/install_dependencies.py --environment=prod? Should I have installed CUDA 10.2 for Windows first?

By the way, I tried these two options:

  • Re-installing dependencies with python ./ts_scripts/install_dependencies.py --environment=prod --cuda=cu102
  • Installing the CUDA 10.2 Toolkit

But I observe the same result: Number of GPUs: 0 in the log and slow inference of more than 300 ms for the densenet_161 demo model.

Thanks in advance for any help and advice you can give me.

agunapal commented 1 year ago

Yes, you need to install the CUDA dependency if you want to use the GPU. https://github.com/pytorch/serve#-quick-start-with-torchserve. Please try it and let us know.

khelkun commented 1 year ago

Yes, you need to install the CUDA dependency if you want to use the GPU. https://github.com/pytorch/serve#-quick-start-with-torchserve. Please try it and let us know.

@agunapal, I did. As I said above, I tried these two options:

  • Re-installing dependencies with python ./ts_scripts/install_dependencies.py --environment=prod --cuda=cu102
  • Installing the CUDA 10.2 Toolkit

But I observe the same result: Number of GPUs: 0 in the log and slow inference of more than 300 ms for the densenet_161 demo model.

Did I miss something?

agunapal commented 1 year ago

@khelkun What's your version of CUDA? Please note that the nightly version of torchserve uses PyTorch 2.0, which is built against CUDA 11.7, so you need CUDA 11.7. Also, the version passed to install_dependencies should be cu117.

khelkun commented 1 year ago

What's your version of CUDA? Please note that the nightly version of torchserve uses PyTorch 2.0, which is built against CUDA 11.7, so you need CUDA 11.7.

@agunapal CUDA Toolkit 10.2 and torchserve 0.7.1, so I'm not using the nightly version of torchserve.

However, it seems I have finally set things up correctly. The torchserve log was printing this:

2023-02-10T15:34:43,369 [INFO ] W-9000-coral_best_0.1-stdout MODEL_LOG - dynamo/inductor are not installed. 
2023-02-10T15:34:43,369 [INFO ] W-9000-coral_best_0.1-stdout MODEL_LOG -  For GPU please run pip3 install numpy --pre torch[dynamo] --force-reinstall --extra-index-url https://download.pytorch.org/whl/nightly/cu117 
2023-02-10T15:34:43,369 [INFO ] W-9000-coral_best_0.1-stdout MODEL_LOG -  for CPU please run pip3 install --pre torch --extra-index-url https://download.pytorch.org/whl/nightly/cpu

So I ran pip3 install numpy --pre torch[dynamo] --force-reinstall --extra-index-url https://download.pytorch.org/whl/nightly/cu117. This installed (among other packages) the torch 2.0.0.dev20230210+cu117 Python package.

I've also just installed the CUDA 11.7 Toolkit (which installs display driver 516.01). Please note that I have not re-installed the torchserve dependencies with cu117 yet. Now the torchserve log prints:

Number of GPUs: 1

The python serve/ts_scripts/print_env_info.py command output is now:

------------------------------------------------------------------------------------------
Environment headers
------------------------------------------------------------------------------------------
Torchserve branch:

torchserve==0.7.1
torch-model-archiver==0.7.1

Python version: 3.9 (64-bit runtime)
Python executable: C:\Users\3dverse\anaconda3\python.exe

Versions of relevant python libraries:
numpy==1.24.2
numpydoc==1.4.0
torch==2.0.0.dev20230210+cu117
torch-model-archiver==0.7.1
torchaudio==0.13.1
torchserve==0.7.1
torchtext==0.14.1
torchvision==0.14.1
torch==2.0.0.dev20230210+cu117
torchtext==0.14.1
torchvision==0.14.1
torchaudio==0.13.1

Java Version:

OS: Microsoft Windows Server 2019 Datacenter
GCC version: N/A
Clang version: N/A
CMake version: N/A

Is CUDA available: Yes
CUDA runtime version: 11.7.64
GPU models and configuration:
GPU 0: Tesla T4
Nvidia driver version: 516.01
cuDNN version: None

The inference output for "kitten_small.jpg" is now:

2023-02-14T10:08:05,017 [INFO ] W-9000-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 31
2023-02-14T10:08:05,031 [INFO ] W-9000-densenet161_1.0-stdout MODEL_METRICS - PredictionTime.Milliseconds:31.25|#ModelName:densenet161,Level:Model|#hostname:GPU-EU-West,requestID:2fa258a7-85fa-4d80-8fda-cf51ee79549a,timestamp:1676369285
2023-02-14T10:08:05,031 [INFO ] W-9000-densenet161_1.0 ACCESS_LOG - /127.0.0.1:52870 "PUT /predictions/densenet161 HTTP/1.1" 200 45
2023-02-14T10:08:05,034 [INFO ] W-9000-densenet161_1.0 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:GPU-EU-West,timestamp:1676368987
2023-02-14T10:08:05,034 [DEBUG] W-9000-densenet161_1.0 org.pytorch.serve.job.Job - Waiting time ns: 83200, Backend time ns: 45748100
2023-02-14T10:08:05,034 [INFO ] W-9000-densenet161_1.0 TS_METRICS - QueueTime.ms:0|#Level:Host|#hostname:GPU-EU-West,timestamp:1676369285
2023-02-14T10:08:05,034 [INFO ] W-9000-densenet161_1.0 TS_METRICS - WorkerThreadTime.ms:17|#Level:Host|#hostname:GPU-EU-West,timestamp:1676369285

So PredictionTime.Milliseconds:31.25 is way faster, which proves the GPU is being used, imho!
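As a rough cross-check (just a sketch, not how TorchServe measures anything), timing a single densenet161 forward pass on CPU and then on GPU in the same environment should show a similar gap; the dummy input below is only for illustration:

import time
import torch
from torchvision.models import densenet161

# Rough timing sketch (assumes torchvision is installed and a CUDA device
# is visible). Absolute numbers will vary; only the CPU/GPU gap matters.
model = densenet161(weights=None).eval()
x = torch.randn(1, 3, 224, 224)  # dummy 224x224 RGB input

with torch.no_grad():
    t0 = time.perf_counter()
    model(x)
    print(f"CPU forward: {(time.perf_counter() - t0) * 1000:.1f} ms")

    if torch.cuda.is_available():
        model_gpu, x_gpu = model.cuda(), x.cuda()
        model_gpu(x_gpu)                  # warm-up: first call pays CUDA init cost
        torch.cuda.synchronize()
        t0 = time.perf_counter()
        model_gpu(x_gpu)
        torch.cuda.synchronize()          # wait for the kernel before stopping the clock
        print(f"GPU forward: {(time.perf_counter() - t0) * 1000:.1f} ms")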

The last remaining issue is that the torchserve server prints this every minute:

2023-02-14T10:08:52,685 [WARN ] pool-3-thread-2 org.pytorch.serve.metrics.MetricCollector - Parse metrics failed: NumExpr defaulting to 4 threads.
2023-02-14T10:08:53,076 [ERROR] Thread-7 org.pytorch.serve.metrics.MetricCollector - Traceback (most recent call last):
  File "C:\Users\3dverse\anaconda3\Lib\site-packages\ts\metrics\metric_collector.py", line 27, in <module>
    system_metrics.collect_all(sys.modules['ts.metrics.system_metrics'], arguments.gpu)
  File "C:\Users\3dverse\anaconda3\lib\site-packages\ts\metrics\system_metrics.py", line 119, in collect_all
    value(num_of_gpu)
  File "C:\Users\3dverse\anaconda3\lib\site-packages\ts\metrics\system_metrics.py", line 71, in gpu_utilization
    info = nvgpu.gpu_info()
  File "C:\Users\3dverse\anaconda3\lib\site-packages\nvgpu\__init__.py", line 15, in gpu_info
    mem_used, mem_total = [int(m.strip().replace('MiB', '')) for m in
  File "C:\Users\3dverse\anaconda3\lib\site-packages\nvgpu\__init__.py", line 15, in <listcomp>
    mem_used, mem_total = [int(m.strip().replace('MiB', '')) for m in
ValueError: invalid literal for int() with base 10: '00000001:00:00.0 Off'

So I re-installed the torchserve dependencies with python ./ts_scripts/install_dependencies.py --environment=prod --cuda=cu117, but the exception remains. This error is not a big deal and may disappear if I properly re-install torchserve and all of its Python dependencies.
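For what it's worth, my reading of the traceback is that nvgpu scrapes the human-readable nvidia-smi table and, with this machine's Bus-Id format, ends up trying to parse the Bus-Id/Disp.A column as a memory value. A more robust way to get the same numbers (shown only as an illustration, not as a patch for ts/metrics or nvgpu) is nvidia-smi's machine-readable query mode:

import subprocess

# Sketch: query the GPUs via nvidia-smi's CSV output instead of scraping
# the table. --query-gpu / --format=csv are standard nvidia-smi options.
out = subprocess.check_output(
    [
        "nvidia-smi",
        "--query-gpu=index,name,memory.used,memory.total,utilization.gpu",
        "--format=csv,noheader,nounits",
    ],
    text=True,
)
for line in out.strip().splitlines():
    index, name, mem_used, mem_total, util = [f.strip() for f in line.split(",")]
    print(f"GPU {index} ({name}): {mem_used}/{mem_total} MiB used, {util}% util")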

My guess is that the issue was the NVIDIA display driver, which was old (451.82), even though it's the one recommended by the Azure documentation for the Standard NC4as T4 v3. However, the display driver available from the NVIDIA portal for CUDA 11.7 is 517.88, which is even more recent than the one installed by the CUDA 11.7 Toolkit installer (516.01).

N.B.: the display driver available from the NVIDIA portal for CUDA 10.2 is 443.66, which is older than the one I installed in the first place following the Azure recommendations (451.82).

Thanks for your help @agunapal.

agunapal commented 1 year ago

@khelkun Great. Glad it worked. I am also curious about how you installed TorchServe initially.

If possible, would you mind pasting the logs of the install dependencies script, python ./ts_scripts/install_dependencies.py --environment=prod --cuda=cu102, run in a fresh env?

The logs you first posted don't look right: it seemed you had PyTorch with cu117, which should not happen. I am trying to figure out if there is a bug.

khelkun commented 1 year ago

@agunapal I'll do a fresh install asap and get back to you.

I don't think there is a bug; I'm pretty sure I ran pip3 install numpy --pre torch[dynamo] --force-reinstall --extra-index-url before pasting the logs (sorry about that, but since the GPU was not detected I was messing things around).
Actually, serving the model does not work after this "dynamo/inductor" installation, because it complains about an incompatibility between the torch and torchvision versions. So I had to re-install torch==1.13.1, and the following log came back:

2023-02-10T15:34:43,369 [INFO ] W-9000-coral_best_0.1-stdout MODEL_LOG - dynamo/inductor are not installed. 
2023-02-10T15:34:43,369 [INFO ] W-9000-coral_best_0.1-stdout MODEL_LOG -  For GPU please run pip3 install numpy --pre torch[dynamo] --force-reinstall --extra-index-url https://download.pytorch.org/whl/nightly/cu117 
2023-02-10T15:34:43,369 [INFO ] W-9000-coral_best_0.1-stdout MODEL_LOG -  for CPU please run pip3 install --pre torch --extra-index-url https://download.pytorch.org/whl/nightly/cpu
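For anyone hitting the same torch/torchvision mismatch, a quick probe (just a sketch, independent of TorchServe) is to import both packages and run one op backed by torchvision's compiled extension, since a mismatched pair usually fails right there:

import torch
import torchvision
from torchvision.ops import nms

print("torch      :", torch.__version__)
print("torchvision:", torchvision.__version__)

# nms is backed by torchvision's compiled extension, so an ABI mismatch
# with the installed torch usually surfaces on this call.
boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0], [1.0, 1.0, 11.0, 11.0]])
scores = torch.tensor([0.9, 0.8])
print("kept indices:", nms(boxes, scores, iou_threshold=0.5))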