pytorch / serve

Serve, optimize and scale PyTorch models in production
https://pytorch.org/serve/
Apache License 2.0

Custom pip package installation failed. #1104

Closed gautam-girotra closed 3 years ago

gautam-girotra commented 3 years ago

Please have a look at the FAQs and the Troubleshooting guide; your query may already be addressed.

Your issue may already be reported! Please search on the issue tracker before creating one.

Context

I am trying to install custom dependencies for my model. All the solutions I have found are Docker based, and I am not using Docker, which is why I am submitting a new issue.

Your Environment

* torchserve version: 0.4.0
* torch-model-archiver version: 0.4.0
* torch version: 1.7.1+cpu
* java version: 11.0.2
* Operating System and version: Windows 10 x64

Expected Behavior

Current Behavior

My custom handler uses the OpenCV library, so I used the --requirements-file parameter of torch-model-archiver to bundle my dependencies.
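For reference, an archive with a bundled requirements file is typically built with a command along these lines; the model and handler file names below are placeholders, since only check.mar and model_requirements.txt are named in this issue:

 torch-model-archiver --model-name check --version 1.0 --serialized-file model.pt --handler custom_handler.py --requirements-file model_requirements.txt --export-path model_store

The requirements file is packaged inside the .mar, and TorchServe only attempts to pip-install it at model registration time when install_py_dep_per_model=true is set, as in the config shown under Steps to Reproduce.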

Possible Solution

Steps to Reproduce

torchserve.config is:

 inference_address=http://0.0.0.0:8080
 default_workers_per_model=1
 install_py_dep_per_model=true

The requirements file is attached: model_requirements.txt

 torchserve --start --ncs --model-store model_store --models check.mar --ts-config ./torchserve.config

...

Failure Logs

(deploy) PS C:\WINDOWS\system32\serve>
2021-06-01 23:08:18,263 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager - Initializing plugins manager...
2021-06-01 23:08:18,599 [INFO ] main org.pytorch.serve.ModelServer - Torchserve version: 0.4.0
TS Home: C:\Users\gautam\anaconda3\envs\deploy\lib\site-packages
Current directory: C:\Windows\System32\serve
Temp directory: C:\Users\gautam\AppData\Local\Temp
Number of GPUs: 1
Number of CPUs: 8
Max heap size: 2014 M
Python executable: c:\users\gautam\anaconda3\envs\deploy\python.exe
Config file: ./torchserve.config
Inference address: http://0.0.0.0:8080
Management address: http://127.0.0.1:8081
Metrics address: http://127.0.0.1:8082
Model Store: C:\Windows\System32\serve\model_store
Initial Models: check.mar
Log dir: C:\Windows\System32\serve\logs
Metrics dir: C:\Windows\System32\serve\logs
Netty threads: 0
Netty client threads: 0
Default workers per model: 1
Blacklist Regex: N/A
Maximum Response Size: 6553500
Maximum Request Size: 6553500
Prefer direct buffer: false
Allowed Urls: [file://.|http(s)?://.]
Custom python dependency for model allowed: true
Metrics report format: prometheus
Enable metrics API: true
Workflow Store: C:\Windows\System32\serve\model_store
2021-06-01 23:08:18,610 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager - Loading snapshot serializer plugin...
2021-06-01 23:08:18,640 [INFO ] main org.pytorch.serve.ModelServer - Loading initial models: check.mar
2021-06-01 23:08:22,556 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model check
2021-06-01 23:08:23,617 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model check
2021-06-01 23:08:23,617 [INFO ] main org.pytorch.serve.wlm.ModelManager - Model check loaded.
2021-06-01 23:08:29,517 [WARN ] main org.pytorch.serve.ModelServer - Failed to load model: check.mar
org.pytorch.serve.archive.model.ModelException: Custom pip package installation failed for check
    at org.pytorch.serve.wlm.ModelManager.setupModelDependencies(ModelManager.java:224)
    at org.pytorch.serve.wlm.ModelManager.registerModel(ModelManager.java:150)
    at org.pytorch.serve.ModelServer.initModelStore(ModelServer.java:234)
    at org.pytorch.serve.ModelServer.startRESTserver(ModelServer.java:340)
    at org.pytorch.serve.ModelServer.startAndWait(ModelServer.java:117)
    at org.pytorch.serve.ModelServer.main(ModelServer.java:98)
2021-06-01 23:08:29,524 [INFO ] main org.pytorch.serve.ModelServer - Initialize Inference server with: NioServerSocketChannel.
2021-06-01 23:08:29,917 [INFO ] main org.pytorch.serve.ModelServer - Inference API bind to: http://0.0.0.0:8080
2021-06-01 23:08:29,917 [INFO ] main org.pytorch.serve.ModelServer - Initialize Management server with: NioServerSocketChannel.
2021-06-01 23:08:29,918 [INFO ] main org.pytorch.serve.ModelServer - Management API bind to: http://127.0.0.1:8081
2021-06-01 23:08:29,918 [INFO ] main org.pytorch.serve.ModelServer - Initialize Metrics server with: NioServerSocketChannel.
2021-06-01 23:08:29,919 [INFO ] main org.pytorch.serve.ModelServer - Metrics API bind to: http://127.0.0.1:8082
Model server started.
2021-06-01 23:08:30,580 [INFO ] pool-2-thread-1 TS_METRICS - CPUUtilization.Percent:0.0|#Level:Host|#hostname:DESKTOP-EFH8DJT,timestamp:1622569110
2021-06-01 23:08:30,582 [INFO ] pool-2-thread-1 TS_METRICS - DiskAvailable.Gigabytes:33.02763366699219|#Level:Host|#hostname:DESKTOP-EFH8DJT,timestamp:1622569110
2021-06-01 23:08:30,583 [INFO ] pool-2-thread-1 TS_METRICS - DiskUsage.Gigabytes:134.09443283081055|#Level:Host|#hostname:DESKTOP-EFH8DJT,timestamp:1622569110
2021-06-01 23:08:30,584 [INFO ] pool-2-thread-1 TS_METRICS - DiskUtilization.Percent:80.2|#Level:Host|#hostname:DESKTOP-EFH8DJT,timestamp:1622569110
2021-06-01 23:08:30,585 [INFO ] pool-2-thread-1 TS_METRICS - MemoryAvailable.Megabytes:2705.8984375|#Level:Host|#hostname:DESKTOP-EFH8DJT,timestamp:1622569110
2021-06-01 23:08:30,585 [INFO ] pool-2-thread-1 TS_METRICS - MemoryUsed.Megabytes:5346.5703125|#Level:Host|#hostname:DESKTOP-EFH8DJT,timestamp:1622569110
2021-06-01 23:08:30,586 [INFO ] pool-2-thread-1 TS_METRICS - MemoryUtilization.Percent:66.4|#Level:Host|#hostname:DESKTOP-EFH8DJT,timestamp:1622569110

maaquib commented 3 years ago

Same issue as #1086. Can you manually run the pip install command to verify that it works?

 c:\users\gautam\anaconda3\envs\deploy\python.exe -m pip install -U -t C:\Windows\System32\serve\model_store -r requirements.txt
gautam-girotra commented 3 years ago

The issue was with the requirements file.
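The corrected file itself is not posted in the thread. As a rough sketch, a requirements file for an OpenCV-based handler usually just lists pip package names that resolve for the serving environment's Python interpreter, for example:

 # hypothetical contents; the actual corrected model_requirements.txt was not shared
 opencv-python-headless

Typical causes of this kind of failure are entries pip cannot resolve, such as listing cv2 instead of opencv-python, or a pinned version with no wheel for the host Python; running the manual pip command suggested above prints the exact resolver error.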