pytorch / serve

Serve, optimize and scale PyTorch models in production
https://pytorch.org/serve/
Apache License 2.0

TorchServe does not read correct default_response_timeout #1591

Open KilianMichiels opened 2 years ago

KilianMichiels commented 2 years ago

Context

I'm trying to increase the configuration parameter default_response_timeout to a value higher than 120 seconds, but it seems that neither the default_response_timeout parameter nor the environment variable TS_DEFAULT_RESPONSE_TIMEOUT is read correctly.

Your Environment

Expected Behavior

I set the parameter default_response_timeout in the config.properties file to a high value, and the worker does not time out before this value is reached.

Current Behavior

I set the parameter default_response_timeout in the config.properties file to any other value; the worker still times out after the default 120 seconds.

Possible Solution

I do not know enough about the internal workings of TorchServe to propose a fix at this time. This is as far as I got tracing where the variable is read: https://github.com/pytorch/serve/blob/30f83500b0850e26ec55581f48a9307b1986f9f9/frontend/server/src/main/java/org/pytorch/serve/util/ConfigManager.java#L62

Steps to Reproduce

  1. Get any model.mar file using the examples (so first git clone this repo).

    $ git clone https://github.com/pytorch/serve.git
    $ cd serve/
    $ wget https://download.pytorch.org/models/alexnet-owt-7be5be79.pth
    $ torch-model-archiver --model-name alexnet --version 1.0 --model-file ./examples/image_classifier/alexnet/model.py --serialized-file alexnet-owt-7be5be79.pth --handler image_classifier --extra-files ./examples/image_classifier/index_to_name.json
    $ mv alexnet.mar /tmp/
  2. Create a simple handler, dummy.py:

import logging
import time

def start(row, context):
    """
    Dummy function that sleeps for longer than 120 seconds.
    """
    time_to_sleep = 200  # Seconds
    logging.info("Going to sleep")
    for i in range(time_to_sleep):
        logging.info(f"Sleeping {i}")
        time.sleep(1)
    logging.info("Waking up!")
    return {}

def stop(row, context):
    """
    Another simple function
    """
    logging.info("Done!")
    return {}
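Before archiving, the handler above can be sanity-checked locally with time.sleep stubbed out so it completes instantly. This is only a sketch: the real handler runs inside the TorchServe worker, and the (row, context) arguments are simply passed through untouched here.

```python
import time
from unittest import mock

def start(row, context, time_to_sleep=3):
    """Scaled-down copy of the dummy handler's start() for a local check."""
    for i in range(time_to_sleep):
        time.sleep(1)
    return {}

# Stub out time.sleep so the check runs instantly instead of sleeping.
with mock.patch("time.sleep") as fake_sleep:
    result = start(None, None)

assert result == {}
assert fake_sleep.call_count == 3
```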
  3. Create a workflow specification file dummy.yaml that uses this dummy.py handler.
models:
  simple_model:
    url: alexnet.mar

dag:
  start:
    - stop
  4. Archive the workflow

    $ torch-workflow-archiver --workflow-name dummy --spec-file dummy.yaml --handler dummy.py --export-path /tmp/ -f
  5. Create the config.properties file

# Addresses
inference_address=http://0.0.0.0:8080
management_address=http://0.0.0.0:8081
metrics_address=http://0.0.0.0:8082

# Management
default_response_timeout=600 # In seconds - Note: This seems to be buggy
unregister_model_timeout=300 # In seconds
install_py_dep_per_model=true

# Metrics
enable_metrics_api=true

# Performance
default_workers_per_model=1
number_of_netty_threads=1
netty_client_threads=1
prefer_direct_buffer=true
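One thing worth checking in the config above: Java's Properties format only treats # as a comment at the start of a line, so an inline comment becomes part of the value. If TorchServe parses config.properties with those semantics and falls back to the default on a bad value (an assumption; I have not confirmed how ConfigManager handles parse failures), then default_response_timeout=600 # In seconds would not parse as an integer. A small Python sketch of that hypothetical failure mode:

```python
def parse_timeout(value: str, default: int = 120) -> int:
    """Lenient int parse that falls back to the default on failure.
    Hypothetical helper, shown only to illustrate the inline-comment pitfall."""
    try:
        return int(value.strip())
    except ValueError:
        return default

# java.util.Properties would hand the frontend the whole right-hand side:
inline = "600 # In seconds - Note: This seems to be buggy"
assert parse_timeout(inline) == 120   # inline comment breaks the parse
assert parse_timeout("600") == 600    # value on its own line is fine
```

Moving the comments onto their own lines would rule this out as a cause.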
  6. Start TorchServe
$ torchserve --start  --ncs --model-store /tmp/ --foreground --workflow-store /tmp/ --ts-config config.properties
  7. Register the workflow and observe the timeout at 120 seconds, instead of no timeout and the prediction completing after roughly 200 seconds.
    $ curl -X POST http://localhost:8081/workflows?url=tmp/dummy.war

Failure Logs [if any]

WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
2022-04-28T17:54:02,335 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager - Initializing plugins manager...
2022-04-28T17:54:02,387 [INFO ] main org.pytorch.serve.ModelServer - 
Torchserve version: 0.5.3
TS Home: XXX/lib/python3.8/site-packages
Current directory: XXX
Temp directory: /tmp
Number of GPUs: 1
Number of CPUs: 12
Max heap size: 3910 M
Python executable: XXX/bin/python
Config file: config.properties
Inference address: http://0.0.0.0:8080
Management address: http://0.0.0.0:8081
Metrics address: http://0.0.0.0:8082
Model Store: /tmp/
Initial Models: N/A
Log dir: logs
Metrics dir: logs
Netty threads: 1
Netty client threads: 1
Default workers per model: 1
Blacklist Regex: N/A
Maximum Response Size: 6553500
Maximum Request Size: 6553500
Limit Maximum Image Pixels: true
Prefer direct buffer: true
Allowed Urls: [file://.*|http(s)?://.*]
Custom python dependency for model allowed: true
Metrics report format: prometheus
Enable metrics API: true
Workflow Store: /tmp
Model config: N/A
2022-04-28T17:54:02,391 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager -  Loading snapshot serializer plugin...
2022-04-28T17:54:02,406 [INFO ] main org.pytorch.serve.ModelServer - Initialize Inference server with: EpollServerSocketChannel.
2022-04-28T17:54:02,446 [INFO ] main org.pytorch.serve.ModelServer - Inference API bind to: http://0.0.0.0:8080
2022-04-28T17:54:02,446 [INFO ] main org.pytorch.serve.ModelServer - Initialize Management server with: EpollServerSocketChannel.
2022-04-28T17:54:02,447 [INFO ] main org.pytorch.serve.ModelServer - Management API bind to: http://0.0.0.0:8081
2022-04-28T17:54:02,447 [INFO ] main org.pytorch.serve.ModelServer - Initialize Metrics server with: EpollServerSocketChannel.
2022-04-28T17:54:02,448 [INFO ] main org.pytorch.serve.ModelServer - Metrics API bind to: http://0.0.0.0:8082
Model server started.
2022-04-28T17:54:07,799 [DEBUG] pool-6-thread-2 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model dummy__start
2022-04-28T17:54:07,799 [DEBUG] pool-6-thread-1 org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model dummy__stop
2022-04-28T17:54:07,799 [DEBUG] pool-6-thread-2 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model dummy__start
2022-04-28T17:54:07,799 [DEBUG] pool-6-thread-1 org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model dummy__stop
2022-04-28T17:54:07,799 [INFO ] pool-6-thread-2 org.pytorch.serve.wlm.ModelManager - Model dummy__start loaded.
2022-04-28T17:54:07,799 [INFO ] pool-6-thread-1 org.pytorch.serve.wlm.ModelManager - Model dummy__stop loaded.
2022-04-28T17:54:07,800 [DEBUG] pool-6-thread-2 org.pytorch.serve.wlm.ModelManager - updateModel: dummy__start, count: 1
2022-04-28T17:54:07,800 [DEBUG] pool-6-thread-1 org.pytorch.serve.wlm.ModelManager - updateModel: dummy__stop, count: 1
2022-04-28T17:54:07,805 [DEBUG] W-9000-dummy__stop_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [XXX/bin/python, XXX/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9000]
2022-04-28T17:54:07,805 [DEBUG] W-9001-dummy__start_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [XXX/bin/python, XXX/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /tmp/.ts.sock.9001]
2022-04-28T17:54:08,382 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9001
2022-04-28T17:54:08,383 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - [PID]225001
2022-04-28T17:54:08,383 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Torch worker started.
2022-04-28T17:54:08,384 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Python runtime: 3.8.10
2022-04-28T17:54:08,384 [DEBUG] W-9001-dummy__start_1.0 org.pytorch.serve.wlm.WorkerThread - W-9001-dummy__start_1.0 State change null -> WORKER_STARTED
2022-04-28T17:54:08,386 [INFO ] W-9001-dummy__start_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9001
2022-04-28T17:54:08,391 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9001.
2022-04-28T17:54:08,393 [INFO ] W-9001-dummy__start_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1651161248393
2022-04-28T17:54:08,400 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - model_name: dummy__start, batchSize: 1
2022-04-28T17:54:08,401 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Going to sleep
2022-04-28T17:54:08,401 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 0
2022-04-28T17:54:08,415 [INFO ] W-9000-dummy__stop_1.0-stdout MODEL_LOG - Listening on port: /tmp/.ts.sock.9000
2022-04-28T17:54:08,415 [INFO ] W-9000-dummy__stop_1.0-stdout MODEL_LOG - [PID]225000
2022-04-28T17:54:08,416 [INFO ] W-9000-dummy__stop_1.0-stdout MODEL_LOG - Torch worker started.
2022-04-28T17:54:08,416 [DEBUG] W-9000-dummy__stop_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-dummy__stop_1.0 State change null -> WORKER_STARTED
2022-04-28T17:54:08,416 [INFO ] W-9000-dummy__stop_1.0-stdout MODEL_LOG - Python runtime: 3.8.10
2022-04-28T17:54:08,416 [INFO ] W-9000-dummy__stop_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /tmp/.ts.sock.9000
2022-04-28T17:54:08,417 [INFO ] W-9000-dummy__stop_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1651161248417
2022-04-28T17:54:08,417 [INFO ] W-9000-dummy__stop_1.0-stdout MODEL_LOG - Connection accepted: /tmp/.ts.sock.9000.
2022-04-28T17:54:08,418 [INFO ] W-9000-dummy__stop_1.0-stdout MODEL_LOG - model_name: dummy__stop, batchSize: 1
2022-04-28T17:54:08,418 [INFO ] W-9000-dummy__stop_1.0-stdout MODEL_LOG - Done!
2022-04-28T17:54:08,419 [INFO ] W-9000-dummy__stop_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2
2022-04-28T17:54:08,419 [DEBUG] W-9000-dummy__stop_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-dummy__stop_1.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED
2022-04-28T17:54:09,402 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 1
2022-04-28T17:54:10,403 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 2
2022-04-28T17:54:11,405 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 3
2022-04-28T17:54:12,406 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 4
2022-04-28T17:54:13,408 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 5
2022-04-28T17:54:14,410 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 6
2022-04-28T17:54:15,410 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 7
2022-04-28T17:54:16,411 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 8
2022-04-28T17:54:17,413 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 9
2022-04-28T17:54:18,414 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 10
2022-04-28T17:54:19,415 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 11
2022-04-28T17:54:20,417 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 12
2022-04-28T17:54:21,418 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 13
2022-04-28T17:54:22,420 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 14
2022-04-28T17:54:23,421 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 15
2022-04-28T17:54:24,422 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 16
2022-04-28T17:54:25,424 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 17
2022-04-28T17:54:26,425 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 18
2022-04-28T17:54:27,427 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 19
2022-04-28T17:54:28,428 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 20
2022-04-28T17:54:29,430 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 21
2022-04-28T17:54:30,431 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 22
2022-04-28T17:54:31,433 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 23
2022-04-28T17:54:32,434 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 24
2022-04-28T17:54:33,436 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 25
2022-04-28T17:54:34,437 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 26
2022-04-28T17:54:35,439 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 27
2022-04-28T17:54:36,440 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 28
2022-04-28T17:54:37,442 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 29
2022-04-28T17:54:38,444 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 30
2022-04-28T17:54:39,445 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 31
2022-04-28T17:54:40,447 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 32
2022-04-28T17:54:41,449 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 33
2022-04-28T17:54:42,450 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 34
2022-04-28T17:54:43,452 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 35
2022-04-28T17:54:44,454 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 36
2022-04-28T17:54:45,455 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 37
2022-04-28T17:54:46,457 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 38
2022-04-28T17:54:47,458 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 39
2022-04-28T17:54:48,460 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 40
2022-04-28T17:54:49,461 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 41
2022-04-28T17:54:50,463 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 42
2022-04-28T17:54:51,464 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 43
2022-04-28T17:54:52,465 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 44
2022-04-28T17:54:53,467 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 45
2022-04-28T17:54:54,468 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 46
2022-04-28T17:54:55,470 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 47
2022-04-28T17:54:56,471 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 48
2022-04-28T17:54:57,473 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 49
2022-04-28T17:54:58,475 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 50
2022-04-28T17:54:59,476 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 51
2022-04-28T17:55:00,478 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 52
2022-04-28T17:55:01,479 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 53
2022-04-28T17:55:02,480 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 54
2022-04-28T17:55:03,482 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 55
2022-04-28T17:55:04,483 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 56
2022-04-28T17:55:05,485 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 57
2022-04-28T17:55:06,486 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 58
2022-04-28T17:55:07,486 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 59
2022-04-28T17:55:08,487 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 60
2022-04-28T17:55:09,489 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 61
2022-04-28T17:55:10,490 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 62
2022-04-28T17:55:11,491 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 63
2022-04-28T17:55:12,492 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 64
2022-04-28T17:55:13,494 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 65
2022-04-28T17:55:14,494 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 66
2022-04-28T17:55:15,496 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 67
2022-04-28T17:55:16,497 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 68
2022-04-28T17:55:17,498 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 69
2022-04-28T17:55:18,500 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 70
2022-04-28T17:55:19,502 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 71
2022-04-28T17:55:20,503 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 72
2022-04-28T17:55:21,505 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 73
2022-04-28T17:55:22,506 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 74
2022-04-28T17:55:23,508 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 75
2022-04-28T17:55:24,508 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 76
2022-04-28T17:55:25,509 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 77
2022-04-28T17:55:26,510 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 78
2022-04-28T17:55:27,511 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 79
2022-04-28T17:55:28,513 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 80
2022-04-28T17:55:29,514 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 81
2022-04-28T17:55:30,516 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 82
2022-04-28T17:55:31,517 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 83
2022-04-28T17:55:32,518 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 84
2022-04-28T17:55:33,520 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 85
2022-04-28T17:55:34,521 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 86
2022-04-28T17:55:35,522 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 87
2022-04-28T17:55:36,524 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 88
2022-04-28T17:55:37,526 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 89
2022-04-28T17:55:38,527 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 90
2022-04-28T17:55:39,529 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 91
2022-04-28T17:55:40,530 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 92
2022-04-28T17:55:41,532 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 93
2022-04-28T17:55:42,534 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 94
2022-04-28T17:55:43,535 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 95
2022-04-28T17:55:44,536 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 96
2022-04-28T17:55:45,538 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 97
2022-04-28T17:55:46,540 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 98
2022-04-28T17:55:47,541 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 99
2022-04-28T17:55:48,543 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 100
2022-04-28T17:55:49,544 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 101
2022-04-28T17:55:50,546 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 102
2022-04-28T17:55:51,548 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 103
2022-04-28T17:55:52,549 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 104
2022-04-28T17:55:53,551 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 105
2022-04-28T17:55:54,552 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 106
2022-04-28T17:55:55,554 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 107
2022-04-28T17:55:56,555 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 108
2022-04-28T17:55:57,557 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 109
2022-04-28T17:55:58,558 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 110
2022-04-28T17:55:59,560 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 111
2022-04-28T17:56:00,562 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 112
2022-04-28T17:56:01,563 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 113
2022-04-28T17:56:02,565 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 114
2022-04-28T17:56:03,566 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 115
2022-04-28T17:56:04,568 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 116
2022-04-28T17:56:05,569 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 117
2022-04-28T17:56:06,570 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 118
2022-04-28T17:56:07,571 [INFO ] W-9001-dummy__start_1.0-stdout MODEL_LOG - Sleeping 119
2022-04-28T17:56:08,400 [INFO ] W-9001-dummy__start_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 120000
2022-04-28T17:56:08,400 [ERROR] W-9001-dummy__start_1.0 org.pytorch.serve.wlm.WorkerThread - Number or consecutive unsuccessful inference 1
2022-04-28T17:56:08,400 [ERROR] W-9001-dummy__start_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker error
org.pytorch.serve.wlm.WorkerInitializationException: Backend worker did not respond in given time
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:199) [model-server.jar:?]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
    at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:829) [?:?]
2022-04-28T17:56:08,408 [WARN ] W-9001-dummy__start_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: dummy__start, error: Worker died.
2022-04-28T17:56:08,408 [DEBUG] W-9001-dummy__start_1.0 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: dummy__start version: 1.0
2022-04-28T17:56:08,408 [DEBUG] W-9001-dummy__start_1.0 org.pytorch.serve.wlm.WorkerThread - W-9001-dummy__start_1.0 State change WORKER_STARTED -> WORKER_SCALED_DOWN
2022-04-28T17:56:08,408 [WARN ] W-9001-dummy__start_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9001-dummy__start_1.0-stderr
2022-04-28T17:56:08,408 [WARN ] W-9001-dummy__start_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9001-dummy__start_1.0-stdout
2022-04-28T17:56:08,408 [INFO ] epollEventLoopGroup-5-1 org.pytorch.serve.wlm.WorkerThread - 9001 Worker disconnected. WORKER_STARTED
2022-04-28T17:56:08,410 [WARN ] W-9001-dummy__start_1.0 org.pytorch.serve.wlm.ModelManager - Model dummy__start version 1.0 not found.
2022-04-28T17:56:08,410 [DEBUG] W-9001-dummy__start_1.0 org.pytorch.serve.wlm.WorkerThread - W-9001-dummy__start_1.0 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2022-04-28T17:56:08,410 [WARN ] W-9001-dummy__start_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9001-dummy__start_1.0-stderr
2022-04-28T17:56:08,410 [WARN ] W-9001-dummy__start_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9001-dummy__start_1.0-stdout
2022-04-28T17:56:08,410 [DEBUG] W-9001-dummy__start_1.0 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2022-04-28T17:56:08,411 [WARN ] pool-7-thread-2 org.pytorch.serve.wlm.ModelManager - Model dummy__start version 1.0 not found.
2022-04-28T17:56:08,411 [DEBUG] pool-7-thread-1 org.pytorch.serve.wlm.ModelVersionedRefs - Removed model: dummy__stop version: 1.0
2022-04-28T17:56:08,411 [DEBUG] pool-7-thread-1 org.pytorch.serve.wlm.WorkerThread - W-9000-dummy__stop_1.0 State change WORKER_MODEL_LOADED -> WORKER_SCALED_DOWN
2022-04-28T17:56:08,411 [WARN ] pool-7-thread-1 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-dummy__stop_1.0-stderr
2022-04-28T17:56:08,412 [WARN ] pool-7-thread-1 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-dummy__stop_1.0-stdout
2022-04-28T17:56:08,412 [INFO ] epollEventLoopGroup-5-1 org.pytorch.serve.wlm.WorkerThread - 9000 Worker disconnected. WORKER_SCALED_DOWN
2022-04-28T17:56:08,412 [INFO ] W-9000-dummy__stop_1.0-stdout MODEL_LOG - Frontend disconnected.
2022-04-28T17:56:08,412 [DEBUG] W-9000-dummy__stop_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_SCALED_DOWN
2022-04-28T17:56:08,412 [DEBUG] W-9000-dummy__stop_1.0 org.pytorch.serve.wlm.WorkerThread - Shutting down the thread .. Scaling down.
2022-04-28T17:56:08,412 [INFO ] W-9000-dummy__stop_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-dummy__stop_1.0-stdout
2022-04-28T17:56:08,412 [DEBUG] W-9000-dummy__stop_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-dummy__stop_1.0 State change WORKER_SCALED_DOWN -> WORKER_STOPPED
2022-04-28T17:56:08,413 [WARN ] W-9000-dummy__stop_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-dummy__stop_1.0-stderr
2022-04-28T17:56:08,413 [WARN ] W-9000-dummy__stop_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-dummy__stop_1.0-stdout
2022-04-28T17:56:08,413 [DEBUG] W-9000-dummy__stop_1.0 org.pytorch.serve.wlm.WorkerThread - Worker terminated due to scale-down call.
2022-04-28T17:56:08,421 [INFO ] W-9001-dummy__start_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9001-dummy__start_1.0-stdout
msaroufim commented 2 years ago

Per #1615

alexgaskell10 commented 2 years ago

I am also having this problem. My model sometimes loads correctly, sometimes doesn't. When it does load, the response time is around 119 s, so I need to increase this threshold to load the model reliably. When I set default_response_timeout and/or TS_DEFAULT_RESPONSE_TIMEOUT, nothing changes.

@msaroufim Is there a plan to fix this? This is a big issue for me. Thanks!

The logs below show an example of when model loading fails:

org.pytorch.serve.wlm.WorkerThread - Backend worker error
org.pytorch.serve.wlm.WorkerInitializationException: Backend worker did not respond in given time
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:199) ~[model-server.jar:?]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
    at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
    at java.lang.Thread.run(Thread.java:829) [?:?]
Backend worker error
org.pytorch.serve.wlm.WorkerInitializationException: Backend worker did not respond in given time
    at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:199) ~[model-server.jar:?]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
    at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
    at java.lang.Thread.run(Thread.java:829) [?:?]
KilianMichiels commented 2 years ago

Hi @alexgaskell10,

As this issue received the wontfix label, and I did not get the impression it was high priority, I started looking a bit more into the Java server code.

It turns out the relevant parameter setting the 120 s default can be found here:

public RegisterWorkflowRequest(QueryStringDecoder decoder) {
    workflowName = NettyUtils.getParameter(decoder, "workflow_name", null);
    responseTimeout = NettyUtils.getIntParameter(decoder, "response_timeout", 120);  // This is the one!
    workflowUrl = NettyUtils.getParameter(decoder, "url", null);
    s3SseKms = Boolean.parseBoolean(NettyUtils.getParameter(decoder, "s3_sse_kms", "false"));
}
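Since the constructor above reads a response_timeout query parameter, it may also be possible to override the 120 s default per workflow registration without rebuilding, by adding that parameter to the registration URL. This is an untested sketch; I have not verified that the management API honors the parameter end to end.

```python
import urllib.parse

# Build a registration URL carrying an explicit response_timeout override.
base = "http://localhost:8081/workflows"
params = {"url": "/tmp/dummy.war", "response_timeout": "600"}
register_url = base + "?" + urllib.parse.urlencode(params)

print(register_url)  # POST this with curl or requests instead of the plain ?url=... form
```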

So, as a temporary fix, you could replace this default with your own desired value and build TorchServe from source:

# Assuming you cloned the repo.
cd /serve/

# Replace the complete line using sed
sed -i 's/getIntParameter(decoder, "response_timeout", 120)/getIntParameter(decoder, "response_timeout", 300)/' frontend/server/src/main/java/org/pytorch/serve/workflow/messages/RegisterWorkflowRequest.java

# Build from source
python3.8 ts_scripts/install_dependencies.py --cuda=cu113 --environment=dev
python3.8 ts_scripts/install_from_src.py
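A pure-Python alternative to the sed one-liner, which also fails loudly if the line it expects has changed upstream (same file path as used above; run it from the serve/ checkout before the long build step):

```python
from pathlib import Path

OLD = 'getIntParameter(decoder, "response_timeout", 120)'
NEW = 'getIntParameter(decoder, "response_timeout", 300)'

def patch_default(source: str) -> str:
    """Swap the hard-coded 120 s default for 300 s, or raise if it is absent."""
    if OLD not in source:
        raise ValueError("expected default not found; the source may have changed")
    return source.replace(OLD, NEW)

path = Path("frontend/server/src/main/java/org/pytorch/serve/"
            "workflow/messages/RegisterWorkflowRequest.java")
if path.exists():  # only when run from inside a serve/ checkout
    path.write_text(patch_default(path.read_text()))
```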

Note: I ended up using v0.6.0 as the scripts in older versions seemed to fail for my application.

Hope this helps! Good luck!

alexgaskell10 commented 2 years ago

@Michielskilian ah excellent thanks, I'll do this!

Edit: To add to this, the default_response_timeout in config.properties does actually seem to be working fine for me.

KilianMichiels commented 2 years ago

Hi @alexgaskell10,

I tried the simple example I described above with v0.6.0, but the bug still occurs: the timeout remains at 120 s. I checked whether config.properties is read properly by changing the ports for the management and inference URLs, and it does pick up those changes.

Could you provide some more details on your setup so I can see what the differences are?

foabodo commented 1 year ago

@Michielskilian, I also experience this bug in v0.6.1.