Open mdeihim opened 1 year ago
@mdeihim have you tried running just the gender model? From the logs I see that the oracledb package is missing:
2022-12-27T18:10:03,599 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - import oracledb
2022-12-27T18:10:03,599 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - import oracledb
2022-12-27T18:10:03,599 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - import oracledb
2022-12-27T18:10:03,599 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - ModuleNotFoundError: No module named 'oracledb'
2022-12-27T18:10:03,599 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - ModuleNotFoundError: No module named 'oracledb'
2022-12-27T18:10:03,599 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - ModuleNotFoundError: No module named 'oracledb'
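If oracledb is a dependency of the gender model's handler, one way to make it available to the workers is to bundle a requirements.txt into the .mar at archive time. The file names, handler name, and paths below are assumptions for illustration, not the reporter's actual commands:

```shell
# Declare the handler's Python dependencies (assumed contents).
echo "oracledb" > requirements.txt

# torch-model-archiver's -r/--requirements-file flag bundles the file into
# the .mar so TorchServe can install it when the model is registered
# (this requires install_py_dep_per_model=true in config.properties).
torch-model-archiver \
  --model-name gender_model \
  --version 1.0 \
  --serialized-file gender_model.pt \
  --handler gender_handler.py \
  -r requirements.txt \
  --export-path model-store
```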
Yes, the gender model and all of its dependencies load fine, including oracledb. For some reason, when I serve multiple models, that's where the problems start.
Can you try adding install_py_dep_per_model=true to your config.properties?
That is already in the config.properties file
Just read your logs more carefully again: if you want to share a requirements.txt across models, then you should set install_py_dep_per_model=false.
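For reference, a sketch of where that toggle lives in config.properties (only this flag is taken from the thread; any other entries in the reporter's file are unknown):

```properties
# When true, TorchServe installs each model's bundled requirements.txt at
# registration time. Set to false if all models share one environment /
# one requirements set.
install_py_dep_per_model=false
```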
I see 2 meaningful errors in your logs:
2022-12-27T18:10:03,599 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - ModuleNotFoundError: No module named 'oracledb'
ModuleNotFoundError: No module named 'ts.torch_handler.Gender_Handler'
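The second error comes from the handler name being resolved as a dotted Python module path. A minimal sketch of that resolution step (loosely modeled on what a model loader does; the function name here is illustrative, not TorchServe's actual API):

```python
import importlib


def resolve_handler(handler_name):
    """Try to import a handler given as a dotted module path.

    TorchServe's built-in handlers live under ts.torch_handler, so a name
    like ts.torch_handler.Gender_Handler only resolves if such a module
    actually exists inside the installed ts package.
    """
    try:
        return importlib.import_module(handler_name)
    except ModuleNotFoundError:
        # TorchServe surfaces this as the ModuleNotFoundError quoted above;
        # a custom handler is normally passed as a file
        # (--handler gender_handler.py), not a ts.torch_handler.* name.
        return None


# A real module resolves; a made-up handler path does not.
assert resolve_handler("json") is not None
assert resolve_handler("ts.torch_handler.Gender_Handler") is None
```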
My suspicion is that the problem is the second one: unless you rebuilt TorchServe from source, that handler import is expected to fail. Regardless, this might be easiest to solve if you can share a minimal repro as a Docker image.
🐛 Describe the bug
I have 2 model archive files in my model store, gender_model.mar and age_model.mar. Each one of these works for inference individually with torchserve. Individually, I start torchserve and register one model like this.
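The exact command wasn't captured above; a typical single-model invocation, with paths and model names assumed from the logs later in this report, looks like:

```shell
torchserve --start \
  --model-store Image-Based-Market-Segmentation/model-store \
  --ts-config Image-Based-Market-Segmentation/model-store/config.properties \
  --models gender_model=gender_model.mar
```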
I get correct outputs from input images using this for both models.
When I try to serve the models simultaneously, I get errors loading them. A couple of ways I've tried to run this are:
and:
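The two invocations weren't captured in this report either; the usual options are registering both models at startup, or starting empty and registering through the management API (model names and paths assumed from the logs):

```shell
# Option 1: register both models at startup.
torchserve --start \
  --model-store Image-Based-Market-Segmentation/model-store \
  --models age_model=age_model.mar,gender_model=gender_model.mar

# Option 2: start with no models, then register each one through the
# management API on port 8081.
torchserve --start --model-store Image-Based-Market-Segmentation/model-store
curl -X POST "http://localhost:8081/models?url=age_model.mar&initial_workers=1"
curl -X POST "http://localhost:8081/models?url=gender_model.mar&initial_workers=1"
```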
Error logs
model-server@b010c8d0c57c:~$ WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance. 2022-12-27T18:06:25,100 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager - Initializing plugins manager... 2022-12-27T18:06:25,180 [INFO ] main org.pytorch.serve.ModelServer - Torchserve version: 0.6.0 TS Home: /home/venv/lib/python3.8/site-packages Current directory: /home/model-server Temp directory: /home/model-server/tmp Number of GPUs: 0 Number of CPUs: 8 Max heap size: 498 M Python executable: /home/venv/bin/python Config file: Image-Based-Market-Segmentation/model-store/config.properties Inference address: http://0.0.0.0:8080 Management address: http://0.0.0.0:8081 Metrics address: http://0.0.0.0:8082 Model Store: /home/model-server/Image-Based-Market-Segmentation/model-store Initial Models: age_model=Image-Based-Market-Segmentation/model-store/age_model.mar,gender_model=Image-Based-Market-Segmentation/model-store/gender_model.mar Log dir: /home/model-server/logs Metrics dir: /home/model-server/logs Netty threads: 32 Netty client threads: 0 Default workers per model: 8 Blacklist Regex: N/A Maximum Response Size: 6553500 Maximum Request Size: 6553500 Limit Maximum Image Pixels: true Prefer direct buffer: false Allowed Urls: [file://.|http(s)?://.] Custom python dependency for model allowed: true Metrics report format: prometheus Enable metrics API: true Workflow Store: /home/model-server/Image-Based-Market-Segmentation/model-store Model config: N/A 2022-12-27T18:06:25,190 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager - Loading snapshot serializer plugin... 
2022-12-27T18:06:25,214 [INFO ] main org.pytorch.serve.ModelServer - Loading initial models: Image-Based-Market-Segmentation/model-store/age_model.mar 2022-12-27T18:06:26,323 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 2.0 for model age_model 2022-12-27T18:06:26,323 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 2.0 for model age_model model-server@b010c8d0c57c:~$ 2022-12-27T18:07:30,090 [INFO ] main org.pytorch.serve.wlm.ModelManager - Model age_model loaded. 2022-12-27T18:07:30,147 [DEBUG] main org.pytorch.serve.wlm.ModelManager - updateModel: age_model, count: 8 2022-12-27T18:07:30,477 [INFO ] main org.pytorch.serve.ModelServer - Loading initial models: Image-Based-Market-Segmentation/model-store/gender_model.mar 2022-12-27T18:07:30,487 [DEBUG] W-9001-age_model_2.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/home/venv/bin/python, /home/venv/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /home/model-server/tmp/.ts.sock.9001] 2022-12-27T18:07:30,487 [DEBUG] W-9006-age_model_2.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/home/venv/bin/python, /home/venv/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /home/model-server/tmp/.ts.sock.9006] 2022-12-27T18:07:30,487 [DEBUG] W-9002-age_model_2.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/home/venv/bin/python, /home/venv/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /home/model-server/tmp/.ts.sock.9002] 2022-12-27T18:07:30,487 [DEBUG] W-9000-age_model_2.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/home/venv/bin/python, /home/venv/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /home/model-server/tmp/.ts.sock.9000] 2022-12-27T18:07:30,487 [DEBUG] W-9005-age_model_2.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: 
[/home/venv/bin/python, /home/venv/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /home/model-server/tmp/.ts.sock.9005] 2022-12-27T18:07:30,487 [DEBUG] W-9007-age_model_2.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/home/venv/bin/python, /home/venv/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /home/model-server/tmp/.ts.sock.9007] 2022-12-27T18:07:30,487 [DEBUG] W-9004-age_model_2.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/home/venv/bin/python, /home/venv/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /home/model-server/tmp/.ts.sock.9004] 2022-12-27T18:07:30,487 [DEBUG] W-9003-age_model_2.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/home/venv/bin/python, /home/venv/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /home/model-server/tmp/.ts.sock.9003] 2022-12-27T18:07:31,693 [INFO ] W-9002-age_model_2.0-stdout MODEL_LOG - Listening on port: /home/model-server/tmp/.ts.sock.9002 2022-12-27T18:07:31,696 [INFO ] W-9002-age_model_2.0-stdout MODEL_LOG - [PID]74 2022-12-27T18:07:31,696 [INFO ] W-9002-age_model_2.0-stdout MODEL_LOG - Torch worker started. 2022-12-27T18:07:31,696 [INFO ] W-9002-age_model_2.0-stdout MODEL_LOG - Python runtime: 3.8.0 2022-12-27T18:07:31,703 [DEBUG] W-9002-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - W-9002-age_model_2.0 State change null -> WORKER_STARTED 2022-12-27T18:07:31,732 [INFO ] W-9000-age_model_2.0-stdout MODEL_LOG - Listening on port: /home/model-server/tmp/.ts.sock.9000 2022-12-27T18:07:31,739 [INFO ] W-9007-age_model_2.0-stdout MODEL_LOG - Listening on port: /home/model-server/tmp/.ts.sock.9007 2022-12-27T18:07:31,741 [INFO ] W-9000-age_model_2.0-stdout MODEL_LOG - [PID]68 2022-12-27T18:07:31,741 [INFO ] W-9000-age_model_2.0-stdout MODEL_LOG - Torch worker started. 
2022-12-27T18:07:31,742 [INFO ] W-9007-age_model_2.0-stdout MODEL_LOG - [PID]73 2022-12-27T18:07:31,742 [INFO ] W-9000-age_model_2.0-stdout MODEL_LOG - Python runtime: 3.8.0 2022-12-27T18:07:31,741 [DEBUG] W-9000-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - W-9000-age_model_2.0 State change null -> WORKER_STARTED 2022-12-27T18:07:31,744 [DEBUG] W-9007-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - W-9007-age_model_2.0 State change null -> WORKER_STARTED 2022-12-27T18:07:31,744 [INFO ] W-9007-age_model_2.0-stdout MODEL_LOG - Torch worker started. 2022-12-27T18:07:31,744 [INFO ] W-9007-age_model_2.0-stdout MODEL_LOG - Python runtime: 3.8.0 2022-12-27T18:07:31,745 [INFO ] W-9006-age_model_2.0-stdout MODEL_LOG - Listening on port: /home/model-server/tmp/.ts.sock.9006 2022-12-27T18:07:31,748 [INFO ] W-9006-age_model_2.0-stdout MODEL_LOG - [PID]70 2022-12-27T18:07:31,748 [INFO ] W-9006-age_model_2.0-stdout MODEL_LOG - Torch worker started. 2022-12-27T18:07:31,748 [DEBUG] W-9006-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - W-9006-age_model_2.0 State change null -> WORKER_STARTED 2022-12-27T18:07:31,748 [INFO ] W-9006-age_model_2.0-stdout MODEL_LOG - Python runtime: 3.8.0 2022-12-27T18:07:31,751 [INFO ] W-9004-age_model_2.0-stdout MODEL_LOG - Listening on port: /home/model-server/tmp/.ts.sock.9004 2022-12-27T18:07:31,754 [INFO ] W-9004-age_model_2.0-stdout MODEL_LOG - [PID]67 2022-12-27T18:07:31,754 [INFO ] W-9004-age_model_2.0-stdout MODEL_LOG - Torch worker started. 
2022-12-27T18:07:31,755 [INFO ] W-9004-age_model_2.0-stdout MODEL_LOG - Python runtime: 3.8.0 2022-12-27T18:07:31,755 [DEBUG] W-9004-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - W-9004-age_model_2.0 State change null -> WORKER_STARTED 2022-12-27T18:07:31,765 [INFO ] W-9005-age_model_2.0-stdout MODEL_LOG - Listening on port: /home/model-server/tmp/.ts.sock.9005 2022-12-27T18:07:31,768 [INFO ] W-9005-age_model_2.0-stdout MODEL_LOG - [PID]71 2022-12-27T18:07:31,768 [INFO ] W-9005-age_model_2.0-stdout MODEL_LOG - Torch worker started. 2022-12-27T18:07:31,768 [DEBUG] W-9005-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - W-9005-age_model_2.0 State change null -> WORKER_STARTED 2022-12-27T18:07:31,768 [INFO ] W-9005-age_model_2.0-stdout MODEL_LOG - Python runtime: 3.8.0 2022-12-27T18:07:31,785 [INFO ] W-9000-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.ts.sock.9000 2022-12-27T18:07:31,785 [INFO ] W-9007-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.ts.sock.9007 2022-12-27T18:07:31,785 [INFO ] W-9004-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.ts.sock.9004 2022-12-27T18:07:31,785 [INFO ] W-9006-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.ts.sock.9006 2022-12-27T18:07:31,785 [INFO ] W-9005-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.ts.sock.9005 2022-12-27T18:07:31,786 [INFO ] W-9002-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.ts.sock.9002 2022-12-27T18:07:31,793 [INFO ] W-9003-age_model_2.0-stdout MODEL_LOG - Listening on port: /home/model-server/tmp/.ts.sock.9003 2022-12-27T18:07:31,796 [INFO ] W-9003-age_model_2.0-stdout MODEL_LOG - [PID]69 2022-12-27T18:07:31,797 [INFO ] W-9003-age_model_2.0-stdout MODEL_LOG - Torch worker started. 
2022-12-27T18:07:31,797 [DEBUG] W-9003-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - W-9003-age_model_2.0 State change null -> WORKER_STARTED 2022-12-27T18:07:31,797 [INFO ] W-9003-age_model_2.0-stdout MODEL_LOG - Python runtime: 3.8.0 2022-12-27T18:07:31,797 [INFO ] W-9003-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.ts.sock.9003 2022-12-27T18:07:31,797 [INFO ] W-9001-age_model_2.0-stdout MODEL_LOG - Listening on port: /home/model-server/tmp/.ts.sock.9001 2022-12-27T18:07:31,801 [INFO ] W-9001-age_model_2.0-stdout MODEL_LOG - [PID]72 2022-12-27T18:07:31,801 [INFO ] W-9001-age_model_2.0-stdout MODEL_LOG - Torch worker started. 2022-12-27T18:07:31,801 [DEBUG] W-9001-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - W-9001-age_model_2.0 State change null -> WORKER_STARTED 2022-12-27T18:07:31,801 [INFO ] W-9001-age_model_2.0-stdout MODEL_LOG - Python runtime: 3.8.0 2022-12-27T18:07:31,801 [INFO ] W-9001-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.ts.sock.9001 2022-12-27T18:07:32,044 [INFO ] W-9004-age_model_2.0-stdout MODEL_LOG - Connection accepted: /home/model-server/tmp/.ts.sock.9004. 2022-12-27T18:07:32,044 [INFO ] W-9007-age_model_2.0-stdout MODEL_LOG - Connection accepted: /home/model-server/tmp/.ts.sock.9007. 2022-12-27T18:07:32,044 [INFO ] W-9005-age_model_2.0-stdout MODEL_LOG - Connection accepted: /home/model-server/tmp/.ts.sock.9005. 2022-12-27T18:07:32,044 [INFO ] W-9001-age_model_2.0-stdout MODEL_LOG - Connection accepted: /home/model-server/tmp/.ts.sock.9001. 2022-12-27T18:07:32,044 [INFO ] W-9006-age_model_2.0-stdout MODEL_LOG - Connection accepted: /home/model-server/tmp/.ts.sock.9006. 2022-12-27T18:07:32,044 [INFO ] W-9003-age_model_2.0-stdout MODEL_LOG - Connection accepted: /home/model-server/tmp/.ts.sock.9003. 2022-12-27T18:07:32,044 [INFO ] W-9002-age_model_2.0-stdout MODEL_LOG - Connection accepted: /home/model-server/tmp/.ts.sock.9002. 
2022-12-27T18:07:32,044 [INFO ] W-9000-age_model_2.0-stdout MODEL_LOG - Connection accepted: /home/model-server/tmp/.ts.sock.9000. 2022-12-27T18:07:32,055 [INFO ] W-9004-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1672164452055 2022-12-27T18:07:32,055 [INFO ] W-9005-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1672164452055 2022-12-27T18:07:32,055 [INFO ] W-9000-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1672164452055 2022-12-27T18:07:32,055 [INFO ] W-9001-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1672164452055 2022-12-27T18:07:32,055 [INFO ] W-9007-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1672164452055 2022-12-27T18:07:32,055 [INFO ] W-9002-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1672164452055 2022-12-27T18:07:32,055 [INFO ] W-9003-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1672164452055 2022-12-27T18:07:32,055 [INFO ] W-9006-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. 
to backend at: 1672164452055 2022-12-27T18:07:32,181 [INFO ] W-9005-age_model_2.0-stdout MODEL_LOG - model_name: age_model, batchSize: 1 2022-12-27T18:07:32,181 [INFO ] W-9006-age_model_2.0-stdout MODEL_LOG - model_name: age_model, batchSize: 1 2022-12-27T18:07:32,181 [INFO ] W-9000-age_model_2.0-stdout MODEL_LOG - model_name: age_model, batchSize: 1 2022-12-27T18:07:32,181 [INFO ] W-9003-age_model_2.0-stdout MODEL_LOG - model_name: age_model, batchSize: 1 2022-12-27T18:07:32,181 [INFO ] W-9002-age_model_2.0-stdout MODEL_LOG - model_name: age_model, batchSize: 1 2022-12-27T18:07:32,181 [INFO ] W-9007-age_model_2.0-stdout MODEL_LOG - model_name: age_model, batchSize: 1 2022-12-27T18:07:32,181 [INFO ] W-9001-age_model_2.0-stdout MODEL_LOG - model_name: age_model, batchSize: 1 2022-12-27T18:07:32,182 [INFO ] W-9004-age_model_2.0-stdout MODEL_LOG - model_name: age_model, batchSize: 1 2022-12-27T18:07:32,579 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model gender_model 2022-12-27T18:07:32,580 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model gender_model 2022-12-27T18:07:32,801 [INFO ] W-9001-age_model_2.0-stdout MODEL_LOG - generated new fontManager 2022-12-27T18:07:32,805 [INFO ] W-9006-age_model_2.0-stdout MODEL_LOG - generated new fontManager 2022-12-27T18:07:32,809 [INFO ] W-9000-age_model_2.0-stdout MODEL_LOG - generated new fontManager 2022-12-27T18:07:32,814 [INFO ] W-9004-age_model_2.0-stdout MODEL_LOG - generated new fontManager 2022-12-27T18:07:32,901 [INFO ] W-9003-age_model_2.0-stdout MODEL_LOG - generated new fontManager 2022-12-27T18:07:32,908 [INFO ] W-9002-age_model_2.0-stdout MODEL_LOG - generated new fontManager 2022-12-27T18:07:32,911 [INFO ] W-9005-age_model_2.0-stdout MODEL_LOG - generated new fontManager 2022-12-27T18:07:33,004 [INFO ] W-9007-age_model_2.0-stdout MODEL_LOG - generated new fontManager 2022-12-27T18:07:34,586 [INFO ] main 
org.pytorch.serve.wlm.ModelManager - Dependency installation stdout: Collecting torch 2022-12-27T18:07:34,604 [ERROR] main org.pytorch.serve.wlm.ModelManager - Dependency installation stderr:
2022-12-27T18:07:34,628 [INFO ] W-9005-age_model_2.0-stdout MODEL_LOG - Missing the index_to_name.json file. Inference output will not include class name. 2022-12-27T18:07:34,628 [INFO ] W-9000-age_model_2.0-stdout MODEL_LOG - Missing the index_to_name.json file. Inference output will not include class name. 2022-12-27T18:07:34,628 [INFO ] W-9001-age_model_2.0-stdout MODEL_LOG - Missing the index_to_name.json file. Inference output will not include class name. 2022-12-27T18:07:34,628 [INFO ] W-9003-age_model_2.0-stdout MODEL_LOG - Missing the index_to_name.json file. Inference output will not include class name. 2022-12-27T18:07:34,628 [INFO ] W-9002-age_model_2.0-stdout MODEL_LOG - Missing the index_to_name.json file. Inference output will not include class name. 2022-12-27T18:07:34,665 [INFO ] W-9004-age_model_2.0-stdout MODEL_LOG - Missing the index_to_name.json file. Inference output will not include class name. 2022-12-27T18:07:34,675 [INFO ] W-9006-age_model_2.0-stdout MODEL_LOG - Missing the index_to_name.json file. Inference output will not include class name. 
2022-12-27T18:07:34,792 [INFO ] W-9003-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2608 2022-12-27T18:07:34,792 [INFO ] W-9000-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2607 2022-12-27T18:07:34,792 [INFO ] W-9005-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2607 2022-12-27T18:07:34,792 [INFO ] W-9004-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2607 2022-12-27T18:07:34,792 [INFO ] W-9001-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2605 2022-12-27T18:07:34,792 [INFO ] W-9002-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2607 2022-12-27T18:07:34,792 [INFO ] W-9006-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2606 2022-12-27T18:07:34,797 [DEBUG] W-9000-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - W-9000-age_model_2.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED 2022-12-27T18:07:34,797 [DEBUG] W-9003-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - W-9003-age_model_2.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED 2022-12-27T18:07:34,797 [DEBUG] W-9002-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - W-9002-age_model_2.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED 2022-12-27T18:07:34,797 [DEBUG] W-9004-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - W-9004-age_model_2.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED 2022-12-27T18:07:34,797 [DEBUG] W-9005-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - W-9005-age_model_2.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED 2022-12-27T18:07:34,797 [DEBUG] W-9006-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - W-9006-age_model_2.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED 2022-12-27T18:07:34,797 [DEBUG] W-9001-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - W-9001-age_model_2.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED 2022-12-27T18:07:34,614 [WARN ] main 
org.pytorch.serve.ModelServer - Failed to load model: Image-Based-Market-Segmentation/model-store/gender_model.mar org.pytorch.serve.archive.model.ModelException: Custom pip package installation failed for gender_model at org.pytorch.serve.wlm.ModelManager.setupModelDependencies(ModelManager.java:256) ~[model-server.jar:?] at org.pytorch.serve.wlm.ModelManager.registerModel(ModelManager.java:150) ~[model-server.jar:?] at org.pytorch.serve.ModelServer.initModelStore(ModelServer.java:242) [model-server.jar:?] at org.pytorch.serve.ModelServer.startRESTserver(ModelServer.java:356) [model-server.jar:?] at org.pytorch.serve.ModelServer.startAndWait(ModelServer.java:117) [model-server.jar:?] at org.pytorch.serve.ModelServer.main(ModelServer.java:98) [model-server.jar:?] 2022-12-27T18:07:34,812 [INFO ] main org.pytorch.serve.ModelServer - Initialize Inference server with: EpollServerSocketChannel. 2022-12-27T18:07:34,799 [INFO ] W-9001-age_model_2.0 TS_METRICS - W-9001-age_model_2.0.ms:4340|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164454 2022-12-27T18:07:34,800 [INFO ] W-9000-age_model_2.0 TS_METRICS - W-9000-age_model_2.0.ms:4385|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164454 2022-12-27T18:07:34,799 [INFO ] W-9004-age_model_2.0 TS_METRICS - W-9004-age_model_2.0.ms:4332|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164454 2022-12-27T18:07:34,799 [INFO ] W-9006-age_model_2.0 TS_METRICS - W-9006-age_model_2.0.ms:4326|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164454 2022-12-27T18:07:34,813 [INFO ] W-9004-age_model_2.0 TS_METRICS - WorkerThreadTime.ms:151|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164454 2022-12-27T18:07:34,799 [INFO ] W-9002-age_model_2.0 TS_METRICS - W-9002-age_model_2.0.ms:4339|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164454 2022-12-27T18:07:34,799 [INFO ] W-9005-age_model_2.0 TS_METRICS - W-9005-age_model_2.0.ms:4327|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164454 2022-12-27T18:07:34,813 [INFO ] 
W-9001-age_model_2.0 TS_METRICS - WorkerThreadTime.ms:153|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164454 2022-12-27T18:07:34,813 [INFO ] W-9002-age_model_2.0 TS_METRICS - WorkerThreadTime.ms:151|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164454 2022-12-27T18:07:34,813 [INFO ] W-9000-age_model_2.0 TS_METRICS - WorkerThreadTime.ms:151|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164454 2022-12-27T18:07:34,799 [INFO ] W-9003-age_model_2.0 TS_METRICS - W-9003-age_model_2.0.ms:4337|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164454 2022-12-27T18:07:34,814 [INFO ] W-9003-age_model_2.0 TS_METRICS - WorkerThreadTime.ms:151|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164454 2022-12-27T18:07:34,814 [INFO ] W-9005-age_model_2.0 TS_METRICS - WorkerThreadTime.ms:152|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164454 2022-12-27T18:07:34,813 [INFO ] W-9006-age_model_2.0 TS_METRICS - WorkerThreadTime.ms:152|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164454 2022-12-27T18:07:34,944 [INFO ] main org.pytorch.serve.ModelServer - Inference API bind to: http://0.0.0.0:8080 2022-12-27T18:07:34,944 [INFO ] main org.pytorch.serve.ModelServer - Initialize Management server with: EpollServerSocketChannel. 2022-12-27T18:07:34,952 [INFO ] main org.pytorch.serve.ModelServer - Management API bind to: http://0.0.0.0:8081 2022-12-27T18:07:34,952 [INFO ] main org.pytorch.serve.ModelServer - Initialize Metrics server with: EpollServerSocketChannel. 2022-12-27T18:07:34,953 [INFO ] main org.pytorch.serve.ModelServer - Metrics API bind to: http://0.0.0.0:8082 2022-12-27T18:07:35,031 [INFO ] W-9007-age_model_2.0-stdout MODEL_LOG - Missing the index_to_name.json file. Inference output will not include class name. 
2022-12-27T18:07:35,039 [INFO ] W-9007-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 2854 2022-12-27T18:07:35,039 [DEBUG] W-9007-age_model_2.0 org.pytorch.serve.wlm.WorkerThread - W-9007-age_model_2.0 State change WORKER_STARTED -> WORKER_MODEL_LOADED 2022-12-27T18:07:35,040 [INFO ] W-9007-age_model_2.0 TS_METRICS - W-9007-age_model_2.0.ms:4567|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164455 2022-12-27T18:07:35,041 [INFO ] W-9007-age_model_2.0 TS_METRICS - WorkerThreadTime.ms:132|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164455 Model server started. 2022-12-27T18:07:35,618 [INFO ] pool-3-thread-1 TS_METRICS - CPUUtilization.Percent:0.0|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164455 2022-12-27T18:07:35,619 [INFO ] pool-3-thread-1 TS_METRICS - DiskAvailable.Gigabytes:51.24575424194336|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164455 2022-12-27T18:07:35,619 [INFO ] pool-3-thread-1 TS_METRICS - DiskUsage.Gigabytes:8.283340454101562|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164455 2022-12-27T18:07:35,619 [INFO ] pool-3-thread-1 TS_METRICS - DiskUtilization.Percent:13.9|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164455 2022-12-27T18:07:35,619 [INFO ] pool-3-thread-1 TS_METRICS - MemoryAvailable.Megabytes:229.06640625|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164455 2022-12-27T18:07:35,620 [INFO ] pool-3-thread-1 TS_METRICS - MemoryUsed.Megabytes:1610.52734375|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164455 2022-12-27T18:07:35,620 [INFO ] pool-3-thread-1 TS_METRICS - MemoryUtilization.Percent:88.5|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164455 2022-12-27T18:08:35,573 [INFO ] pool-3-thread-1 TS_METRICS - CPUUtilization.Percent:0.0|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164515 2022-12-27T18:08:35,573 [INFO ] pool-3-thread-1 TS_METRICS - DiskAvailable.Gigabytes:51.245750427246094|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164515 2022-12-27T18:08:35,573 [INFO ] 
pool-3-thread-1 TS_METRICS - DiskUsage.Gigabytes:8.283344268798828|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164515 2022-12-27T18:08:35,573 [INFO ] pool-3-thread-1 TS_METRICS - DiskUtilization.Percent:13.9|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164515 2022-12-27T18:08:35,573 [INFO ] pool-3-thread-1 TS_METRICS - MemoryAvailable.Megabytes:208.01171875|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164515 2022-12-27T18:08:35,574 [INFO ] pool-3-thread-1 TS_METRICS - MemoryUsed.Megabytes:1632.59375|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164515 2022-12-27T18:08:35,574 [INFO ] pool-3-thread-1 TS_METRICS - MemoryUtilization.Percent:89.5|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164515 2022-12-27T18:09:03,801 [INFO ] epollEventLoopGroup-3-1 ACCESS_LOG - /172.17.0.1:62954 "GET /models/age_model HTTP/1.1" 200 6 2022-12-27T18:09:03,801 [INFO ] epollEventLoopGroup-3-1 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164543 2022-12-27T18:09:21,318 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - /172.17.0.1:62956 "GET /models HTTP/1.1" 200 1 2022-12-27T18:09:21,319 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164543 2022-12-27T18:09:35,570 [INFO ] pool-3-thread-1 TS_METRICS - CPUUtilization.Percent:0.0|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164575 2022-12-27T18:09:35,570 [INFO ] pool-3-thread-1 TS_METRICS - DiskAvailable.Gigabytes:51.24574279785156|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164575 2022-12-27T18:09:35,570 [INFO ] pool-3-thread-1 TS_METRICS - DiskUsage.Gigabytes:8.28335189819336|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164575 2022-12-27T18:09:35,570 [INFO ] pool-3-thread-1 TS_METRICS - DiskUtilization.Percent:13.9|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164575 2022-12-27T18:09:35,571 [INFO ] pool-3-thread-1 TS_METRICS - 
MemoryAvailable.Megabytes:209.3203125|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164575 2022-12-27T18:09:35,571 [INFO ] pool-3-thread-1 TS_METRICS - MemoryUsed.Megabytes:1631.31640625|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164575 2022-12-27T18:09:35,571 [INFO ] pool-3-thread-1 TS_METRICS - MemoryUtilization.Percent:89.5|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164575 2022-12-27T18:09:37,735 [INFO ] epollEventLoopGroup-3-3 ACCESS_LOG - /172.17.0.1:62958 "GET /models/gender_model HTTP/1.1" 200 1 2022-12-27T18:09:37,735 [INFO ] epollEventLoopGroup-3-3 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164543 2022-12-27T18:09:49,617 [INFO ] epollEventLoopGroup-3-4 ACCESS_LOG - /172.17.0.1:62960 "PUT /models/gender_model_4?min_worker=3 HTTP/1.1" 404 1 2022-12-27T18:09:49,617 [INFO ] epollEventLoopGroup-3-4 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164543 2022-12-27T18:10:01,907 [DEBUG] epollEventLoopGroup-3-5 org.pytorch.serve.wlm.ModelManager - updateModel: gender_model, count: 3 2022-12-27T18:10:01,910 [DEBUG] W-9009-gender_model_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/home/venv/bin/python, /home/venv/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /home/model-server/tmp/.ts.sock.9009] 2022-12-27T18:10:01,910 [DEBUG] W-9008-gender_model_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/home/venv/bin/python, /home/venv/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /home/model-server/tmp/.ts.sock.9008] 2022-12-27T18:10:01,910 [DEBUG] W-9010-gender_model_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - Worker cmdline: [/home/venv/bin/python, /home/venv/lib/python3.8/site-packages/ts/model_service_worker.py, --sock-type, unix, --sock-name, /home/model-server/tmp/.ts.sock.9010] 2022-12-27T18:10:01,914 [INFO ] epollEventLoopGroup-3-5 ACCESS_LOG - /172.17.0.1:62962 "PUT 
/models/gender_model?min_worker=3 HTTP/1.1" 202 7
2022-12-27T18:10:01,914 [INFO ] epollEventLoopGroup-3-5 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:b010c8d0c57c,timestamp:1672164543
2022-12-27T18:10:02,768 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - Listening on port: /home/model-server/tmp/.ts.sock.9010
2022-12-27T18:10:02,768 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - Listening on port: /home/model-server/tmp/.ts.sock.9008
2022-12-27T18:10:02,768 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - Listening on port: /home/model-server/tmp/.ts.sock.9009
2022-12-27T18:10:02,776 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - [PID]219
2022-12-27T18:10:02,776 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - [PID]218
2022-12-27T18:10:02,776 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - [PID]225
2022-12-27T18:10:02,777 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - Torch worker started.
2022-12-27T18:10:02,777 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - Torch worker started.
2022-12-27T18:10:02,777 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - Torch worker started.
2022-12-27T18:10:02,777 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - Python runtime: 3.8.0
2022-12-27T18:10:02,777 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - Python runtime: 3.8.0
2022-12-27T18:10:02,777 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - Python runtime: 3.8.0
2022-12-27T18:10:02,777 [DEBUG] W-9009-gender_model_1.0 org.pytorch.serve.wlm.WorkerThread - W-9009-gender_model_1.0 State change null -> WORKER_STARTED
2022-12-27T18:10:02,777 [DEBUG] W-9010-gender_model_1.0 org.pytorch.serve.wlm.WorkerThread - W-9010-gender_model_1.0 State change null -> WORKER_STARTED
2022-12-27T18:10:02,777 [DEBUG] W-9008-gender_model_1.0 org.pytorch.serve.wlm.WorkerThread - W-9008-gender_model_1.0 State change null -> WORKER_STARTED
2022-12-27T18:10:02,779 [INFO ] W-9009-gender_model_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.ts.sock.9009
2022-12-27T18:10:02,779 [INFO ] W-9010-gender_model_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.ts.sock.9010
2022-12-27T18:10:02,779 [INFO ] W-9008-gender_model_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.ts.sock.9008
2022-12-27T18:10:02,789 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - Connection accepted: /home/model-server/tmp/.ts.sock.9008.
2022-12-27T18:10:02,789 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - Connection accepted: /home/model-server/tmp/.ts.sock.9009.
2022-12-27T18:10:02,789 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - Connection accepted: /home/model-server/tmp/.ts.sock.9010.
2022-12-27T18:10:02,789 [INFO ] W-9008-gender_model_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1672164602789
2022-12-27T18:10:02,789 [INFO ] W-9010-gender_model_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1672164602789
2022-12-27T18:10:02,789 [INFO ] W-9009-gender_model_1.0 org.pytorch.serve.wlm.WorkerThread - Flushing req. to backend at: 1672164602789
2022-12-27T18:10:02,791 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - model_name: gender_model, batchSize: 1
2022-12-27T18:10:02,791 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - model_name: gender_model, batchSize: 1
2022-12-27T18:10:02,791 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - model_name: gender_model, batchSize: 1
2022-12-27T18:10:03,595 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - Backend worker process died.
2022-12-27T18:10:03,595 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - Backend worker process died.
2022-12-27T18:10:03,595 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - Backend worker process died.
2022-12-27T18:10:03,596 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - Traceback (most recent call last):
2022-12-27T18:10:03,596 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - Traceback (most recent call last):
2022-12-27T18:10:03,596 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - Traceback (most recent call last):
2022-12-27T18:10:03,596 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.8/site-packages/ts/model_loader.py", line 100, in load
2022-12-27T18:10:03,596 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.8/site-packages/ts/model_loader.py", line 100, in load
2022-12-27T18:10:03,597 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - module, function_name = self._load_handler_file(handler)
2022-12-27T18:10:03,597 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - module, function_name = self._load_handler_file(handler)
2022-12-27T18:10:03,597 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.8/site-packages/ts/model_loader.py", line 162, in _load_handler_file
2022-12-27T18:10:03,597 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.8/site-packages/ts/model_loader.py", line 100, in load
2022-12-27T18:10:03,597 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.8/site-packages/ts/model_loader.py", line 162, in _load_handler_file
2022-12-27T18:10:03,597 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - module = importlib.import_module(module_name)
2022-12-27T18:10:03,597 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - module, function_name = self._load_handler_file(handler)
2022-12-27T18:10:03,597 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
2022-12-27T18:10:03,597 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - module = importlib.import_module(module_name)
2022-12-27T18:10:03,597 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - return _bootstrap._gcd_import(name[level:], package, level)
2022-12-27T18:10:03,597 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
2022-12-27T18:10:03,597 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
2022-12-27T18:10:03,597 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.8/site-packages/ts/model_loader.py", line 162, in _load_handler_file
2022-12-27T18:10:03,597 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - return _bootstrap._gcd_import(name[level:], package, level)
2022-12-27T18:10:03,597 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - module = importlib.import_module(module_name)
2022-12-27T18:10:03,597 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
2022-12-27T18:10:03,597 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
2022-12-27T18:10:03,597 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 991, in _find_and_load
2022-12-27T18:10:03,598 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - return _bootstrap._gcd_import(name[level:], package, level)
2022-12-27T18:10:03,598 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
2022-12-27T18:10:03,598 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
2022-12-27T18:10:03,598 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 991, in _find_and_load
2022-12-27T18:10:03,598 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
2022-12-27T18:10:03,598 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 991, in _find_and_load
2022-12-27T18:10:03,598 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
2022-12-27T18:10:03,598 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
2022-12-27T18:10:03,598 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
2022-12-27T18:10:03,598 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
2022-12-27T18:10:03,598 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap_external>", line 783, in exec_module
2022-12-27T18:10:03,598 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap_external>", line 783, in exec_module
2022-12-27T18:10:03,598 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap_external>", line 783, in exec_module
2022-12-27T18:10:03,599 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
2022-12-27T18:10:03,599 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
2022-12-27T18:10:03,599 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
2022-12-27T18:10:03,599 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - File "/home/model-server/tmp/models/02a0273602644a07913e6cf05ba671e3/Gender_Handler.py", line 7, in <module>
2022-12-27T18:10:03,599 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - File "/home/model-server/tmp/models/02a0273602644a07913e6cf05ba671e3/Gender_Handler.py", line 7, in <module>
2022-12-27T18:10:03,599 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - File "/home/model-server/tmp/models/02a0273602644a07913e6cf05ba671e3/Gender_Handler.py", line 7, in <module>
2022-12-27T18:10:03,599 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - import oracledb
2022-12-27T18:10:03,599 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - import oracledb
2022-12-27T18:10:03,599 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - import oracledb
2022-12-27T18:10:03,599 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - ModuleNotFoundError: No module named 'oracledb'
2022-12-27T18:10:03,599 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - ModuleNotFoundError: No module named 'oracledb'
2022-12-27T18:10:03,599 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - ModuleNotFoundError: No module named 'oracledb'
2022-12-27T18:10:03,599 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG -
2022-12-27T18:10:03,599 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG -
2022-12-27T18:10:03,599 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG -
2022-12-27T18:10:03,599 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - During handling of the above exception, another exception occurred:
2022-12-27T18:10:03,599 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - During handling of the above exception, another exception occurred:
2022-12-27T18:10:03,600 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - During handling of the above exception, another exception occurred:
2022-12-27T18:10:03,600 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG -
2022-12-27T18:10:03,600 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - Traceback (most recent call last):
2022-12-27T18:10:03,600 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG -
2022-12-27T18:10:03,600 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG -
2022-12-27T18:10:03,600 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.8/site-packages/ts/model_service_worker.py", line 210, in <module>
2022-12-27T18:10:03,600 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - Traceback (most recent call last):
2022-12-27T18:10:03,600 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - Traceback (most recent call last):
2022-12-27T18:10:03,600 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - worker.run_server()
2022-12-27T18:10:03,600 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.8/site-packages/ts/model_service_worker.py", line 210, in <module>
2022-12-27T18:10:03,600 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.8/site-packages/ts/model_service_worker.py", line 210, in <module>
2022-12-27T18:10:03,600 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.8/site-packages/ts/model_service_worker.py", line 181, in run_server
2022-12-27T18:10:03,600 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - worker.run_server()
2022-12-27T18:10:03,600 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - self.handle_connection(cl_socket)
2022-12-27T18:10:03,600 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.8/site-packages/ts/model_service_worker.py", line 181, in run_server
2022-12-27T18:10:03,600 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.8/site-packages/ts/model_service_worker.py", line 139, in handle_connection
2022-12-27T18:10:03,601 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - self.handle_connection(cl_socket)
2022-12-27T18:10:03,601 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - worker.run_server()
2022-12-27T18:10:03,601 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.8/site-packages/ts/model_service_worker.py", line 139, in handle_connection
2022-12-27T18:10:03,601 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - service, result, code = self.load_model(msg)
2022-12-27T18:10:03,601 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - service, result, code = self.load_model(msg)
2022-12-27T18:10:03,601 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.8/site-packages/ts/model_service_worker.py", line 181, in run_server
2022-12-27T18:10:03,601 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.8/site-packages/ts/model_service_worker.py", line 104, in load_model
2022-12-27T18:10:03,601 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.8/site-packages/ts/model_service_worker.py", line 104, in load_model
2022-12-27T18:10:03,601 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - service = model_loader.load(
2022-12-27T18:10:03,601 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - self.handle_connection(cl_socket)
2022-12-27T18:10:03,601 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - service = model_loader.load(
2022-12-27T18:10:03,601 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.8/site-packages/ts/model_service_worker.py", line 139, in handle_connection
2022-12-27T18:10:03,602 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.8/site-packages/ts/model_loader.py", line 102, in load
2022-12-27T18:10:03,602 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - module = self._load_default_handler(handler)
2022-12-27T18:10:03,602 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.8/site-packages/ts/model_loader.py", line 167, in _load_default_handler
2022-12-27T18:10:03,602 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - module = importlib.import_module(module_name, "ts.torch_handler")
2022-12-27T18:10:03,602 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - service, result, code = self.load_model(msg)
2022-12-27T18:10:03,602 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
2022-12-27T18:10:03,602 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.8/site-packages/ts/model_service_worker.py", line 104, in load_model
2022-12-27T18:10:03,602 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - return _bootstrap._gcd_import(name[level:], package, level)
2022-12-27T18:10:03,602 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
2022-12-27T18:10:03,602 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - service = model_loader.load(
2022-12-27T18:10:03,602 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.8/site-packages/ts/model_loader.py", line 102, in load
2022-12-27T18:10:03,602 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - module = self._load_default_handler(handler)
2022-12-27T18:10:03,603 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.8/site-packages/ts/model_loader.py", line 102, in load
2022-12-27T18:10:03,602 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 991, in _find_and_load
2022-12-27T18:10:03,603 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - module = self._load_default_handler(handler)
2022-12-27T18:10:03,603 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
2022-12-27T18:10:03,603 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.8/site-packages/ts/model_loader.py", line 167, in _load_default_handler
2022-12-27T18:10:03,603 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - File "/home/venv/lib/python3.8/site-packages/ts/model_loader.py", line 167, in _load_default_handler
2022-12-27T18:10:03,603 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
2022-12-27T18:10:03,603 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - module = importlib.import_module(module_name, "ts.torch_handler")
2022-12-27T18:10:03,603 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - module = importlib.import_module(module_name, "ts.torch_handler")
2022-12-27T18:10:03,603 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
2022-12-27T18:10:03,603 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
2022-12-27T18:10:03,603 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
2022-12-27T18:10:03,603 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - return _bootstrap._gcd_import(name[level:], package, level)
2022-12-27T18:10:03,603 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - return _bootstrap._gcd_import(name[level:], package, level)
2022-12-27T18:10:03,603 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
2022-12-27T18:10:03,603 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 991, in _find_and_load
2022-12-27T18:10:03,603 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
2022-12-27T18:10:03,604 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
2022-12-27T18:10:03,604 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
2022-12-27T18:10:03,604 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 991, in _find_and_load
2022-12-27T18:10:03,603 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 991, in _find_and_load
2022-12-27T18:10:03,604 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked
2022-12-27T18:10:03,603 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
2022-12-27T18:10:03,604 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked
2022-12-27T18:10:03,604 [INFO ] W-9010-gender_model_1.0-stdout MODEL_LOG - ModuleNotFoundError: No module named 'ts.torch_handler.Gender_Handler'
2022-12-27T18:10:03,604 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 991, in _find_and_load
2022-12-27T18:10:03,604 [INFO ] W-9009-gender_model_1.0-stdout MODEL_LOG - ModuleNotFoundError: No module named 'ts.torch_handler.Gender_Handler'
2022-12-27T18:10:03,604 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
2022-12-27T18:10:03,605 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
2022-12-27T18:10:03,605 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
2022-12-27T18:10:03,605 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 991, in _find_and_load
2022-12-27T18:10:03,605 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked
2022-12-27T18:10:03,605 [INFO ] W-9008-gender_model_1.0-stdout MODEL_LOG - ModuleNotFoundError: No module named 'ts.torch_handler.Gender_Handler'
2022-12-27T18:10:03,597 [INFO ] epollEventLoopGroup-5-9 org.pytorch.serve.wlm.WorkerThread - 9009 Worker disconnected. WORKER_STARTED
2022-12-27T18:10:03,597 [INFO ] epollEventLoopGroup-5-11 org.pytorch.serve.wlm.WorkerThread - 9010 Worker disconnected. WORKER_STARTED
2022-12-27T18:10:03,597 [INFO ] epollEventLoopGroup-5-10 org.pytorch.serve.wlm.WorkerThread - 9008 Worker disconnected. WORKER_STARTED
2022-12-27T18:10:03,607 [DEBUG] W-9009-gender_model_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED
2022-12-27T18:10:03,607 [DEBUG] W-9008-gender_model_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED
2022-12-27T18:10:03,607 [DEBUG] W-9010-gender_model_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED
2022-12-27T18:10:03,608 [DEBUG] W-9008-gender_model_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException: null
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2056) ~[?:?]
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2133) ~[?:?]
at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:432) ~[?:?]
at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:189) [model-server.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
2022-12-27T18:10:03,608 [DEBUG] W-9010-gender_model_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException: null
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2056) ~[?:?]
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2133) ~[?:?]
at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:432) ~[?:?]
at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:189) [model-server.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
2022-12-27T18:10:03,608 [DEBUG] W-9009-gender_model_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException: null
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2056) ~[?:?]
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2133) ~[?:?]
at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:432) ~[?:?]
at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:189) [model-server.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Installation instructions
Are you using Docker Image: Yes
Model Packaging
My model-store has two subfolders containing the contents used in each of the two .mar files. The .mar files themselves are located in the parent model-store directory, not in the subfolders.
config.properties
inference_address=http://0.0.0.0:8080
management_address=http://0.0.0.0:8081
metrics_address=http://0.0.0.0:8082
number_of_netty_threads=32
job_queue_size=1000
model_store=/home/model-server/model-store
workflow_store=/home/model-server/wf-store
install_py_dep_per_model=true
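Following the maintainer's suggestion above, a shared requirements.txt across both models would instead require disabling per-model installs. A sketch of that variant (an assumption based on that comment, not a verified fix; the shared dependencies would then have to be preinstalled in the serving image or venv):

```properties
# Same config as above, with only this flag changed; with per-model
# installs off, oracledb etc. must already exist in the environment.
install_py_dep_per_model=false
```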
Versions
pytorch/torchserve:latest
Repro instructions
Start Docker container, mount local directory, and expose ports:
docker run --rm -it --expose 1521 -p 8080:8080 -p 8081:8081 \
    --volume //c/Users/moo/Desktop/Image-Based-Market-Segmentation \
    pytorch/torchserve:latest bash
To create the mar file for gender model with State Dict + model class:
torch-model-archiver \
    --model-name gender_model \
    --version 1.0 \
    --model-file Image-Based-Market-Segmentation/model-store/gender_modelstore/gender_model_architecture.py \
    --serialized-file Image-Based-Market-Segmentation/model-store/gender_modelstore/gender_checkpoint_statedict_3.pt \
    --handler Image-Based-Market-Segmentation/model-store/gender_modelstore/Gender_Handler.py \
    --requirements-file Image-Based-Market-Segmentation/model-store/gender_modelstore/gender_requirements.txt \
    --extra-files "Image-Based-Market-Segmentation/model-store/gender_modelstore/index_to_name.json,Image-Based-Market-Segmentation/model-store/gender_modelstore/sql_config.py" \
    --export-path Image-Based-Market-Segmentation/model-store -f
To create the mar file for age model using the traced model:
torch-model-archiver \
    --model-name age_model \
    --version 2.0 \
    --serialized-file Image-Based-Market-Segmentation/model-store/age_modelstore/scripted_resnet_fc4.pt \
    --handler Image-Based-Market-Segmentation/model-store/age_modelstore/Age_Handler.py \
    --requirements-file Image-Based-Market-Segmentation/model-store/age_modelstore/age_requirements.txt \
    --extra-files Image-Based-Market-Segmentation/model-store/age_modelstore/sql_config.py \
    --export-path Image-Based-Market-Segmentation/model-store -f
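Since a .mar archive is a standard zip file, one way to sanity-check that each `--requirements-file` actually ended up inside the archive is to list its members. A minimal sketch; the real `gender_model.mar`/`age_model.mar` paths are specific to this setup, so a synthetic archive stands in here:

```python
import zipfile

def mar_contains(mar_path: str, member: str) -> bool:
    """Return True if `member` is present in the .mar (zip) archive."""
    with zipfile.ZipFile(mar_path) as mar:
        return member in mar.namelist()

# Demo with a synthetic archive standing in for a real .mar file.
with zipfile.ZipFile("demo_model.mar", "w") as mar:
    mar.writestr("MANIFEST.json", "{}")
    mar.writestr("requirements.txt", "oracledb\n")

print(mar_contains("demo_model.mar", "requirements.txt"))  # True
```

Running the same check against the real archives would confirm whether a missing requirements file, rather than the install step itself, explains the per-model install failure.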
Once the .mar files are in the model store, I tried serving them using:
or:
This is where the errors start happening.
I have also tried loading a single model first and then registering the second model later through the management API.
When I try it this way, the first model works perfectly fine, but I get an error when I curl the second model registered through the management API.
First model served using:
torchserve --start --model-store Image-Based-Market-Segmentation/model-store \
    --models gender_model=Image-Based-Market-Segmentation/model-store/gender_model.mar \
    --ts-config Image-Based-Market-Segmentation/model-store/config.properties \
    --ncs
Second model registered through the management API using:
curl -X POST "http://localhost:8081/models?url=C:\Users\Moo\Desktop\Image-Based-Market-Segmentation\model-store\age_model.mar"
I get the error message:
{ "code": 400, "type": "ModelException", "message": "Custom pip package installation failed for age_model" }
Possible Solution
Both models have exactly the same dependencies, so I'm not sure why dependency installation fails when two models are served but succeeds for a single model. I've tried removing one of the requirements files and including only one, but the result was the same. My last thought is that I'm missing a setting in config.properties that enables a shared requirements file across multiple models, but I haven't found many multi-model serving examples that address this.
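On the shared-requirements idea: with install_py_dep_per_model=false, dependencies come from the serving environment itself, so one option is to merge the per-model requirements files into a single list and install that into the image. A hypothetical helper sketching the merge (the example contents below are assumptions standing in for gender_requirements.txt and age_requirements.txt):

```python
from typing import List

def merge_requirements(*req_texts: str) -> List[str]:
    """Merge requirement listings, dropping comments, blanks, and duplicates
    while preserving first-seen order."""
    merged: List[str] = []
    for text in req_texts:
        for line in text.splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            if line not in merged:
                merged.append(line)
    return merged

# Stand-in contents for the two models' requirements files:
gender_reqs = "oracledb\npillow\n# db driver\n"
age_reqs = "oracledb\nnumpy\n"
print(merge_requirements(gender_reqs, age_reqs))  # ['oracledb', 'pillow', 'numpy']
```

The merged list could then be written to one requirements.txt and pip-installed when building the Docker image, so both models find oracledb regardless of how many are registered.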