danial880 opened 7 months ago
Can you try curl http://127.0.0.1:8080/ping? It will return a status of Healthy if your service is running smoothly, and Unhealthy otherwise.
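As a sketch, the same health check can be scripted from Python instead of curl (this assumes the default inference port 8080 from the config below; the helper name `is_healthy` is just illustrative):

```python
# Minimal TorchServe health probe: GET /ping on the inference port
# and check for {"status": "Healthy"} in the JSON response.
import json
import urllib.request


def is_healthy(base="http://127.0.0.1:8080"):
    try:
        with urllib.request.urlopen(f"{base}/ping", timeout=5) as resp:
            return json.load(resp).get("status") == "Healthy"
    except OSError:
        # Connection refused, timeout, etc. -> treat as unhealthy
        return False
```

If this returns False while the server process is up, the inference listener is likely bound to a different address or port than expected.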
This is my config.properties file; you can refer to it:
inference_address=http://0.0.0.0:8080
management_address=http://0.0.0.0:8081
metrics_address=http://0.0.0.0:8082
load_models=all
install_py_dep_per_model=true
model_store=model-store
models={\
"FaceDetection": {\
"1.0": {\
"defaultVersion": true,\
"marName": "FaceDetection.mar",\
"minWorkers": 1,\
"maxWorkers": 1,\
"batchSize": 256,\
"maxBatchDelay": 100,\
"responseTimeout": 120\
}\
},\
"FaceExpression": {\
"1.0": {\
"defaultVersion": true,\
"marName": "FaceExpression.mar",\
"minWorkers": 1,\
"maxWorkers": 4,\
"batchSize": 256,\
"maxBatchDelay": 100,\
"responseTimeout": 120\
}\
},\
"FaceRecognition": {\
"1.0": {\
"defaultVersion": true,\
"marName": "FaceRecognition.mar",\
"minWorkers": 1,\
"maxWorkers": 4,\
"batchSize": 256,\
"maxBatchDelay": 100,\
"responseTimeout": 120\
}\
},\
"HumanPose": {\
"1.0": {\
"defaultVersion": true,\
"marName": "HumanPose.mar",\
"minWorkers": 1,\
"maxWorkers": 4,\
"batchSize": 256,\
"maxBatchDelay": 100,\
"responseTimeout": 120\
}\
},\
"ActionRecognition": {\
"1.0": {\
"defaultVersion": true,\
"marName": "ActionRecognition.mar",\
"minWorkers": 1,\
"maxWorkers": 4,\
"batchSize": 256,\
"maxBatchDelay": 100,\
"responseTimeout": 120\
}\
}\
}
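A frequent cause of silent load failures is malformed JSON in the models= value. One quick sanity check is to strip the line continuations and feed the value to a JSON parser (shown here with just the FaceDetection entry from the config above):

```python
# Validate the models= JSON fragment from config.properties.
# json.loads raises ValueError on any syntax error, pinpointing
# a bad comma or brace before TorchServe ever sees it.
import json

models_json = """
{
  "FaceDetection": {
    "1.0": {
      "defaultVersion": true,
      "marName": "FaceDetection.mar",
      "minWorkers": 1,
      "maxWorkers": 1,
      "batchSize": 256,
      "maxBatchDelay": 100,
      "responseTimeout": 120
    }
  }
}
"""

models = json.loads(models_json)
print(models["FaceDetection"]["1.0"]["batchSize"])  # -> 256
```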
@danial880 Once you've started TorchServe and loaded the models, could you please try the following API to check that the model has been loaded and its workers have started successfully:
curl http://127.0.0.1:8081/models/facex
Also, what happens when you run curl -X POST http://127.0.0.1:8080/predictions/facex -T 0294.png?
Does curl hang or show any error?
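When curl exits silently, the HTTP status code is often the missing clue. A sketch of the same request from Python, which always surfaces the status (the helper name `predict` and the model name facex are taken from the commands in this thread; the image path is whatever you posted with -T):

```python
# POST an image to the TorchServe predictions endpoint and report
# the HTTP status, so a 4xx/5xx that curl printed nothing for
# becomes visible. Raises OSError if the server is unreachable.
import urllib.request


def predict(image_path, model="facex", base="http://127.0.0.1:8080"):
    with open(image_path, "rb") as f:
        body = f.read()
    req = urllib.request.Request(
        f"{base}/predictions/{model}", data=body, method="POST"
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        print("HTTP status:", resp.status)
        return resp.read()
```

A 404 here usually means the model name does not match what was registered; a 503 means no healthy workers are available for it.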
Upon posting the image: the curl command executes with no error.
@danial880 The posted log does not include any information about the inference. Could you please post the full log?
I have asked this question on Stack Overflow but got no answer. The image posted with curl is not received on the local server, and no errors are logged. Here is the code:
Handler.py
command for creating .mar
command for running server
command for inference
config.properties
ts_log.log