Closed BorisPolonsky closed 4 years ago
@BorisPolonsky: TorchServe only calls the `handle` function, which is the default entry point for your model handler and which in turn should call the preprocess function. Since you are overriding the default behavior of the `handle` function from `BaseHandler`, you will need to take care of that function call yourself.
Also, it is recommended that you not override the `handle` function from `BaseHandler` unless you want to change the default handling, e.g. to add another function call to your handle pipeline.
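To illustrate the point above, here is a minimal sketch of a handler whose overridden `handle` wires the pipeline up explicitly. This is a hypothetical stand-alone class, not the actual `ts_handler.py` from this issue, and it does not subclass the real `ts.torch_handler.base_handler.BaseHandler` so that it runs without TorchServe installed; the `inference` body is a placeholder for a real model call.

```python
# Hypothetical sketch: when handle() is overridden, TorchServe still calls
# only handle(), so preprocess()/inference()/postprocess() run only if
# handle() invokes them itself.

class CustomHandler:
    """Stand-in for a TorchServe handler; a real one would subclass
    ts.torch_handler.base_handler.BaseHandler."""

    def preprocess(self, data):
        print("===========Preprocessing==========")
        # TorchServe passes a batch of request dicts with "data" or "body" keys.
        return [row.get("data") or row.get("body") for row in data]

    def inference(self, inputs):
        # Placeholder for self.model(...) on the preprocessed batch.
        return [str(x).upper() for x in inputs]

    def postprocess(self, outputs):
        return outputs

    def handle(self, data, context):
        # Because handle() is overridden, the pipeline must be wired up here;
        # without these calls, preprocess() is never reached.
        inputs = self.preprocess(data)
        outputs = self.inference(inputs)
        return self.postprocess(outputs)

handler = CustomHandler()
print(handler.handle([{"data": "hello"}], context=None))
```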
Thanks for the clarification. Apparently this is not a bug. I'll close this issue.
Context
Docker image: pytorch/torchserve:0.2.0-cuda10.1-cudnn7-runtime

Your Environment
Expected Behavior
The custom `preprocess` method should be called for the custom handler when the model server receives a request.

Current Behavior
The `preprocess` method in my handler is never called. To prove this I added `sys.stdout.write` and `exit(-1)` in the `preprocess` and `handle` methods. Only the messages written to stdout within the `handle` method made their way to the terminal, while the `preprocess` counterpart did not. The same goes for the `exit` statement, which only takes effect in the `handle` method. This suggests that the `preprocess` method is never called.

Possible Solution
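For contrast, a simplified sketch of what the default (un-overridden) `BaseHandler.handle` pipeline looks like — this is an approximation from memory of the TorchServe source, not the exact implementation (the real method also deals with metrics and context bookkeeping). The point is that leaving `handle` un-overridden makes `preprocess` run automatically as the first stage:

```python
# Simplified sketch of the default handle() pipeline in a TorchServe-style
# BaseHandler (approximation, not the real implementation). Each stage
# records its name so the call order can be observed.

class SketchBaseHandler:
    def preprocess(self, data):
        self.calls.append("preprocess")
        return data

    def inference(self, data):
        self.calls.append("inference")
        return data

    def postprocess(self, data):
        self.calls.append("postprocess")
        return data

    def handle(self, data, context):
        # Default pipeline: preprocess -> inference -> postprocess.
        self.context = context
        self.calls = []
        data = self.preprocess(data)
        data = self.inference(data)
        return self.postprocess(data)

h = SketchBaseHandler()
h.handle([{"data": b"test"}], context=None)
print(h.calls)  # shows the stage order the default pipeline runs
```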
Steps to Reproduce
torch-model-archiver --model-name <model_name> --version 0.1 --serialized-file ./script-module-20200817-172517.pt --handler ./ts_handler.py
docker run --rm -it -e LANG=C.UTF-8 --name torchserve -v /directory/containing/the/mar/archive/:/home/model-server/model-store:ro -p 8080:8080 -p 8081:8081 --gpus all pytorch/torchserve:0.2.0-cuda10.1-cudnn7-runtime torchserve --start --ts-config /home/model-server/config.properties --models msra_ner=/home/model-server/model-store/<model-name>.mar
curl localhost:8080/predictions/<model_name> -T test.txt
The custom handler `ts_handler.py` is defined as
Failure Logs [if any]
Note that `===========Preprocessing==========` never made its way to the log.