pytorch / serve

Serve, optimize and scale PyTorch models in production
https://pytorch.org/serve/
Apache License 2.0

How to get the URL parameters within the custom inference handler? #989

Open neoragex2002 opened 3 years ago

neoragex2002 commented 3 years ago

Hi guys, recently I've been writing a custom service handler for yolov5. However, I have no idea how to get the URL parameters in my inference handler.

For example:

curl -X POST "http://localhost:8080/predictions/yolo?my_parameter=123" -T sample.jpg

How can I get the value of my_parameter in my custom service handler?

I know that I could pass the parameters in the multipart/form-data or JSON body to my service handler. But I can't, because the API signature is fixed by design. Passing the parameter in the URL is my only option.

Any suggestions would be appreciated!

dhanainme commented 3 years ago

Currently, only the BODY of the request is passed to the custom handler. The URL parameters of the model server's inference endpoint are static and not configurable.
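A minimal sketch of what that contract means in practice, assuming the standard TorchServe entry point `handle(data, context)` (the function name and dict keys follow the usual TorchServe conventions; none of this code is from the thread itself):

```python
# Hedged sketch: with `curl -T sample.jpg`, each element of `data` is a
# dict carrying only the request body. URL query parameters such as
# ?my_parameter=123 never appear anywhere in it.

def handle(data, context):
    if data is None:
        return None
    results = []
    for row in data:
        # TorchServe places the raw upload under "data" or "body".
        body = row.get("data") or row.get("body")
        results.append({"num_bytes": len(body)})
    return results
```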

neoragex2002 commented 3 years ago

Currently only BODY of the request is passed to the CustomHandler. The URL params of the Model server's inference endpoint are static and not configurable.

Hi dhanainme, Thanks for your reply.

In the end I had to use an nginx proxy (an njs script) to translate my calls from GET/URL style to POST/BODY style. It's very inconvenient, but I had no other choice.
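The core of that translation, sketched in plain Python for illustration (the actual njs script is not linked in this thread): strip the query parameters off the inference URL so the proxy can re-send them as fields in the POST body.

```python
from urllib.parse import parse_qs, urlsplit, urlunsplit

def split_inference_url(url):
    """Separate an inference URL from its query parameters so a proxy
    can forward the parameters inside the POST body instead."""
    parts = urlsplit(url)
    base = urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))
    # parse_qs returns lists; keep the first value per key.
    params = {k: v[0] for k, v in parse_qs(parts.query).items()}
    return base, params

base, params = split_inference_url(
    "http://localhost:8080/predictions/yolo?my_parameter=123")
# base   -> "http://localhost:8080/predictions/yolo"
# params -> {"my_parameter": "123"}
```

The proxy would then POST the original file to `base` with each entry of `params` attached as an extra multipart form field.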

LEONHWH commented 10 months ago

In the end I had to use an nginx proxy (njs script) to translate my calls from GET/URL style to POST/BODY style.... It's very inconvenient, but I had no choices...

Hey bro, how did you deal with it? Can you offer a link to this solution?