Closed ericy51 closed 1 year ago
I was finally able to get this working with the help of this documentation:
https://www.tensorflow.org/tfx/serving/api_rest
For my specific setup, this is the payload structure that got me results:
```json
{
  "signature_name": "excl_10",
  "inputs": {
    "id": ["am_183827917602"],
    "beauty": [0.85],
    "fashion": [0.04],
    "wellness": [0.0],
    "home": [0.09]
  }
}
```
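As a sketch of how this payload maps onto the TF Serving REST API from the linked docs: the body goes to the model's `:predict` endpoint, and `signature_name` selects which saved-model signature runs. The model name `retrieval` and host/port below are placeholders, not from my actual setup.

```python
import json

# Same payload structure as above; "excl_10" selects the saved-model signature.
payload = {
    "signature_name": "excl_10",
    "inputs": {
        "id": ["am_183827917602"],
        "beauty": [0.85],
        "fashion": [0.04],
        "wellness": [0.0],
        "home": [0.09],
    },
}

def predict(host="localhost", port=8501, model="retrieval"):
    # "retrieval" is a placeholder model name; 8501 is TF Serving's default REST port.
    import requests  # assumed installed where the request is made
    url = f"http://{host}:{port}/v1/models/{model}:predict"
    resp = requests.post(url, data=json.dumps(payload))
    resp.raise_for_status()
    return resp.json()
```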
I believe I've successfully leveraged the signature parameter when saving my retrieval model, and I'm using it to filter out certain recommendations, which I plan to apply to different page contexts on an ecommerce site.
I'm able to reference these signatures locally when I load my model.
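For reference, a minimal sketch of what "referencing the signature locally" looks like, assuming a standard SavedModel export; `export_dir` is a placeholder path:

```python
def get_local_signature(export_dir, signature_name="excl_10"):
    """Load a SavedModel from disk and return one of its named signatures.

    export_dir is a placeholder path; "excl_10" is the signature name used above.
    """
    import tensorflow as tf  # assumed installed for local testing
    loaded = tf.saved_model.load(export_dir)
    return loaded.signatures[signature_name]
```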
However, I want to serve these results via a SageMaker inference endpoint. Is there a way to reference saved-model signatures with custom headers or any other parameters via an HTTP request? Or is this the wrong approach altogether? Below is the request code for SageMaker, but it doesn't look like there's any way to add additional parameters.
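One possible answer, sketched under the assumption that the endpoint runs the TF Serving container: since TF Serving's REST API reads `signature_name` from the request *body*, no custom header should be needed; the signature can be embedded in the JSON payload passed to `invoke_endpoint`. The function names here are hypothetical, not from my SageMaker code.

```python
import json

def build_request_body(inputs, signature_name="excl_10"):
    """Build a TF Serving predict body that selects a saved-model signature."""
    return json.dumps({"signature_name": signature_name, "inputs": inputs})

def invoke_with_signature(endpoint_name, inputs, signature_name="excl_10"):
    """Call a SageMaker endpoint, selecting the signature via the body."""
    import boto3  # assumed available where the endpoint is invoked
    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=build_request_body(inputs, signature_name),
    )
    return json.loads(response["Body"].read())
```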