aws / sagemaker-inference-toolkit

Serve machine learning models within a 🐳 Docker container using 🧠 Amazon SageMaker.
Apache License 2.0

Default output function encodes results to JSON and that seems to add to response latency. #84

Open vdantu opened 3 years ago

vdantu commented 3 years ago

Describe the bug
By default, the Accept type in the inference container appears to be application/json. The default encoder, which converts results to JSON, seems to add significantly to response latency. Is there a way to reduce the default response latency?
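One way to sidestep the default JSON path is to override the output function in the inference script so predictions are returned in a binary format instead. Below is a minimal sketch, assuming the prediction is a NumPy array; it uses the toolkit's own sagemaker_inference.encoder and content_types, but the output_fn shown is illustrative, not the library's default behavior.

```python
# Sketch of a custom output_fn, assuming NumPy-array predictions.
# application/x-npy serializes the array as raw bytes, skipping the
# JSON encoding step that the default output function performs.
import numpy as np
from sagemaker_inference import content_types, encoder


def output_fn(prediction, accept):
    """Serialize the prediction, preferring binary NPY over JSON."""
    if accept == content_types.JSON:
        # Caller explicitly requested JSON; use the default encoder path.
        return encoder.encode(prediction, accept)
    # Otherwise return NPY, which avoids JSON serialization overhead.
    return encoder.encode(np.asarray(prediction), content_types.NPY)
```

For this to take effect end to end, the client would also need to request the binary format, e.g. by passing Accept="application/x-npy" to invoke_endpoint in boto3 and decoding the response body with numpy.load. Whether this actually reduces latency for a given model depends on payload size and shape, so it is worth benchmarking both paths.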

Pawel842 commented 10 months ago

Bumping this: did you find a solution?