Description

inference_sdk forcibly encodes images as JPEG (and then converts them to a base64 string) before sending requests to the inference server. This is neither necessary nor expected. Duolingo requires that the original image quality be preserved, but JPEG encoding is lossy: it degrades image quality and can affect inference results.
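As a quick illustration of why the forced re-encoding matters, the sketch below (using Pillow and NumPy; the synthetic image and the `round_trip` helper are made up for this example, not part of inference_sdk) shows that a JPEG round trip changes pixel values while a PNG round trip does not:

```python
# Sketch: JPEG re-encoding is lossy, so forcing it on every request can
# change the pixels the model actually sees. PNG round-trips exactly.
import io

import numpy as np
from PIL import Image

# Synthetic stand-in for a user-supplied frame (random noise compresses badly,
# which makes JPEG loss easy to observe).
rng = np.random.default_rng(0)
original = Image.fromarray(rng.integers(0, 256, (64, 64, 3)).astype("uint8"))


def round_trip(img: Image.Image, fmt: str) -> Image.Image:
    """Encode the image to `fmt` in memory and decode it back."""
    buf = io.BytesIO()
    img.save(buf, format=fmt)
    buf.seek(0)
    return Image.open(buf).convert("RGB")


png_copy = round_trip(original, "PNG")
jpeg_copy = round_trip(original, "JPEG")

# PNG is lossless: every pixel survives the round trip.
assert png_copy.tobytes() == original.tobytes()
# JPEG is lossy: the decoded pixels differ from the input.
assert jpeg_copy.tobytes() != original.tobytes()
```

This is why the SDK should forward the caller's original bytes (or use a lossless encoding) instead of unconditionally re-encoding to JPEG.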
Type of change
[x] Bug fix (non-breaking change which fixes an issue)
How has this change been tested? Please provide a test case or an example of how you tested the change.
Tested locally.