## Description

There is an issue where the inference_id for the same request changes between when the inference result is produced and when the ActiveLearning middleware runs and creates the batch. As a result, images created by Active Learning carry an inferenceId that doesn't map to any inference in our model monitoring database. This change ensures we reuse the existing id if one is already present instead of generating a new one.
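A minimal sketch of the intended behavior, assuming a request object that may or may not already carry an `inference_id` attribute (the helper name `resolve_inference_id` is hypothetical, not the actual function in this repo):

```python
import uuid
from types import SimpleNamespace


def resolve_inference_id(request) -> str:
    """Reuse the inference_id already assigned upstream when present;
    only mint a fresh UUID when the request carries none."""
    existing = getattr(request, "inference_id", None)  # hypothetical attribute name
    if existing:
        return existing
    return str(uuid.uuid4())


# A request that already has an id keeps it, so downstream consumers
# (e.g. the Active Learning middleware) see the same id as the result.
request = SimpleNamespace(inference_id="abc-123")
print(resolve_inference_id(request))  # → abc-123
```

The key design point is that id creation happens at most once per request: every later stage reads the stored id rather than generating its own.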
## Type of change

- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] This change requires a documentation update
## How has this change been tested? Please provide a test case or example of how you tested the change.

Need to check whether this interferes with parallel request processing.
## Any specific deployment considerations

For example, documentation changes, usability, usage/costs, secrets, etc.
## Docs

- [ ] Docs updated? What were the changes: